An Interview with @JPaulReed | @DevOpsSummit #DevOps #ContinuousDelivery

Intersections: DevOps, Release Engineering, and Security

Derek: Good morning, Paul. There's a lot those pursuing DevOps can learn from Release Engineering practices. I know you've got a lot of experience to share, so let's get started.

J. Paul Reed: Good morning, it's good to be here. My background is release engineering, although these days I am actually called a DevOps consultant. I have about 15 years' experience doing that. That's what my presentation is about: sort of the intersection between DevOps, Rugged DevOps, and release engineering and wanting to explore that with the security and Rugged DevOps communities.

Derek: In your presentation, you touched on the culture between security, DevOps, and release engineering -- something a number of organizations have challenges with, and that's the Culture of No. There's a lot of, "Hey, we want to move faster at higher velocity. We have new requirements that we're trying to push out to market, and we have these new practices that we're moving forward with. Can security come and play with the DevOps team?"

J. Paul: I actually put up a tweet that a lot of people liked on one of my slides: "If your answer to every question is ‘no,' do not be surprised when people start pouring effort into ways to not even ask." This idea that if your answer to everything is "no," then that is seen as a bug or a blockage like on the Internet, and organizations will just route around it. I think security found that out in a very visceral, hard way. In release engineering, it's the same thing.

_________________________________

"If your answer to every question is ‘no,' do not be surprised when people start pouring effort into ways to not even ask."

_________________________________

One of the reasons that Git became so popular is because developers didn't have to ask for permission to create branches. They created an entire infrastructure and ecosystem around not having to ask. I think that's one of the risks we run, and it's one of the similarities.

One of the interesting things we're finding with DevOps ... because that idea of getting new traction and people do want to move faster ... is we can frame the work that we do in the context of that pipeline. By identifying and optimizing some of the business value that is part of that pipeline, businesses are receptive. Developers are receptive. Different parts of the business are receptive in ways I've almost never seen in my career, and it's great to be a part of that. From a Rugged DevOps or security perspective, I think if we could move that work into the pipeline, not only do we make it visible in terms of the costs and trade-offs, but then also we could possibly do more. It's part of that whole. There are lots of presentations talking about this idea of shift left ... that you can shift that work from your perspective further up into the stream so that you can address it earlier and actually have a chance at fixing the problem.

In talking with Josh Corman and a lot of the Rugged DevOps people, they always talk about how at the end of that process, they would rubber stamp: "Yes, this is secure." Because even if it wasn't really secure and it was bad, what were you going to do? As a release engineer, that resonated with me because we felt like that all the time. We were kind of doing a bunch of work at the end, and there was no time to do it right. So a lot of times, it was skimped on.

Derek: When you think about the way traditional security works, how early can we think about Rugged DevOps shifting left?

J. Paul: Yeah, I don't think it's so much about getting everything right at the beginning, per se. I think that the question is how far forward can we shift into that process. I think if you can shift that all the way to the beginning, that is possible. The beginning is where you define your pipeline.

A lot of people define that pipeline as commits, that is developers writing code. Some people will define it actually at the product management stage, so even earlier than that. Or that kind of agile story phase, I think you could certainly integrate it there. This is sort of what I was exploring in my presentation. I open with the slide on what is the intersection of release engineering and Rugged DevOps, and I say I don't actually know. It's a very emergent field.

_________________________________

"There's no shortcuts to production...They put the financial resources and the engineering resources into building the pipeline that moves code quickly through it."

_______________________________

I spend the next few slides talking about sort of the crossover in making that bar. There are a lot of similarities there. I think when you're talking about pushing that stuff forward, it's about how many tools you can make part of that pipeline, like release engineering tools. So for us, that might be something like: How do we track what developers bring in as dependencies in the work that they're doing? How do we make that a little bit easier for them to say, "Yeah, I'm using this version of that, and it's integrated here from a release engineering perspective"? Then from the security perspective, you can take that information and use it to do different types of security testing or penetration testing. If you can move that earlier in the process, that's what it will do. Then how early you do that really is a function of how good you get at this sort of thing.
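
To make that concrete, here is a minimal sketch, in Python, of the release engineering half of that idea: emitting the dependencies a build pulled in as an artifact that security tooling can pick up later. The file names and the manifest format are illustrative assumptions, not anything described in the interview.

    # Minimal sketch: record the dependencies a build used as a build artifact.
    # Assumes a requirements.txt-style input ("name==version" per line); the
    # file names and JSON layout are hypothetical, chosen only for illustration.
    import json
    from pathlib import Path

    def build_dependency_manifest(requirements_path="requirements.txt",
                                  output_path="dependency-manifest.json"):
        """Parse pinned dependencies and write them out as a JSON artifact."""
        manifest = {}
        for raw_line in Path(requirements_path).read_text().splitlines():
            line = raw_line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            name, _, version = line.partition("==")
            manifest[name.lower()] = version or "unpinned"
        Path(output_path).write_text(json.dumps(manifest, indent=2))
        return manifest

    if __name__ == "__main__":
        print(build_dependency_manifest())

Once an artifact like that exists for every build, the security side of the house has a stable place to hook in, which is the kind of visibility being described here.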

I don't think we've seen this with security entirely yet. We're still recognizing the value with release engineering, and the companies that are hitting it out of the park just put everything into the continuous delivery pipeline. There's no shortcuts to production. There's no back door to get stuff deployed. They put the financial resources and the engineering resources into building the pipeline that moves code quickly through it. Then once you do that, you can augment that pipeline with more and more features, if you will. One of those might be moving security way forward in that process.

Derek: Are there old ways to do things that just won't work in the new universe and you have to adopt new tools or practices?

J. Paul: I do hear a lot of, "Well, we can't do X because of Y" -- "Y" being one of those old ways that you're talking about. One of the things we continually see at conferences is the idea of the answer being, "We can't do X because of the old way." In fact, in security, you see this all the time: "I can't do X because of audit compliance stuff." But case study after case study says: If you're willing to rethink the framing on the way you do audit compliance and work with your auditor -- if you're willing to look at the problem slightly differently -- then you can achieve those results. Because we have all this proof, when people say, "Oh, we can't do X because of the old way," my question is, "Are we thinking of the problem in an old frame, in a more traditional framing that isn't sound anymore?"

Now that's not to imply the concerns people bring up are invalid. That's the initial question that you had, which was about people. If they have a lot of knowledge, they might be worried, "Well, I can't automate things this way as well as I can test them." I talked in my presentation about how release engineering is undergoing a fundamental shift. I'm very upfront about the fact that if you are a release engineer and you are not building a continuous delivery pipeline and involved in the support and service of that continuous delivery pipeline, your job is probably not going to be there in five to 10 years. That's just the way the world works. A lot of people think, "Oh, okay, that's unfortunate or whatever."

_______________________________

"If you are a release engineer and you are not building continuous delivery pipeline and involved in the support and service of that continuous pipeline, your job is probably not going to be there in five to ten years"

_________________________________

I'll give you a QA example that I thought was really innovative.

Organizations spend a bunch of time automating tests, and the initial response is, "Well, if you automate all of those tests, what are the QA engineers going to do?" It turns out that because QA engineers are so good at looking at a product and coming up with requirements, that totally valuable knowledge needs to be moved forward in the value stream. So they have those QA engineers doing requirements analysis and working with product management to firm up the actual requirements that go into the continuous delivery pipeline. What was fascinating about it was that the organization wasn't saying, "We are going to automate you out of a job and then we're going to fire you, so go automate yourself into a script." People are like, "I'm a person, not a machine." You have that whole conversation, and they end up doing more interesting work.

They put them to work on that continuous delivery pipeline and on the requirements analysis. It's totally different than what you might expect. It's going to be the same with security and release engineering. For security especially, we're going to see a lot of that work go. There's a set of compliance work you can do in an automated fashion. Once that is automated, I see a lot of discussion about red team, blue team ... kind of wargaming type of thing. And it frees up time to do that and to work as a team in that way. Because you can't automate all those things, or at least today you can't. I think everybody in the security space would agree that it's more interesting work than running around, if you've got a huge project, with a black binder with a bunch of rules.

Derek: One of the concepts that really resonated with you was the software supply chain. How does that concept fit with doing release engineering right and doing Rugged DevOps right or incorporating security into DevOps?

J. Paul: Yeah, the supply chain idea is something that was fascinating the first time I heard it. In fact, it's one of the things that Josh and I spent a bunch of time talking about when we first met. I think it's a great way to frame a problem. I'm sad that I didn't think of it, actually, and the reason is because release engineers think about that all the time. We've thought that was our role for 20 to 30 years, for as long as release engineering has been around. It's this idea of knowing what the dependencies are, doing dependency management and tracking, and trying to make sure that you don't pull in bad dependencies -- whether they are tainted because of the license or contain malicious software. This problem has only gotten worse with open source software, and that's also something that from a supply chain perspective we talk a lot about.

_________________________________

"I told this story about an engineer who was missing a DLL from the build. They just Googled for the DDL and downloaded it, and threw it on all the build machines That was pretty scary."

_________________________________

That was one of the things that I wouldn't think keeps release engineers up at night as much as it keeps security engineers up at night. Where is our software coming from, and what issues may it have in it? That's not something developers traditionally seem to think about, for whatever reason, and that's not to denigrate them. A lot of times they're under deadlines, like we are. They go to the Internet. They grab whatever version of the library. In fact, the one I usually see is the upgraded version because there's some API that they need or something like that. There's a concern there, when you think about it, of where that's coming from. I told this story about an engineer who was missing a DLL from the build. They just Googled for the DLL and downloaded it, and threw it on all the build machines. That was pretty scary.

One of the slides in the presentation I think is really critical is: "If you have one vulnerable library in your product, that is a security problem. If you've got multiple versions of the same library and multiple versions of those are vulnerable, that's a release engineering problem." That's one of the best ways upfront that release engineers can contribute to Rugged DevOps and contribute to the security space in terms of helping to detangle that problem. More interestingly, once you've detangled that problem, you have to figure out how to make it so that that just doesn't turn into spaghetti again.

I've detangled that problem multiple times, by the way, usually not so much in a security context but in a licensing context.
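
As an illustration of that detangling, here is a small Python sketch that flags libraries shipping in a product under more than one version; the (component, library, version) input shape is an assumption made for the example, not something from the interview.

    # Minimal sketch: flag libraries that appear in the product under more than
    # one version -- the "release engineering problem" described above.
    # The (component, library, version) input shape is assumed for illustration.
    from collections import defaultdict

    def find_duplicate_versions(entries):
        """entries: iterable of (component, library, version) tuples."""
        versions_by_library = defaultdict(set)
        for _component, library, version in entries:
            versions_by_library[library].add(version)
        return {library: sorted(versions)
                for library, versions in versions_by_library.items()
                if len(versions) > 1}

    if __name__ == "__main__":
        sample = [
            ("web-frontend", "openssl", "1.0.1f"),
            ("payments", "openssl", "1.0.2k"),
            ("web-frontend", "zlib", "1.2.11"),
        ]
        print(find_duplicate_versions(sample))  # {'openssl': ['1.0.1f', '1.0.2k']}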

_________________________________

"If you have one vulnerable library in your product, that is a security problem. If you've got multiple versions of the same library and multiple versions of those are vulnerable, that's a release engineering problem."

_________________________________

The way you do that, again, is shifting left. You move that forward so that, as developers put libraries into the product -- new code that isn't written by them -- there's a way for that dependency to be well documented. You can do that audit in kind of a continuous fashion, so that maybe one artifact that you build is a list of library versions. Then from an automated security testing perspective, we can compare that against a list of CVEs or known issues.
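
Here is a hedged sketch of that comparison step, again in Python: take the list of library versions produced by the build and check it against a local advisory file. The advisory file layout is made up for the example; a real pipeline would consume a CVE or vendor advisory feed instead.

    # Minimal sketch: compare the build's library-version list against known
    # vulnerable versions. The advisories.json layout is hypothetical.
    import json
    from pathlib import Path

    def check_against_advisories(manifest_path="dependency-manifest.json",
                                 advisories_path="advisories.json"):
        manifest = json.loads(Path(manifest_path).read_text())
        # Assumed shape: {"openssl": {"1.0.1f": ["CVE-2014-0160"]}}
        advisories = json.loads(Path(advisories_path).read_text())
        findings = []
        for library, version in manifest.items():
            for cve in advisories.get(library, {}).get(version, []):
                findings.append((library, version, cve))
        return findings

    if __name__ == "__main__":
        for library, version, cve in check_against_advisories():
            print(f"{library} {version}: {cve}")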

Derek: I did a lot of research at Sonatype on the software supply chain and one stat boggles my mind. Out of the top 100 components companies were downloading, they downloaded an average of 27 versions of each of those components in a single year. When you think about the complexity and the technical debt, and if there's security debt in that at all ... you only need 100 parts and yet you're using 2,700 parts. Why would you ever want to do that?

J. Paul: One thing I'll point out is that I think the industry's moving, in some sense, in the wrong direction. What I mean by that is you've got your Java, and the tooling is built to make it really, really, really easy: from the command line, you just pick up libraries from the Internet. Who knows where they came from. Node makes this trivial. In fact, Node was built around npm, the package manager. All of that is online. In fact, it's even worse. One of the things I get called in to help with a lot these days is ... and I kind of giggle at this, just because of the dichotomy ... people were so interested in Git for so long because it was like offline Git, offline commit. It's great, right? You can build offline, and people always use the example of "when I'm commuting home on the train, I can commit," blah, blah, blah, and that was the big reason for doing it.

Now we've moved with Node and some of the tooling around Java so that our software builds literally require us to talk to the Internet to download packages. There's this big push for offline operations. But it's fine that a Node download needs 68 billion versions of libraries, and everything is "self-contained." If you go look at a Node package, it's got versions of those things stocked in there. That's a feature, not a bug. Right? On certain platforms ... you see this with RubyGems: when the RubyGems site went down, nobody could deploy their web applications. That's a fundamentally broken engineering design, in my opinion. Not that it shouldn't be easy for developers to get those libraries. But our build processes, our deployment processes, rely on those things. And they rely on us as developers to say, "I want version 1.2.4 of that library, and that 1.2.4 is the same version that you use."
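
One way to keep "my 1.2.4 is the same 1.2.4 you use" honest is to check the versions a build actually resolved against a committed lockfile. A rough Python sketch follows, with file names and formats assumed for illustration rather than tied to npm or any particular package manager:

    # Minimal sketch: fail the build if resolved versions drift from the lockfile.
    # File names and JSON layouts are assumptions for illustration.
    import json
    import sys
    from pathlib import Path

    def verify_lockfile(resolved_path="resolved-versions.json",
                        lockfile_path="lockfile.json"):
        resolved = json.loads(Path(resolved_path).read_text())
        locked = json.loads(Path(lockfile_path).read_text())
        mismatches = []
        for library, version in resolved.items():
            expected = locked.get(library)
            if expected is None:
                mismatches.append(f"{library} is not pinned in the lockfile")
            elif expected != version:
                mismatches.append(
                    f"{library}: build resolved {version}, lockfile pins {expected}")
        return mismatches

    if __name__ == "__main__":
        problems = verify_lockfile()
        for problem in problems:
            print("MISMATCH:", problem)
        sys.exit(1 if problems else 0)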

I posted a slide about versioning -- and that's a very release engineering problem. As an example, OpenSSL made a mistake in their versioning, and instead of bumping the version like they should have, they repackaged the binary. I suspect the reason they did that is because they had published all the CVEs with that version number, and everybody watches OpenSSL like a hawk. So they couldn't bump the version number easily. OpenSSL can't be flexible in their release engineering anymore because they've been so traditionally horrible at it. Right? We've made it really easy to stuff all of those components into our products, but we really don't know what we're stuffing in there.

If you look at it, we end up worrying about a lot of the same things. I think a lot of the nuts to crack, if you will, in the Rugged DevOps community are maybe 50 to 80% release engineering problems. Strengthening that extra feature of security in there, to make that part of it, especially with the supply chain, will work really well.

_________________________________

"A lot of the nuts to crack, if you will, in the Rugged DevOps community are maybe 50 to 80% release engineering problems. Strengthening that extra feature of security in there, to make that part of it, especially with the supply chain, will work really well."

_________________________________

Derek: J. Paul Reed, thank you very much. It was a pleasure talking to you; I really enjoyed the conversation. We'll look forward to seeing you again soon.

J. Paul: Awesome. Thank you.

If you loved this interview and are looking for more great stuff on Rugged DevOps, I invite you to download this awesome research paper from Amy DeMartine at Forrester, "The Seven Habits of Rugged DevOps."

As Amy notes, "DevOps practices can only increase speed and quality up to a point without security and risk (S&R) pros' expertise. Old application security practices hinder speedy releases, and security vulnerabilities represent defects that can leave a company open to cyberattacks. But DevOps practitioners can leap forward with both increased speed and quality by including S&R pros in DevOps feedback loops and including security practices in the automated life cycle. These new practices are called Rugged DevOps."

More Stories By Derek Weeks

In 2015, Derek Weeks led the largest and most comprehensive analysis of software supply chain practices to date across 160,000 development organizations. He is a huge advocate of applying proven supply chain management principles to DevOps practices to improve efficiencies, reduce costs, and sustain long-lasting competitive advantages.

As a 20+ year veteran of the software industry, he has advised leading businesses on IT performance improvement practices covering continuous delivery, business process management, systems and network operations, service management, capacity planning and storage management. As the VP and DevOps Advocate for Sonatype, he is passionate about changing the way people think about software supply chains and improving public safety through improved software integrity. Follow him at @weekstweets, find him at www.linkedin.com/in/derekeweeks, and read him at http://blog.sonatype.com/author/weeks/.
