Software Supply Chain Report | @DevOpsSummit #DevOps #ContinuousTesting

Analysis of 25,000 applications reveals that 6.8% of the packages/components they use include known defects. Organizations that standardize on components between two and three years of age can decrease defect rates substantially.

Open source and third-party packages/components live at the heart of high-velocity software development organizations. Today, an average of 106 packages/components make up 80-90% of a modern application, yet few organizations have visibility into which components are used where.

Using components with known defects leads to quality and security issues within applications. While developers save tremendous amounts of time by sourcing software components from outside their organizations, they often don't have time to check those component versions against known vulnerability databases or internal policies.
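A basic version of that check is easy to script. The sketch below is illustrative only: it assumes Sonatype's public OSS Index v3 component-report endpoint and the Python requests library, and the package coordinates shown are placeholders for whatever a real build manifest declares.

```python
# Minimal sketch: check a handful of dependency versions against a public
# vulnerability database. Assumes Sonatype's OSS Index v3 REST API, which
# accepts Package URL ("purl") coordinates and returns known vulnerabilities.
import requests

OSS_INDEX = "https://ossindex.sonatype.org/api/v3/component-report"

def check_components(purls):
    """Return a map of coordinate -> list of known vulnerability titles."""
    resp = requests.post(OSS_INDEX, json={"coordinates": purls}, timeout=30)
    resp.raise_for_status()
    report = {}
    for component in resp.json():
        vulns = component.get("vulnerabilities", [])
        report[component["coordinates"]] = [v.get("title", "") for v in vulns]
    return report

if __name__ == "__main__":
    # Example coordinates; in practice these come from your build manifest.
    purls = [
        "pkg:maven/org.apache.commons/commons-collections4@4.0",
        "pkg:pypi/requests@2.31.0",
    ]
    for coord, vulns in check_components(purls).items():
        status = f"{len(vulns)} known vulnerabilities" if vulns else "clean"
        print(f"{coord}: {status}")
```

A check like this can run in CI on every build, so stale or vulnerable versions surface before release rather than after.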

In Sonatype's 2016 State of the Software Supply Chain report, analysis of 25,000 application scans reveals that 1 in 16 components (6.8%) used in applications contained at least one known security vulnerability. This finding demonstrates that defective components are making their way across the entire software supply chain -- from initial sourcing to use in finished goods.


Newer components make better software
Analysis of the scanned applications also revealed that the latest versions of components had the lowest percentage of known defects. Components under three years of age represented 38% of the parts used in the average application, and these components had security defect rates under 5%.

By comparison, components between five and seven years old had 2x the known security defect rate. The 2016 Verizon Data Breach Investigations Report confirms that the vast majority of successful exploits last year were of CVEs (Common Vulnerabilities and Exposures) published between 1998 and 2013. Combining the Verizon data with Sonatype's analysis further demonstrates the economic value of using newer, higher-quality components.


In summary, components more than two years old represent 62% of all components scanned and account for 77% of the risk. Better component selection not only improves the quality of the finished application, it also reduces the number of break-fixes and the amount of unplanned work needed to remediate defects.
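The parts-versus-risk split is simple to reproduce against your own component inventory. The following sketch uses made-up data; the Component fields and the numbers are assumptions for illustration, not Sonatype's dataset.

```python
# Illustrative sketch of the age-vs-risk breakdown described above, run over
# a hypothetical scan inventory. Each record carries the component's age in
# years and its count of known defects (both fields are assumptions here).
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    age_years: float
    known_defects: int

def age_risk_summary(components, cutoff_years=2.0):
    """Split components at an age cutoff; report share of parts and of risk."""
    old = [c for c in components if c.age_years > cutoff_years]
    total_defects = sum(c.known_defects for c in components) or 1
    part_share = len(old) / len(components)
    risk_share = sum(c.known_defects for c in old) / total_defects
    return part_share, risk_share

inventory = [
    Component("libA", 0.5, 0),
    Component("libB", 1.8, 1),
    Component("libC", 4.2, 3),
    Component("libD", 6.9, 5),
]
parts, risk = age_risk_summary(inventory)
print(f"Components older than 2 years: {parts:.0%} of parts, {risk:.0%} of risk")
```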

Older components die off
Research shows that new versions of open source components are released an average of 14x per year. The new versions deliver greater functionality, improved performance, and fewer known defects. Just as in traditional manufacturing, using the newest versions of any part typically results in a higher quality finished product.

In their 2016 report, Sonatype discovered that component versions seven years or older made up approximately 18% of the footprint of the 25,000 application scans. Among these older components, analysis showed that as many as 23% were already on the latest version -- meaning the open source projects behind them were inactive, dead... or perhaps just incredibly stable.
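One rough way to spot such components is to compare today's date against the latest release date published by the package repository. The sketch below assumes Maven Central's solrsearch API and its timestamp field (epoch milliseconds); treat both as assumptions to verify against the current API documentation.

```python
# Sketch: flag Maven components whose latest release is older than a cutoff,
# a rough proxy for "inactive, dead... or incredibly stable" projects.
# Assumes Maven Central's solrsearch API and its `timestamp` field (epoch ms).
import time
import requests

SEARCH = "https://search.maven.org/solrsearch/select"

def latest_release_age_years(group, artifact):
    params = {"q": f'g:"{group}" AND a:"{artifact}"', "rows": 1, "wt": "json"}
    resp = requests.get(SEARCH, params=params, timeout=30)
    docs = resp.json()["response"]["docs"]
    if not docs:
        return None  # unknown to Maven Central
    age_seconds = time.time() - docs[0]["timestamp"] / 1000.0
    return age_seconds / (365.25 * 24 * 3600)

age = latest_release_age_years("commons-collections", "commons-collections")
if age is not None and age > 7:
    print(f"Latest release is {age:.1f} years old -- project may be inactive.")
```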


No one wants to discover that components with known security vulnerabilities or other defects are in use in their applications. Unfortunately, when these defects are discovered in older components, the chances of remediating the issue by upgrading to a newer component version are greatly diminished. If no newer version exists, a team is left with only a few options:

  1. Keep the vulnerable component in the application

  2. Switch to a newer, comparable component from another open source project

  3. Make a software change to add a mitigating control, or

  4. Code the required functionality from scratch to replace the defective component.

None of these options comes without a significant cost.
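Whichever option a team picks, the decision is worth recording and enforcing rather than leaving implicit. As a minimal sketch, assuming a hypothetical accepted-risks.json exceptions file maintained under review, a CI gate might look like this:

```python
# Sketch of a simple CI gate: whichever option a team picks, the decision
# should be explicit. Here, flagged components must appear in a reviewed
# exceptions file or the build fails. File format and names are assumptions.
import json
import sys

def enforce_policy(flagged_coordinates, exceptions_path="accepted-risks.json"):
    with open(exceptions_path) as f:
        accepted = set(json.load(f))  # e.g. ["pkg:maven/group/artifact@1.2"]
    unapproved = [c for c in flagged_coordinates if c not in accepted]
    for coord in unapproved:
        print(f"POLICY VIOLATION: {coord} has known defects and no exception")
    return 1 if unapproved else 0

if __name__ == "__main__":
    # In practice, `flagged` would come from a scan like the one sketched above.
    flagged = ["pkg:maven/commons-collections/commons-collections@3.2.1"]
    sys.exit(enforce_policy(flagged))
```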
As discussed in Cisco's 2015 Midyear Security Report, "With open-source software in place in many enterprises, security professionals need to gain a deeper understanding of where and how open-source is used in their organizations, and whether their open-source packages or libraries are up to date. This means that, moving forward, software supply chain management becomes even more critical."

More information about software supply chain management practices and open source component quality can be found in the 2016 State of the Software Supply Chain Report.

More Stories By Derek Weeks

In 2015, Derek Weeks led the largest and most comprehensive analysis of software supply chain practices to date, covering 160,000 development organizations. He is a huge advocate of applying proven supply chain management principles to DevOps practices to improve efficiency, reduce costs, and sustain long-lasting competitive advantages.

As a 20+ year veteran of the software industry, he has advised leading businesses on IT performance improvement practices covering continuous delivery, business process management, systems and network operations, service management, capacity planning and storage management. As the VP and DevOps Advocate for Sonatype, he is passionate about changing the way people think about software supply chains and improving public safety through improved software integrity. Follow him at @weekstweets, find him at www.linkedin.com/in/derekeweeks, and read him at http://blog.sonatype.com/author/weeks/.
