Software Engineering and Code Quality Goals You Should Nail Before 2018

Responsible IT managers need to change the way they think about software development

When applications crash due to code quality issues, the common question is, "How could those experts have missed that?" The problem is that most people imagine software development as a room full of developers, keyboards clacking away with green, Matrix-esque code filling the screen as they try to perfect the newest ground-breaking feature. In reality, most of the work developers do is maintenance: fixing bugs found in production code to ensure a higher level of code quality.

Not only does this severely reduce the business value IT can bring to the table, it also dramatically increases the cost of developing and maintaining quality applications. And even though the IT industry has watched this cost rise for years, it has done little to stem the tide. The time has come to draw a line in the sand.

Capers Jones, VP and CTO of Namcook Analytics, recently released a collection of 20 goals software engineers should aim to reach by 2018, and we thought this was a great starting point for getting software engineering focused on fixing the problems that lie before it, rather than just spinning its wheels.

However, having ambitious goals is only part of the challenge. In our experience, most organizations aren't equipped to meet these goals because:

  • Functional testing isn't enough
  • Code analyzers are myopic
  • Productivity measurement is manual and laborious

Responsible IT managers need to change the way they think about software development and arm their teams with better tools and processes if they want to come close to achieving any of these goals. This starts with gaining better visibility into software risk, performance measurement, portfolio analysis, and quality improvement - and that visibility needs to be instantaneous, not quarterly. The problems are happening now, in development, and management wastes precious time and money by waiting until testing to work out the kinks.

Once management has a transparent view into the code quality of its application portfolio, it can shift its focus to achieving the software engineering goals outlined by Jones. They're great goals to aspire to, but let's make sure we're not putting the cart before the horse.

  1. Raise defect removal efficiency (DRE) from < 90.0% to > 99.5%. This is the most important goal for the industry. It cannot be achieved by testing alone; it requires pre-test inspections and static analysis. DRE is measured by comparing all bugs found during development to those reported by customers in the first 90 days (a sketch of this and the next two metrics follows this list).
  2. Lower software defect potentials from > 4.0 per function point to < 2.0 per function point. Defect potentials are the sum of bugs found in requirements, design, code, user documents, and bad fixes. Requirements and design bugs often outnumber code bugs. Achieving this goal requires effective defect prevention such as joint application design (JAD), quality function deployment (QFD), and certified reusable components, along with a complete software quality measurement program and better training in the common sources of defects found in requirements, design, and source code.
  3. Lower cost of quality (COQ) from > 45.0% of development to < 20.0% of development. Finding and fixing bugs has been the most expensive task in software for more than 50 years. A synergistic combination of defect prevention, pre-test inspections, and static analysis is needed to achieve this goal.
  4. Reduce average cyclomatic complexity from > 25.0 to < 10.0. Achieving this goal requires careful analysis of software structures, and of course it also requires measuring cyclomatic complexity for all modules (a minimal counter is sketched after this list).
  5. Raise test coverage from < 75.0% to > 98.5% for risks, paths, and requirements. Achieving this goal requires mathematical test case design methods such as design of experiments. It also requires measurement of test coverage.
  6. Eliminate error-prone modules in large systems. Bugs are not randomly distributed. Achieving this goal requires careful measurement of code defects during development and after release with tools that can trace bugs to specific modules. Some companies, such as IBM, have been doing this for many years. Error-prone modules (EPM) are usually less than 5% of total modules but receive more than 50% of total bugs (see the concentration sketch after this list). Prevention is the best solution. Existing error-prone modules in legacy applications may require surgical removal and replacement.
  7. Eliminate security flaws in all software applications. As cyber-crime becomes more common the need for better security is more urgent. Achieving this goal requires use of security inspections, security testing, and automated tools that seek out security flaws. For major systems containing valuable financial or confidential data, ethical hackers may also be needed.
  8. Reduce the odds of cyber-attacks from > 10.0% to < 0.1%. Achieving this goal requires a synergistic combination of better firewalls, continuous anti-virus checking with constant updates to virus signatures, and increasing the immunity of software itself by means of changes to basic architecture and permission strategies.
  9. Reduce bad-fix injections from > 7.0% to < 1.0%. Not many people know that about 7% of attempts to fix software bugs contain new bugs in the fixes themselves, commonly called "bad fixes." When cyclomatic complexity tops 50, the bad-fix injection rate can soar to 25% or more. Reducing bad-fix injection requires measuring and controlling cyclomatic complexity, using static analysis for all bug fixes, testing all bug fixes, and inspecting all significant fixes prior to integration.
  10. Reduce requirements creep from > 1.5% per calendar month to < 0.25% per calendar month. Requirements creep has been an endemic problem of the software industry for more than 50 years. While prototypes, agile embedded users, and joint application design (JAD) are useful, it is also technically possible to use automated requirements models to improve requirements completeness.
  11. Lower the risk of project failure or cancellation on large 10,000 function point projects from > 35.0% to < 5.0%. Cancellation of large systems due to poor quality and cost overruns is an endemic problem of the software industry, and totally unnecessary. A synergistic combination of effective defect prevention and pre-test inspections and static analysis can come close to eliminating this far too common problem.
  12. Reduce the odds of schedule delays from > 50.0% to < 5.0%. Since the main reasons for schedule delays are poor quality and excessive requirements creep, solving some of the earlier problems in this list will also solve the problem of schedule delays. Most projects seem on time until testing starts, when huge quantities of bugs begin to stretch out the test schedule to infinity. Defect prevention combined with pre-test static analysis can reduce or eliminate schedule delays.
  13. Reduce the odds of cost overruns from > 40.0% to < 3.0%. Software cost overruns and software schedule delays have similar root causes: poor quality control combined with excessive requirements creep. Better defect prevention combined with pre-test defect removal can help cure both of these endemic software problems.
  14. Reduce the odds of litigation on outsource contracts from > 5.0% to < 1.0%. The author of this paper has been an expert witness in 12 breach of contract cases. All of these cases seem to have similar root causes which include poor quality control, poor change control, and very poor status tracking. A synergistic combination of early sizing and risk analysis prior to contract signing plus effective defect prevention and pre-test defect removal can lower the odds of software breach of contract litigation.
  15. Lower maintenance and warranty repair costs by > 75.0% compared to 2014 values. Starting in about 2000, the number of U.S. maintenance programmers began to exceed the number of development programmers. IBM discovered that effective defect prevention and pre-test defect removal reduced delivered defects to such low levels that maintenance costs were reduced by at least 45% and sometimes as much as 75%.
  16. Improve the volume of certified reusable materials from < 15.0% to > 75.0%. Custom designs and manual coding are intrinsically error-prone and inefficient no matter what methodology is used. The best way of converting software engineering from a craft to a modern profession would be to construct applications from libraries of certified reusable material; i.e. reusable requirements, design, code, and test materials. Certification to near zero-defect levels is a precursor, so effective quality control is on the critical path to increasing the volumes of certified reusable materials.
  17. Improve average development productivity from < 8.0 function points per month to > 16.0 function points per month. Productivity rates vary with application size, complexity, team experience, methodologies, and several other factors. However, when all projects are viewed in aggregate, average productivity is below 8.0 function points per staff month. Doubling this rate requires a combination of better quality control and much higher volumes of certified reusable materials - probably 50% or more.
  18. Improve work hours per function point from > 16.5 to < 8.25. Goal 17 and this goal are essentially the same but use different metrics. However, there is one important difference: work hours are the same in every country. For example, a project in Sweden, where staff work 126 hours per month, will require the same number of work hours as the same project in China, where staff work 184 hours per month, but the Chinese project will need fewer calendar months than the Swedish one (the arithmetic is sketched after this list).
  19. Shorten average software development schedules by > 35.0% compared to 2014 averages. The most common complaint of software clients and corporate executives at the CIO and CFO level is that big software projects take too long. Surprisingly, it is not hard to make them shorter. A synergistic combination of better defect prevention, pre-test static analysis and inspections, and larger volumes of certified reusable materials can make significant reductions in schedule intervals.
  20. Raise maintenance assignment scopes from < 1,500 function points to > 5,000 function points. The metric "maintenance assignment scope" refers to the number of function points that one maintenance programmer can keep up and running during a calendar year. The range is from < 300 function points for buggy, complex software to > 5,000 function points for modern software released with effective quality control. The current average is about 1,500 function points. This is a key metric for predicting maintenance staffing for both individual projects and corporate portfolios (see the staffing sketch after this list). Achieving this goal requires effective defect prevention, effective pre-test defect removal, and effective testing using modern mathematically based test case design methods. It also requires low levels of cyclomatic complexity.
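
To ground goals 1 through 3, here is a minimal sketch of how the underlying metrics can be computed. The defect counts, function-point total, and cost figures below are hypothetical; plug in data from your own defect tracking and cost accounting.

    # Quality metrics behind goals 1-3; all inputs are hypothetical examples.

    def defect_removal_efficiency(found_in_development, found_by_customers_90d):
        """Goal 1 (DRE): share of all defects caught before release.
        The customer count covers the first 90 days after release."""
        return found_in_development / (found_in_development + found_by_customers_90d)

    def defect_potential(requirements, design, code, documents, bad_fixes,
                         function_points):
        """Goal 2: defects from all origins, per function point."""
        total = requirements + design + code + documents + bad_fixes
        return total / function_points

    def cost_of_quality(find_and_fix_cost, total_development_cost):
        """Goal 3 (COQ): share of development spend on finding and fixing bugs."""
        return find_and_fix_cost / total_development_cost

    print(f"{defect_removal_efficiency(950, 50):.1%}")            # 95.0%; goal is > 99.5%
    print(f"{defect_potential(300, 250, 350, 50, 50, 250):.1f}")  # 4.0; goal is < 2.0
    print(f"{cost_of_quality(450_000, 1_000_000):.0%}")           # 45%; goal is < 20%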
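
Goal 4 calls for measuring cyclomatic complexity on every module. Production-grade tools exist for this, but the core calculation is easy to sketch: start at 1 and add 1 for each decision point. The walker below is an illustrative approximation for Python source, not a full McCabe implementation.

    import ast
    import textwrap

    # Approximate McCabe complexity: 1 + number of decision points.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        complexity = 1
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, BRANCH_NODES):
                complexity += 1
            elif isinstance(node, ast.BoolOp):  # each 'and'/'or' adds a path
                complexity += len(node.values) - 1
        return complexity

    sample = textwrap.dedent("""
        def classify(n):
            if n < 0:
                return "negative"
            for d in range(2, n):
                if n % d == 0 and d > 1:
                    return "composite"
            return "no divisor found"
    """)
    print(cyclomatic_complexity(sample))  # 5 - well under the goal's ceiling of 10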
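
Goal 6 rests on Jones's observation that bugs cluster: a handful of error-prone modules absorb most of the defects. If your bug tracker records the module blamed for each defect, finding them takes a few lines. The export below is hypothetical.

    from collections import Counter

    # Hypothetical bug-tracker export: the module blamed for each defect.
    bug_reports = (["billing/rating.c"] * 120 + ["billing/tax.c"] * 85 +
                   ["ui/forms.c"] * 9 + ["db/schema.c"] * 6 + ["util/log.c"] * 5)
    total_modules = 40  # most modules in the system have zero reported defects

    counts = Counter(bug_reports)
    total_bugs = sum(counts.values())
    worst = counts.most_common(2)  # top 5% of a 40-module system
    share = sum(n for _, n in worst) / total_bugs
    print(f"2 of {total_modules} modules hold {share:.0%} of {total_bugs} bugs")
    # -> 2 of 40 modules hold 91% of 225 bugs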
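
The point goal 18 makes about metrics is easiest to see with the numbers from the goal itself: total work hours are country-neutral, but the effort calendar they translate into is not. A quick illustration (the project size is hypothetical):

    # Work hours per function point are country-neutral; months are not.
    project_size_fp = 1_000
    hours_per_fp = 16.5  # today's average; the goal is < 8.25

    total_hours = project_size_fp * hours_per_fp  # 16,500 hours in any country
    for country, hours_per_month in [("Sweden", 126), ("China", 184)]:
        print(f"{country}: {total_hours / hours_per_month:.0f} staff-months")
    # Sweden: 131 staff-months; China: 90 staff-months.
    # For a fixed team size, calendar months scale with staff-months,
    # so the Chinese project finishes in fewer calendar months.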
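
Goal 20 is directly actionable for staffing forecasts: divide portfolio size by the assignment scope to estimate how many maintenance programmers are needed. The portfolio size below is hypothetical.

    # Maintenance staffing = portfolio size / maintenance assignment scope.
    portfolio_fp = 500_000  # hypothetical corporate portfolio

    for label, scope_fp in [("current average", 1_500), ("2018 goal", 5_000)]:
        print(f"{label}: {portfolio_fp / scope_fp:.0f} maintenance programmers")
    # current average: 333 programmers; 2018 goal: 100 programmers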


More Stories By Lev Lesokhin

Lev Lesokhin is responsible for CAST's market development, strategy, thought leadership and product marketing worldwide. He has a passion for making customers successful, building the ecosystem, and advancing the state of the art in business technology. Lev comes to CAST from SAP, where he was Director, Global SME Marketing. Prior to SAP, Lev was at the Corporate Executive Board as one of the leaders of the Applications Executive Council, where he worked with the heads of applications organizations at Fortune 1000 companies to identify best management practices.
