Software Engineering and Code Quality Goals You Should Nail Before 2018

Responsible IT managers need to change the way they think about software development

When applications crash due to code quality issues, the common question is, "How could those experts have missed that?" The problem is, most people imagine software development as a room full of developers, keyboards clacking away with green, Matrix-esque code filling up the screen as they try to perfect the newest ground-breaking feature. In reality, however, most of the work developers do is maintenance: fixing the bugs found in production code to keep quality at an acceptable level.

Not only does this severely reduce the amount of business value IT can bring to the table, it also sharply increases the cost of developing and maintaining quality applications. And even though the IT industry has watched these costs climb for years, it has done little to stem the rising tide. The time has come to draw a line in the sand.

Capers Jones, VP and CTO of Namcook Analytics, recently released a collection of 20 goals software engineers should be aiming to reach by 2018, and we think it is a great starting point for getting software engineering focused on fixing the problems in front of it rather than just spinning its wheels.

However, having ambitious goals is only part of the challenge. In our experience, most organizations aren't equipped to meet these goals because:

  • Functional testing isn't enough
  • Code analyzers are myopic
  • Productivity measurement is manual and laborious

Responsible IT managers need to change the way they think about software development and arm their teams with better tools and processes if they want to come close to achieving any of these goals. This starts with gaining better visibility into software risk, performance measurement, portfolio analysis, and quality improvement - and it needs to be instantaneous, not quarterly. The problems are happening now, in development, and management wastes precious time and money by waiting until testing to put it all together and work out the kinks.

Once management has a transparent view of the code quality across its application portfolio, it can shift its focus to achieving the software engineering goals Jones outlines. They are great goals to aspire to, but let's make sure we're not putting the cart before the horse.

  1. Raise defect removal efficiency (DRE) from < 90.0% to > 99.5%. This is the most important goal for the industry. It cannot be achieved by testing alone but requires pre-test inspections and static analysis. DRE is measured by comparing all bugs found during development to those reported in the first 90 days by customers (a worked sketch of the DRE, defect potential, and COQ arithmetic appears after this list).
  2. Lower software defect potentials from > 4.0 per function point to < 2.0 per function point. Defect potentials are the sum of bugs found in requirements, design, code, user documents, and bad fixes. Requirements and design bugs often outnumber code bugs. Achieving this goal requires effective defect prevention such as joint application design (JAD), quality function deployment (QFD), certified reusable components, and others. It also requires a complete software quality measurement program. Achieving this goal also requires better training in common sources of defects found in requirements, design, and source code.
  3. Lower cost of quality (COQ) from > 45.0% of development to < 20.0% of development. Finding and fixing bugs has been the most expensive task in software for more than 50 years. A synergistic combination of defect prevention and pre-test inspections and static analysis are needed to achieve this goal.
  4. Reduce average cyclomatic complexity from > 25.0 to < 10.0. Achieving this goal requires careful analysis of software structures, and of course it also requires measuring cyclomatic complexity for all modules (a rough complexity-counting sketch appears after this list).
  5. Raise test coverage from < 75.0% to > 98.5% for risks, paths, and requirements. Achieving this goal requires using mathematical design methods for test case creation such as using design of experiments. It also requires measurement of test coverage.
  6. Eliminate error-prone modules in large systems. Bugs are not randomly distributed. Achieving this goal requires careful measurements of code defects during development and after release with tools that can trace bugs to specific modules. Some companies such as IBM have been doing this for many years. Error-prone modules (EPM) are usually less than 5% of total modules but receive more than 50% of total bugs. Prevention is the best solution. Existing error-prone modules in legacy applications may require surgical removal and replacement.
  7. Eliminate security flaws in all software applications. As cyber-crime becomes more common the need for better security is more urgent. Achieving this goal requires use of security inspections, security testing, and automated tools that seek out security flaws. For major systems containing valuable financial or confidential data, ethical hackers may also be needed.
  8. Reduce the odds of cyber-attacks from > 10.0% to < 0.1%. Achieving this goal requires a synergistic combination of better firewalls, continuous anti-virus checking with constant updates to virus signatures, and increasing the immunity of software itself by means of changes to basic architecture and permission strategies.
  9. Reduce bad-fix injections from > 7.0% to < 1.0%. Not many people know that about 7% of attempts to fix software bugs contain new bugs in the fixes themselves, commonly called "bad fixes." When cyclomatic complexity tops 50, the bad-fix injection rate can soar to 25% or more. Reducing bad-fix injection requires measuring and controlling cyclomatic complexity, using static analysis for all bug fixes, testing all bug fixes, and inspecting all significant fixes prior to integration.
  10. Reduce requirements creep from > 1.5% per calendar month to < 0.25% per calendar month. Requirements creep has been an endemic problem of the software industry for more than 50 years. While prototypes, agile embedded users, and joint application design (JAD) are useful, it is technically possible to also use automated requirements models to improve requirements completeness.
  11. Lower the risk of project failure or cancellation on large 10,000 function point projects from > 35.0% to < 5.0%. Cancellation of large systems due to poor quality and cost overruns is an endemic problem of the software industry, and totally unnecessary. A synergistic combination of effective defect prevention and pre-test inspections and static analysis can come close to eliminating this far too common problem.
  12. Reduce the odds of schedule delays from > 50.0% to < 5.0%. Since the main reasons for schedule delays are poor quality and excessive requirements creep, solving some of the earlier problems in this list will also solve the problem of schedule delays. Most projects seem on time until testing starts, when huge quantities of bugs begin to stretch out the test schedule to infinity. Defect prevention combined with pre-test static analysis can reduce or eliminate schedule delays.
  13. Reduce the odds of cost overruns from > 40.0% to < 3.0%. Software cost overruns and software schedule delays have similar root causes; i.e. poor quality control combined with excessive requirements creep. Better defect prevention combined with pre-test defect removal can help to cure both of these endemic software problems.
  14. Reduce the odds of litigation on outsource contracts from > 5.0% to < 1.0%. Jones, the author of the original paper, has been an expert witness in 12 breach of contract cases. All of these cases had similar root causes, including poor quality control, poor change control, and very poor status tracking. A synergistic combination of early sizing and risk analysis prior to contract signing, plus effective defect prevention and pre-test defect removal, can lower the odds of software breach of contract litigation.
  15. Lower maintenance and warranty repair costs by > 75.0% compared to 2014 values. Starting in about 2000, the number of U.S. maintenance programmers began to exceed the number of development programmers. IBM discovered that effective defect prevention and pre-test defect removal lowered delivered defects to such low levels that maintenance costs fell by at least 45% and sometimes as much as 75%.
  16. Improve the volume of certified reusable materials from < 15.0% to > 75.0%. Custom designs and manual coding are intrinsically error-prone and inefficient no matter what methodology is used. The best way of converting software engineering from a craft to a modern profession would be to construct applications from libraries of certified reusable material; i.e. reusable requirements, design, code, and test materials. Certification to near zero-defect levels is a precursor, so effective quality control is on the critical path to increasing the volumes of certified reusable materials.
  17. Improve average development productivity from < 8.0 function points per staff month to > 16.0 function points per staff month. Productivity rates vary based on application size, complexity, team experience, methodologies, and several other factors. However, when all projects are viewed in aggregate, average productivity is below 8.0 function points per staff month. Doubling this rate requires a combination of better quality control and much higher volumes of certified reusable materials; probably 50% or more.
  18. Improve work hours per function point from > 16.5 to < 8.25. Goal 17 and this goal are essentially the same but use different metrics. However, there is one important difference: work hours will be the same in every country. For example, a project in Sweden with 126 work hours per month will have the same number of work hours as a project in China with 184 work hours per month, but the Chinese project will need fewer calendar months than the Swedish one (see the schedule arithmetic after this list).
  19. Shorten average software development schedules by > 35.0% compared to 2014 averages. The most common complaint of software clients and corporate executives at the CIO and CFO level is that big software projects take too long. Surprisingly it is not hard to make them shorter. A synergistic combination of better defect prevention, pre-test static analysis and inspections, and larger volumes of certified reusable materials can make significant reductions in schedule intervals.
  20. Raise maintenance assignment scopes from < 1,500 function points to > 5,000 function points. The metric "maintenance assignment scope" refers to the number of function points that one maintenance programmer can keep up and running during a calendar year. The range is from < 300 function points for buggy and complex software to > 5,000 function points for modern software released with effective quality control. The current average is about 1,500 function points. This is a key metric for predicting maintenance staffing for both individual projects and corporate portfolios (a staffing sketch appears after this list). Achieving this goal requires effective defect prevention, effective pre-test defect removal, and effective testing using modern, mathematically based test case design methods. It also requires low levels of cyclomatic complexity.
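
To make goals 1 through 3 concrete, here is a minimal Python sketch of the arithmetic behind DRE, defect potential, and cost of quality. All counts and costs are hypothetical placeholders, not figures from Jones or CAST.

    # Minimal sketch of the arithmetic behind goals 1-3; the counts and costs
    # below are illustrative placeholders.

    def defect_removal_efficiency(found_before_release, found_in_first_90_days):
        # DRE = internal defects / (internal + field defects in the first 90 days)
        total = found_before_release + found_in_first_90_days
        return found_before_release / total if total else 1.0

    def defect_potential_per_fp(requirements, design, code, documents, bad_fixes, function_points):
        # Defect potential: bugs from every source, normalized per function point
        return (requirements + design + code + documents + bad_fixes) / function_points

    def cost_of_quality_pct(find_and_fix_cost, total_development_cost):
        # COQ expressed as a share of total development spend
        return 100.0 * find_and_fix_cost / total_development_cost

    # A hypothetical 1,000-function-point project:
    print(f"DRE: {defect_removal_efficiency(950, 50):.1%}")                        # 95.0% -- short of the 99.5% target
    print(f"Defect potential: {defect_potential_per_fp(1200, 900, 1400, 300, 200, 1000):.2f} per FP")  # 4.00
    print(f"COQ: {cost_of_quality_pct(450_000, 1_000_000):.1f}% of development")   # 45.0%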
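For goal 4, a rough way to get per-function cyclomatic complexity without external tooling is to count decision points with Python's standard ast module and add one. Dedicated analyzers are more precise; this sketch, including the sample function, is purely illustrative.

    # Approximate cyclomatic complexity: decision points (branches and boolean
    # operators) per function, plus one.
    import ast

    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source):
        tree = ast.parse(source)
        results = {}
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                decisions = sum(isinstance(child, DECISION_NODES) for child in ast.walk(node))
                results[node.name] = decisions + 1
        return results

    sample = """
    def ship_order(order, inventory):
        if not order.items:
            return None
        for item in order.items:
            if item.sku not in inventory or inventory[item.sku] <= 0:
                raise ValueError(item.sku)
        return order
    """
    import textwrap
    print(cyclomatic_complexity(textwrap.dedent(sample)))  # {'ship_order': 5}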
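The distinction Jones draws in goals 17 and 18 is easier to see with numbers. Assuming a hypothetical 1,000-function-point project at the 8.25 work-hour target, the effort is identical everywhere, while the calendar duration depends on local work hours per month (the 126 and 184 figures come from the Sweden/China comparison in goal 18).

    # Back-of-the-envelope arithmetic for goals 17 and 18.
    function_points = 1000
    hours_per_fp = 8.25                                # the 2018 target in goal 18
    total_work_hours = function_points * hours_per_fp  # 8,250 hours in either country

    for country, hours_per_month in [("Sweden", 126), ("China", 184)]:
        staff_months = total_work_hours / hours_per_month
        print(f"{country}: about {staff_months:.0f} staff months")
    # Sweden: about 65 staff months, China: about 45 -- same work, shorter calendar.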
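Finally, the staffing implication of goal 20 falls out of simple division: portfolio size over maintenance assignment scope gives a rough headcount. The 150,000-function-point portfolio below is an assumed example.

    # Staffing implication of goal 20; the portfolio size is hypothetical.
    portfolio_function_points = 150_000

    for label, assignment_scope in [("current average", 1_500), ("2018 target", 5_000)]:
        maintainers = portfolio_function_points / assignment_scope
        print(f"{label}: about {maintainers:.0f} maintenance programmers")
    # current average: about 100 programmers; 2018 target: about 30.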


More Stories By Lev Lesokhin

Lev Lesokhin is responsible for CAST's market development, strategy, thought leadership and product marketing worldwide. He has a passion for making customers successful, building the ecosystem, and advancing the state of the art in business technology. Lev comes to CAST from SAP, where he was Director, Global SME Marketing. Prior to SAP, Lev was at the Corporate Executive Board as one of the leaders of the Applications Executive Council, where he worked with the heads of applications organizations at Fortune 1000 companies to identify best management practices.
