SOA & Web Services - Is It Done Yet?

Three steps to checking in your code with confidence

It's difficult to determine how much time to spend reviewing and testing your code before checking it in to the team's shared code base. On the one hand, you want to complete and check in code as rapidly as possible so you can meet deadlines and move on to developing new code or getting started on other projects. After all, you went into software development to develop, not to test.

Yet, if you move too fast, you might end up checking in code that causes bugs, either immediately upon integration or later on when the code is reused, extended, or maintained. In that case, any time that you originally saved by checking in the code prematurely is significantly outweighed by the time you need to spend diagnosing the problem, correcting the responsible code (and possibly code that has been layered upon that code), and verifying the correction, not to mention the hassle of having to interrupt whatever you're currently working on and return to something you previously wrote off as "done" and forgot about.
This article explains three steps you can take to significantly reduce the risk that code will come back to haunt you after you check it in:

  1. As you write code, comply with applicable development rules to prevent reliability, security, performance, and maintainability problems in the code.
  2. Immediately after each piece of code is completed or modified, use unit-level reliability testing to ensure that it's reliable and secure.
  3. Immediately after each piece of code is completed or modified, use unit-level functional testing to verify that it's implemented correctly and functions properly.
All three steps can be automated with commercial and/or open source tools so you can gain their benefit without disrupting your development efforts or adding overhead to your already hectic schedule.

Step 1: Comply with development rules to improve code reliability, security, performance, and maintainability
The first step in determining if your code is done is to ensure that it complies with applicable development rules as you write it. Many developers think that complying with development rules involves just beautifying code. However, a wealth of development rules is actually available for every modern development language, and these rules have been proven to improve code robustness, security, performance, and maintainability. In addition, each team's experienced developers have typically developed their own (often informal) rules that codify the application-specific lessons they've learned over the course of the project.

Even if your team is not already following a set of formal or informal development rules, we strongly recommend that you make rule compliance a team effort. If you are the only developer on the team following the rules, your code will undoubtedly improve. But if all your team members aren't on the same page, dangerous code could still enter the code base, and your own efforts might conflict with (or be overwritten by) those of your teammates. Having a development team inconsistently apply software development standards and best practices as it implements code is like having a team of electricians wire a new building's electrical system with multiple voltages and incompatible outlets. In both cases, the team members' work will interact to form a single system. Consequently, any hazards, problems, or even quirks introduced by one "free spirit" team member who ignores the applicable guidelines and best practices can make the entire system unsafe, unreliable, or difficult to maintain and upgrade.

Why bother?
The key benefits of complying with applicable development rules are:

It cuts development time and cost by reducing the number of problems that need to be identified, diagnosed, and corrected later in the process

Complying with meaningful development rules prevents serious functionality, security, and performance problems. Each defect that is prevented by complying with development rules means one less defect that the team needs to identify, diagnose, correct, and recheck later in the development process (when it's exponentially more time-consuming, difficult, and costly to do so). Or, if testing does not expose every defect, each prevented defect could mean one less defect that will impact the released/deployed application. On average, one defect is introduced for each ten lines of code (A. Ricadela, "The State of Software", InformationWeek, May 2001) and over half of a project's defects can be prevented by complying with development rules (R.G. Dromey, "Software Quality - Prevention Versus Cure", Software Quality Institute, April 2003). Do the math for a typical program with millions of lines of code, and it's clear that preventing errors with development rules can save a significant amount of resources. And considering that it takes only 3 to 4 defects per 1,000 lines of code to affect the application's reliability (A. Ricadela, "The State of Software", InformationWeek, May 2001), it's clear that ignoring the defects is not an option.
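
To make that math concrete: at one defect per ten lines, a 1,000,000-line application carries roughly 100,000 defects; if complying with development rules prevents over half of them, that is on the order of 50,000 defects the team never has to identify, diagnose, correct, and recheck later in the process.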

It makes code easier to understand, maintain, and reuse
Different developers naturally write code in different styles. Code with stylistic quirks and undocumented assumptions probably makes perfect sense to the developer as he's writing it, but may confuse other developers who later modify or reuse that code, or even the same developer once his original intentions are no longer fresh in his mind. When all team members write code in a standard manner, it's easier for each developer to read and understand code. This not only prevents the introduction of errors during modifications and reuse, but also helps developers work faster and reduces the learning curve for new team members.

How do I do it?
Decide which development rules to comply with
First, as a team, review industry-standard development rules for the language and technologies you are working with and decide which ones are most applicable to your project and will prevent the most common or serious defects. The rules implemented by automated static analysis tools offer a convenient place to start. If needed, you can supplement these rules with the ones listed in books and articles by experts in the language or technology you are working with.

Next, consider practices and conventions that are unique to your organization, team, and project. Do your most experienced developers keep an informal list of lessons learned from past experiences? Have you encountered a specific bug that can be abstracted into a rule so that it never occurs in your code stream again? Are there explicit formatting or naming conventions that your team is expected to comply with?
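
For example (a hypothetical Java sketch of our own, not from the article): a team that was once bitten by string comparison with == might abstract that bug into the rule "always compare string contents with equals()".

public class LoginRole {

    // Violation of the hypothetical rule: == compares object identity,
    // so two equal strings built at runtime can still fail this check.
    public boolean isAdminBroken(String role) {
        return role == "ADMIN";
    }

    // Compliant: equals() compares character content, and putting the
    // constant first avoids a NullPointerException when role is null.
    public boolean isAdmin(String role) {
        return "ADMIN".equals(role);
    }
}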

Configure all team tools to check the designated rules consistently
To fully reap the potential benefits of complying with development rules, the entire development team must check the designated set of rules consistently. Consistency is required because even a slight variation in tool settings among team members could allow non-compliant code to enter the team's shared code base. Just one overlooked rule violation could cause serious problems. For instance, assume that your team member is not checking the same rules as everyone else, and consequently checks in code that does not comply with rules for closing external resources. If your application keeps temporary files open until it exits, normal testing, which can last a few minutes or run overnight, won't detect any problems. However, when the deployed application runs for a month, you can end up with enough temporary files to overflow your file system, and then your application will crash.
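
As a hypothetical Java sketch of that temporary-file scenario (the class is ours; the try-with-resources form shown requires Java 7 or later), a rule such as "always close external resources" flags the first method:

import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class TempFileLogger {

    // Rule violation: if write() throws, close() is never reached, and the
    // file handle stays open until the process exits.
    public void logBroken(String path, String message) throws IOException {
        Writer out = new FileWriter(path, true);
        out.write(message);
        out.close();
    }

    // Compliant: try-with-resources closes the writer on every path, so a
    // long-running process cannot slowly accumulate open temporary files.
    public void log(String path, String message) throws IOException {
        try (Writer out = new FileWriter(path, true)) {
            out.write(message);
        }
    }
}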

Check and correct new/modified code as it's written.
Study after study has shown that the earlier a problem is found, the faster, easier, and cheaper it is to fix. That's why the best time to check whether code complies with development rules is as soon as it's written or updated. If you check whether each piece of code complies with the designated development rules immediately, while the code is still fresh in your mind, you can then quickly resolve any problems found and add it to source control with increased confidence.

Step 2: Use reliability testing to verify that each piece of code is reliable and secure
The next step toward reliable and secure code is to perform unit-level reliability testing (also known as white-box testing or construction testing). This involves exercising each function/method as thoroughly as possible and checking for unexpected exceptions.

Why bother?
If your unit testing only checks whether the unit functions as expected, you can't predict what could happen when untested paths are taken by well-meaning users exercising the application in unanticipated ways, or by attackers trying to gain control of your application or access to privileged data. It's hardly practical to try to identify and verify every possible user path and input. However, it's critical to identify the possible paths and inputs that could cause unexpected exceptions because:

Unexpected exceptions can cause application crashes and other serious runtime problems
If unexpected exceptions surface in the field, they could cause instability, unexpected results, or crashes. In fact, Parasoft has worked with many development teams who had trouble with applications crashing for unknown reasons. Once these teams started identifying and correcting the unexpected exceptions that they previously overlooked, their applications stopped crashing.

Unexpected exceptions can open the door to security attacks
Many developers don't realize that unexpected exceptions can also create significant security vulnerabilities. For instance, an exception in login code could allow an attacker to completely bypass the login procedure.
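
For example, in this hypothetical Java sketch (the LoginService class and its helpers are purely illustrative), an unexpected exception becomes a bypass because the error path fails open instead of failing closed:

public class LoginService {

    // Dangerous: any unexpected exception in the credential check grants
    // access, so an input crafted to trigger one bypasses the login.
    public boolean authenticateBroken(String user, String password) {
        try {
            return passwordMatches(user, password);
        } catch (RuntimeException e) {
            return true; // fails open
        }
    }

    // Safer: an unexpected failure denies access (fails closed) and can be
    // logged and investigated.
    public boolean authenticate(String user, String password) {
        try {
            return passwordMatches(user, password);
        } catch (RuntimeException e) {
            return false;
        }
    }

    private boolean passwordMatches(String user, String password) {
        // Stand-in for a real credential lookup; may throw on malformed input.
        return password.equals(lookupStoredPassword(user.trim()));
    }

    private String lookupStoredPassword(String user) {
        return "secret"; // placeholder for a datastore lookup
    }
}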

How do I do it?
Design, implement, and execute reliability test cases.
To identify potential uncaught runtime exceptions in newly added or modified code, test each class's methods with a large number and range of potential inputs, then check whether any of those inputs cause an uncaught runtime exception to be thrown.
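
A minimal hand-written JUnit 4 sketch of the idea follows; the TemperatureParser unit under test and its input list are our own illustrations, and in practice a test generation tool produces far more inputs than this automatically.

import org.junit.Test;

public class TemperatureParserReliabilityTest {

    /** Hypothetical unit under test: parses strings such as "21.5C". */
    static class TemperatureParser {
        static double parseCelsius(String text) {
            if (text == null || !text.endsWith("C")) {
                throw new IllegalArgumentException("not a Celsius value: " + text);
            }
            return Double.parseDouble(text.substring(0, text.length() - 1));
        }
    }

    private static final String[] INPUTS = {
        null, "", "C", "21.5C", "-40C", "21,5C", "not a temperature", "1e309C"
    };

    // Exercise the method with boundary and malformed inputs and fail only on
    // exceptions that are not part of the documented contract. If the parser
    // were missing its null check, this test would report the resulting
    // NullPointerException as an unexpected exception to fix or document.
    @Test
    public void parseCelsiusThrowsNoUnexpectedExceptions() {
        for (String input : INPUTS) {
            try {
                TemperatureParser.parseCelsius(input);
            } catch (IllegalArgumentException expected) {
                // Documented rejection of malformed input is acceptable.
            } catch (RuntimeException unexpected) {
                throw new AssertionError("Unexpected "
                        + unexpected.getClass().getSimpleName()
                        + " for input: " + input, unexpected);
            }
        }
    }
}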

Manually developing the required number, scope, and variety of unit test cases that would expose exceptions is impractical. Achieving the coverage required for effective white-box testing means executing a significant number of paths. For example, a typical 10,000-line program has approximately 100 million possible paths; manually generating input that would exercise all of them is practically impossible.

When trying to expose exceptions, a tool that automatically generates test cases is essential. If test design and generation is automated, the only user intervention required is to review the findings and address the reported exceptions.

Review and address all reported exceptions.
After the first test run completes, review the coverage. If any class received less than 75% coverage, we recommend that you customize the automated test case generation (for instance, by modifying automatically generated stubs, adding realistic objects, or adjusting the test generation settings) so that it can cover a larger portion of that class during the next test run.

After you rerun the tests, review all exceptions they expose, then address them before proceeding. Each method should be able to handle any valid input without throwing an exception. If the code should not throw an exception for a given input but does, correct the code. If the exception is expected, or if the test inputs are not expected or permissible, document those requirements in the code and tell the tool that the exception is expected. This prevents most unit testing tools from reporting the same problems again in future test runs. Moreover, when other developers who extend or reuse the code see documentation explaining that the exception is expected behavior, they will be less likely to misunderstand the code and introduce bugs.
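
A small JUnit 4 illustration of recording such an expectation (the Account class is a hypothetical unit under test):

import org.junit.Test;

public class AccountTest {

    /** Hypothetical unit under test. */
    static class Account {
        private final int balance;
        Account(int balance) { this.balance = balance; }
        // Documented contract: negative amounts are not permissible input
        // and are rejected with an IllegalArgumentException by design.
        int withdraw(int amount) {
            if (amount < 0) {
                throw new IllegalArgumentException("amount must be non-negative");
            }
            return balance - amount;
        }
    }

    // Capturing the expectation in the test (and in the comment on withdraw)
    // tells the tool and future maintainers that this exception is intended
    // behavior, not a defect to report again on the next run.
    @Test(expected = IllegalArgumentException.class)
    public void withdrawRejectsNegativeAmounts() {
        new Account(100).withdraw(-5);
    }
}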

Step 3: Use functional testing to verify that each piece of code is implemented correctly and operates properly
Next, extend your reliability test cases to verify each unit's functionality. The goal of unit-level functional testing is to verify that each unit is implemented according to specification before that unit is added to the team's shared code base.

Why bother?
The key benefit of verifying functionality at the unit level is that it allows you to identify and correct functionality problems as soon as they are introduced, reducing the number of problems that need to be identified, diagnosed, and corrected later in the process. Finding and fixing a unit-level functional error immediately after coding is easier, faster, and 10 to 100 times less costly than finding and fixing that same error later in the development process. When you perform functional testing at the unit level, you can quickly identify simple functionality problems, such as a prefix "++" substituted for a postfix "++", because you are verifying the unit directly. If the same problem entered the shared code base and became part of a multi-million line application, it might surface only as strange behavior during application testing, and finding its cause would be like searching for a needle in a haystack. Even worse, the problem might not be exposed during application testing at all and could remain in the released/deployed application.
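
As a hypothetical Java illustration of that prefix/postfix slip (the Counter class is ours):

public class Counter {

    private int hits;

    // Defective version: the postfix form returns the value before the
    // increment, so the very first call reports 0 instead of 1.
    public int recordHitBroken() {
        return hits++;
    }

    // Intended behavior: increment first, then return the new count.
    public int recordHit() {
        return ++hits;
    }
}

A one-line unit test asserting that the first call returns 1 exposes the defective version immediately; once the same off-by-one is buried in a large application, it may show up only as occasional strange behavior far from its cause.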

How do I do it?
Add and execute more functional test cases as needed to fully verify the specification
After you've worked through the exceptions reported by the automatically generated test cases, check whether the code you've written actually functions as desired. Functional unit tests are meant to do just that: without regard to internal function behavior, they specify function inputs and check whether the output is as expected. Such tests should be created based on the class API specification or the class use cases.

Functional tests can be written using whatever version of the xUnit framework is appropriate for the language you are using (JUnit for Java, NUnit for .NET languages, CppUnit for C++, etc.).
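
A short JUnit 4 sketch of such a specification-driven test (the PriceCalculator class and its assumed specification are illustrative, not from the article):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    /** Hypothetical unit under test. Assumed spec: applyDiscount() subtracts
     *  the given percentage from the net price. */
    static class PriceCalculator {
        double applyDiscount(double price, int percent) {
            return price * (100 - percent) / 100.0;
        }
    }

    // The test cases come straight from the specification and use cases,
    // not from the internal structure of the method.
    @Test
    public void tenPercentOffOneHundredIsNinety() {
        assertEquals(90.00, new PriceCalculator().applyDiscount(100.00, 10), 0.001);
    }

    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(59.99, new PriceCalculator().applyDiscount(59.99, 0), 0.001);
    }
}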

How do you know when you've completed sufficient functional testing on a piece of code? When 1) you have developed a functional test suite that is rich enough to verify that the specified functionality is implemented correctly and 2) the code passes that test suite with no failures.

Can I Check It In Now?
Yes! Of course, there's no guarantee that if you follow these three steps before you check in your code, you will never see an annoying bug report again. But you will notice that you eventually spend less time finding and fixing bugs, which means fewer interruptions, less "crunch time" at the end of the project, and more time for more challenging and interesting tasks such as developing and implementing new technologies. The key to saving time in the long run is to automate these steps as much as possible so that flushing errors out of your code before check-in can become an essential yet unobtrusive part of your normal day-to-day work.

More Stories By Adam Kolawa

Adam Kolawa is the co-founder and CEO of Parasoft, a leading provider of solutions and services that deliver quality as a continuous process throughout the SDLC. In 1983, he came to the United States from Poland to pursue his PhD. In 1987, he and a group of fellow graduate students founded Parasoft to create value-added products that could significantly improve the software development process. Adam's years of experience with various software development processes have resulted in a unique insight into the high-tech industry and an uncanny ability to successfully identify technology trends. As a result, he has orchestrated the development of numerous successful commercial software products to meet growing industry needs to improve software quality - often before the trends have been widely accepted. Adam has been granted 10 patents for the technologies behind these innovative products.

Kolawa, co-author of Bulletproofing Web Applications (Hungry Minds 2001), has contributed to and written over 100 commentary pieces and technical articles for publications including The Wall Street Journal, Java Developer's Journal, SOA World Magazine, and AJAXWorld Magazine; he has also authored numerous scientific papers on physics and parallel processing. His recent media engagements include CNN, CNBC, BBC, and NPR. Additionally, he has presented on software quality, trends, and development issues at various industry conferences. Kolawa holds a Ph.D. in theoretical physics from the California Institute of Technology. In 2001, Kolawa was awarded the Los Angeles Ernst & Young's Entrepreneur of the Year Award in the software category.


DevOpsSummit New York 2018, colocated with CloudEXPO | DXWorldEXPO New York 2018 will be held November 11-13, 2018, in New York City. Digital Transformation (DX) is a major focus with the introduction of DXWorldEXPO within the program. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation if they plan to survive over the long term.