

You have to plan your Continuous Delivery pipeline with quality in mind from the outset

Why Continuous Delivery is Nothing Without Real-Time Test Analysis

By Victor Clerc

TL;DR: There is no point shipping fast if everything is broken!

Pushing frequent releases of high quality software to customers is beneficial for everyone. There’s a great deal of business value in delivering software to the market faster than your competitors. Once realized, ideas can be rejected or pursued quickly. Customers can see their feedback shaping the product. Flaws, defects, and omissions can be addressed without delay.

But, setting up a Continuous Delivery pipeline is about more than speed. How do you ensure that things don’t start breaking all over the place?

It’s an established fact that the later we encounter a defect or a problem in software development, the more expensive and difficult it is to fix. Accurately measuring quality and building it into the pipeline may seem like a lot of upfront effort, but it will pay dividends quickly.

Step 1: Build solid foundations

You have to plan your Continuous Delivery pipeline with quality in mind from the outset. The only way to effectively do that is to design tests before development really begins, to continually collect metrics, and to build a test automation architecture integrated into your Continuous Delivery pipeline. The test automation architecture defines the setup of test tools and tests throughout the pipeline and should support flexibility and adaptability to help you meet your business objectives.

This test automation architecture should make it easy to select just the right tests as your software flows through the pipeline and gets closer to release: from unit tests and code-level security checks, through functional and component testing, to integration and performance testing. The further the software progresses along the pipeline, the greater the number of dependencies on other systems, data, and infrastructure components, and the more difficult it is to harmonize these variables. Selecting a sensible subset of tests to run at each point in the pipeline therefore becomes increasingly important to keep the focus on the area of interest.

Step 2: Fail fast

If we wait until functionality has been developed to write automated functional tests, we are creating a bottleneck. Designing tests during or at the end of a sprint is delaying feedback that we need right now. Why not apply the same test-driven development mindset we employ for unit tests to functionality? So, design all tests as soon as possible: refining the backlog for subsequent sprints should include discussing and designing tests.

The development team is greatly helped when a set of examples of expected system behavior is available for each piece of functionality. With some syntactic guidelines and readily available tools, these examples can serve as automated acceptance tests. Starting with failing tests and working to eliminate them maintains a laser focus on what developers should be doing. For this to work well, all stakeholders must get together at the beginning to design the acceptance tests.
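As a sketch of this example-driven approach, the behavior examples below are written down as data before the feature exists. The discount rules and the `apply_discount` function are hypothetical; the stub keeps every example failing, so the missing work stays visible until a developer implements it:

```python
# Behavior examples agreed with stakeholders, captured before implementation.
ACCEPTANCE_EXAMPLES = [
    # (order total, loyalty member, expected price after discount)
    (100.0, False, 100.0),   # no discount below the threshold
    (250.0, False, 237.5),   # 5% off orders over 200
    (250.0, True,  225.0),   # loyalty members get 10% instead
]

def apply_discount(total, loyalty_member):
    # Stub: replaced once the feature is built. Until then every
    # example fails, which is exactly the signal we want.
    raise NotImplementedError

def run_acceptance_examples():
    """Run every example and collect the failures."""
    failures = []
    for total, loyal, expected in ACCEPTANCE_EXAMPLES:
        try:
            actual = apply_discount(total, loyal)
        except NotImplementedError:
            failures.append((total, loyal, "not implemented"))
            continue
        if abs(actual - expected) > 1e-9:
            failures.append((total, loyal, actual))
    return failures
```

Tools in the Gherkin/specification-by-example family give the same workflow a shared, business-readable syntax.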

The alternative is a drip-feed of failure as new stakeholders climb on board with fresh concerns as the software gets closer to release. The later a failure occurs, the more disruptive it is, and the more difficult it is to do anything to alleviate the relevant stakeholder’s concerns. Pull everyone in at the start, design the tests together, run the most important tests first, and start collecting the feedback you need about software quality from day one.

Step 3: Automate and optimize the process

A concrete set of acceptance criteria in the form of automated tests gives you a clear signal about the health of your software. The easier it is to run these tests, the better, which is why you should automate where possible. There will be some intangibles that will prove tough to automate, but you may need them for a true measure of quality. Not all acceptance tests can or should be automated, but automating functional tests makes a great deal of sense.

From time to time it will be necessary to add new example test cases to cover new functionality. It will also be necessary to prune the regression set, so that it does not keep growing and accumulating superfluous tests.

You can’t test everything all the time. Be pragmatic when you define which tests to run in any given situation. You need flexibility to select the right tests for inclusion and exclusion according to a variety of quality attributes.
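One pragmatic selection policy can be sketched as follows. The quality attributes (feature, runtime, priority), the test names, and the budget-based rules are all illustrative assumptions: keep only tests at or above a priority cutoff, keep the best test per feature, and stop when the time budget is spent:

```python
# Candidate regression tests with illustrative quality attributes.
REGRESSION_SET = [
    {"name": "test_login_happy_path", "feature": "login",    "seconds": 2,  "priority": 1},
    {"name": "test_login_legacy_ui",  "feature": "login",    "seconds": 30, "priority": 3},
    {"name": "test_checkout_flow",    "feature": "checkout", "seconds": 20, "priority": 1},
    {"name": "test_report_export",    "feature": "reports",  "seconds": 60, "priority": 2},
]

def select_tests(budget_seconds, max_priority=2):
    """Drop low-priority tests, keep one test per feature (the highest
    priority, i.e. lowest number), then fill the time budget in
    priority order."""
    best_per_feature = {}
    for t in REGRESSION_SET:
        if t["priority"] > max_priority:
            continue  # superfluous for this run
        current = best_per_feature.get(t["feature"])
        if current is None or t["priority"] < current["priority"]:
            best_per_feature[t["feature"]] = t
    selected, spent = [], 0
    for t in sorted(best_per_feature.values(), key=lambda t: t["priority"]):
        if spent + t["seconds"] <= budget_seconds:
            selected.append(t["name"])
            spent += t["seconds"]
    return selected
```

The point is not this particular policy but that selection criteria are explicit data you can review and tune, rather than folklore about which tests are "worth" running.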

Step 4: Maintain a real-time quality overview

If a test is going to fail, it’s better to know about it now. All the stakeholders should know about failures as soon as possible. If the software is passing your tests and meeting the acceptance criteria then you can feel confident about the overall quality and proceed with the release.

Keeping things specific with examples and creating these acceptance tests as early as possible will increase the efficiency of your software development process. Furthermore, monitoring that identifies the likely causes of failing tests helps you correct defects as soon as possible. Trend analysis also has predictive value: it lets you take measures before quality thresholds, such as a maximum duration for test execution, are breached.

Automated continuous qualification of all relevant test results (across all relevant test tools) is what enables you to provide real-time quality feedback. With this real-time quality feedback built into your Continuous Delivery pipeline, you can ensure that your standards never slip.
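A minimal sketch of such continuous qualification, assuming hypothetical tool names, result fields, and a duration threshold, might normalize per-tool results into a single verdict and extrapolate the duration trend to warn before the threshold is breached:

```python
MAX_SUITE_DURATION = 600  # seconds: an example quality threshold

def qualify(results):
    """Aggregate results from several test tools into one verdict.
    Each entry looks like:
    {"tool": "pytest", "passed": 120, "failed": 0, "duration": 95.0}"""
    failed = sum(r["failed"] for r in results)
    duration = sum(r["duration"] for r in results)
    verdict = "pass" if failed == 0 and duration <= MAX_SUITE_DURATION else "fail"
    return {"verdict": verdict, "failed": failed, "duration": duration}

def duration_trend_warning(history, threshold=MAX_SUITE_DURATION, horizon=3):
    """Naive linear extrapolation over recent runs: warn if suite
    duration is on course to breach the threshold within `horizon` runs."""
    if len(history) < 2:
        return False
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * horizon > threshold
```

A real implementation would parse each tool's native report format (JUnit XML is a common denominator) into this normalized shape before qualifying.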

Join the XL Test beta programme!

We have built XL Test to allow you to achieve these steps. If you’re interested and would like to join our beta programme, please drop us an email. Start making sense of all your test data and get real-time quality feedback built into your Continuous Delivery pipeline today!

The post Why Continuous Delivery is Nothing Without Real-Time Test Analysis appeared first on XebiaLabs.


