Continuous Delivery and Release Automation for Microservices By @Anders_Wallgren | @DevOpsSummit #Microservices

In microservices, the business functionality is decomposed into a set of independent, self-contained services

As software organizations continue to invest in achieving Continuous Delivery (CD) of their applications, we see increased interest in microservices architectures, which, on the face of it, seem like a natural fit for enabling CD.

In microservices (or their predecessor, "SOA"), business functionality is decomposed into a set of independent, self-contained services that communicate with each other via APIs. Each service has its own release cycle and is developed and deployed independently, often using the languages, technology stacks and tools that best fit the job.
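
As a rough illustration of that self-containment, here is what one such service might look like using only Python's standard library. The service name, port, route and payload are all invented for the sketch, not taken from any real system:

```python
# A minimal, hypothetical "cart" service. Everything here -- name,
# port, route, payload -- is invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CartHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service owns its data and exposes it only via its API.
        if self.path == "/cart/42":
            body = json.dumps({"cart_id": 42, "items": ["sku-1", "sku-2"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Built, versioned and deployed independently of every other service.
    HTTPServer(("localhost", 8080), CartHandler).serve_forever()
```

Another service (or the monolith) would consume this only over the network, never by importing its code.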

By splitting the monolithic application into smaller services and decoupling interdependencies (between apps, dev teams, technologies, environments and tooling), microservices allow for more flexibility and agility as you scale your organization's productivity.

While things may move faster on the Dev side, microservices do introduce architectural complexities and management overhead, particularly on the testing and Ops side. What was once one application, with self-contained processes, is now a complex set of orchestrated services that communicate over the network. This affects your automated testing, monitoring, governance and compliance across all the disparate services, and more.

A key prerequisite for achieving Continuous Delivery is automating your entire pipeline: from code check-in, through build, test and deployment across the different environments, all the way to the production release. To support better manageability of this complex process, it's important to leverage a platform that can serve as a layer above any infrastructure or specific tool/technology and enable centralized management and orchestration of your toolchain, environments and applications. I'd like to focus on some of the implications microservices have for your pipeline(s), and some best practices for enabling CD for your microservices-driven application.
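
In the abstract, such a pipeline is just an ordered, fail-fast sequence of automated stages. A minimal sketch, with placeholder build and deploy commands (a real platform layers orchestration, visibility and tool integrations on top of this):

```python
# Sketch of a pipeline as a fail-fast sequence of stages from check-in
# to release. The stage commands are placeholders, not real tooling.
import subprocess

PIPELINE = [
    ("build",          ["make", "build"]),
    ("test",           ["make", "test"]),
    ("deploy-staging", ["./deploy.sh", "staging"]),
    ("deploy-prod",    ["./deploy.sh", "production"]),
]

def run_pipeline():
    for stage, cmd in PIPELINE:
        print(f"--- {stage} ---")
        if subprocess.run(cmd).returncode != 0:
            # Fail fast: a broken stage stops the pipeline, and nothing
            # downstream gets promoted.
            raise SystemExit(f"stage '{stage}' failed; halting pipeline")

if __name__ == "__main__":
    run_pipeline()
```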

The Challenges of Managing Delivery Pipelines for Microservices
The "Mono/Micro" Hybrid State
It's very hard to design for microservices from scratch. When you're starting out with microservices (a tip: only look into microservices if you already have solid CI, test and deployment automation), the recommendation is to begin with a monolithic application, then gradually carve out its different functions into separate services. Keep in mind that you'll likely need to support this Mono/Micro hybrid state for a while. This is particularly true if you're rearchitecting a legacy application or are working for an organization with established processes and requirements for ensuring security and regulatory compliance.
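
One common way to support that hybrid state is a thin routing layer that sends already-extracted functions to their new services while everything else still hits the monolith. A sketch, with hypothetical paths and hosts:

```python
# Sketch of the Mono/Micro hybrid: route carved-out functions to new
# services; everything else falls through to the monolith. All paths
# and hosts are hypothetical.
MICROSERVICE_ROUTES = {
    "/cart":   "http://cart-service:8080",    # already extracted
    "/search": "http://search-service:8081",  # already extracted
}
MONOLITH = "http://monolith:9000"             # everything else

def route(path: str) -> str:
    for prefix, backend in MICROSERVICE_ROUTES.items():
        if path.startswith(prefix):
            return backend
    return MONOLITH

assert route("/cart/42") == "http://cart-service:8080"
assert route("/invoices/7") == MONOLITH  # not yet carved out
```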

As you re-architect your application, you will also need to re-architect your delivery pipeline to support CD (I'll expand on this later). It's important that your DevOps processes and automation be able to handle and keep track of both the "traditional" (longer-term) release processes of the monolith and the smaller-batch microservices/CD releases. Furthermore, you need to be able to manage multiple microservices - both independently and as a collection - to enable not only Continuous Delivery of each separate service, but of the entire offering.
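
As a toy illustration of managing services as a collection (the service names and statuses are invented): each service has its own pipeline, but a release of the entire offering only goes out when every member, monolith included, is green:

```python
# Sketch: gate the release of the whole offering on the status of every
# member service's pipeline. Names and statuses are invented.
service_status = {
    "cart":      "passed",
    "search":    "passed",
    "messaging": "failed",   # blocks the collective release
    "monolith":  "passed",   # the legacy app rides the same train
}

def can_release_offering(statuses: dict) -> bool:
    return all(s == "passed" for s in statuses.values())

print(can_release_offering(service_status))  # False: messaging is red
```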

Increase in Pipeline Variations
One of the key benefits of microservices is that they give developers more freedom to choose the best language or technology to get the job done. For example, your shopping cart might be written in Java, while the enterprise messaging bus uses Erlang. This enables developers to 'go fast' and encourages team ownership, but the number of services, and the range of technologies your pipeline automation needs to support, grows considerably.

This need for flexibility creates challenges for the complexity, reusability and repeatability of your pipeline. How do you maximize your efforts and reuse established automation workflows across different technologies and tools?
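
One answer is parameterization: model the pipeline once and instantiate it per service with that team's toolchain. A minimal sketch (the build and test commands are just examples of what each team might plug in):

```python
# Sketch of reuse through parameterization: one pipeline template,
# instantiated per service with its own tools. Commands are examples.
def make_pipeline(build_cmd, test_cmd, deploy_cmd):
    return [("build", build_cmd), ("test", test_cmd), ("deploy", deploy_cmd)]

java_pipeline   = make_pipeline("mvn package",    "mvn verify", "deploy.sh cart")
erlang_pipeline = make_pipeline("rebar3 compile", "rebar3 ct",  "deploy.sh bus")

# Same structure and gates for every team, different tools underneath.
for stage, cmd in java_pipeline:
    print(stage, "->", cmd)
```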

Ensuring Governance and Compliance
With so many independent teams and services, and such diversity of tools and processes, large organizations struggle to standardize delivery pipelines and release approval processes to bring microservices into the fold with regard to security, compliance and auditability.

How do you verify that all your microservices are in compliance? If there's a breach or failure, which service is the culprit? How do you keep track of who checked in what, to which environment, for which service, and under whose approval? How do you pass your next audit?
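
Answering those questions requires that every pipeline step leave a record behind automatically. A minimal sketch of such an audit entry (the field names, values and log file are illustrative; a real platform would write to durable, tamper-evident storage):

```python
# Sketch of an append-only audit trail written at every pipeline step.
# Field names, values and the log file are illustrative.
import json
import datetime

def audit(service, commit, environment, approver, action):
    record = {
        "timestamp":   datetime.datetime.utcnow().isoformat() + "Z",
        "service":     service,
        "commit":      commit,
        "environment": environment,
        "approver":    approver,
        "action":      action,
    }
    with open("audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

audit("cart", "9f2c1ab", "staging", "jane.doe", "deploy")
```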

Integration Testing Between Services Becomes More Complex
When testing a service in isolation, things are fairly simple: you do unit testing and verify that you support the APIs you expose. Integration testing of microservices is more complicated and requires more coordination. You need to decide how you're handling downstream testing with other services: do you test against the versions of the other services that are currently in production? Do you test against the latest versions of the other services that are not yet in production? Your pipeline needs to allow you to coordinate between services to make sure you don't test against a version of a service that is about to become obsolete.
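
That coordination boils down to a version-resolution decision per dependency. A toy sketch (the services and version numbers are invented): prefer whatever will actually be live when you ship, so you never certify against a release that's about to be retired:

```python
# Sketch: for each downstream dependency, decide which version to run
# integration tests against. All version data is invented.
in_production = {"inventory": "1.4.0", "payments": "2.1.3"}
next_release  = {"inventory": "1.5.0"}   # approved, ships before we do

def version_under_test(service: str) -> str:
    # Prefer the version that will be live when we ship, so we never
    # test against a version that is about to become obsolete.
    return next_release.get(service, in_production[service])

assert version_under_test("inventory") == "1.5.0"
assert version_under_test("payments")  == "2.1.3"
```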

Supporting the Proliferation of Heterogeneous Environments
Microservices often result in a spike in deployments that you now need to manage. This is caused by the independent deployment pipelines for each service across different stacks/environments, an increase in the number of environments throughout the pipeline, and the need to employ modern deployment patterns such as Blue/Green, Canary, etc.

Deployment automation is one of the key prerequisites for CD, and microservices require that you do a lot of it. You don't only need to support the volume of deployments; your pipeline must also verify that the environment and version are compatible, that no connected services are affected and that the infrastructure is properly managed and monitored. While not ideal, at times you will need to run multiple versions of your service simultaneously, so that a consumer that requires a certain version and is not compatible with the newer one can continue to operate (if you're not always backwards compatible).
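
To make the Blue/Green part concrete, here is a toy sketch of a cut-over with a compatibility gate in front of it (the environment names, versions and naive version comparison are all simplifications):

```python
# Sketch of a blue/green deployment with a compatibility gate.
# Versions and the string comparison are deliberately simplistic.
environments = {"blue": "2.0.1", "green": None}   # green is idle
live = "blue"

def deploy_to_green(new_version: str, min_supported: str):
    # Gate: refuse the deploy if consumers pinned below min_supported
    # would break. (Real code would parse versions properly.)
    if new_version < min_supported:
        raise ValueError("incompatible version; keep the old one running")
    environments["green"] = new_version

def cut_over():
    # Flip traffic; the old environment stays warm for instant rollback.
    global live
    live = "green" if live == "blue" else "blue"

deploy_to_green("2.1.0", min_supported="2.0.0")
cut_over()
print("live:", live, environments[live])  # live: green 2.1.0
```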

In addition, microservices seem to lend themselves well to Docker and container technologies. As Dev teams become more independent and deploy their services in a container, Ops teams are challenged to manage the sprawl of containers, and to have visibility into what exactly goes on inside that box.
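
One low-tech aid for that visibility is to label every container with its service, version and owning team, then summarize the fleet. A sketch with an invented inventory (in practice the data would come from `docker ps` or your orchestrator's API):

```python
# Sketch: summarize container sprawl from per-container labels.
# The inventory below is invented for illustration.
from collections import Counter

containers = [
    {"name": "cart-7f2",   "labels": {"service": "cart",   "version": "2.1.0", "team": "checkout"}},
    {"name": "cart-9c1",   "labels": {"service": "cart",   "version": "2.1.0", "team": "checkout"}},
    {"name": "search-a01", "labels": {"service": "search", "version": "1.8.4", "team": "discovery"}},
]

by_service = Counter(c["labels"]["service"] for c in containers)
print(by_service)  # Counter({'cart': 2, 'search': 1})
```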

System-level View and Release Management
System-level visibility is critical not only for compliance, but also for effective release management on both the technical and business sides. With complex releases for today's microservices-driven apps, you need a single pane of glass into the real-time status of the entire path of the application release process. That way, you ensure you're on schedule, on budget and shipping the right set of functionality. Knowing your shopping cart service will be delivered on time does you no good if you can't also easily pinpoint the status of the critical ERP service and all related apps that are required for launch.
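
As a toy model of that single pane of glass (the services, statuses and launch flags are invented), the rollup is simply: which launch-blocking services are not on track?

```python
# Sketch of a release-status rollup: flag anything that blocks launch.
# Services, statuses and flags are invented.
release_plan = {
    "shopping-cart":   {"status": "on-track", "blocks_launch": True},
    "erp":             {"status": "delayed",  "blocks_launch": True},
    "recommendations": {"status": "on-track", "blocks_launch": False},
}

blockers = [name for name, s in release_plan.items()
            if s["blocks_launch"] and s["status"] != "on-track"]
print("launch blocked by:", blockers or "nothing")  # ['erp']
```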

Best Practices for Designing CD Pipelines for Microservices
You want to embrace microservices as a means to scale and release updates more frequently, while giving Operations the platform to not only support developers, but also to operationalize the code in production and manage it. Because microservices are so fragmented, it is more difficult to track and manage all the independent, yet interconnected, components of the app. Your goal should be to automate the releases of microservices-driven apps so they are reliable, repeatable and as painless as possible.

When Constructing Your Pipeline, Keep in Mind

  1. Use one repository per service. This isolation reduces the risk of engineers cross-populating code between services.
  2. Each service should have independent CI and deployment pipelines so you can independently build, verify and deploy. This will make setup easier, require less tool integration, provide faster feedback and require less testing.
  3. Plug your entire tool chain into your DevOps automation platform so you can orchestrate and centrally manage your tools, environments and applications.
  4. Your solution must be tools/environment agnostic so you can support each team's workflow and tool chain, no matter what they are.
  5. Your solution needs to be flexible to support any workflow - from the simplest two-step web front-end deployment to the most complex ones (such as in the case of a complex testing matrix or embedded software processes).
  6. Your system needs to scale to serve the myriad services and pipelines.
  7. Continuous Delivery and microservices require a fair amount of testing to ensure quality. Make sure your automation platform integrates with all of your test automation tools and service virtualization.
  8. Auditability needs to be built into your pipeline so that the trail of each artifact is automatically recorded in the background as it makes its way through the pipeline. You also need to know who checked in the code, what tests were run, pass/fail results, on which environment it was deployed, which configuration was used, who approved it and so on.
  9. Your automation platform needs to enable you to normalize your pipelines as much as possible. Therefore, use parameters and modeling of the applications/environment and pipeline processes so you can reuse pipeline models and processes between services/teams. To enable reusability, planning of your release pipeline and any configuration or modeling should be offered via a unified UI.
  10. Bake compliance into the pipeline by binding certain security checks and acceptance tests, and use infrastructure services to promote a particular service through the pipeline.
  11. Allow for both automatic and manual approval gates to support regulatory requirements or general governance processes.
  12. Your solution should provide a real-time view of all the pipelines' statuses and any dependencies or exceptions.
  13. Consistent logging and monitoring across all services provides the feedback loop to your pipeline. Make sure your pipeline automation plugs into your monitoring so that alerts can trigger automatic processes such as rolling back a service, switching between blue/green deployments, scaling and so on (see the sketch after this list).
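
To illustrate that last point, here is a minimal sketch of an alert handler wired back into the pipeline (the metric names, thresholds and actions are hypothetical):

```python
# Sketch of monitoring feeding back into pipeline automation.
# Metrics, thresholds and actions are hypothetical.
def rollback(service):
    print(f"rolling back {service} to the last good version")

def scale_out(service):
    print(f"scaling out {service}")

def on_alert(metric: str, value: float, service: str):
    if metric == "error_rate" and value > 0.05:
        rollback(service)        # e.g., flip blue/green back
    elif metric == "latency_p99_seconds" and value > 1.0:
        scale_out(service)

on_alert("error_rate", 0.08, "cart")  # -> rolling back cart ...
```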

Keep in Mind
For both Continuous Delivery and microservices, a highly focused and streamlined automation pipeline is critical to reduce bottlenecks, mitigate risk and improve quality and time-to-market. While some may choose to cobble together a DIY pipeline, many organizations have opted for a DevOps Automation or Continuous Delivery platform that can automate and orchestrate the entire end-to-end software delivery pipeline. This is particularly useful as you scale to support the complexities of multitudes of microservices and technologies. You don't build your own email server, so why build this yourself?

Want More?
For more tips on microservices, to learn whether they're right for you and how to start decomposing your monolith, check out the video below of my talk, "Microservices: Patterns and Processes," from the recent DevOps Enterprise Summit.

Now, do it yourself:

Download the community edition of ElectricFlow to build your pipelines and deploy any application, to any environment, for free.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products.
