

Service Versioning For SOA

Policy-based version control for SOA services

(Found in a blog: "Versioning is as inevitable as security.") SOA development practice isn't much different from other software development practices except in design and maintenance. Multiple self-contained and aggregated services that interact with one another have their own lifecycles and evolution. The loose coupling of SOA services significantly simplifies design but creates additional difficulties in maintenance, especially in the interoperability of different service versions.

To better understand the requirements of SOA service versioning, let me ask several questions and see if we can answer them easily:

  1. Is SOA a structure of interfaces such as Web Services, or is it a structure of services with interfaces?
  2. Who is the master in SOA - the client or service (provider)?
  3. Is an immutable interface more important to a client than the service behind the interface?
  4. What does a version of a service interface mean if it's backward compatible, and what does it mean if it's not?
  5. Should a client know if the nature of the service has been changed behind its immutable interface?
  6. If multiple versions of a SOA service are available, how can a client choose which one it wants or can use?

As you can see, I make a distinction between the service provider, which I call simply the service, and the service interface. If the service interface is a Web Service and we consider versioning, people ask what actually constitutes a new Web Service (WS) versus a new version of the same Web Service. Here I'll try to find some answers by looking at the questions from the perspective of the SOA service client. I believe it will help us "see the forest for the trees."

Why Is Versioning So Important?
For some people this isn't a question at all. For others, it's a burden that's usually skipped during the development phase and becomes a "sudden" nightmare during maintenance. Let's analyze a real-life example to validate the importance of versioning in a distributed Service Oriented Architecture.

Following best practices, at one of my previous jobs we had a SOA service with a coarse-grained Web Service interface. The Web Service transferred XML documents that defined "commands" (recall the well-known Command pattern). The service provider promised to log an audit message for every command received, for our review.

The provider's team wanted to upgrade its audit database and needed to block the insertion of messages for a while. The service provider built a temporary service without the logging function and silently substituted the service component under the same interface. As it happened, the upgrade took longer than planned, and many commands were executed without audit logging. We found the problem during our audit reconciliation process, but it was too late.

Now, assume that version control is in place. A substitute (new) service component might be released only under a new version. The version-controlling utility would have to recognize this fact and either block communication or immediately notify the client about the change. You'd say that such version control defeats the interface concept of Web Services and directly affects the client even when the interface isn't changed. And you'd be right. However, in our example the Service Level Agreement (SLA) was violated by the service provider, i.e., the business of the client was affected even though the interface was preserved.
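Such a version-controlling guard can be sketched as a client-side check. This is a minimal illustration; the class names, the exception, and the idea that the provider reports its version with each response (e.g., in a SOAP header) are my assumptions, not part of the actual system described:

```python
# Minimal sketch of a client-side version guard (all names hypothetical).
# The provider is assumed to report its current service version with each
# response; the guard blocks the call the moment that version differs from
# the one the client contracted for, even if the interface is unchanged.

class VersionMismatch(Exception):
    """Raised when the provider's reported version breaks the contract."""

class VersionedClient:
    def __init__(self, contracted_version):
        self.contracted_version = contracted_version

    def invoke(self, reported_version, command):
        # Block communication (or notify) on any version change.
        if reported_version != self.contracted_version:
            raise VersionMismatch(
                f"contracted {self.contracted_version}, "
                f"provider reports {reported_version}")
        return f"executed {command}"
```

Had such a guard been in place, the silently substituted no-logging component, released under a new version, would have raised `VersionMismatch` instead of quietly dropping audit records.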

Thus, the OO practice of developing an immutable interface, applied to SOA, can easily lead to a business problem. Versioning is one of the cheapest mechanisms for avoiding such problems. It's not versioning of the interface (a SOA service isn't an object, and it has a nature that is not addressed in object-oriented architecture) but versioning of the contract between client and provider. That is, version control works in the client's interests, and the SOA service has to honor it.

What Can Be Versioned?
There are two approaches to versioning: bottom-up and top-down. The former is more familiar to developers; the latter is usually used by architects and managers.

In the bottom-up approach, attention is focused mostly on versioning the interface, in particular the Web Service. The OASIS WSDM-MOWS 1.0 draft specification says that a Web Service's description, interface, service, and endpoint can be separately versioned and defined in their own individual namespaces. It's interesting to note that the final 1.0 release (as well as the 1.1 draft) doesn't have a version control section. That is, separate versions of a Web Service's parts may not be such a good idea after all.

As we know, a Web Service is defined by its WSDL. WSDL 2.0 (draft) brings Web Services closer to the notion of a SOA service thanks to a new component-based concept, operation styles with restrictions, and a Feature option. Following the spirit of WSDL 2.0, a particular combination of versions of WSDL elements can constitute an overall version of the WSDL. Another combination of element versions leads to another version of the WSDL. Unfortunately, even an overall version of the WSDL doesn't answer the old question of a new version of the WSDL versus a new WSDL (i.e., a new Web Service).
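The idea of an overall WSDL version as a combination of element versions can be sketched as follows. The element names follow the WSDM-MOWS draft's list; the encoding as a sorted tuple is purely an illustration, not part of any specification:

```python
# Hypothetical sketch: an overall WSDL version formed from individually
# versioned elements (description, interface, service, endpoint).

def overall_version(element_versions):
    # Any change to any element's version yields a different overall
    # version; sorting makes the result independent of dict ordering.
    return tuple(sorted(element_versions.items()))
```

Note that this still gives no rule for deciding when a changed combination is a new version of the WSDL versus a new WSDL altogether; that question remains open.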

For example, consider a Web Service in document/literal style where an XML Schema is imported for the message. Changes in the XML Schema aren't reflected in the WSDL. So, if changes happen in the XML Schema, do we get a new Web Service? If the changes are optional (e.g., new elements with minOccurs="0" are added), is it really a new Web Service for the client? There are too many open questions about this version model; that's why I call such details "trees."

In the top-down approach, two things are versioned: the service component and the service interface (e.g., the Web Service). The WSDM-MOWS 1.0 draft also recognizes the version of the service component and calls it a revision. According to the specification, "Revisions are related to each other via changes that happened between revisions. Each revision will be associated with a versioned component. Each change indicates a predecessor and successor (if any) revisions. Each change may aggregate multiple change descriptions."

Actually, a service component may have its own lifecycle even outside the service realm. That is, component versioning is a standalone issue. So a compound SOA service version consists of a combination of an overall version of the service interface and a version of the service component. Our task is to come up with a definition and structure for a compound versioning model.
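Such a compound version, pairing the two, can be sketched like this (the names are mine, not from any specification):

```python
from dataclasses import dataclass

# Hypothetical sketch of a compound SOA service version: an overall version
# of the service interface paired with a version of the service component
# (the latter is what WSDM-MOWS calls a "revision").

@dataclass(frozen=True)
class CompoundVersion:
    interface_version: str   # overall version of the service interface
    component_version: str   # version of the service component behind it

    def __str__(self):
        # A single identifier the client can pin in its contract.
        return f"{self.interface_version}/{self.component_version}"
```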

I've found that a single compound version concept is contentious even for some people who agree with the aforementioned reasons. They usually refer to "realistic requirements," stating that "people care about particular method [signature] changes rather than changes in other methods." This results in dealing with individual method versions. They also refer to the practice in Java programming, where the version of an object has "low importance" to the "people" in comparison to the version of the object's API. It's an interesting, though, I'm afraid, misleading approach.

First of all, the "people" are mostly developers, not users of the service, i.e., service development cycles are the center of attention, not maintenance and integration with clients. We, on the contrary, are concentrating on the service's "face" to its clients. Second, if we follow that "realistic" approach, why bother with multi-method services when a single-method model is much simpler? The aggregation of versioned single-method services could be viewed as a container, like an EJB or JMS container, with no overall version at all. Third, it's not quite clear (to me) what methods are meant when the service interface is a Web Service in document/literal style. Are we going to version XML elements in the message? If, instead, the message is defined by a version of the XML Schema, the latter addresses multiple methods/elements together. So we get back to a single version for all methods/elements, don't we?

I have a strong feeling that the proponents of per-method versioning are trying to force SOA services back into the object-oriented realm. SOA is not a structure of RPC calls in OOA; Web Services and SOAP are no more than SOA-enabling elements, but developers and vendors frequently overlook this fact. One of the major principles of SOA is business agility, i.e., agility in business processes and functions. So SOA service versioning has to be adequate to the SOA concept, while the details of interface versions or service component versions are taken care of by the service provider (e.g., via a hidden mapping of the compound version to the service elements' versions).

Compound Version Identifier
To simplify the discussion of the compound version identifier (CVI) structure, let's distinguish between the version visible to the clients and the "assembly" of the versions of the individual service's components, interfaces, and elements. Considering the complexity of SOA service internals, I'd like to propose that the following CVI structure, with elements srv, nbc, bwc, and rel, be made available to the service's clients (which include other services as well):



  • srv is an element reflecting the major version of the service as a whole. Changes in this element represent significant changes in the service lifecycle that may not be backward compatible. For example, a security entitlement service adds control at the level of an individual application function; this doesn't necessarily mean that access to the application has changed, but the final result may change without backward compatibility for particular clients;
  • nbc is an element that represents a state of the major version that is not backward compatible for some or all of the service's functions. For example, one previously deprecated function has finally been removed in that version while other functions remain unchanged;
  • bwc is an element indicating an extension or modification of service functionality that is strictly backward compatible. For example, new functionality, such as a new type of message or a new method, has been added that doesn't affect clients using the old functions;
  • rel is an element for small backward-compatible changes like bug fixes. It may also be a release version correlated with the build and/or a source code repository such as ClearCase, CVS, and the like.
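The four elements above can be sketched as a structured identifier. The dotted srv.nbc.bwc.rel layout is my assumption; the article defines only the elements themselves:

```python
from typing import NamedTuple

# Hypothetical sketch of the CVI as a dotted four-part identifier,
# e.g. "2.1.4.7" -> srv=2, nbc=1, bwc=4, rel=7.

class CVI(NamedTuple):
    srv: int  # major version of the service as a whole
    nbc: int  # not-backward-compatible state within the major version
    bwc: int  # strictly backward-compatible extension or modification
    rel: int  # release-level changes such as bug fixes

    @classmethod
    def parse(cls, text):
        return cls(*(int(part) for part in text.split(".")))
```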
How does this version structure help clients recognize whether a backward-incompatible change affects them? Moreover, if the change is backward compatible or unrelated to the functionality used, should the client continuously modify its connectivity code to assimilate each new version of the service? The answers to these and similar questions lie in the version control procedure defining how the CVI is used in conjunction with a client-provider contract.
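One possible version control procedure can be sketched as a compatibility check over the four CVI elements. The rule below is my reading of the element definitions, not a procedure given in the article:

```python
# Hypothetical compatibility rule over CVI tuples (srv, nbc, bwc, rel):
# any change in srv or nbc is potentially breaking and requires the client
# to renegotiate the contract; a provider at the same srv.nbc with an equal
# or newer bwc/rel remains safe to use.

def is_backward_compatible(contracted, current):
    c_srv, c_nbc, c_bwc, c_rel = contracted
    s_srv, s_nbc, s_bwc, s_rel = current
    if (s_srv, s_nbc) != (c_srv, c_nbc):
        return False  # major or non-backward-compatible change
    return (s_bwc, s_rel) >= (c_bwc, c_rel)
```

Under such a rule, a bwc or rel bump would not require the client to touch its connectivity code at all.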

More Stories By Michael Poulin

Michael Poulin works as an enterprise-level solution architect in the financial industry in the UK. He is a Sun Certified Architect for Java Technology, certified TOGAF Practitioner, and Licensed ZapThink SOA Architect. Michael specializes in distributed computing, SOA, and application security.
