Pragmatic SOA Interoperability

Architecture, not new middleware

A well-planned Web Service interoperability environment begins with clearly defining who your Web Service consumers are, both now and in the future. Not so long ago you could count on a fairly homogenous consumer population. That was about the same time you were happy just to get a Web Service running at all, and finding a consumer that could actually interact with it was cause for celebration. Those days are gone, however, and Web Services interoperability, once a "fancy" addition to your SOA design, is now a key and indispensable requirement in most SOA scenarios.

Today, SOA architects must contend with complex scenarios that assume a variety of Web Service consumers or, in many cases, are asked to create Web Services generic enough to interoperate with just about any known consumer. If you're serious about implementing SOA across a worldwide enterprise, you'll have to forget the luxury of dictating the configuration of all of your consumers and instead build fully interoperable Web Services. The most robust Internet Web Service APIs familiar to all of us, such as those of Amazon.com, eBay, and Salesforce.com, have already learned this lesson with very successful results. Those companies know that Web Service interoperability lies in the architecture approach, not in the implementation of new middleware.

What Makes Interoperability So Challenging?

Standards Proliferation and Complexity
The interaction between Web Services and consumers is rooted in a set of standards developed by committees at standards organizations such as OASIS, the W3C, and WS-I. These standards, such as XML Schema, SOAP, WSDL, and the WS-* protocols, are intended to provide a technology-agnostic level of abstraction over the service implementation, which should theoretically guarantee interoperability. However, it's not the standards themselves but their implementations by individual Web Service technology vendors that must be interoperable. In a perfect world, every vendor would implement Web Services standards in exactly the same way, guaranteeing interoperability out of the box. As you must know, that's just too good to be true. In the real world, vendors implement these standards sometimes with slight variations and other times as entirely different versions. In many cases vendors choose which aspects of a standard to implement, or even choose not to implement a given standard at all. This proliferation and complexity is most obvious when we look at the WS-* protocols.

When this article was written, up to four different versions of WS-Addressing were in use. Three are named by their release date (the March 2003, March 2004, and August 2004 versions) and were developed before the specification moved to the W3C. The fourth, version 1.0, was completed in May 2006 after the specification went under the W3C umbrella. And if that's not confusing enough, after moving to the W3C the specification was split into multiple parts: a core specification and two companion specifications describing bindings for SOAP and WSDL.
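In practice, the version a consumer is using shows up as the namespace URI on the addressing headers. As a hedged sketch (the detection logic and function names are ours, not from any particular framework), the following maps the published namespace URIs to the four revisions described above:

```python
# Hypothetical helper: identify which WS-Addressing revision a SOAP header
# uses by inspecting the namespace of its wsa:Action element. The namespace
# URIs below are the ones published for each revision of the specification.
import xml.etree.ElementTree as ET

WSA_NAMESPACES = {
    "http://schemas.xmlsoap.org/ws/2003/03/addressing": "March 2003",
    "http://schemas.xmlsoap.org/ws/2004/03/addressing": "March 2004",
    "http://schemas.xmlsoap.org/ws/2004/08/addressing": "August 2004",
    "http://www.w3.org/2005/08/addressing": "1.0 (W3C Recommendation)",
}

def detect_wsa_version(soap_header_xml: str) -> str:
    """Return the WS-Addressing revision used in a SOAP header fragment."""
    root = ET.fromstring(soap_header_xml)
    for element in root.iter():
        # ElementTree renders qualified names as {namespace}localname.
        if element.tag.endswith("}Action"):
            namespace = element.tag[1:element.tag.index("}")]
            if namespace in WSA_NAMESPACES:
                return WSA_NAMESPACES[namespace]
    return "unknown"

header = (
    '<Header xmlns:wsa="http://www.w3.org/2005/08/addressing">'
    '<wsa:Action>http://example.org/GetQuote</wsa:Action>'
    '</Header>'
)
print(detect_wsa_version(header))  # -> 1.0 (W3C Recommendation)
```

A real Web Services stack performs this kind of dispatch internally; the point is that four namespace URIs means four mutually incompatible wire formats for the same idea.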

There are also different WS-* protocols that address very similar scenarios, such as WS-Eventing and WS-Notification, WS-MessageDelivery and WS-Addressing, or WS-Reliability and WS-ReliableMessaging. Committees at OASIS and the W3C are working to unify these overlapping protocols into a single set of standards. In the meantime, vendors can, and often do, implement different versions of the same WS-* protocol, or implement just one of several similar standards, such as WS-ReliableMessaging instead of WS-Reliability.

Although the combination of SOAP and the WS-* protocols provides a solution for some of the most interesting challenges in distributed computing, the complexity of this approach makes it impractical, and nearly impossible to implement, in real-world interoperability scenarios. Only a small subset of the more than 100 WS-* specifications available today has been implemented by vendors. One last, but certainly not least, challenge of using SOAP and the WS-* protocols is that they can limit service availability to clients, such as script applications or Web browsers, that don't support that generation of SOAP messaging.

Best Practice: Use WS-I Profiles
Some WS-* protocols have a WS-I profile available that captures the principles and a subset of the features of the protocol that should be implemented to guarantee interoperability. Making your services compatible with a WS-I profile (if one exists) is a standard and globally accepted approach to interoperability. However, for some protocols a WS-I profile either isn't available or doesn't address the interoperability requirements.
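A WS-I profile is, at bottom, a list of mechanical rules that messages and contracts must satisfy. As a toy illustration only (the real WS-I testing tools check far more, and the two rules below are merely modeled on the Basic Profile's requirements that envelopes use SOAP 1.1 and avoid soap:encodingStyle), a checker might look like this:

```python
# Toy conformance check illustrating the idea of a WS-I profile: a handful
# of mechanical rules applied to a message. Modeled loosely on two Basic
# Profile requirements; a real validator covers far more ground.
import xml.etree.ElementTree as ET

SOAP11_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def check_basic_profile(envelope_xml: str) -> list:
    """Return a list of human-readable violations (empty means it passed)."""
    violations = []
    root = ET.fromstring(envelope_xml)
    if root.tag != "{%s}Envelope" % SOAP11_NS:
        violations.append("Envelope is not in the SOAP 1.1 namespace")
    for element in root.iter():
        if "{%s}encodingStyle" % SOAP11_NS in element.attrib:
            violations.append("soap:encodingStyle attribute is present")
    return violations

good = '<soap:Envelope xmlns:soap="%s"><soap:Body/></soap:Envelope>' % SOAP11_NS
print(check_basic_profile(good))  # -> []
```

Running such checks as part of the build, rather than discovering violations when a consumer fails, is what makes profile compliance a practical discipline rather than an aspiration.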

Best Practice: Implement Multi-Binding Services
To support multiple versions of the same WS-* protocol, it's recommended to design services with multiple bindings, one per specific version of the protocol. This approach segments the different types of interactions at the binding level, which improves aspects such as versioning and management. The code in Listings 1 and 2 illustrates a sample Microsoft Windows Communication Foundation (WCF) service configured to support multiple types of interactions using bindings. Specifically, this service supports secure and reliable interactions as well as basic SOAP interactions using two different bindings.
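In WCF the segmentation lives in configuration; framework-neutrally, the idea is that a single service implementation is reachable through several endpoints, each pairing an address with a binding profile. The sketch below uses hypothetical names (the binding identifiers echo WCF's wsHttpBinding and basicHttpBinding, but the dispatcher is ours):

```python
# Framework-neutral sketch of a multi-binding service: one implementation,
# several endpoints, each pairing an address with a binding profile. All
# names are hypothetical; WCF expresses the same idea declaratively in
# <endpoint address=".." binding=".."/> configuration elements.

def get_quote(symbol):
    # The single service implementation shared by every binding.
    return {"symbol": symbol, "price": 42.0}

ENDPOINTS = {
    # Secure/reliable interaction (e.g., WS-Security + WS-ReliableMessaging).
    "/QuoteService/ws": {"binding": "wsHttpBinding", "operation": get_quote},
    # Plain SOAP 1.1 interaction for less capable consumers.
    "/QuoteService/basic": {"binding": "basicHttpBinding", "operation": get_quote},
}

def dispatch(address, symbol):
    """Route a request to the shared implementation via its endpoint."""
    endpoint = ENDPOINTS[address]
    return endpoint["binding"], endpoint["operation"](symbol)

binding, result = dispatch("/QuoteService/basic", "ACME")
print(binding, result)  # -> basicHttpBinding {'symbol': 'ACME', 'price': 42.0}
```

The payoff is that a new protocol version becomes a new endpoint entry, not a new service: consumers pick the address whose binding they can speak, and the business logic is untouched.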

Best Practice: Expose SOAP and REST Interfaces
Representational State Transfer (REST) provides a simpler alternative to SOAP and the WS-* protocols for some scenarios. Because REST-style services exchange plain XML messages over HTTP, they are accessible to most client technologies on the market, including browsers and scripting languages. Some services can expose both SOAP and REST interfaces, offering a broader set of options to consumers. The example in Listing 3 illustrates this approach using Oracle Application Server.
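The dual-interface pattern is cheap precisely because both facades delegate to the same implementation. A minimal sketch under assumed names (the quote operation, element names, and URL layout are all illustrative, not from Listing 3):

```python
# Sketch of one operation exposed through both a SOAP-style and a REST-style
# facade. Both delegate to the same implementation, so offering the second
# interface costs little. All names here are hypothetical.
import xml.etree.ElementTree as ET

def get_quote(symbol):
    # Shared business logic behind both facades.
    return {"symbol": symbol, "price": 42.0}

def soap_facade(envelope_xml):
    """Pull the symbol out of a SOAP-style body; answer with wrapped XML."""
    root = ET.fromstring(envelope_xml)
    symbol = root.findtext(".//Symbol")
    quote = get_quote(symbol)
    return "<QuoteResponse><Symbol>%s</Symbol><Price>%s</Price></QuoteResponse>" % (
        quote["symbol"], quote["price"])

def rest_facade(path):
    """Treat the trailing path segment of a GET as the resource identifier."""
    symbol = path.rstrip("/").split("/")[-1]
    return get_quote(symbol)

envelope = ("<Envelope><Body><GetQuote><Symbol>ACME</Symbol></GetQuote>"
            "</Body></Envelope>")
print(soap_facade(envelope))
print(rest_facade("/quotes/ACME"))
```

SOAP consumers get the enveloped response they expect, while a browser or script hits a plain URL; the business logic never knows the difference.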

Aligning Code and Contract
When designing a service you must first decide where to start. Do you create your code first or your contract first? Or do you give them equal importance by creating them in parallel? However, even if you were extremely diligent in developing perfect synergy between your code and your contract, you would still find that the limitations of the basic standards themselves, such as XSD and WSDL, could easily do you in. For example, the XML Schema model presents severe limitations in composability compared with the data structures of most programming languages, which often results in non-optimal translations between XML Schema structures and programming data structures. Similarly, WSDL 1.1 and 2.0 are both too abstract to describe services in a way that Web Services frameworks can interpret consistently.
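The impedance mismatch is easy to demonstrate with nothing but the standard library: serialize a typed structure to XML and everything comes back as text, leaving the binding layer to reconstruct types from schema knowledge the document itself does not carry.

```python
# Minimal illustration of the XML/data-structure impedance mismatch: a typed
# record round-trips through XML as strings, so a mapping layer must recover
# the original types from out-of-band schema information.
import xml.etree.ElementTree as ET

order = {"id": 1001, "total": 19.99, "expedited": True}

root = ET.Element("Order")
for key, value in order.items():
    ET.SubElement(root, key).text = str(value)
xml_text = ET.tostring(root, encoding="unicode")

parsed = {child.tag: child.text for child in ET.fromstring(xml_text)}
print(parsed)  # -> {'id': '1001', 'total': '19.99', 'expedited': 'True'}
```

An int, a float, and a boolean all arrive as strings; XSD type annotations exist precisely to close this gap, which is why imprecise schemas translate into poor data bindings.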

There's a lot of debate in the Web Service community about contract-first versus code-first approaches to developing Web Services. One common argument in favor of a contract-first approach is that it facilitates interoperability. While that's arguably true, the reality is that only a few mortals know WSDL and XSD well enough to design solid service contracts. Given the complexity of both standards, developers often end up designing non-optimal WSDLs and XSDs that are translated into poor service implementations. A code-first approach, on the other hand, is more familiar to developers but can produce contracts that aren't interoperable.

Best Practice Recommendation
Some of the most successful Web Service implementations have been designed using a hybrid approach that combines the agility of code-first with the flexibility of XSD/WSDL-first. Following this technique, developers can leverage their existing skills on a particular development platform to guarantee an optimal service implementation, while the WSDL/XSD experts verify that the contract meets the interoperability requirements. Figure 1 illustrates this approach.

Multi-Transport Services
SOAP, WSDL, and the different WS-* protocols are transport-agnostic specifications. Theoretically, it's possible to host the same service over multiple transports such as HTTP, TCP, and JMS. Although this is supported by some of the Web Services technology frameworks on the market, such as Windows Communication Foundation (WCF) and Apache Axis, only HTTP has been adopted widely enough to be considered for interoperability scenarios. Another factor to consider when implementing multi-transport services in real-world scenarios is that certain transports imply a specific service behavior. For instance, a Web Service that uses JMS as a transport probably implements one-way, long-running, asynchronous operations. That behavior is fundamentally different from a Web Service that uses HTTP as a transport, where it makes sense to implement atomic operations using different message exchange patterns. Hosting the same Web Service over both JMS and HTTP makes little or no sense in most scenarios.
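The coupling between transport and interaction style can be sketched in a few lines (the operation and function names are hypothetical; an in-process queue stands in for a JMS destination):

```python
# Sketch of why transport choice shapes service behavior: the same operation
# invoked over an HTTP-style transport blocks for a reply, while a JMS-style
# queue transport is fire-and-forget. Names are hypothetical; an in-process
# queue stands in for a real JMS destination.
import queue

def process_order(order_id):
    return "processed %s" % order_id

def invoke_over_http(order_id):
    # Request/response: the caller blocks until the result comes back.
    return process_order(order_id)

ORDER_QUEUE = queue.Queue()

def invoke_over_jms(order_id):
    # One-way: enqueue and return immediately; a consumer drains the queue
    # later, which suits long-running asynchronous work.
    ORDER_QUEUE.put(order_id)

def drain_queue():
    # The queue consumer, running on its own schedule.
    results = []
    while not ORDER_QUEUE.empty():
        results.append(process_order(ORDER_QUEUE.get()))
    return results

print(invoke_over_http("A1"))  # -> processed A1
invoke_over_jms("A2")
print(drain_queue())           # -> ['processed A2']
```

The HTTP caller gets an answer in the same call; the JMS caller gets nothing back and must be designed around eventual completion. That difference in contract semantics, not any technical limitation, is why exposing one service over both transports rarely pays off.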


More Stories By Jesus Rodriguez

Jesus Rodriguez is a co-founder and CEO of KidoZen, an enterprise mobile-first platform as a service redefining the future of enterprise mobile solutions. He is also a co-founder of Tellago, an award-winning professional services firm focused on big enterprise software trends. Under his leadership, KidoZen and Tellago have been recognized as innovators in enterprise software and solutions, earning honors such as the Inc. 500 and the Stevie American and International Business Awards.

A software scientist by background, Jesus is an internationally recognized speaker and author whose contributions include hundreds of articles and sessions at industry conferences. He serves as an advisor to several software companies, such as Microsoft and Oracle, and sits on the boards of various technology companies. Jesus is a prolific blogger on all subjects related to software technology and entrepreneurship. You can gain valuable insight on business and software technology through his blogs at http://jrodthoughts.com and http://weblogs.asp.net/gsusx .

More Stories By Javier Mariscal

Javier Mariscal is the President and Founder of TwoConnect, Inc., a highly renowned consulting and systems integration company based in Miami, Florida with subsidiary offices in New York City and San Francisco. After nearly 15 years, Javier is still mostly responsible for guaranteeing that TwoConnect's innovative integration solutions deliver real and immediate results to its clients. As an author and frequent speaker on SOA and related technologies, Javier constantly reaffirms the need for "practical SOA," which focuses not on pushing a brand or a platform but on delivering immediate business rewards. Under his leadership, TwoConnect has been first to market with multiple SOA-related products and solutions, such as the Web Services Enhancements 3.0 Adapter for Microsoft BizTalk Server and the SQL Server Service Broker Enhancements, both currently in production at companies worldwide. In 2006, TwoConnect announced a new line of products and services under its AdapterWorx brand, which has been hugely successful in accelerating the delivery of its SOA solutions. Following the success of AdapterWorx and a banner year in 2006, Javier was nominated for Entrepreneur of the Year by Hispanic Business Magazine.
