Introducing WS-Transaction Part II

In July 2002, BEA, IBM, and Microsoft released a trio of specifications designed to support business transactions over Web services. BPEL4WS, WS-Transaction, and WS-Coordination together form the bedrock for reliably choreographing Web services-based applications.

In our previous articles (WSJ, Vol. 3, issues 5 and 6), we introduced WS-Coordination, a generic coordination framework for Web services, and showed how the WS-Coordination protocol can be augmented to provide atomic transactionality for Web services via the WS-Transaction Atomic Transaction model.

This article looks at support for extended transactions across Web services and shows how business activities can provide the basis for higher-level business process management and workflow technology.

Business Activities
Most business-to-business applications require transactional support in order to guarantee a consistent outcome and correct execution. These applications often involve long-running computations, loosely coupled systems, and components that don't share data, location, or administration, and it is difficult to incorporate atomic transactions within such architectures. For example, an online bookshop may reserve books for an individual for a specific period of time, but if the individual doesn't purchase the books within that period they are "put back onto the shelf" for others to buy. Furthermore, because no shop has an infinite supply of stock, some online shops may appear to reserve items but in fact allow others to preempt that reservation (i.e., the same book may be "reserved" for multiple users concurrently); a user may subsequently find that the item is no longer available, or has to be reordered for them.

A business activity (BA) is designed specifically for these long-duration interactions, where exclusively locking resources is impossible or impractical. In this model, services are requested to do work, and where a service has the ability to undo its work, it informs the BA. If the BA later decides to cancel the work (i.e., if the business activity suffers a failure), it can then instruct the service to execute its undo behavior. The key point for business activities is that how services do their work and provide compensation mechanisms is not the domain of the WS-Transaction specification, but an implementation decision for the service provider.
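
As a concrete illustration, the Java sketch below shows the kind of contract a service provider might implement for a compensatable piece of work; the interface and class names are ours, not anything mandated by WS-Transaction.

    // Hypothetical contract for a compensation-capable participant; WS-Transaction
    // leaves both the work and the undo behavior to the service provider.
    public interface CompensatableService {
        void doWork(String requestId);      // perform the business work
        void compensate(String requestId);  // logically undo that work if asked later
    }

    class BookReservationService implements CompensatableService {
        public void doWork(String requestId) {
            // Reserve the book and record enough state to be able to undo it later.
            System.out.println("Reserved book for " + requestId);
        }
        public void compensate(String requestId) {
            // Put the book "back on the shelf"; how this happens is a provider decision.
            System.out.println("Released reservation for " + requestId);
        }
    }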

The BA defines a protocol for Web services-based applications to enable existing business processing and workflow systems to wrap their proprietary mechanisms and interoperate across implementations and business boundaries.

A BA may be partitioned into scopes - business tasks or units of work using a collection of Web services. Scopes can be nested to arbitrary depth, forming parent and child relationships. A parent scope can select which child tasks to include in the overall outcome protocol for a specific business activity, so nonatomic outcomes are possible. In a manner similar to traditional nested transactions, if a child task experiences an error it can be caught by the parent, which may be able to compensate and continue processing.

When a child task completes it can either leave the business activity or signal to the parent that the work it has done can be compensated later. In the latter case, the compensation task may be called by the parent should it ultimately need to undo the work performed by the child.

Unlike the atomic transaction protocol model, where participants inform the coordinator of their state only when asked, a task within a BA can specify its outcome to the parent directly without waiting for a request. When tasks fail, the notification can be used by the business activity exception handler to modify the goals and drive processing forward without waiting meekly until the end of the transaction to admit to having failed - a well-designed BA should be proactive if it is to be performant.

Underpinning all of this are three fundamental assumptions:

  • All state transitions are reliably recorded, including application state and coordination metadata (the record of sent and received messages).
  • All request messages are acknowledged, so problems are detected as early as possible. This avoids unnecessary work and catches problems while rectifying them is still simple and inexpensive.
  • As with atomic transactions, a response is defined as a separate operation, not as the output of the request. Message input-output implementations typically have timeouts that are too short for some business activity responses. If no response is received before the timeout, the request is resent, and this is repeated until a response arrives; the receiver discards duplicate copies of a request and processes only one. (A minimal sketch of this retry-and-deduplicate pattern follows this list.)
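
The following is a minimal Java sketch of the retry-and-deduplicate behavior described in the last bullet; the class and method names are hypothetical and do not come from any WS-Transaction toolkit.

    import java.util.HashSet;
    import java.util.Set;

    // Sketch of the retry-until-acknowledged pattern: the sender repeats a request
    // until it is acknowledged; the receiver filters duplicates by message id.
    public class ReliableRequestSketch {

        static class Receiver {
            private final Set<String> seen = new HashSet<>();

            // Acknowledges every delivery but performs the work only once per message id.
            boolean receive(String messageId, Runnable work) {
                if (seen.add(messageId)) {
                    work.run();          // first delivery: do the work
                }
                return true;             // acknowledge, even for a duplicate
            }
        }

        static void sendWithRetry(Receiver receiver, String messageId, Runnable work)
                throws InterruptedException {
            for (int attempt = 1; attempt <= 5; attempt++) {
                boolean acked = receiver.receive(messageId, work); // stands in for a SOAP call
                if (acked) {
                    return;              // stop resending once an acknowledgment arrives
                }
                Thread.sleep(1000L * attempt); // back off before resending
            }
            throw new IllegalStateException("No acknowledgment for " + messageId);
        }

        public static void main(String[] args) throws InterruptedException {
            Receiver receiver = new Receiver();
            sendWithRetry(receiver, "msg-1", () -> System.out.println("processed once"));
            // A resend of the same message is acknowledged but not processed again.
            sendWithRetry(receiver, "msg-1", () -> System.out.println("never printed"));
        }
    }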

The business activity model has two protocols: BusinessAgreement and BusinessAgreementWithComplete. However, unlike the AT protocol, which is driven from the coordinator down to the participants, these protocols are driven from the participants upwards.

Under the BusinessAgreement protocol, a child activity is initially created in the Active state; if it finishes the work it was created to do and no more participation is required within the scope of the BA (such as when the activity operates on immutable data), the child can unilaterally send an exited message to the parent. However, if the child task finishes and wishes to continue in the BA, then it must be able to compensate for the work it has performed. In this case it sends a completed message to the parent and waits to receive the final outcome of the BA from the parent. This outcome will be either a close message - the BA has completed successfully - or a compensate message - the parent activity requires that the child task reverse its work.

The BusinessAgreementWithComplete protocol is identical to the BusinessAgreement protocol with the exception that the child cannot autonomously decide to end its participation in the business activity, even if it can be compensated. Rather, the child task relies upon the parent to inform it when the child has received all requests for it to perform work. The parent does this by sending the complete message to the child, which then acts as it does in the BusinessAgreement protocol.
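
To make the two protocols concrete, the following Java sketch models the child-side states and messages; the names are illustrative rather than taken from the specification.

    // Illustrative model of a child task under the two business activity protocols.
    public class BusinessAgreementSketch {

        enum State { ACTIVE, COMPLETED, CLOSED, COMPENSATED, EXITED }

        static class ChildTask {
            private final boolean withComplete;      // true => BusinessAgreementWithComplete
            private boolean completeReceived = false;
            private State state = State.ACTIVE;      // children start in the Active state

            ChildTask(boolean withComplete) { this.withComplete = withComplete; }

            // Child has finished and needs no further part in the BA: sends "exited".
            void exit() { state = State.EXITED; }

            // Parent informs a WithComplete child that no more work requests will arrive.
            void complete() { completeReceived = true; }

            // Child has finished but remains compensatable: sends "completed" and waits.
            void completed() {
                if (withComplete && !completeReceived) {
                    throw new IllegalStateException(
                            "A WithComplete child waits for the parent's complete message");
                }
                state = State.COMPLETED;
            }

            // Final outcome from the parent: the BA succeeded.
            void close() { state = State.CLOSED; }

            // Final outcome from the parent: reverse the earlier work.
            void compensate() { state = State.COMPENSATED; }

            State state() { return state; }
        }

        public static void main(String[] args) {
            ChildTask child = new ChildTask(true);
            child.complete();   // parent: "you have received all your work requests"
            child.completed();  // child: "done, but I can compensate if you ask"
            child.compensate(); // parent: "undo that work"
            System.out.println(child.state()); // COMPENSATED
        }
    }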

The crux of the BA model, compared to the AT model, is that it allows the participation of services that cannot or will not lock resources for extended periods.

While the full ACID semantics are not maintained by a BA, consistency can be maintained through compensation, although writing correct compensating actions (and thus overall system consistency) is delegated to the developers of the services controlled by the BA. Such compensations may use backward error recovery, but typically employ forward recovery.

Coordinating Business Activities on the Web
The real beauty of the Web services model is that it is highly modular. Capitalizing on that modularity, consider the case shown in Figure 1, where a shopping portal uses several suppliers to deliver a richer shopping experience to the customer.

In this case, a BA is used since there is no close trust relationship between any of the suppliers (indeed they are probably competitors), and purchases are committed immediately as per the BA model. In the non-failure case, things are straightforward and each child BA reports back to the coordinator via a completed message that it has completed.

The failure case, however, is a little more interesting (see Figure 2). Let's assume that Supplier 2 could not source the tie that the customer wanted and its corresponding BA fails. It reports the failure back to the coordinator through a faulted message. On receiving this message, the logic driving the BA, which we assume to be a workflow script residing in the portal service, is invoked to deal with the fault. In this case, the logic uses forward error recovery to try to obtain the item from an alternative supplier.

If the forward error recovery works, and the alternate supplier's Web service confirms that it is able to source the desired item, then the BA proceeds normally, executing subsequent child BAs until completion. If, however, the BA cannot make forward progress, it has no option but to go backwards and compensate the previously completed activities. Note that failed activities are not compensated because their state is, by definition, unknown.

Once the compensation has taken place successfully (remember that an added complexity is that compensations can themselves fail), the system should be in a state that is semantically equivalent to the state it was in before the purchase operations were carried out. The shopping portal service knows the status of the transaction from the coordinator, and can then report back to the customer application that the order didn't complete.
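
A rough Java sketch of the portal logic just described follows; the SupplierActivity interface and its methods are hypothetical. A faulted child triggers forward recovery via an alternate supplier, and only if that fails are the previously completed children compensated, while the faulted child itself is left alone.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Sketch of the portal logic: each supplier purchase is a child BA; a faulted child
    // triggers forward recovery, and only if that fails is completed work compensated.
    public class ShoppingPortalSketch {

        interface SupplierActivity {
            boolean purchase();   // returns false to model a "faulted" child BA
            void compensate();    // undo a previously completed purchase
        }

        static boolean runOrder(List<SupplierActivity> suppliers,
                                SupplierActivity alternate) {
            Deque<SupplierActivity> completed = new ArrayDeque<>();
            for (SupplierActivity supplier : suppliers) {
                if (supplier.purchase()) {
                    completed.push(supplier);          // completed: compensatable later
                    continue;
                }
                // Faulted child: forward error recovery with an alternate supplier.
                if (alternate != null && alternate.purchase()) {
                    completed.push(alternate);
                    continue;
                }
                // No forward progress possible: compensate completed work, newest first.
                // The faulted child itself is NOT compensated - its state is unknown.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;                          // report failure to the customer
            }
            return true;                               // close: the BA completed successfully
        }
    }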

Business Activities and BPEL4WS
During the execution of a business process, like our shopping portal example, data in the various systems that the process encompasses changes. Normally such data is held in mission-critical enterprise databases and queues, which have ACID transactional properties to ensure data integrity. This can lead to a situation whereby a number of valid commits to databases could have been made during the course of a process, but where the overall process might fail, leaving work partially completed. In such situations the reversal of partial work cannot rely on backward error recovery mechanisms - rollback - supported by the databases since the updates to the database will have been long since committed. Instead, we must compensate at the application level by performing the logical reverse of each activity that was executed as part of our process, from the most recently executed scope back to the earliest executed scope. This model is known as a saga, and is the default compensation model supported by BPEL4WS.
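
The following minimal Java sketch captures this saga-style, reverse-order compensation; it illustrates the idea rather than BPEL4WS itself, and the names are ours.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal saga sketch: each step's compensation is recorded as it commits, and on
    // failure the compensations run in reverse order, newest scope first.
    public class SagaSketch {
        private final Deque<Runnable> compensations = new ArrayDeque<>();

        // Run a step; if it commits, remember how to logically reverse it later.
        void step(Runnable work, Runnable compensation) {
            work.run();                     // e.g., a committed database update
            compensations.push(compensation);
        }

        // Undo partially completed work at the application level, newest scope first.
        void compensateAll() {
            while (!compensations.isEmpty()) {
                compensations.pop().run();
            }
        }

        public static void main(String[] args) {
            SagaSketch saga = new SagaSketch();
            saga.step(() -> System.out.println("purchase shirt"),
                      () -> System.out.println("refund shirt"));
            saga.step(() -> System.out.println("purchase tie"),
                      () -> System.out.println("refund tie"));
            saga.compensateAll(); // prints "refund tie" then "refund shirt"
        }
    }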

The BPEL4WS specification suggests WS-Transaction Business Activity as the protocol of choice for managing transactions that support the interactions of process instances running within different enterprise systems. A business activity is used both as the means of grouping distributed activities into a single logical unit of work and as the means of disseminating the outcome of that unit of work - whether all scopes completed successfully or need to be compensated.

If each of the Web services in our shopping portal example were implemented as BPEL4WS workflow scripts, the BA protocol messages from the coordinator could be consumed by those workflow scripts and used to instigate compensating activities. The execution of compensating activities, triggered by the coordinator sending compensate messages to the participants, logically returns the process as a whole to the state it was in before the process executed.

Relationship to OASIS BTP
The OASIS Business Transactions Protocol (BTP) was developed by a consortium of companies, including Hewlett-Packard, Oracle, and BEA, to tackle a problem similar to that addressed by WS-Transaction: business-to-business transactions in loosely coupled domains. BTP was designed with loose coupling of services in mind, and integration with existing enterprise transaction systems was not a high priority. Web services were also not the only deployment environment considered by the BTP developers, so the specification defines only an XML protocol message set and leaves the binding of this message set to specific deployment domains.

BTP defines two transaction models: atoms, which guarantee atomicity of decision among participants; and cohesions, which allow relaxed atomicity such that subsets of participants can see different outcomes in a controlled manner. Both models use a two-phase completion protocol, which deliberately does not require ACID semantics: although it is similar to the 2PC protocol used by WS-Transaction Atomic Transactions, it is used purely to attain consensus, and no further semantics can be inferred by higher-level services that use atoms. An implementer of a BTP participant is free to use compensation techniques in the second-phase operations to guarantee atomicity if that model best suits the business.

Both atoms and cohesions also use the open-top coordination protocol, whereby both phases of the two-phase protocol must be explicitly executed by users. Because no time limit is implied between the two phases of the completion protocol, this explicit separation of the phases is intended to allow businesses to better model their business processes.
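
The sketch below illustrates the open-top style; it is not the BTP API, and all names are ours. The application explicitly drives phase one, may deliberate for as long as the business requires between the phases, and, for a cohesion, confirms only the subset of participants it has chosen.

    import java.util.List;

    // Illustrative open-top completion: the application drives both phases explicitly
    // and, for a cohesion, confirms a chosen subset while cancelling the rest.
    public class OpenTopCohesionSketch {

        interface Participant {
            boolean prepare();   // phase one: vote
            void confirm();      // phase two: make the work final
            void cancel();       // phase two: discard or compensate the work
        }

        // "chosen" must be a mutable list of the participants the business wants to confirm.
        static void complete(List<Participant> enrolled, List<Participant> chosen) {
            // Phase one is invoked explicitly by the application.
            for (Participant p : enrolled) {
                if (!p.prepare()) {
                    chosen.remove(p);    // a participant that cannot prepare is dropped
                }
            }
            // Arbitrary business-level decision making may happen here; no time limit
            // is implied between the two phases.
            for (Participant p : enrolled) {
                if (chosen.contains(p)) {
                    p.confirm();         // phase two for the chosen subset
                } else {
                    p.cancel();          // everyone else is cancelled
                }
            }
        }
    }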

Although WS-Transaction and BTP are, at least in theory, intended to address the same problem domain, there are significant differences between them. BTP allows business-level negotiation to occur at many points in the protocol through its Qualifier mechanism; WS-Transaction has no such capability.

Summary
Over the course of these articles, we've seen both the atomic AT protocol and the non-ACID BA model designed to support long-running transactions. While both the AT and BA models will be available to Web services developers directly through toolkits, it is the BA model that is supported by the BPEL4WS standard to provide distributed transaction support for business processes.

More Stories By Jim Webber

Dr. Jim Webber is a senior researcher from the University of Newcastle upon Tyne, currently working in the convergence of Web Services and Grid technologies at the University of Sydney, Australia. Jim was previously Web Services architect with Arjuna Technologies where he worked on Web Services transactioning technology, including being one of the original authors of the WS-CAF specification. Prior to Arjuna, Jim was the lead developer with Hewlett-Packard on the industry's first Web Services Transaction solution. Co-author of "Developing Enterprise Web Services - An Architect's Guide," Jim is an active speaker and author in the Web Services space. Jim's home on the web is http://jim.webber.name
