Microservices Expo: Article

Introducing WS-Transaction Part II


In July 2002, BEA, IBM, and Microsoft released a trio of specifications designed to support business transactions over Web services. BPEL4WS, WS-Transaction, and WS-Coordination together form the bedrock for reliably choreographing Web services-based applications.

In our previous articles (WSJ, Vol. 3, issues 5 and 6), we introduced WS-Coordination, a generic coordination framework for Web services, and showed how the WS-Coordination protocol can be augmented to provide atomic transactionality for Web services via the WS-Transaction Atomic Transaction model.

This article looks at support for extended transactions across Web services. We also show how these can be used to provide the basis for higher-level business process management and workflow technology.

Business Activities
Most business-to-business applications require transactional support in order to guarantee a consistent outcome and correct execution. These applications often involve long-running computations, loosely coupled systems, and components that don't share data, location, or administration, and it is difficult to incorporate atomic transactions within such architectures. For example, an online bookshop may reserve books for an individual for a specific period of time, but if the individual doesn't purchase the books within that period they will be "put back onto the shelf" for others to buy. Furthermore, because it is impossible for anyone to have an infinite supply of stock, some online shops may appear to reserve items but in fact allow others to preempt that reservation (i.e., the same book may be "reserved" for multiple users concurrently); a user may subsequently find that the item is no longer available, or has to be reordered for them.

A business activity (BA) is designed specifically for these long-duration interactions, where exclusively locking resources is impossible or impractical. In this model, services are requested to do work, and where those services have the ability to undo any work, they inform the BA so that if the BA later decides to cancel the work (i.e., if the business activity suffers a failure), it can instruct the service to execute its undo behavior. The key point for business activities is that how services do their work and provide compensation mechanisms is not the domain of the WS-Transaction specification, but an implementation decision for the service provider.
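A minimal sketch of that enlist-then-compensate idea, using hypothetical names (`EnlistedService`, `BusinessActivity`) that are not part of the specification: how a service performs its work and how it undoes that work are entirely the provider's decisions; the BA only records which services can compensate and, on cancellation, tells them to.

```python
class EnlistedService:
    def __init__(self, name, do_work, compensate):
        self.name = name
        self._do_work = do_work        # forward action, e.g., reserve a book
        self._compensate = compensate  # provider-defined undo, e.g., release it

    def work(self):
        self._do_work()

    def undo(self):
        self._compensate()


class BusinessActivity:
    def __init__(self):
        self._compensatable = []  # services whose work may later need undoing

    def enlist(self, service):
        # The service informs the BA that its completed work can be undone.
        self._compensatable.append(service)

    def cancel(self):
        # On failure, instruct each enlisted service to run its undo behavior.
        for service in reversed(self._compensatable):
            service.undo()
        self._compensatable.clear()
```

For a bookshop, `do_work` might decrement a stock count and `compensate` restore it; the BA itself never needs to know what either does.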

The BA defines a protocol for Web services-based applications to enable existing business processing and workflow systems to wrap their proprietary mechanisms and interoperate across implementations and business boundaries.

A BA may be partitioned into scopes - business tasks or units of work using a collection of Web services. Scopes can be nested to arbitrary degrees, forming parent and child relationships, where a parent scope can select which child tasks to include in the overall outcome protocol for a specific business activity, so nonatomic outcomes are possible. In a manner similar to traditional nested transactions, if a child task experiences an error it can be caught by the parent, who may be able to compensate and continue processing.

When a child task completes it can either leave the business activity or signal to the parent that the work it has done can be compensated later. In the latter case, the compensation task may be called by the parent should it ultimately need to undo the work performed by the child.

Unlike the atomic transaction protocol model, where participants inform the coordinator of their state only when asked, a task within a BA can specify its outcome to the parent directly without waiting for a request. When tasks fail, the notification can be used by the business activity exception handler to modify the goals and drive processing forward without waiting meekly until the end of the transaction to admit to having failed - a well-designed BA should be proactive if it is to be performant.

Underpinning all of this are three fundamental assumptions:

  • All state transitions are reliably recorded, including application state and coordination metadata (the record of sent and received messages).
  • All request messages are acknowledged, so problems are detected as early as possible, when rectifying them is simpler and less expensive; this also eliminates unnecessary work.
  • As with atomic transactions, a response is defined as a separate operation and not as the output of the request. Message input-output implementations will typically have timeouts that are too short for some business activity responses. If no response is received within a timeout, the request is sent again, and this is repeated until a response arrives. The receiver discards duplicate copies of a request, processing it only once.
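The retransmit-until-acknowledged and duplicate-discard behavior in the last point can be sketched as follows; the names (`Receiver`, `send_reliably`) are illustrative, not from the specification.

```python
import uuid


class Receiver:
    """Acknowledges every request and discards duplicates, so a retransmitted
    request is processed only once."""

    def __init__(self):
        self._seen = set()
        self.processed = []

    def receive(self, msg_id, payload):
        if msg_id not in self._seen:   # first copy: record and process it
            self._seen.add(msg_id)
            self.processed.append(payload)
        return "ack"                   # duplicates are acknowledged too


def send_reliably(deliver, payload, max_attempts=10):
    """Resend the same message (same id) until an acknowledgement arrives."""
    msg_id = str(uuid.uuid4())
    for _ in range(max_attempts):
        if deliver(msg_id, payload) == "ack":
            return True
    return False
```

Because the message id stays the same across retransmissions, a lost acknowledgement causes a resend but never a second processing of the request.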

The business activity model has two protocols: BusinessAgreement and BusinessAgreementWithComplete. However, unlike the AT protocol, which is driven from the coordinator down to participants, these protocols are driven from the participants upwards.

Under the BusinessAgreement protocol, a child activity is initially created in the Active state; if it finishes the work it was created to do and no more participation is required within the scope of the BA (such as when the activity operates on immutable data), the child can unilaterally send an exited message to the parent. However, if the child task finishes and wishes to continue in the BA, then it must be able to compensate for the work it has performed. In this case it sends a completed message to the parent and waits to receive the final outcome of the BA from the parent. This outcome will be either a close message - the BA has completed successfully - or a compensate message - the parent activity requires that the child task reverse its work.

The BusinessAgreementWithComplete protocol is identical to the BusinessAgreement protocol with the exception that the child cannot autonomously decide to end its participation in the business activity, even if it can be compensated. Rather, the child task relies upon the parent to inform it when the child has received all requests for it to perform work. The parent does this by sending the complete message to the child, which then acts as it does in the BusinessAgreement protocol.
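The two protocols can be summarized as a small participant state machine. This is an illustrative sketch using the state and message names from the text (active, exited, completed, close, compensate, plus the extra complete signal); the class itself is an assumption, not part of the specification.

```python
class BAParticipant:
    def __init__(self, with_complete=False):
        self.with_complete = with_complete  # BusinessAgreementWithComplete?
        self.state = "active"

    def exited(self):
        # Unilateral exit: only allowed under plain BusinessAgreement.
        if self.with_complete:
            raise RuntimeError("must wait for the parent's complete message")
        assert self.state == "active"
        self.state = "exited"

    def complete(self):
        # Parent signals that all work requests for this child have been sent.
        assert self.with_complete and self.state == "active"
        self.state = "completing"

    def completed(self):
        # Child reports: work finished, and it can compensate if asked to.
        assert self.state in ("active", "completing")
        self.state = "completed"

    def outcome(self, message):
        # Final outcome from the parent: close (success) or compensate (undo).
        assert self.state == "completed"
        self.state = {"close": "closed", "compensate": "compensated"}[message]
```

Note that in both protocols the final close/compensate decision always comes down from the parent; the difference is only in who decides when the child's participation ends.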

The crux of the BA model, compared to the AT model, is that it allows the participation of services that cannot or will not lock resources for extended periods.

While the full ACID semantics are not maintained by a BA, consistency can be maintained through compensation, although writing correct compensating actions (and thus overall system consistency) is delegated to the developers of the services controlled by the BA. Such compensations may use backward error recovery, but typically employ forward recovery.

Coordinating Business Activities on the Web
The real beauty of the Web services model is that it is highly modular. Capitalizing on that modularity, consider the case shown in Figure 1, where a shopping portal uses several suppliers to deliver a richer shopping experience to the customer.


In this case, a BA is used since there is no close trust relationship between any of the suppliers (indeed, they are probably competitors), and purchases are committed immediately, as per the BA model. In the non-failure case, things are straightforward: each child BA reports back to the coordinator that it has finished via a completed message.

The failure case, however, is a little more interesting (see Figure 2). Let's assume that Supplier 2 could not source the tie that the customer wanted and its corresponding BA fails. It reports the failure back to the coordinator through a faulted message. On receiving this message, the logic driving the BA, which we assume to be a workflow script residing in the portal service, is invoked to deal with the fault. In this case, the logic uses forward error recovery to try to obtain the item from an alternative supplier.


If the forward error recovery works, and the alternate supplier's Web service confirms that it is able to source the desired item, then the BA proceeds normally, executing subsequent child BAs until completion. If, however, the BA cannot make forward progress, it has no option but to go backwards and compensate previously completed activities. Note that failed activities are not compensated because their state is, by definition, unknown.
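The portal's recovery logic might look like the following sketch, under assumed names (`Supplier`, `place_order`): forward recovery first tries alternative suppliers for a failed item, and only when no supplier can source it does the portal go backwards and compensate the purchases that already completed.

```python
class Supplier:
    def __init__(self, stock):
        self.stock = set(stock)
        self.sold = []

    def purchase(self, item):          # child BA: commits immediately
        if item in self.stock:
            self.sold.append(item)
            return True                # would report 'completed'
        return False                   # would report 'faulted'

    def compensate(self, item):        # provider-defined undo
        self.sold.remove(item)


def place_order(items, suppliers):
    completed = []                     # (supplier, item) pairs, oldest first
    for item in items:
        for supplier in suppliers:
            if supplier.purchase(item):
                completed.append((supplier, item))
                break                  # forward recovery: next supplier tried
        else:
            # No supplier could source the item: no forward progress, so
            # compensate the previously completed purchases, most recent
            # first. (The failed activity itself is not compensated.)
            for supplier, bought in reversed(completed):
                supplier.compensate(bought)
            return False
    return True
```

The `for ... else` arms mirror the two recovery strategies: the inner loop is forward recovery across suppliers, and the `else` branch is backward recovery via compensation.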

Once the compensation has taken place successfully (remember that an added complexity is that compensations can themselves fail), the system should be in a state that is semantically equivalent to the state it was in before the purchase operations were carried out. The shopping portal service knows the status of the transaction from the coordinator, and can then report back to the customer application that the order didn't complete.

Business Activities and BPEL4WS
During the execution of a business process, like our shopping portal example, data in the various systems that the process encompasses changes. Normally such data is held in mission-critical enterprise databases and queues, which have ACID transactional properties to ensure data integrity. This can lead to a situation whereby a number of valid commits to databases could have been made during the course of a process, but where the overall process might fail, leaving work partially completed. In such situations the reversal of partial work cannot rely on backward error recovery mechanisms - rollback - supported by the databases, since the updates to the database will have been long since committed. Instead, we must compensate at the application level by performing the logical reverse of each activity that was executed as part of our process, from the most recently executed scope back to the earliest executed scope. This model is known as a saga, and is the default compensation model supported by BPEL4WS.
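The saga model described above reduces to a compact runner; the names here (`run_saga`, the step pairs) are illustrative. Each step's action commits immediately - no rollback is available once the underlying database transaction has committed - so on failure the compensations of the already-committed steps run from the most recently executed scope back to the earliest.

```python
def run_saga(steps):
    """steps: (action, compensation) pairs; returns True if all committed."""
    committed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            # Application-level undo, in LIFO order: latest scope first.
            for comp in reversed(committed):
                comp()
            return False
        committed.append(compensation)
    return True
```

A failed third step would therefore leave behind the trace do1, do2, undo2, undo1 - the logical reverse of the work already committed.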

The BPEL4WS specification suggests WS-Transaction Business Activity as the protocol of choice for managing transactions that support the interactions of process instances running within different enterprise systems. A business activity is used both as the means of grouping distributed activities into a single logical unit of work and as the means of disseminating the outcome of that unit of work - whether all scopes completed successfully or need to be compensated.

If each of the Web services in our shopping portal example were implemented as a BPEL4WS workflow script, the BA protocol messages from the coordinator could be consumed by those workflow scripts and used to instigate the corresponding compensating activities. The execution of compensating activities, triggered by the coordinator sending compensate messages to the participants, logically returns the process as a whole to the state it was in before the process executed.

Relationship to OASIS BTP
The OASIS Business Transactions Protocol (BTP) was developed by a consortium of companies, including Hewlett-Packard, Oracle, and BEA, to tackle a problem similar to WS-Transaction's: business-to-business transactions in loosely coupled domains. BTP was designed with loose coupling of services in mind, and integration with existing enterprise transaction systems was not a high priority. Web services were also not the only deployment environment considered by the BTP developers, so the specification defines only an XML protocol message set, leaving the binding of this message set to specific deployment domains.

BTP defines two transaction models: atoms, which guarantee atomicity of decision among participants; and cohesions, which allow relaxed atomicity such that subsets of participants can see different outcomes in a controlled manner. Both models use a two-phase completion protocol, which deliberately does not require ACID semantics: although it is similar to the 2PC protocol used by WS-Transaction Atomic Transactions, it is used purely to attain consensus, and no semantics can be inferred by higher-level services that use atoms. An implementer of a BTP participant is free to use compensation techniques in the second-phase operations to guarantee atomicity if that model best suits the business.
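The difference between the two models can be sketched as follows, under assumed names (`BTPParticipant`, `complete_cohesion`): where an atom confirms all participants or cancels all, a cohesion lets the client confirm a chosen subset and cancel the rest.

```python
class BTPParticipant:
    def __init__(self, can_prepare=True):
        self.can_prepare = can_prepare
        self.outcome = None

    def prepare(self):                 # phase one: vote
        return self.can_prepare

    def confirm(self):                 # phase two (the implementer may back
        self.outcome = "confirmed"     # this with compensation techniques)

    def cancel(self):
        self.outcome = "cancelled"


def complete_cohesion(participants, chosen):
    """Client-driven completion: only 'chosen' participants are prepared;
    any that fail to prepare, and all unchosen participants, are cancelled."""
    prepared = {p for p in chosen if p.prepare()}
    for p in participants:
        (p.confirm if p in prepared else p.cancel)()
```

An atom is then just the special case where `chosen` is the full participant set and any failed vote cancels everyone.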

Both atoms and cohesions also use the open-top coordination protocol, whereby both phases of the two-phase protocol must be explicitly executed by users. Because no time limit is implied between the two phases of the completion protocol, this explicit separation of the phases is intended to allow businesses to better model their business processes.

Although, at least in theory, WS-Transaction and BTP are intended to address the same problem domain, there are significant differences between them. BTP allows business-level negotiation to occur at many points in the protocol through its Qualifier mechanism; WS-Transaction has no such capability.

Summary
Over the course of these articles, we've seen both the atomic AT protocol and the non-ACID BA protocol, designed to support long-running transactions. While both the AT and BA models will be available to Web services developers directly through toolkits, it is the BA model that is supported by the BPEL4WS standard to provide distributed transaction support for business processes.

More Stories By Jim Webber
Dr. Jim Webber is a senior researcher from the University of Newcastle upon Tyne, currently working in the convergence of Web Services and Grid technologies at the University of Sydney, Australia. Jim was previously Web Services architect with Arjuna Technologies, where he worked on Web Services transactioning technology, including being one of the original authors of the WS-CAF specification. Prior to Arjuna, Jim was the lead developer with Hewlett-Packard on the industry's first Web Services Transaction solution. Co-author of "Developing Enterprise Web Services - An Architect's Guide," Jim is an active speaker and author in the Web Services space. Jim's home on the web is http://jim.webber.name


