Introducing WS-Transaction Part 1

In July 2002, BEA, IBM, and Microsoft released a trio of specifications designed to support business transactions over Web services. These specifications, BPEL4WS, WS-Transaction, and WS-Coordination, together form the bedrock for reliably choreographing Web services-based applications, providing business process management, transactional integrity, and generic coordination facilities respectively.

In our previous article (WSJ, Volume 3, issue 5), we introduced WS-Coordination, a generic coordination framework for Web services, and showed how the WS-Coordination protocol can be augmented to support coordination in arbitrary application domains. This article introduces the first publicly available WS-Coordination-based protocol - Web Services Transaction - and shows how WS-Transaction provides atomic transactional coordination for Web services.

Transactions
Distributed systems pose reliability problems that are not frequently encountered in centralized systems. A distributed system consisting of a number of computers connected by a network can be subject to independent failure of any of its components, such as the computers themselves, network links, operating systems, or individual applications. Decentralization allows parts of the system to fail while other parts remain functioning, which leads to the possibility of abnormal behavior of executing applications.

Consider the case of a distributed system where the individual computers provide a selection of useful services that can be utilized by an application. It is natural that an application that uses a collection of these services requires that they behave consistently, even in the presence of failures. A very simple consistency requirement is that of failure atomicity: the application either terminates normally, producing the intended results, or is aborted, producing no results at all. This failure atomicity property is supported by atomic transactions, which have the following familiar ACID properties:

  • Atomicity: The transaction completes successfully (commits) or if it fails (aborts) all of its effects are undone (rolled back);
  • Consistency: Transactions produce consistent results and preserve application specific invariants;
  • Isolation: Intermediate states produced while a transaction is executing are not visible to other transactions. Furthermore, transactions appear to execute serially, even if they are actually executed concurrently. This is typically achieved by locking resources for the duration of the transaction so that they cannot be acquired in a conflicting manner by another transaction;
  • Durability: The effects of a committed transaction are never lost (except by a catastrophic failure).

    A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced onto stable storage such as disk). When a transaction is aborted, all changes made during the lifetime of the transaction are undone. In addition, it is possible to nest atomic transactions, where the effects of a nested action are provisional upon the commit/abort of the outermost (top-level) atomic transaction.

    Why ACID Transactions May Be Too Strong
    Traditional transaction processing systems are sufficient to meet requirements if an application function can be represented as a single top-level transaction. However, this is frequently not the case. Top-level transactions are most suitably viewed as short-lived entities, performing stable state changes to the system; they are less well suited for structuring long-lived application functions that run for minutes, hours, days, or longer. Long-lived, top-level transactions may reduce the concurrency in the system to an unacceptable level by holding on to resources (usually by locking) for a long time. Furthermore, if such a transaction aborts, much valuable work already performed will be undone.

    Given that the industry is moving toward a loosely coupled, coarse-grained, B2B interaction model supported by Web services, it has become clear that the semantics of traditional ACID transactions are unsuitable for Web-scale deployment. Web services-based transactions differ from traditional transactions in that they execute over long periods, they require commitments to the transaction to be negotiated at runtime, and isolation levels have to be relaxed.

    WS-Transaction
    In the past, making traditional transaction systems talk to one another was a rarely achieved holy grail. With the advent of Web services, we have an opportunity to leverage an unparalleled interoperability technology to splice together existing transaction-processing systems that already form the backbone of enterprise-level applications.

    WS-Coordination Foundations
    An important aspect of WS-Transaction that differentiates it from traditional transaction protocols is that a synchronous request/response model is not assumed. This model derives from the fact that WS-Transaction (see Figure 1) is layered upon the WS-Coordination protocol whose own communication patterns are asynchronous by default.

    [Figure 1: WS-Transaction layered on the WS-Coordination protocol]

    In our last article, we looked at how WS-Coordination provides a generic framework for specific coordination protocols, like WS-Transaction, to be plugged in. Remember that WS-Coordination provides only context management - it allows contexts to be created and activities to be registered with those contexts. WS-Transaction leverages the context management framework provided by WS-Coordination in two ways. First, it extends the WS-Coordination context to create a transaction context. Second, it augments the activation and registration services with a number of additional services (Completion, CompletionWithAck, PhaseZero, 2PC, OutcomeNotification, BusinessAgreement, and BusinessAgreementWithComplete) and two protocol message sets (one for each of the transaction models supported in WS-Transaction) to build a full-fledged transaction coordinator on top of the WS-Coordination protocol infrastructure.

    WS-Transaction Architecture
    In common with other transaction protocols (like OTS and BTP), WS-Transaction supports the notion of the service and participant as distinct roles, making the distinction between a transaction-aware service and the participants that act on behalf of the service during a transaction: transactional services deal with business-level protocols, while the participants handle the underlying WS-Transaction protocols, as shown in Figure 2.

    [Figure 2: Transaction-aware services and the participants that act on their behalf]

    A transaction-aware service encapsulates the business logic or work that is required to be conducted within the scope of a transaction. This work cannot be confirmed by the application unless the transaction also commits and so control is ultimately removed from the application and placed into the transaction's domain.

    The participant is the entity that, under the dictates of the transaction coordinator, controls the outcome of the work performed by the transaction-aware Web service. In Figure 2 each service is shown with one associated participant that manages the transaction protocol messages on behalf of its service, while in Figure 3, we see a close-up view of a single service, and a client application with their associated participants.

    [Figure 3: Close-up of a single service and a client application with their associated participants]

    The transaction-aware Web service and its participant both serve a shared transactional resource, and there is a control relationship between them through some API - which on the Java platform is JAXTX. In the example in Figure 3, we assume that the database is accessed through a transactional JDBC database driver, where SQL statements are sent to the database for processing via that driver, but where those statements will be tentative and only commit if the transaction does. In order to do this, the driver/database will associate a participant with the transaction which will inform the database of the transaction outcome. Since all transactional invocations on the Web service carry a transaction context, the participant working with the database is able to identify the work that the transactional service did within the scope of a specific transaction and either commit or roll back the work.
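
    To make this relationship concrete, the following sketch shows how such a participant might relay the coordinator's 2PC decisions to the database through the standard JTA XAResource interface exposed by the driver. The class, method, and Vote names are purely illustrative - they are not taken from the WS-Transaction specification or from JAXTX - and a real toolkit would generate this plumbing for you.

    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    // Hypothetical sketch: a participant that relays WS-Transaction 2PC
    // decisions to the database via the transactional driver's XAResource.
    public class DatabaseParticipant {

        public enum Vote { PREPARED, READ_ONLY, ABORTED }

        private final XAResource xaResource; // supplied by the transactional JDBC driver
        private final Xid xid;               // identifies the work done in this transaction

        public DatabaseParticipant(XAResource xaResource, Xid xid) {
            this.xaResource = xaResource;
            this.xid = xid;
        }

        // Phase one: ask the database to make the tentative work durable.
        public Vote prepare() {
            try {
                int result = xaResource.prepare(xid);
                return (result == XAResource.XA_RDONLY) ? Vote.READ_ONLY : Vote.PREPARED;
            } catch (XAException e) {
                return Vote.ABORTED; // cannot guarantee durability, so vote to roll back
            }
        }

        // Phase two: the coordinator decided to commit.
        public void commit() throws XAException {
            xaResource.commit(xid, false); // false = not the one-phase optimization
        }

        // Phase two: the coordinator decided to roll back.
        public void rollback() throws XAException {
            xaResource.rollback(xid);
        }
    }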

    At the client end, things are less complex. Through its API, the client application registers a participant with the transaction through which it controls transaction termination.

    WS-Transaction Models
    Given that we've already seen that traditional transaction models are not appropriate for Web services, we must pose the question, "What type of model or protocol is appropriate?" The answer to that question is that no one specific protocol is likely to be sufficient, given the wide range of situations that Web service transactions are likely to be deployed within. Hence the WS-Transaction specification proposes two distinct models, each supporting the semantics of a particular kind of B2B interaction. In the following sections we'll discuss these two models, but for the sake of brevity we ignore possible failure cases.

    Note: as with WS-Coordination, the two WS-Transaction models are extensible, allowing implementations to tailor the protocols as they see fit (e.g., to suit their deployment environments). For clarity, we'll discuss only the "vanilla" protocols and leave proprietary extensions out of the picture.

    Atomic Transactions (AT)
    An atomic transaction, or AT, is similar to traditional ACID transactions and intended to support short-duration interactions where ACID semantics are appropriate.

    Within the scope of an AT, services typically enroll transaction-aware resources, such as databases and message queues, indirectly as participants under the control of the transaction. When the transaction terminates, the outcome decision of the AT is then propagated to each enlisted resource via the participant, and the appropriate commit or rollback actions are taken.

    This protocol is similar to those employed by traditional transaction systems that already form the backbone of an enterprise. It is assumed that all services (and associated participants) provide ACID semantics and that any use of atomic transactions occurs in environments and situations where this is appropriate: in a trusted domain, over short durations.

    To begin an atomic transaction, the client application first locates a WS-Coordination coordinator Web service that supports WS-Transaction. Once found, the client sends a WS-Coordination CreateCoordinationContext message to the activation service specifying http://schemas.xmlsoap.org/ws/2002/08/wstx as its coordination type, and gets back an appropriate WS-Transaction context from the activation service. The response to the CreateCoordinationContext message, the transaction context, has its CoordinationType element set to the WS-Transaction AT namespace, http://schemas.xmlsoap.org/ws/2002/08/wstx, and also contains a reference to the atomic transaction coordinator endpoint (the WS-Coordination registration service) where participants can be enlisted, as shown in Listing 1 (the code for this article can be found online at www.sys-con.com/webservices/sourcec.cfm).

    After obtaining a transaction context from the coordinator, the client application then proceeds to interact with Web services to accomplish its business-level work. With each invocation on a business Web service, the client inserts the transaction context into a SOAP header block, such that each invocation is implicitly scoped by the transaction - the toolkits that support WS-Transaction-aware Web services provide facilities to correlate contexts found in SOAP header blocks with back-end operations.
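
    To make the context propagation concrete, here is a simplified sketch, using the standard SAAJ API, of what a toolkit does when it stamps a coordination context into the header of an outgoing business request. The element names and structure are an approximation of the WS-Coordination context rather than a faithful copy of its schema, and in practice the WS-Transaction-aware toolkit does this for you.

    import javax.xml.namespace.QName;
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPElement;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.soap.SOAPHeaderElement;
    import javax.xml.soap.SOAPMessage;

    // Illustrative only: stamping a (simplified) coordination context into the
    // SOAP header of a business invocation using the standard SAAJ API.
    public class ContextStamper {

        private static final String WSCOOR = "http://schemas.xmlsoap.org/ws/2002/08/wscoor";
        private static final String WSTX = "http://schemas.xmlsoap.org/ws/2002/08/wstx";

        public static SOAPMessage stamp(SOAPMessage businessRequest,
                                        String transactionId,
                                        String registrationServiceUrl) throws Exception {
            SOAPHeader header = businessRequest.getSOAPHeader();

            // The context travels as a header block on every transactional call.
            SOAPHeaderElement context =
                header.addHeaderElement(new QName(WSCOOR, "CoordinationContext", "wscoor"));

            context.addChildElement(new QName(WSCOOR, "Identifier")).addTextNode(transactionId);
            context.addChildElement(new QName(WSCOOR, "CoordinationType")).addTextNode(WSTX);

            // Where participants can later be enlisted with the coordinator.
            SOAPElement registration = context.addChildElement(new QName(WSCOOR, "RegistrationService"));
            registration.addTextNode(registrationServiceUrl);

            return businessRequest;
        }

        public static void main(String[] args) throws Exception {
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            stamp(request, "urn:example:tx:1234", "http://example.org/registration");
            request.writeTo(System.out);
        }
    }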

    Once all the necessary application-level work has been completed, the client can terminate the transaction, with the intent of making any changes to the service state permanent. To do this, the client application first registers its own participant for the Completion or CompletionWithAck protocol. Once registered, the participant can instruct the coordinator either to try to commit or roll back the transaction. When the commit or rollback operation has completed, a status is returned to the participant to indicate the outcome of the transaction. The CompletionWithAck protocol goes one step further and insists that the coordinator must remember the outcome until it has received acknowledgment of the notification from the participant.
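
    A rough Java rendering of the two roles in the completion protocols might look like the following, with one method per one-way protocol message. The real contract is of course expressed as WSDL portTypes, and the acknowledgment operation name shown for CompletionWithAck is our own invention for illustration.

    // Implemented by the coordinator; driven by the client's completion participant.
    interface CompletionCoordinator {
        void commit();   // ask the coordinator to attempt to commit the transaction
        void rollback(); // ask the coordinator to roll the transaction back
    }

    // Implemented by the client's participant; the coordinator reports the outcome here.
    interface CompletionParticipant {
        void committed(); // the transaction committed
        void aborted();   // the transaction rolled back
    }

    // CompletionWithAck additionally requires the participant to acknowledge the
    // outcome so the coordinator knows when it may forget the transaction.
    interface CompletionWithAckCoordinator extends CompletionCoordinator {
        void notified(); // acknowledgment that the outcome notification was received
    }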

    While the completion protocols are straightforward, they hide the fact that in order to resolve to an outcome several other protocols need to be executed.

    The first of these protocols is the optional PhaseZero. The PhaseZero protocol is typically executed where a Web service needs to flush volatile (cached) state, which may have been used to improve application performance, to a database prior to the transaction committing. Once flushed, the data will then be controlled by a two-phase aware participant.

    All PhaseZero participants are told that the transaction is about to complete (via the PhaseZero message) and they can respond with either the PhaseZeroCompleted or Error message; any failures at this stage will cause the transaction to roll back. The corresponding interfaces through which the participant and transaction coordinator exchange PhaseZero messages are shown in Listing 2.
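
    Since Listing 2 lives online, the following is an approximate Java rendering of the same exchange, with one method per one-way PhaseZero message. The interface and method names simply mirror the message names used above; the normative definitions are the WSDL in Listing 2.

    // Coordinator-to-participant: the transaction is about to complete.
    interface PhaseZeroParticipant {
        void phaseZero(); // flush any volatile (cached) state to durable storage now
    }

    // Participant-to-coordinator responses.
    interface PhaseZeroCoordinator {
        void phaseZeroCompleted(); // cached state flushed successfully
        void error();              // flush failed; the transaction will roll back
    }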

    After PhaseZero, the next protocol to execute in WS-Transaction is 2PC. The 2PC (two-phase commit) protocol is at the heart of WS-Transaction atomic transactions and is used to bring about the consensus between participants in a transaction such that the transaction can be terminated safely.

    The 2PC protocol is used to ensure atomicity between participants, and is based on the classic two-phase commit with presumed abort technique. During the first phase, when the coordinator sends the prepare message, a participant must make durable any state changes that occurred during the scope of the transaction, such that these changes can either be rolled back or committed later. That is, any original state must not be lost at this point as the atomic transaction could still roll back. If the participant cannot prepare then it must inform the coordinator (via the aborted message) and the transaction will ultimately roll back. If the participant is responsible for a service that did not do any work during the course of the transaction, or at least did not do any work that modified any state, it can return the read-only message and it will be omitted from the second phase of the commit protocol. Otherwise, the prepared message is sent by the participant.
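
    The phase-one decision rules just described can be summarized in a few lines of illustrative code; the Vote type and the DurableLog callback are ours, not the specification's.

    // Illustrative only: the three possible phase-one outcomes for a participant.
    class TwoPCVoteLogic {

        enum Vote { READ_ONLY, PREPARED, ABORTED }

        interface DurableLog {
            void forceTentativeState() throws Exception; // make changes durable but revocable
        }

        private final boolean modifiedState; // did the service change any state?
        private final DurableLog log;

        TwoPCVoteLogic(boolean modifiedState, DurableLog log) {
            this.modifiedState = modifiedState;
            this.log = log;
        }

        Vote prepare() {
            if (!modifiedState) {
                return Vote.READ_ONLY;     // no work to commit; omitted from phase two
            }
            try {
                log.forceTentativeState(); // original state must not be lost yet
                return Vote.PREPARED;
            } catch (Exception e) {
                return Vote.ABORTED;       // cannot prepare; the transaction will roll back
            }
        }
    }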

    Assuming no failures occurred during the first phase, in the second phase the coordinator sends the commit message to participants, who will make permanent the tentative work done by their associated services.

    If a transaction involves only a single participant, WS-Transaction supports a one-phase commit optimization. Since there is only one participant, its decision implicitly reaches consensus, and the coordinator need not drive the transaction through both phases. In the optimized case, the participant will simply be told to commit, and the transaction coordinator need not record information about the decision since the outcome of the transaction rests solely with that single participant.
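
    Putting the two phases and the one-phase optimization together, a coordinator's termination logic looks roughly like the sketch below. The Participant interface and Vote type are local stand-ins for the protocol messages, and a real coordinator would also log its commit decision durably before starting phase two.

    import java.util.ArrayList;
    import java.util.List;

    // A simplified sketch of coordinator-side termination, including the
    // one-phase optimization. All names are illustrative.
    class AtomicTransactionTerminator {

        interface Participant {
            Vote prepare();
            void commit();
            void rollback();
        }

        enum Vote { PREPARED, READ_ONLY, ABORTED }

        boolean terminate(List<Participant> participants) {
            // One-phase optimization: a single participant trivially agrees with
            // itself, so nothing needs to be prepared or logged.
            if (participants.size() == 1) {
                participants.get(0).commit();
                return true;
            }

            List<Participant> prepared = new ArrayList<>();
            boolean abort = false;

            // Phase one: collect votes (presumed abort - any failure aborts).
            for (Participant p : participants) {
                if (abort) break;
                switch (p.prepare()) {
                    case PREPARED:  prepared.add(p); break;
                    case READ_ONLY: break;           // omitted from phase two
                    case ABORTED:   abort = true; break;
                }
            }

            // Phase two: deliver the decision to everyone who voted prepared.
            for (Participant p : prepared) {
                if (abort) p.rollback(); else p.commit();
            }
            return !abort;
        }
    }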

    To place the 2PC protocol concepts into a Web services context, the interfaces of the transaction coordinator and corresponding 2PC participant are defined by the WSDL shown in Listing 3. The two WSDL portType declarations are complementary; for instance, where the 2PCParticipantPortType exposes the prepare operation to allow a coordinator to put it into the prepared state, the 2PCCoordinatorPortType has the prepared operation to allow participants to inform the coordinator that they have indeed moved to the prepared state. Figure 4 (redrawn from the WS-Transaction specification, http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnglobspec/html/ws-transaction.asp) shows the state transitions of a WS-Transaction atomic transaction and the message exchanges between coordinator and participant; the coordinator-generated messages are shown as solid lines, whereas the participant messages are shown as dashed lines.
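
    For readers without the online listings to hand, the two complementary portTypes can be pictured as the following pair of Java interfaces, one method per protocol message. This is only a sketch of the WSDL in Listing 3, restricted to the messages discussed above.

    // Operations the coordinator invokes on a participant (cf. 2PCParticipantPortType).
    interface TwoPCParticipantPortType {
        void prepare();  // phase one: vote, making tentative state durable first
        void commit();   // phase two: make the tentative work permanent
        void rollback(); // phase two: undo the tentative work
    }

    // Operations a participant invokes on the coordinator (cf. 2PCCoordinatorPortType).
    interface TwoPCCoordinatorPortType {
        void prepared(); // the participant has moved to the prepared state
        void readOnly(); // no state was modified; omit this participant from phase two
        void aborted();  // the participant could not prepare; roll the transaction back
    }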

    [Figure 4: Atomic transaction state transitions and the message exchanges between coordinator and participant]

    Once the 2PC protocol has finished, the Completion or CompletionWithAck protocol that originally began the termination of the transaction can complete, and inform the client application whether the transaction was committed or rolled back. In addition, some services may have registered an interest in the completion of a transaction, and they will be informed via the OutcomeNotification protocol.

    Like PhaseZero, OutcomeNotification is an optional protocol that some services will register for so that they can be informed when the transaction has completed, typically so that they can release resources (e.g., put a database connection back into the pool of connections).

    Any registered OutcomeNotification participants are invoked after the transaction has terminated and are told the state in which the transaction completed (the coordinator sends either the Committed or Aborted message). Since the transaction has terminated, any failures of participants at this stage are ignored - OutcomeNotification is essentially a courtesy and has no bearing on the outcome of the transaction.

    Finally, after having gone through each of the stages in an AT, we can now see the intricate interweaving of individual protocols that goes to make up the AT as a whole in Figure 5.

    [Figure 5: The interweaving of individual protocols that makes up an atomic transaction]

    Coordinating Atomic Transactions on the Web
    Transactions come to the fore when computational work with real-world financial implications must be executed. That being said, what better place to demonstrate the use of WS-Transaction than in online retail, where organizations live and die based on the quality of their customer service?

    Take the situation where a customer needs to purchase a new set of formalwear items, including a suit, tie, and shoes. Obviously it wouldn't be advisable for the customer to go into a formal situation without any of these, so the purchase of all three is a prerequisite for the completion of a business transaction.

    In the first instance, let's consider the situation where a single retailer can offer a choice of all three items (see Figure 6).

    [Figure 6: A single retailer offering all three items]

    In Figure 6 the retailer's Web service acts as a gateway to some back-end services that it also hosts. In this case, since the trust domain is entirely within one organization it's safe to use an Atomic Transaction to scope the purchases that the client application makes into a single logical unit of work.

    A typical use case for the architecture shown in Figure 6 is as follows (a client-side sketch appears after the list):
    1.   The client application begins its interaction with the online store, which creates an AT at the back end.
    2.   The client purchases items, which are then locked so that other transactions cannot see them.
    3.   When the client application decides to buy the items, the AT is committed and its tentative work is made permanent, unless there are faults, in which case the work is rolled back.
    4.   The termination status of the transaction is reported back to the customer as a purchase successful/unsuccessful message.
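
    In code, the client side of this use case amounts to little more than the following. The AtomicTransaction and StoreClient types are hypothetical stand-ins for whatever API a particular WS-Transaction toolkit (for example, a JAXTX implementation) exposes; they are declared here only so that the sketch is self-contained.

    // A client-side sketch of the formalwear purchase use case.
    public class FormalwearPurchase {

        interface AtomicTransaction {
            void commit() throws Exception; // ask the coordinator to commit the AT
            void rollback();                // ask the coordinator to roll the AT back
        }

        interface StoreClient {
            void purchase(String item) throws Exception; // carries the tx context in its SOAP header
        }

        static String buyOutfit(AtomicTransaction tx, StoreClient store) {
            try {
                store.purchase("suit");  // step 2: items are reserved (locked) by the
                store.purchase("tie");   //         store's back-end services
                store.purchase("shoes");
                tx.commit();             // step 3: make the tentative work permanent
                return "purchase successful";   // step 4: report the outcome
            } catch (Exception e) {
                tx.rollback();           // any fault undoes all three purchases
                return "purchase unsuccessful";
            }
        }
    }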

    Aside from the fact that we are using Web services to host application logic, this is a textbook transactions example, which goes to strengthen the view that ATs are meant to be used within the kinds of close trust domains that traditional transaction processing infrastructure operates within. The Web services aspects of the protocol simply mean that proprietary transaction processing systems can interoperate, but this does not change their fundamental trust characteristics - which must be borne in mind by developers lest they expose lockable resources to the Web!

    Summary
    In this article we've seen how WS-Coordination has been used to provide the basis of the WS-Transaction protocol. We have also discussed the first transaction model that WS-Transaction supports: Atomic Transaction. This protocol is suitable for supporting short-lived transactions between trusted Web services where the possibility for malicious locking of resources is low.

    In our next article, we'll introduce the Business Activity protocol and show how it can provide the basis for higher-level business process management and workflow technology.

    More Stories By Mark Little

    Mark Little was Chief Architect, Transactions for Arjuna Technologies Ltd, a UK-based company specialising in the development of reliable middleware that was recently acquired by JBoss, Inc. Before Arjuna, Mark was a Distinguished Engineer/Architect within HP Arjuna Labs in Newcastle upon Tyne, England, where he led the HP-TS and HP-WST teams, developing J2EE and Web services transactions products respectively. He is one of the primary authors of the OMG Activity Service specification and is on the expert group for the same work in J2EE (JSR 95). He is also the specification lead for JSR 156: Java API for XML Transactions. He's on the OTS Revision Task Force and the OASIS Business Transactions Protocol specification. Before joining HP he was for over 10 years a member of the Arjuna team within the University of Newcastle upon Tyne (where he continues to have a Visiting Fellowship). His research within the Arjuna team included replication and transactions support, which include the construction of an OTS/JTS compliant transaction processing system. Mark has published extensively in the Web Services Journal, Java Developer's Journal and other journals and magazines. He is also the co-author of several books including “Java and Transactions for Systems Professionals” and “The J2EE 1.4 Bible.”

    More Stories By Jim Webber

    Dr. Jim Webber is a senior researcher from the University of Newcastle upon Tyne, currently working in the convergence of Web Services and Grid technologies at the University of Sydney, Australia. Jim was previously Web Services architect with Arjuna Technologies where he worked on Web Services transactioning technology, including being one of the original authors of the WS-CAF specification. Prior to Arjuna, Jim was the lead developer with Hewlett-Packard on the industry's first Web Services Transaction solution. Co-author of "Developing Enterprise Web Services - An Architect's Guide," Jim is an active speaker and author in the Web Services space. Jim's home on the web is http://jim.webber.name

