Web Services Infrastructure, part II

In part 1 of this article (WSJ, Vol. 2, issue 10), you saw how BTP toolkits support the creation of applications that drive transactional Web services with consummate ease. This article covers the other side of the story: how the same technology impacts Web services developers.

In this article, I'll address this aspect and show how BTP can be used to create transaction-aware Web services and how those services can be consumed by transactional applications.

Transactionalizing Web Services
To speak of transactionalizing a Web service with BTP is something of a misnomer, since BTP doesn't deal with transactional Web services per se. Instead, it partitions Web services into two distinct types to enable a clear separation between business services and their associated participants.

Business services are similar to client applications in that there is no inherent transactionality associated directly with them - they simply exist to host and expose business logic. The participants associated with business services, on the other hand, are essentially business-logic agnostic and deal only with the transactional aspects of service invocations. This separation is quite useful: existing, nontransactional Web services can be given transactional support without rebuilding them or performing any other invasive procedures on them. It also means that the transactional and business aspects of a system can be evaluated and implemented independently. With these additional pieces of the puzzle, you can now flesh out the global BTP architecture shown in Figure 1.

 

Figure 1 typifies a logical BTP rollout, showing how the endpoint of each BTP actor fits into the global model. You can see that the services that expose business logic to the Web are supported by other services, called participants, that deal with the transaction management of those business-oriented services - and, importantly, that there is a clean separation between the two kinds of service. There's still some overlap, even at this level, since application messages carry BTP contexts whenever service invocations are made within the scope of a transaction. It is here that the business logic and transaction domains begin to collide, albeit gently.

Supporting Infrastructure
For business Web services, most of the interesting work from a transactional perspective happens under the covers. Like the client application, a Web service benefits from advances in SOAP server technology that support header processing before the application payload of a SOAP message is delivered. For BTP-aware Web services, you can utilize SOAP header processing to insert and extract BTP contexts on behalf of Web services, mirroring the way header processing is performed on the client application side. Since header processing is noninvasive to the service-level business logic, you can see how the impact of making a service transactional with BTP is minimal. Figures 2 and 3 show exactly how the service is supported.

Figure 2 demonstrates what happens when a Web service receives a request. If the request doesn't carry a BTP context, it's simply passed through the incoming context handler to other handlers and will eventually deliver its payload to the service. If, however, the request carries a BTP context, then the context is stripped out of the header of the incoming message and associated with the thread of execution within which the service's work will be executed. To achieve this, the handler resumes the transaction, using elements from the transaction manager part of the API we saw in the first article, which effectively associates (or reassociates, if this isn't the first time the same context has been received) the work performed by the service with a BTP transaction.

 

When returning from a service invocation, the reverse process occurs, as shown in Figure 3. The message from the service passes through the outgoing context handler, which checks to see if there is a transaction associated with the work that took place to produce the message. If the work was performed within the scope of a transaction, then the BTP context is inserted into the header of the message and the transaction is suspended, which effectively pauses the association between the transaction and the service's work until further messages carrying a matching context are received.
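
To make this concrete, here is a minimal sketch of what such a pair of handlers might look like when built on a JAX-RPC style handler chain. The JAX-RPC and SAAJ plumbing is standard; the TransactionManager calls (resume, current, addContext, suspend) and the BTP namespace constant are stand-ins for whatever your BTP toolkit actually provides.

import java.util.Iterator;
import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;

// Hypothetical service-side handler: resumes a BTP transaction when a context
// header arrives and reinserts/suspends it when the response leaves the service.
public class BtpContextHandler extends GenericHandler {

    // Illustrative only - use the context element name your toolkit expects.
    private static final QName BTP_CONTEXT =
        new QName("urn:oasis:names:tc:BTP:1.0:core", "Context");

    public QName[] getHeaders() {
        return new QName[] { BTP_CONTEXT };
    }

    public boolean handleRequest(MessageContext ctx) {
        try {
            SOAPHeader header = ((SOAPMessageContext) ctx)
                .getMessage().getSOAPPart().getEnvelope().getHeader();
            if (header == null) {
                return true; // no headers at all, so nothing transactional to do
            }
            Iterator elements = header.examineAllHeaderElements();
            while (elements.hasNext()) {
                SOAPHeaderElement element = (SOAPHeaderElement) elements.next();
                if (BTP_CONTEXT.getLocalPart().equals(
                        element.getElementName().getLocalName())) {
                    // Associate the incoming context with this thread of execution
                    // (resume(...) is a hypothetical toolkit call).
                    TransactionManager.resume(element);
                    element.detachNode(); // strip the context before the service sees it
                }
            }
            return true;
        } catch (Exception e) {
            throw new RuntimeException("BTP context processing failed", e);
        }
    }

    public boolean handleResponse(MessageContext ctx) {
        try {
            if (TransactionManager.current() != null) { // hypothetical
                SOAPHeader header = ((SOAPMessageContext) ctx)
                    .getMessage().getSOAPPart().getEnvelope().getHeader();
                // Reinsert the context into the outgoing header and pause the
                // association until another message with a matching context
                // arrives (addContext and suspend are hypothetical toolkit calls).
                TransactionManager.addContext(header);
                TransactionManager.suspend();
            }
            return true;
        } catch (Exception e) {
            throw new RuntimeException("BTP context processing failed", e);
        }
    }
}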

 

While none of this is rocket science, it does serve to reiterate that BTP-enabling Web services is a noninvasive procedure, or at least it can be if you choose to adopt a noninvasive strategy. However, at some point every BTP deployment has to interact with existing infrastructure, and it's here that you enter a more intricate phase of development and system integration.

Participants
Participants are the last piece of the puzzle in the BTP architecture (though not quite the last piece of implementation!). You've seen how participants fit into the global BTP architecture, but I haven't yet covered the anatomy of a participant. Participants are the entities that act on behalf of business Web services in matters regarding transactionality, and they're equipped to deal with message exchanges with the transaction manager.

Participants are simply Web services that manage details of distributed transactions on behalf of their associated business services, handling the BTP messages involved in the termination phase of the transaction. While this might sound like hard work, a toolkit will typically simplify matters by offering an interface that your participant will implement in order to become part of the participant stack. The participant stack is shown in Figure 4; the interface that constitutes the API for the stack from the developer's point of view is shown in Listing 1.

Figure 4 shows the conceptual view of a participant (minus the back-end plumbing, which you'll see later). It's a straightforward document exchange-based Web service in which the messaging layer understands BTP messages. It invokes methods on the user-defined participant (which has a known interface) in accordance with the type and content of the messages it receives. Any returns from the participant are shoehorned into BTP messages and sent back through the SOAP infrastructure.

 

The participant API effectively shields participant developers from having to understand the BTP messages that participants consume, but this shielding isn't entirely "bulletproof," since some understanding of how and when the methods in a participant are called is still required. Listing 1 shows the more important methods that an implementer has to write in order to create a participant. As you might expect, these methods correspond to the messages exchanged between transaction manager and participant (which is itself identified by a unique ID or Uid in the API). As such, if you have an understanding of BTP (which you must have in order to write a decent participant), the methods are self-explanatory. For everyone else, here's a brief overview (a sketch of what such an interface might look like follows the list):

  • prepare(...): The prepare method delimits the start of BTP's two-phase confirm protocol. During this method a participant typically checks to see whether it can proceed with a transaction on behalf of the service it's representing, and returns a vote that causes an appropriate response to be propagated down the stack and ultimately to the transaction manager. Note that if the participant votes to cancel, it may not receive further messages from the transaction manager.
  • confirm(...): Confirming a participant causes the second phase of the two-phase confirm to occur. At confirm time the participant typically tries to make any changes that the business service has made during its execution durable, for example, by issuing a commit instruction to any underlying data sources.
  • cancel(...): Cancel is the opposite of confirm, whereby a participant will typically try to undo, forget, or otherwise reverse any changes that have been made to system state by the service.
  • contradiction(...): If a service back end finds itself in a situation where it has done the opposite of what it was asked to do by the transaction manager (e.g., it has canceled when it should have confirmed, or vice versa), and cannot mask the fault, it will send an exception to the transaction manager. The transaction manager will evaluate the situation from a global perspective and may need to inform other participants of the contradiction that has occurred. If this is the case, a participant will learn about contradictions when its contradiction method is invoked. At that point a participant typically tries to instigate compensating action; however, to fully recover from a contradictory situation, help from outside the system (even human help!) may be required.
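
Listing 1 itself isn't reproduced here, but a toolkit's participant interface will typically look something like the following sketch. The names (Participant, Vote, Uid, Qualifier) are illustrative stand-ins for whatever your toolkit actually declares.

import java.io.Serializable;

// Illustrative participant contract - the real interface in Listing 1 will
// differ in detail from toolkit to toolkit.
public interface Participant extends Serializable {

    // First phase of the two-phase confirm: check whether the work done on
    // behalf of the service can go ahead, and vote accordingly. Qualifiers
    // carry the transaction manager's "small print"; the returned vote can
    // carry the participant's own.
    Vote prepare(Uid transactionId, Qualifier[] qualifiers);

    // Second phase: make the service's changes durable (e.g., commit the
    // underlying data source).
    void confirm(Uid transactionId);

    // Undo, forget, or otherwise reverse any changes the service has made.
    void cancel(Uid transactionId);

    // Called when the transaction manager learns that the outcome somewhere
    // in the transaction contradicts what was asked for and the fault cannot
    // be masked.
    void contradiction(Uid transactionId);
}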

One final intricacy for participants is the sending and receiving of qualifiers. Qualifiers are a neat feature of BTP, derived from the fact that the BTP transaction manager is not as godlike as its equivalents in other transaction management models, but instead accepts the possibility that other parts of the system might justifiably want to help in the decision-making process. Qualifiers support this bilateral exchange of "small print." In essence, each BTP message allows the sender to tag qualifiers that describe such things as, "I will be prepared for the next 10 minutes, and after that I will unilaterally cancel" and "You must be available for at least the next 24 hours to participate in this transaction." In the API, qualifiers are delivered through the Qualifier[] qualifiers parameter (where the transaction manager gets the chance to state its additional terms and conditions) and are returned from the prepare(...) method as part of the vote (where the participant then gets to respond with its own terms and conditions). Qualifiers are a real help when it comes to Web services transactions because in a loosely coupled environment, knowing from the client side that the party you're communicating with will only be around for so long, or being able to specify from the participant side that your party won't hang around while others procrastinate, is invaluable.
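
By way of illustration, a participant for a booking service might state its own terms when it votes, along these lines (VoteConfirm, VoteCancel, and TimeLimitQualifier are again illustrative names, not the specification's own):

// Hypothetical participant that stays prepared for a bounded period only.
public class BookingParticipant implements Participant {

    public Vote prepare(Uid transactionId, Qualifier[] qualifiers) {
        // The qualifiers parameter carries the transaction manager's terms;
        // the vote we return carries ours.
        if (!reservationStillAvailable()) {
            return new VoteCancel();
        }
        Qualifier[] smallPrint = new Qualifier[] {
            new TimeLimitQualifier(10 * 60) // stay prepared for ten minutes (seconds)
        };
        return new VoteConfirm(smallPrint);
    }

    public void confirm(Uid transactionId) {
        // Make the booking durable.
    }

    public void cancel(Uid transactionId) {
        // Release the reservation.
    }

    public void contradiction(Uid transactionId) {
        // Flag the booking for compensation or human attention.
    }

    private boolean reservationStillAvailable() {
        return true; // placeholder for a real availability check
    }
}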

Integrating Participants and Services
If context/work association is where the BTP and Web services worlds collide gently, then the integration of participants and services is the real crunch issue. Unlike service-side context handling, there are, sadly, no stock answers to the problem of participant-service integration, because the strategy adopted will depend on the existing transactional back-end infrastructure that the service itself relies upon. However, you can mitigate this by providing useful tools to the back-end developer in the form of an API that takes care of at least the common tasks. In the same spirit as the client API, two further verbs deal with enrolling participants into and removing them from a transaction. Supported by the TransactionManager API, which may be used from the service's back end, they are (a brief usage sketch follows the list):

  • Enroll: Enlists a specific participant with the current transaction.
  • Resign: Removes a specific participant from the current transaction.
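
Assuming a toolkit API along those lines, the back end of a service might use the two verbs roughly as follows (TransactionManager.current(), enroll, and resign are illustrative names, not a definitive API):

// Illustrative back-end helper showing the two verbs in use.
public class BackEndEnrollment {

    public Participant beginTransactionalWork() {
        Participant participant = new BookingParticipant();
        // Enroll: register the participant with the current BTP transaction.
        TransactionManager.current().enroll(participant);
        return participant;
    }

    public void withdraw(Participant participant) {
        // Resign: remove the participant if it turns out to have nothing to manage.
        TransactionManager.current().resign(participant);
    }
}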

Using this service-side API and in keeping with the theme of noninvasiveness that's so much in the spirit of BTP, it would be ideal to deploy systems that don't disturb existing (working!) Web services. Fortunately, there are ways and means of doing this.

Figure 5 depicts the back end of a Web service, and is simply the continuation of the diagrams shown in Figures 2 and 3. You can assume that there will be some kind of transactional infrastructure in the back end of most enterprise-class Web services. For the sake of simplicity, here you can assume it's something like a database.

     

The good news is that even without BTP transactions thrown into the mix, the exposed Web service will still need to talk to its own back-end systems. It's therefore possible to hijack the interactions between the service and the back end to suit your own purposes. A useful strategy in this situation is to wrap the service's database provider within your own provider that supports the same interface, but is also aware of the BTP infrastructure.

In this example, the database provider wrapper has access to BTP context information from the header-processing logic embedded in the service's stack and is aware of the participant service, which performs BTP work on behalf of the business service. Armed with such knowledge, the database wrapper can enroll a participant in the BTP transaction through the enroll operation supported by the API, which causes a BTP Enroll message exchange to occur with the transaction manager. Provided there are no upsets during the enrollment of the participant, BTP messages can now be exchanged between the transaction manager and the participant, ensuring that the participant knows exactly what's happening in the BTP transaction at all times.
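
A minimal sketch of that wrapping strategy, assuming a JDBC-backed service and the same illustrative toolkit API as above (TransactionManager.current, BtpTransaction, and enroll are stand-ins; only the JDBC calls are standard):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hand this to the service in place of its usual DataSource. When a connection
// is requested inside a BTP transaction, a participant is enrolled to look
// after that connection's work. (The remaining DataSource methods would simply
// delegate to the wrapped provider and are omitted here.)
public class BtpAwareDataSource {

    private final DataSource wrapped;

    public BtpAwareDataSource(DataSource wrapped) {
        this.wrapped = wrapped;
    }

    public Connection getConnection() throws SQLException {
        Connection connection = wrapped.getConnection();
        connection.setAutoCommit(false); // hold changes until confirm or cancel

        BtpTransaction tx = TransactionManager.current(); // hypothetical
        if (tx != null) {
            // Maps BTP confirm/cancel directly onto commit/rollback.
            tx.enroll(new JdbcParticipant(connection));
        }
        return connection;
    }
}

// Simple mapping participant: BTP confirm -> commit, BTP cancel -> rollback.
class JdbcParticipant implements Participant {

    private final Connection connection;

    JdbcParticipant(Connection connection) {
        this.connection = connection;
    }

    public Vote prepare(Uid transactionId, Qualifier[] qualifiers) {
        return new VoteConfirm(new Qualifier[0]); // nothing further to check here
    }

    public void confirm(Uid transactionId) {
        try { connection.commit(); } catch (SQLException e) { /* report to the TM */ }
    }

    public void cancel(Uid transactionId) {
        try { connection.rollback(); } catch (SQLException e) { /* report to the TM */ }
    }

    public void contradiction(Uid transactionId) {
        // Compensate or escalate to a human.
    }
}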

This knowledge allows the participant to arbitrate between the transactional semantics (if any) of the service's database access and the activity in the BTP transaction. Such arbitration may not be trivial and will certainly require some domain expertise, since the participant implementation will have to reconcile BTP semantics with those of the service's own back-end transaction processing model. For example, a participant implementation might choose to perform a simple mapping of BTP messages to the database, queue, or workflow system equivalents, or the participant might choose to take an optimistic approach and immediately commit all changes to the database and perform a compensating action in the event of a failure. What implementers must remember is that there is no absolute right or wrong, just participant implementations that work well for a given system and those that don't. Time spent analyzing use cases up front will pay dividends in the long run.
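
For example, the optimistic approach mentioned above might look roughly like this, with the service's changes committed as they are made and cancel performing a compensating action instead of a rollback (BookingDao and its methods are hypothetical):

// Optimistic participant: the booking is committed as soon as it is made, so
// confirm has nothing left to do and cancel issues a compensating action.
public class CompensatingBookingParticipant implements Participant {

    private final BookingDao dao;            // hypothetical data-access helper
    private final String bookingReference;

    public CompensatingBookingParticipant(BookingDao dao, String bookingReference) {
        this.dao = dao;
        this.bookingReference = bookingReference;
    }

    public Vote prepare(Uid transactionId, Qualifier[] qualifiers) {
        return new VoteConfirm(new Qualifier[0]);
    }

    public void confirm(Uid transactionId) {
        // Nothing to do: the booking was already made durable when it was created.
    }

    public void cancel(Uid transactionId) {
        // Compensate with a counter-booking rather than rolling back.
        dao.cancelBooking(bookingReference);
    }

    public void contradiction(Uid transactionId) {
        // For instance, flag the booking for manual reconciliation.
        dao.flagForManualAttention(bookingReference);
    }
}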

The Transaction Manager
Given the transaction manager's importance in the architecture, it might seem strange to mention it at such a late stage and in such little detail, especially since the transaction manager is, after all, the component upon which all other components depend. The paradox is that the transaction manager is simply the least interesting part of the architecture from a developer's point of view. It's a SOAP document exchange-based Web service that implements the BTP abstract state machine and suitable recovery mechanisms, such that transactions aren't compromised in the event of failure of the transaction manager. From a development point of view, a BTP transaction manager is simply a black box, deployed somewhere on the network to enable the rest of the infrastructure to work, and only those who have chosen to implement BTP toolkits must worry about its internals.

Bringing It All Together: A Cohesive Example

You've seen the BTP architecture and the SOAP plumbing, and I've touched on the transaction model that BTP supports. Now the many different aspects can be drawn together to form a more powerful example. This example revisits the night out example shown in Figure 1. In part 1 of this article, I showed you some code that interacted with a number of Web services within the context of a BTP atom, thus ensuring a consistent outcome for the services in the transaction. I'll use a similar pattern for this cohesion example, although since cohesions are more powerful than atoms, I'll have to work just a little harder. I'll use approximately the same use case, spicing things up a little by allowing either the theatre or the restaurant service to fail - as long as we get a night out, it's not important what we actually do! Listing 2 shows how to program with cohesions.

The code in Listing 2 follows a pattern similar to the atom example in part 1. You start the transaction and interact with your services via proxies in the normal way. The important difference in the cohesion scenario as compared to the atom example is that the application becomes concerned with the participants that support transaction management on behalf of the services, whereas with atoms the details of any service's participants remain encapsulated by the transaction manager.

There are various ways in which you can obtain the names of the participants that the services enroll into the transaction, but unfortunately the BTP specification doesn't provide any definitive means of obtaining that information (though it does allow participants to be given "friendly names" through the use of a qualifier to help out in such situations). In this example, the services themselves report the names of their participants through their participantID() methods, though any form of a priori knowledge, via human or automated means (such as UDDI-based discovery), could feasibly be substituted. With time and experience, patterns for this kind of work will no doubt emerge and be embraced by the BTP community.

Once you have the names under which the participants are enrolled in the cohesion, you can use them to ascertain what decisions the services' participants make when the transaction is terminated. In this example, you iterate over the decisions returned by the prepare_inferiors(...) call to see whether the taxi and at least one other service are indeed prepared to satisfy the request. If the conditions are met, you confirm the transaction with the confirm() method, which confirms all those services' participants that agreed they could meet your requirements (those that voted to confirm) and cancels any services that couldn't (those that voted to cancel). Conversely, if your overall requirements can't be met, you can immediately cancel the transaction and the participants will all be instructed to undo the work of their associated Web services.
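
Listing 2 isn't reproduced here, but the flow just described might be sketched roughly as follows. The service proxies, participantID() methods, Decision type, and the cohesion API (beginCohesion, prepare_inferiors, confirm, cancel) are assumptions based on the description above rather than a definitive toolkit API.

// Hypothetical client-side sketch of the night-out cohesion.
public class NightOutClient {

    public void bookNightOut(TaxiService taxi, RestaurantService restaurant,
                             TheatreService theatre) throws Exception {

        CohesionComposer cohesion = TransactionFactory.beginCohesion();

        // Business calls as usual; the client-side handlers attach the BTP context.
        taxi.bookTaxi("19:00");
        restaurant.bookTable(4, "20:00");
        theatre.bookSeats(4, "Row F");

        // Learn the names under which each service enrolled its participant.
        String taxiP = taxi.participantID();
        String restaurantP = restaurant.participantID();
        String theatreP = theatre.participantID();

        // First phase: ask the enrolled participants to prepare and collect decisions.
        Decision[] decisions = cohesion.prepare_inferiors(
            new String[] { taxiP, restaurantP, theatreP });

        boolean taxiPrepared = false;
        int othersPrepared = 0;
        for (int i = 0; i < decisions.length; i++) {
            if (!decisions[i].isPrepared()) {
                continue;
            }
            if (taxiP.equals(decisions[i].participantName())) {
                taxiPrepared = true;
            } else {
                othersPrepared++;
            }
        }

        // We need the taxi and at least one of the other two bookings.
        if (taxiPrepared && othersPrepared >= 1) {
            cohesion.confirm(); // confirms the prepared participants, cancels the rest
        } else {
            cohesion.cancel();  // no acceptable night out: undo everything
        }
    }
}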

The power of cohesions therefore arises from the fact that you're at liberty to make choices about who will participate in your transaction right up until the point that you try to confirm it. In fact, you could have used several taxi services in this example to ensure that you got at least one travel option and simply cancelled those that you didn't want before you came to confirm the cohesion.

Similarly, as implementers of the client application, you're at liberty to structure your transactions as you see fit to suit the problem domain. For instance, if you knew that you absolutely had to take a taxi to meet friends at the restaurant, but were not sure whether or not you wanted to go to a show afterwards, you could wrap the taxi and restaurant booking operations within an atom (or indeed wrap several independent taxi and restaurant bookings into several atoms) and enroll that atom in a cohesion along with the participant for the theatre Web service. In this case you're guaranteed to get the taxi and restaurant bookings together (or not at all), while you have some leeway in terms of whether or not you decide to go to the theatre, qualifiers allowing, of course.

Conclusion
Though BTP itself is a sophisticated protocol, from the perspective of an implementer much of its detail is handled by supporting toolkits. As illustrated in the previous article, creating applications that drive BTP transactions is straightforward because of the traditional-looking and intuitive API. In this article, you saw that making Web services transactional is a little trickier, though the toolkits will help to simplify things to a great extent by providing much of the Web service-side infrastructure. The only tough challenge for implementers is the construction of participants, which does require a more thorough understanding of transactional architectures to get right.

This raises the final question: Is BTP a viable technology to roll out with your Web services strategy? The short answer is yes. Since the APIs exposed by BTP implementations are similar to traditional transaction APIs, the learning curve for developers is reasonably gentle. Furthermore, because BTP is among the more mature Web services standards, it has a relatively broad coalition of vendor support. This means that there should be plenty of choices when it comes to picking your toolkit, and a similarly broad range of support from those vendors. All that's left for you to decide is whether the business you conduct over your Web services infrastructure is valuable enough to require transactional support, and when (not if) you will begin your own BTP rollout.

More Stories By Jim Webber

Dr. Jim Webber is a senior researcher from the University of Newcastle upon Tyne, currently working in the convergence of Web Services and Grid technologies at the University of Sydney, Australia. Jim was previously Web Services architect with Arjuna Technologies where he worked on Web Services transactioning technology, including being one of the original authors of the WS-CAF specification. Prior to Arjuna, Jim was the lead developer with Hewlett-Packard on the industry's first Web Services Transaction solution. Co-author of "Developing Enterprise Web Services - An Architect's Guide," Jim is an active speaker and author in the Web Services space. Jim's home on the web is http://jim.webber.name
