Web Services Infrastructure, part II

In part 1 of this article (WSJ, Vol. 2, issue 10), you saw how BTP toolkits make it straightforward to build applications that drive transactional Web services. This article covers the other side of the story: how the same technology affects Web services developers.

In this article, I'll show how BTP can be used to create transaction-aware Web services, and how those services can then be consumed by transactional applications.

Transactionalizing Web Services
Strictly speaking, "transactionalizing" a Web service with BTP is something of a misnomer: BTP doesn't deal with transactional Web services per se. Instead, it partitions Web services into two distinct types, cleanly separating business services from their associated participants.

Business services are similar to client applications in that there is no inherent transactionality associated directly with them - they simply exist to host and expose business logic. The participants associated with business services, on the other hand, are essentially business-logic agnostic and deal only with the transactional aspects of service invocations. This separation is quite useful: existing, nontransactional Web services can be given transactional support without invasive changes and without rebuilding the service. It also means that the transactional and business aspects of a system can be evaluated and implemented independently. With these additional pieces of the puzzle in place, the global BTP architecture can be reshaped as shown in Figure 1.

 

Figure 1 typifies a logical BTP rollout, showing how each BTP actor's endpoint fits into the global model. The services that expose business logic to the Web are supported by other services, called participants, that deal with transaction management on their behalf; importantly, there is a clean separation between the two kinds of service. There is clearly some overlap, even at this level, since application messages carry BTP contexts whenever service invocations are made within the scope of a transaction. It is here that the business logic and transaction domains begin to collide, albeit gently.

Supporting Infrastructure
For business Web services, most of the interesting work from a transactional perspective happens under the covers. Like the client application, Web services benefit from advances in SOAP server technology that support header processing before the application payload of a SOAP message is delivered. For BTP-aware Web services, you can use SOAP header processing to insert and extract BTP contexts on behalf of the service, mirroring the header processing performed on the client side. Since header processing is noninvasive to the service-level business logic, the impact of making a service transactional with BTP is minimal. Figures 2 and 3 show exactly how the service is supported.

Figure 2 demonstrates what happens when a Web service receives a request. If the request doesn't carry a BTP context, it's simply passed through the incoming context handler to other handlers and will eventually deliver its payload to the service. If, however, the request carries a BTP context, then the context is stripped out of the header of the incoming message and associated with the thread of execution within which the service's work will be executed. To achieve this, the handler resumes the transaction, using elements from the transaction manager part of the API we saw in the first article, which effectively associates (or reassociates, if this isn't the first time the same context has been received) the work performed by the service with a BTP transaction.
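
To make the handler's job concrete, here is a minimal sketch of such an incoming handler. Everything in it is an assumption for illustration: the SoapMessage view, the TransactionManager with resume/suspend, and the BtpContext type stand in for whatever your SOAP stack and BTP toolkit actually provide.

```java
// Sketch only: the handler, SoapMessage view and TransactionManager below are
// hypothetical stand-ins for whatever a real BTP toolkit and SOAP stack provide.

/** Assumed representation of a BTP context carried as a SOAP header block. */
final class BtpContext { /* superior address, transaction identifier, ... */ }

/** Assumed minimal view of a SOAP message as seen from a handler chain. */
interface SoapMessage {
    <T> T removeHeader(Class<T> headerType);   // strip and return a header block, or null
    void addHeader(Object headerBlock);        // append a header block
}

/** Assumed toolkit API for thread/transaction association. */
interface TransactionManager {
    void resume(BtpContext ctx);   // associate ctx with the current thread of execution
    BtpContext suspend();          // disassociate and return the current context, or null
}

/** Incoming context handler: runs before the payload reaches the business logic. */
final class IncomingContextHandler {
    private final TransactionManager txm;

    IncomingContextHandler(TransactionManager txm) { this.txm = txm; }

    /** Invoked by the SOAP stack for every inbound request. */
    void handleRequest(SoapMessage request) {
        BtpContext ctx = request.removeHeader(BtpContext.class);
        if (ctx != null) {
            // (Re)associate the work the service is about to do with the BTP transaction.
            txm.resume(ctx);
        }
        // No context: an ordinary, non-transactional invocation - pass the message on.
    }
}
```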

 

When returning from a service invocation, the reverse process occurs, as shown in Figure 3. The message from the service passes through the outgoing context handler, which checks whether a transaction is associated with the work that produced the message. If the work was performed within the scope of a transaction, the BTP context is inserted into the header of the message and the transaction is suspended, which effectively pauses the transaction's involvement with the service until further messages carrying a matching context arrive.
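
The outgoing direction can be sketched in the same way, reusing the hypothetical SoapMessage, BtpContext and TransactionManager stubs from the previous sketch.

```java
/** Outgoing context handler: runs as the service's reply travels back down the stack.
    (Reuses the hypothetical SoapMessage, BtpContext and TransactionManager stubs above.) */
final class OutgoingContextHandler {
    private final TransactionManager txm;

    OutgoingContextHandler(TransactionManager txm) { this.txm = txm; }

    /** Invoked by the SOAP stack for every outbound response. */
    void handleResponse(SoapMessage response) {
        // Detach the transaction from this thread; returns null if nothing was resumed.
        BtpContext ctx = txm.suspend();
        if (ctx != null) {
            // Propagate the context back so the caller can correlate further work.
            response.addHeader(ctx);
        }
    }
}
```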

 

While none of this is rocket science, it does serve to reiterate that BTP-enabling Web services is a noninvasive procedure, or at least it can be if you choose to adopt a noninvasive strategy. However, at some point every BTP deployment has to interact with existing infrastructure, and it's here that you enter a more intricate phase of development and system integration.

Participants
Participants are the last piece of the puzzle in the BTP architecture (though not quite the last piece of implementation!). You've seen how participants fit into the global BTP architecture, but I haven't yet covered the anatomy of a participant. Participants are the entities that act on behalf of business Web services in matters regarding transactionality, and they're equipped to deal with message exchanges with the transaction manager.

Participants are simply Web services that manage details of distributed transactions on behalf of their associated business services, handling the BTP messages involved in the termination phase of the transaction. While this might sound like hard work, a toolkit will typically simplify matters by offering an interface that your participant will implement in order to become part of the participant stack. The participant stack is shown in Figure 4; the interface that constitutes the API for the stack from the developer's point of view is shown in Listing 1.

Figure 4 shows the conceptual view of a participant (minus the back-end plumbing, which you'll see later). It's a straightforward document exchange-based Web service in which the messaging layer understands BTP messages. It invokes methods on the user-defined participant (which has a known interface) in accordance with the type and content of the messages it receives. Any returns from the participant are shoehorned into BTP messages and sent back through the SOAP infrastructure.

 

The participant API effectively shields participant developers from having to understand the BTP messages that participants consume, but this shielding isn't entirely "bulletproof," since some understanding of how and when the methods of a participant are called is still required. Listing 1 shows the more important methods that an implementer has to write in order to create a participant. As you might expect, these methods correspond to the messages exchanged between the transaction manager and the participant (which is itself identified by a unique ID, or Uid, in the API). As such, if you have an understanding of BTP (which you must have in order to write a decent participant), the methods are self-explanatory. For everyone else, here's a brief overview, followed by a sketch of what such an interface might look like:

  • prepare(...): The prepare method delimits the start of BTP's two-phase confirm protocol. During this method a participant typically checks to see whether it can proceed with a transaction on behalf of the service it's representing, and returns a vote that causes an appropriate response to be propagated down the stack and ultimately to the transaction manager. Note that if the participant votes to cancel, it may not receive further messages from the transaction manager.
  • confirm(...): Confirming a participant causes the second phase of the two-phase confirm to occur. At confirm time the participant typically tries to make any changes that the business service has made during its execution durable, for example, by issuing a commit instruction to any underlying data sources.
  • cancel(...): Cancel is the opposite of confirm, whereby a participant will typically try to undo, forget, or otherwise reverse any changes that have been made to system state by the service.
  • contradiction(...): If a service back end finds itself in a situation where it has done the opposite of what it has been asked to do by the transaction manager (e.g., it has canceled when it should have confirmed, or vice versa), and cannot mask the fault, it will send an exception to the transaction manager. The transaction manager will evaluate the situation from a global perspective and may need to inform other participants of the contradiction that has occurred. If this is the case, a participant will learn about contradictions that have occurred when its contradiction method is invoked. At that point a participant typically tries to instigate compensating action; however, to fully recover from a contradictory situation, help from outside the system (even human help!) may be required.
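
Listing 1 itself isn't reproduced here, but a participant interface along the lines just described might look like the following sketch. The Vote, VoteResult, Qualifier and Uid types, and the exact signatures, are illustrative assumptions rather than any particular toolkit's API.

```java
/** Assumed toolkit types, for illustration only. */
final class Uid { /* unique identifier for a transaction or participant */ }
interface Qualifier { /* "small print" attached to BTP messages (see below) */ }

/** A vote plus any qualifiers the participant wants to attach to it. */
final class VoteResult {
    final Participant.Vote vote;
    final Qualifier[] qualifiers;
    VoteResult(Participant.Vote vote, Qualifier[] qualifiers) {
        this.vote = vote;
        this.qualifiers = qualifiers;
    }
}

/** Hypothetical sketch of the developer-facing participant interface (cf. Listing 1). */
interface Participant {

    enum Vote { PREPARED, CANCEL }

    /** Phase one: can the work done on behalf of the service be made durable?
        The qualifiers parameter carries the transaction manager's terms;
        the returned vote may carry the participant's own. */
    VoteResult prepare(Uid transactionId, Qualifier[] qualifiers);

    /** Phase two: make the service's changes durable (e.g. commit the data source). */
    void confirm(Uid transactionId, Qualifier[] qualifiers);

    /** Phase two: undo, forget or otherwise reverse the service's changes. */
    void cancel(Uid transactionId, Qualifier[] qualifiers);

    /** Called when the outcome and the action actually taken disagree and the fault
        cannot be masked; typically triggers compensating action. */
    void contradiction(Uid transactionId, Qualifier[] qualifiers);
}
```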

One final intricacy for participants is the sending and receiving of qualifiers. Qualifiers are a neat feature of BTP, derived from the fact that the BTP transaction manager is not as godlike as its equivalents in other transaction management models, but instead accepts the possibility that other parts of the system might justifiably want to help in the decision-making process. Qualifiers support this bilateral exchange of "small print." In essence, each BTP message allows the sender to attach qualifiers that express such things as, "I will be prepared for the next 10 minutes, and after that I will unilaterally cancel" and "You must be available for at least the next 24 hours to participate in this transaction." In the API, qualifiers are delivered through the Qualifier[] qualifiers parameter (where the transaction manager gets the chance to state its additional terms and conditions) and are returned from the prepare(...) method as part of the vote (where the participant gets to respond with its own terms and conditions). Qualifiers are a real help in Web services transactions: in a loosely coupled environment, it's invaluable to know from the client side that the party you're communicating with will only be around for so long, or to be able to specify from the participant side that your party won't hang around while others procrastinate.
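
Building on the sketch above, here is a hedged example of how a participant might attach such "small print" to its prepare vote; the TimeLimit qualifier and the participant itself are illustrative only.

```java
/** Hypothetical qualifier: "I will stay prepared for this many seconds,
    after which I will unilaterally cancel." */
final class TimeLimit implements Qualifier {
    final long seconds;
    TimeLimit(long seconds) { this.seconds = seconds; }
}

/** A participant that is happy to prepare, but only for ten minutes. */
final class TenMinuteParticipant implements Participant {

    public VoteResult prepare(Uid transactionId, Qualifier[] qualifiers) {
        // 'qualifiers' carries the transaction manager's terms and conditions;
        // a real implementation would inspect them before deciding how to vote.
        return new VoteResult(Vote.PREPARED, new Qualifier[] { new TimeLimit(600) });
    }

    public void confirm(Uid transactionId, Qualifier[] qualifiers) { /* make work durable */ }

    public void cancel(Uid transactionId, Qualifier[] qualifiers) { /* undo the work */ }

    public void contradiction(Uid transactionId, Qualifier[] qualifiers) { /* compensate */ }
}
```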

Integrating Participants and Services
If context/work association is where the BTP and Web services worlds collide gently, then the integration of participants and services is the real crunch issue. Unlike service-side context handling, there are, sadly, no stock answers to the problem of participant-service integration, because the strategy adopted depends on the existing transactional back-end infrastructure that the service itself relies upon. However, you can mitigate this by providing useful tools to the back-end developer in the form of an API that takes care of at least the common tasks. In the same spirit as the client API, two further verbs, supported by a TransactionManager API that can be used from the service's back end, deal with enlisting and removing participating services from a transaction (a sketch of this API follows the list):

  • Enroll: Enlists a specific participant with the current transaction.
  • Resign: Removes a specific participant from the current transaction.
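
Here is a sketch of what that service-side face of the TransactionManager might look like; the interface name, the use of a participant address, and the method signatures are assumptions guided only by the two verbs above.

```java
/** Hypothetical service-side face of the toolkit's TransactionManager.
    The two operations mirror the BTP Enroll/Resign message exchanges. */
interface ServiceSideTransactionManager {

    /** Enlist the participant (identified here by its service address) with the
        transaction currently associated with this thread. */
    void enroll(String participantAddress);

    /** Remove the participant from the current transaction. */
    void resign(String participantAddress);
}
```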

Using this service-side API, and in keeping with the noninvasive theme that's so much in the spirit of BTP, the ideal is to deploy systems that don't disturb existing (working!) Web services. Fortunately, there are ways and means of doing this.

Figure 5 depicts the back end of a Web service, and is simply the continuation of the diagrams shown in Figures 2 and 3. You can assume that there will be some kind of transactional infrastructure in the back end of most enterprise-class Web services. For the sake of simplicity, here you can assume it's something like a database.

     

The good news is that even without BTP transactions thrown into the mix, the exposed Web service will still need to talk to its own back-end systems. It's therefore possible to hijack the interactions between the service and the back end to suit your own purposes. A useful strategy in this situation is to wrap the service's database provider within your own provider that supports the same interface, but is also aware of the BTP infrastructure.

In this example, the database provider wrapper has access to BTP context information from the header-processing logic embedded in the service's stack, and is aware of the participant service that performs BTP work on behalf of the business service. Armed with that knowledge, the wrapper can enroll a participant in the BTP transaction through the enroll operation supported by the API, which causes a BTP Enroll message exchange with the transaction manager. Provided the enrollment goes smoothly, BTP messages can then be exchanged between the transaction manager and the participant, ensuring that the participant knows exactly what's happening in the BTP transaction at all times.
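
As a concrete illustration of that strategy, here is a minimal sketch of such a wrapper. The delegation to a plain JDBC Connection is real enough, but the DatabaseProvider shape, the ServiceSideTransactionManager from the earlier sketch, and the idea that enroll is safe to call repeatedly are all assumptions.

```java
import java.sql.Connection;
import java.sql.SQLException;

/** Assumed shape of the provider the service already uses to reach its database. */
interface DatabaseProvider {
    Connection getConnection() throws SQLException;
}

/** A wrapper that looks identical to the real provider, but also enrolls a BTP
    participant whenever a connection is handed out inside a BTP transaction. */
final class BtpAwareDatabaseProvider implements DatabaseProvider {

    private final DatabaseProvider delegate;            // the service's original provider
    private final ServiceSideTransactionManager txm;    // service-side API (earlier sketch)
    private final String participantAddress;            // where our participant service lives

    BtpAwareDatabaseProvider(DatabaseProvider delegate,
                             ServiceSideTransactionManager txm,
                             String participantAddress) {
        this.delegate = delegate;
        this.txm = txm;
        this.participantAddress = participantAddress;
    }

    @Override
    public Connection getConnection() throws SQLException {
        // If the context handlers resumed a BTP transaction on this thread, make sure a
        // participant is enrolled so the transaction manager can drive the outcome.
        // (Assumed to be a no-op when no transaction is associated, or when the
        // participant is already enrolled.)
        txm.enroll(participantAddress);
        return delegate.getConnection();   // the business logic sees an ordinary connection
    }
}
```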

This knowledge allows the participant to arbitrate between the transactional semantics (if any) of the service's database access and the activity in the BTP transaction. Such arbitration may not be trivial and will certainly require some domain expertise, since the participant implementation will have to reconcile BTP semantics with those of the service's own back-end transaction processing model. For example, a participant implementation might perform a simple mapping of BTP messages to the database, queue, or workflow system equivalents, or it might take an optimistic approach, immediately committing all changes to the database and performing a compensating action in the event of a failure. What implementers must remember is that there is no absolute right or wrong, just participant implementations that work well for a given system and those that don't. Time spent analyzing use cases up front will pay dividends in the long run.
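
To make the second option concrete, here is a sketch of such an "optimistic" participant: the service's changes are committed as they happen, so prepare can simply vote to proceed, and cancel or contradiction fall back to a compensating action. The types come from the earlier hypothetical sketches, and the compensation itself is left as a stub.

```java
/** Optimistic participant: the back end commits as it goes, so there is nothing to
    hold open between the two phases; cancel and contradiction run a compensator. */
final class OptimisticParticipant implements Participant {

    private final Runnable compensator;   // e.g. run reversing SQL, refund a booking, ...

    OptimisticParticipant(Runnable compensator) { this.compensator = compensator; }

    public VoteResult prepare(Uid transactionId, Qualifier[] qualifiers) {
        // The service's changes are already durable, so we can simply agree to proceed.
        return new VoteResult(Vote.PREPARED, new Qualifier[0]);
    }

    public void confirm(Uid transactionId, Qualifier[] qualifiers) {
        // Nothing to do: the work was committed when the service performed it.
    }

    public void cancel(Uid transactionId, Qualifier[] qualifiers) {
        compensator.run();   // reverse the already-committed work
    }

    public void contradiction(Uid transactionId, Qualifier[] qualifiers) {
        compensator.run();   // best effort; fully recovering may still need outside help
    }
}
```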

The Transaction Manager
Given the transaction manager's importance in the architecture, it might seem strange to mention it at such a late stage and in such little detail, especially since the transaction manager is, after all, the component upon which all other components depend. The paradox is that the transaction manager is simply the least interesting part of the architecture from a developer's point of view. It's a SOAP document exchange-based Web service that implements the BTP abstract state machine and suitable recovery mechanisms, such that transactions aren't compromised in the event of failure of the transaction manager. From a development point of view, a BTP transaction manager is simply a black box, deployed somewhere on the network to enable the rest of the infrastructure to work, and only those who have chosen to implement BTP toolkits must worry about its internals.

Bringing It All Together: A Cohesive Example

You've seen the BTP architecture and the SOAP plumbing, and I've touched on the transaction model that BTP supports. Now the many different aspects can be drawn together to form a more powerful example. This example revisits the night out example shown in Figure 1. In part 1 of this article, I showed you some code that interacted with a number of Web services within the context of a BTP atom, thus ensuring a consistent outcome for the services in the transaction. I'll use a similar pattern for this cohesion example, although since cohesions are more powerful than atoms, I'll have to work just a little harder. I'll use approximately the same use case, spicing things up a little by allowing either the theatre or the restaurant service to fail - as long as we get a night out, it's not important what we actually do! Listing 2 shows how to program with cohesions.

The code in Listing 2 follows a pattern similar to the atom example in part 1. You start the transaction and interact with your services via proxies in the normal way. The important difference in the cohesion scenario as compared to the atom example is that the application becomes concerned with the participants that support transaction management on behalf of the services, whereas with atoms the details of any service's participants remain encapsulated by the transaction manager.

There are various ways in which you can obtain the names of the participants that the services enroll into the transaction, but unfortunately the BTP specification doesn't provide any definitive means of obtaining that information (though it does allow participants to be given "friendly names" through the use of a qualifier to help out in such situations). In this example, the services themselves report the names of their participants through their participantID() methods, though any form of a priori knowledge via human or automated means (like UDDI-based discovery) could feasibly be substituted. With time and experience, patterns for this kind of work will no doubt emerge and be embraced by the BTP community.

Once you have the names under which the participants are enrolled in the cohesion, you can use them to ascertain what decisions the services' participants make when the transaction is terminated. In this example, you iterate over the decisions returned by the prepare_inferiors(...) call to see whether the taxi and at least one other service are indeed prepared to satisfy the request. If the conditions are met, you confirm the transaction with the confirm() method, which confirms all those services' participants that agreed they could meet your requirements (those that voted to confirm) and cancels any that couldn't (those that voted to cancel). Conversely, if your overall requirements can't be met, you can immediately cancel the transaction, and the participants will all be instructed to undo the work of their associated Web services.
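
Listing 2 itself isn't reproduced here, but the termination logic just described might look roughly like the following sketch. The prepare_inferiors, confirm, cancel and participantID names come from the discussion above; the CohesionTransaction and Decision types, and every signature, are assumptions for illustration.

```java
import java.util.Map;

/** Assumed client-side types, for illustration only. */
enum Decision { PREPARED, CANCELLED }

interface CohesionTransaction {
    Map<String, Decision> prepare_inferiors(String[] participantIds);
    void confirm();   // confirms participants that voted to confirm, cancels the rest
    void cancel();    // instructs all participants to undo their services' work
}

interface BookedService {
    String participantID();   // the name under which the service's participant enrolled
}

final class NightOutClient {

    /** Terminate the cohesion: we need the taxi plus at least one of the other two. */
    static void settle(CohesionTransaction tx,
                       BookedService taxi, BookedService restaurant, BookedService theatre) {
        String taxiId = taxi.participantID();
        String restaurantId = restaurant.participantID();
        String theatreId = theatre.participantID();

        Map<String, Decision> decisions =
                tx.prepare_inferiors(new String[] { taxiId, restaurantId, theatreId });

        boolean taxiPrepared = decisions.get(taxiId) == Decision.PREPARED;
        boolean eveningPrepared = decisions.get(restaurantId) == Decision.PREPARED
                               || decisions.get(theatreId) == Decision.PREPARED;

        if (taxiPrepared && eveningPrepared) {
            tx.confirm();   // keep whatever could be booked, drop the rest
        } else {
            tx.cancel();    // no usable night out: undo everything
        }
    }
}
```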

The power of cohesions therefore arises from the fact that you're at liberty to make choices about who will participate in your transaction right up until the point that you try to confirm it. In fact, you could have used several taxi services in this example to ensure that you got at least one travel option, and simply cancelled those that you didn't want before you came to confirm the cohesion.

Similarly, as implementers of the client application, you're at liberty to structure your transactions as you see fit to suit the problem domain. For instance, if you knew that you absolutely had to take a taxi to meet friends at the restaurant, but were not sure whether or not you wanted to go to a show afterwards, you could wrap the taxi and restaurant booking operations within an atom (or indeed wrap several independent taxi and restaurant bookings into several atoms) and enroll that atom in a cohesion along with the participant for the theatre Web service. In this case you're guaranteed to get the taxi and restaurant bookings together (or not at all), while you have some leeway in terms of whether or not you decide to go to the theatre, qualifiers allowing, of course.

Conclusion
Though BTP itself is a sophisticated protocol, from the perspective of an implementer much of its detail is handled by supporting toolkits. As illustrated in the previous article, creating applications that drive BTP transactions is straightforward because of the traditional-looking and intuitive API. In this article, you saw that making Web services transactional is a little trickier, though toolkits simplify things to a great extent by providing much of the Web service-side infrastructure. The only tough challenge for implementers is the construction of participants, which does require a more thorough understanding of transactional architectures to get right.

This raises the final question: Is BTP a viable technology to roll out with your Web services strategy? The short answer is yes. Since the APIs exposed by BTP implementations are similar to traditional transaction APIs, the learning curve for developers is reasonably gentle. Furthermore, because BTP is among the more mature Web services standards, it has attracted a relatively broad coalition of vendor support. This means there should be plenty of choice when it comes to picking your toolkit, and a similarly broad range of support from those vendors. All that's left for you to decide is whether the business you conduct over your Web services infrastructure is valuable enough to require transactional support, and when (not if) you will begin your own BTP rollout.

More Stories By Jim Webber

Dr. Jim Webber is a senior researcher from the University of Newcastle upon Tyne, currently working on the convergence of Web Services and Grid technologies at the University of Sydney, Australia. Jim was previously Web Services architect with Arjuna Technologies, where he worked on Web Services transactioning technology, including being one of the original authors of the WS-CAF specification. Prior to Arjuna, Jim was the lead developer with Hewlett-Packard on the industry's first Web Services Transaction solution. Co-author of "Developing Enterprise Web Services - An Architect's Guide," Jim is an active speaker and author in the Web Services space. Jim's home on the web is http://jim.webber.name
