By Jim Webber
October 21, 2002 12:00 AM EDT
In part 1 of this article (WSJ, Vol. 2, issue 10), you saw how BTP toolkits support the creation of applications that drive transactional Web services with ease. This article covers the other side of the story: how the same technology impacts Web services developers. I'll show how BTP can be used to create transaction-aware Web services, and how those services can be consumed by transactional applications.
Transactionalizing Web Services
"Transactionalizing" a Web service with BTP is something of a misnomer, since BTP doesn't deal with transactional Web services per se. Instead, it partitions Web services into two distinct types, enabling a clear separation between business services and their associated participants.
Business services are similar to client applications in that there is no inherent transactionality associated directly with them - they simply exist to host and expose business logic. The participants associated with business services, on the other hand, are essentially business-logic agnostic and deal only with the transactional aspects of service invocations. This separation is quite useful: existing, nontransactional Web services can be given transactional support without invasive changes or a rebuild, and the transactional and business aspects of a system can be evaluated and implemented independently. With these additional pieces of the puzzle, you can now reshape the global BTP architecture shown in Figure 1.
Figure 1 typifies a logical BTP rollout, showing how the endpoint of each BTP actor fits into the global model. The services that expose business logic to the Web are supported by other services, called participants, that handle transaction management on their behalf; importantly, there is a clean separation between the two kinds of service. There's clearly some overlap, even at this level, since application messages carry BTP contexts whenever service invocations are made within the scope of a transaction. It is here that the business logic and transaction domains begin to collide, albeit gently.
For business Web services, most of the interesting work from a transactional perspective happens under the covers. Like the client application, Web services benefit from advances in SOAP server technology that support header processing before the application payload of a SOAP message is delivered. For BTP-aware Web services, you can utilize SOAP header processing to insert and extract BTP contexts on behalf of Web services in a fashion reciprocal to how header processing is performed at the client application side. Since header processing is noninvasive to the service-level business logic, you can see how the impact of making a service transactional with BTP is minimal. Figures 2 and 3 show exactly how the service is supported.
Figure 2 demonstrates what happens when a Web service receives a request. If the request doesn't carry a BTP context, it's simply passed through the incoming context handler to other handlers and will eventually deliver its payload to the service. If, however, the request carries a BTP context, then the context is stripped out of the header of the incoming message and associated with the thread of execution within which the service's work will be executed. To achieve this, the handler resumes the transaction, using elements from the transaction manager part of the API we saw in the first article, which effectively associates (or reassociates, if this isn't the first time the same context has been received) the work performed by the service with a BTP transaction.
When returning from a service invocation, the reverse process occurs, as shown in Figure 3. The message from the service passes through the outgoing context handler, which checks to see if there is a transaction associated with the work that took place to produce the message. If the work was performed within the scope of a transaction, then the BTP context is inserted into the header of the message and the transaction is suspended, which effectively pauses its work for the service until additional messages with a matching context are received.
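To make the handler mechanics concrete, here's a minimal sketch of the two handlers. It assumes a hypothetical shape in which SOAP headers are visible to a handler as a simple map and the thread-to-transaction association is an in-memory lookup table; a real toolkit would instead resume and suspend transactions through its TransactionManager API, and the header name used here is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class ContextHandlers {

    // Stand-in for the toolkit's thread-to-transaction association.
    static final Map<Thread, String> associations = new HashMap<>();

    // Incoming handler: if the SOAP header carries a BTP context,
    // associate it with the current thread before the service's
    // business logic runs (the "resume" step).
    public static void handleIncoming(Map<String, String> soapHeaders) {
        String context = soapHeaders.get("btp:context");
        if (context != null) {
            associations.put(Thread.currentThread(), context);
        }
    }

    // Outgoing handler: if the work ran under a transaction, re-insert
    // the context into the response header and drop the association
    // (the "suspend" step).
    public static void handleOutgoing(Map<String, String> soapHeaders) {
        String context = associations.remove(Thread.currentThread());
        if (context != null) {
            soapHeaders.put("btp:context", context);
        }
    }
}
```

The point is simply that neither handler touches the service's business logic; the context rides in and out on the headers alone.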
While none of this is rocket science, it does serve to reiterate that BTP-enabling Web services is a noninvasive procedure, or at least it can be if you choose to adopt a noninvasive strategy. However, at some point every BTP deployment has to interact with existing infrastructure, and it's here that you enter a more intricate phase of development and system integration.
Participants are the last piece of the puzzle in the BTP architecture (though not quite the last piece of implementation!). You've seen how participants fit into the global BTP architecture, but I haven't yet covered the anatomy of a participant. Participants are the entities that act on behalf of business Web services in matters regarding transactionality, and they're equipped to deal with message exchanges with the transaction manager.
Participants are simply Web services that manage details of distributed transactions on behalf of their associated business services, handling the BTP messages involved in the termination phase of the transaction. While this might sound like hard work, a toolkit will typically simplify matters by offering an interface that your participant will implement in order to become part of the participant stack. The participant stack is shown in Figure 4; the interface that constitutes the API for the stack from the developer's point of view is shown in Listing 1.
Figure 4 shows the conceptual view of a participant (minus the back-end plumbing, which you'll see later). It's a straightforward document exchange-based Web service in which the messaging layer understands BTP messages. It invokes methods on the user-defined participant (which has a known interface) in accordance with the type and content of the messages it receives. Any returns from the participant are shoehorned into BTP messages and sent back through the SOAP infrastructure.
The participant API effectively shields participant developers from having to understand the BTP messages that participants consume, but this shielding isn't entirely "bulletproof," since some understanding of how and when the methods in a participant are called is still required. Listing 1 shows the more important methods that an implementer has to write in order to create a participant. As you might expect, these methods correspond to the messages exchanged between transaction manager and participant (which is itself identified by a unique ID or Uid in the API). As such, if you have an understanding of BTP (which you must have in order to write a decent participant) then the methods are self-explanatory. For everyone else, here's a brief overview: prepare(...) asks the participant to vote on whether the work of its associated service can be made durable, confirm(...) instructs it to make that work final, and cancel(...) instructs it to undo - or compensate for - the work.
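As a sketch of what such an interface might look like - the names below are illustrative, not lifted from any particular toolkit, and real toolkits expose further methods (for instance, for contradiction handling) - consider:

```java
// Hypothetical participant interface: each method corresponds to a BTP
// message sent by the transaction manager, with the Uid passed as a
// plain String for simplicity.
public interface Participant {

    enum Vote { PREPARED, CANCELLED }

    // Vote on whether the associated service's work can be made durable.
    Vote prepare(String txUid);

    // Make the prepared work final and visible.
    void confirm(String txUid);

    // Undo, or compensate for, any provisional work.
    void cancel(String txUid);
}
```

An implementer fills in these three methods with whatever back-end logic makes sense for the service; the toolkit's messaging layer takes care of translating BTP messages into calls on them.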
One final intricacy for participants is the sending and receiving of qualifiers. Qualifiers are a neat feature of BTP, derived from the fact that the BTP transaction manager is not as godlike as its equivalents in other transaction management models, but instead accepts the possibility that other parts of the system might justifiably want to help in the decision-making process. Qualifiers support this bilateral exchange of "small print." In essence, each BTP message allows the sender to tag qualifiers that describe such things as, "I will be prepared for the next 10 minutes, and after that I will unilaterally cancel" and "You must be available for at least the next 24 hours to participate in this transaction." In the API, qualifiers are delivered through the Qualifier qualifiers parameter (where the transaction manager gets the chance to state its additional terms and conditions) and are returned from the prepare(...) method as part of the vote (where the participant then gets to respond with its own terms and conditions). Qualifiers are a real help when it comes to Web services transactions because in a loosely coupled environment, knowing from the client side that the party you're communicating with will only be around for so long, or being able to specify from the participant side that your party won't hang around while others procrastinate, is invaluable.
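To illustrate the exchange of small print, here's a toy model of qualifier handling during prepare. The Qualifier shape and the "minimum-availability" and "prepared-for" names are assumptions made for the sake of the example, not qualifiers defined by the specification:

```java
import java.util.ArrayList;
import java.util.List;

public class QualifiedPrepare {

    // A named, time-valued condition attached to a BTP message.
    public static class Qualifier {
        public final String name;
        public final long seconds;
        public Qualifier(String name, long seconds) {
            this.name = name;
            this.seconds = seconds;
        }
    }

    // The participant's vote, carrying its own qualifiers back.
    public static class Vote {
        public final boolean prepared;
        public final List<Qualifier> qualifiers;
        public Vote(boolean prepared, List<Qualifier> qualifiers) {
            this.prepared = prepared;
            this.qualifiers = qualifiers;
        }
    }

    // Refuse to prepare if the manager demands a longer availability
    // than this participant is willing to promise; otherwise vote to
    // prepare and attach our own time limit as "small print".
    public static Vote prepare(List<Qualifier> fromManager, long maxPromiseSeconds) {
        for (Qualifier q : fromManager) {
            if ("minimum-availability".equals(q.name) && q.seconds > maxPromiseSeconds) {
                return new Vote(false, new ArrayList<Qualifier>());
            }
        }
        List<Qualifier> mine = new ArrayList<Qualifier>();
        mine.add(new Qualifier("prepared-for", maxPromiseSeconds));
        return new Vote(true, mine);
    }
}
```

Both sides get to state their terms, and either can walk away if the other's conditions are unacceptable - which is exactly the negotiation a loosely coupled environment needs.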
Integrating Participants and Services
If context/work association is where the BTP and Web services worlds collide gently, then the integration of participants and services is the real crunch issue. Unlike service-side context handling, sadly, there are no stock answers to the problem of participant-service integration because the strategy adopted will depend on the existing transactional back-end infrastructure that the service itself relies upon. However, you can mitigate this by providing useful tools to the back-end developer in the form of an API that takes care of at least the common tasks. In the same spirit as the client API, two further verbs deal with enlisting and removing participating services from a transaction: enroll, which enlists a participant with the current transaction, and resign, which removes a previously enrolled participant. Both are supported by the TransactionManager API, which may be used from the service's back end.
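A toy stand-in for these two verbs shows the bookkeeping involved. The class and method names here are mine, not a toolkit's, and a real transaction manager would of course exchange BTP Enroll and Resign messages over SOAP rather than update an in-memory map:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ToyTransactionManager {

    // transaction Uid -> the ids of the participants enrolled in it
    private final Map<String, Set<String>> enrolled = new HashMap<>();

    // Enlist a participant with a transaction.
    public void enroll(String txUid, String participantId) {
        enrolled.computeIfAbsent(txUid, k -> new HashSet<>()).add(participantId);
    }

    // Remove a previously enrolled participant from a transaction.
    public void resign(String txUid, String participantId) {
        Set<String> ps = enrolled.get(txUid);
        if (ps != null) {
            ps.remove(participantId);
        }
    }

    public Set<String> participants(String txUid) {
        return enrolled.getOrDefault(txUid, Collections.<String>emptySet());
    }
}
```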
Using this service-side API and in keeping with the theme of noninvasiveness that's so much in the spirit of BTP, it would be ideal to deploy systems that don't disturb existing (working!) Web services. Fortunately, there are ways and means of doing this.
Figure 5 depicts the back end of a Web service, and is simply the continuation of the diagrams shown in Figures 2 and 3. You can assume that there will be some kind of transactional infrastructure in the back end of most enterprise-class Web services. For the sake of simplicity, here you can assume it's something like a database.
The good news is that even without BTP transactions thrown into the mix, the exposed Web service will still need to talk to its own back-end systems. It's therefore possible to hijack the interactions between the service and the back end to suit your own purposes. A useful strategy in this situation is to wrap the service's database provider within your own provider that supports the same interface, but is also aware of the BTP infrastructure.
In this example, the database provider wrapper has access to BTP context information from the header processing logic embedded in the service's stack and is aware of the participant service, which performs BTP work on behalf of the business service. Armed with such knowledge, the database wrapper can enroll a participant in the BTP transaction through the enroll operation supported by the API, which causes a BTP Enroll message exchange to occur with the transaction manager. Where there are no upsets during the enrollment of the participant, BTP messages can now be exchanged between the transaction manager and the participant, ensuring that the participant knows exactly what's happening in the BTP transaction at all times.
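The wrapping strategy can be sketched as follows, assuming a hypothetical Provider interface standing in for whatever interface the real database provider exposes, and an enroll callback standing in for the TransactionManager API. Nothing here is toolkit code; it simply shows where the enrollment hook fits:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.BiConsumer;

public class BtpAwareProvider {

    // Stand-in for the back-end database provider's interface.
    public interface Provider {
        void execute(String statement);
    }

    private final Provider wrapped;
    private final BiConsumer<String, String> enroll; // (txUid, participantId)
    private final String participantId;
    private final Set<String> seenTransactions = new HashSet<>();

    public BtpAwareProvider(Provider wrapped,
                            BiConsumer<String, String> enroll,
                            String participantId) {
        this.wrapped = wrapped;
        this.enroll = enroll;
        this.participantId = participantId;
    }

    // Same interface as the real provider, plus enrollment the first
    // time work is performed within a given BTP transaction. The txUid
    // would come from the context-handling layer shown earlier.
    public void execute(String txUid, String statement) {
        if (txUid != null && seenTransactions.add(txUid)) {
            enroll.accept(txUid, participantId); // triggers the BTP Enroll exchange
        }
        wrapped.execute(statement);
    }
}
```

Because the wrapper presents the same interface as the provider it wraps, the service's business logic is none the wiser - which is the whole point of the noninvasive approach.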
This knowledge allows the participant to arbitrate between the transactional semantics (if any) of the service's database access and the activity in the BTP transaction. Such arbitration may not be trivial and will certainly require some domain expertise, since the participant implementation will have to reconcile BTP semantics with those of the service's own back-end transaction processing model. For example, a participant implementation might choose to perform a simple mapping of BTP messages to the database, queue, or workflow system equivalents, or the participant might choose to take an optimistic approach and immediately commit all changes to the database and perform a compensating action in the event of a failure. What implementers must remember is that there is no absolute right or wrong, just participant implementations that work well for a given system and those that don't. Time spent analyzing use cases up front will pay dividends in the long run.
The Transaction Manager
Given the transaction manager's importance in the architecture, it might seem strange to mention it at such a late stage and in such little detail, especially since the transaction manager is, after all, the component upon which all other components depend. The paradox is that the transaction manager is simply the least interesting part of the architecture from a developer's point of view. It's a SOAP document exchange-based Web service that implements the BTP abstract state machine and suitable recovery mechanisms such that transactions aren't compromised in the event of failure of the transaction manager. From a development point of view, a BTP transaction manager is simply a black box, deployed somewhere on the network to enable the rest of the infrastructure to work, and only those who have chosen to implement BTP toolkits must worry about its internals.
Bringing It All Together: A Cohesive Example
You've seen the BTP architecture and the SOAP plumbing, and I've touched on the transaction model that BTP supports. Now the many different aspects can be drawn together into a more powerful example, which revisits the night out scenario shown in Figure 1. In part 1 of this article, I showed you some code that interacted with a number of Web services within the context of a BTP atom, thus ensuring a consistent outcome for the services in the transaction. I'll use a similar pattern for this cohesion example, although since cohesions are more powerful than atoms, I'll have to work just a little harder. I'll use approximately the same use case, spicing things up a little by allowing either the theatre or the restaurant service to fail - as long as we get a night out, it's not important what we actually do! Listing 2 shows how to program with cohesions.
The code in Listing 2 follows a pattern similar to the atom example in part 1. You start the transaction and interact with your services via proxies in the normal way. The important difference in the cohesion scenario as compared to the atom example is that the application becomes concerned with the participants that support transaction management on behalf of the services, whereas with atoms the details of any service's participants remain encapsulated by the transaction manager.
There are various ways in which you can obtain the names of the participants that the services enroll into the transaction, but unfortunately the BTP specification doesn't provide any definitive means of obtaining that information (though it does allow participants to be given "friendly names" through the use of a qualifier to help out in such situations). In this example, the services themselves are allowed to report on the names of their participants through their participantID() methods, though any form of a priori knowledge via human or automated means (like UDDI-based discovery) could be feasibly substituted. With time and experience, patterns for this kind of work will no doubt emerge and be embraced by the BTP community.
Once you have the names under which the participants are enrolled in the cohesion, you can use them to ascertain what decisions the services' participants make when the transaction is terminated. In this example, you iterate over the decisions returned by the prepare_inferiors(...) call to see whether the taxi and at least one other service are indeed prepared to satisfy the request. If the conditions are met, you confirm the transaction with the confirm() method, which confirms all those services' participants that agreed they could meet your requirements (those that voted to confirm) and cancels any services that couldn't (those that voted to cancel). Conversely, if your overall requirements can't be met, then you can immediately cancel the transaction and the participants will all be instructed to undo the work of their associated Web services.
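The decision logic at the heart of such a cohesion can be sketched in isolation. The decide method below is purely illustrative: it stands in for inspecting the votes returned by prepare_inferiors(...), returning the set of participants to confirm, or an empty list when the whole cohesion should be cancelled:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class CohesionDecision {

    // votes maps participant id -> whether that participant voted to
    // prepare. Returns the ids to pass to confirm(), or an empty list
    // when the whole cohesion should cancel.
    public static List<String> decide(String taxiId,
                                      List<String> optionalIds,
                                      Map<String, Boolean> votes) {
        if (!votes.getOrDefault(taxiId, false)) {
            return Collections.emptyList(); // no taxi, no night out
        }
        List<String> confirmSet = new ArrayList<>();
        confirmSet.add(taxiId);
        for (String id : optionalIds) {
            if (votes.getOrDefault(id, false)) {
                confirmSet.add(id);
            }
        }
        // the taxi alone is not a night out: need at least one other service
        return confirmSet.size() >= 2 ? confirmSet : Collections.emptyList();
    }
}
```

The participant ids here are whatever names the services reported through their participantID() methods; the selection rule itself is the business decision that cohesions let you defer until termination time.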
The power of cohesions therefore arises from the fact that you're at liberty to make choices about who will participate in your transaction right up until the point that you try to confirm it. In fact, you could have used several taxi services in this example to ensure that you got at least one travel option and simply cancelled those that you didn't want before you came to confirm the cohesion.
Similarly, as implementers of the client application, you're at liberty to structure your transactions as you see fit to suit the problem domain. For instance, if you knew that you absolutely had to take a taxi to meet friends at the restaurant, but were not sure whether or not you wanted to go to a show afterwards, you could wrap the taxi and restaurant booking operations within an atom (or indeed wrap several independent taxi and restaurant bookings into several atoms) and enroll that atom in a cohesion along with the participant for the theatre Web service. In this case you're guaranteed to get the taxi and restaurant bookings together (or not at all), while you have some leeway in terms of whether or not you decide to go to the theatre, qualifiers allowing, of course.
Though BTP itself is a sophisticated protocol, from the perspective of an implementer much of its detail is handled by supporting toolkits. As illustrated in the previous article, creating applications that drive BTP transactions is straightforward because of the traditional-looking and intuitive API. In this article, you saw that making Web services transactional is a little trickier, though the toolkits will help to simplify things to a great extent by providing much of the Web service-side infrastructure. The only tough challenge for implementers is the construction of participants, which does require a more thorough understanding of transactional architectures to get right.
This raises the final question: Is BTP a viable technology to roll out with your Web services strategy? The short answer is yes. Since the APIs exposed by BTP implementations are similar to traditional transaction APIs, the learning curve for developers is reasonably gentle. Furthermore, because BTP is among the more mature Web services standards, it has a relatively broad coalition of vendor support. This means that there should be plenty of choices when it comes to picking your toolkit, and a similarly broad range of support from those vendors. All that's left for you to decide is whether the business you conduct over your Web services infrastructure is valuable enough to require transactional support, and when (not if) you will begin your own BTP rollout.