Microservices Expo: Article

Building Asynchronous Applications & Web Services

With the advent of J2EE 1.3, we've become familiar with the message-driven bean (MDB) as a key architecture component, and the use of asynchronous messaging as an aid to application scalability. But there are many more ways in which messaging can be used to build robust Java applications and Web services.

Today, as always, enterprises are faced with the challenges of time-to-market, data distribution, and business flexibility. They are also faced with:

  • Multisite or globally dispersed operations and end users: Applications work across the Internet and must be reliable, scalable, and easily manageable.
  • A rapid and unpredictably changing business environment: Architects need to provide for frequent changes during the lifetime of a deployment.
  • Demanding architectural requirements: These requirements typically involve the Internet, the J2EE platform, .NET, Web services, and multiple-legacy execution environments.

    How can you use asynchronous Java messaging to meet these challenges? In this article, I'll explore why, when, and how to use asynchronous messaging in Java applications and Web services. I'll also discuss the advantages of an asynchronous model, when to adopt a message-based model, and typical applications of these techniques.

    I won't compare and contrast the many different implementation options for Java messaging; for a review, see "When Should I Use JMS?" (available at www.sys-con.com/java/article.cfm?id=1275).

    A Simple Asynchronous Model
    I'm assuming that most readers are somewhat familiar with the basic constructs of a simple asynchronous programming model, including threads, events, and listeners, and the synchronization and locking of shared data.

    These constructs are used in multithreaded programs within a single Java VM process. The same basic model also applies with minor changes to C# and .NET - in fact, to any multithreaded model - except the construct names are changed. Java's synchronized keyword is replaced with C#'s lock to control access to critical blocks of code. Essentially, events, listeners, and synchronized data objects and code blocks are today's more flexible and sophisticated versions of earlier languages' interrupts, handlers, semaphores, and mutex constructs.
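    The constructs above can be sketched in a few lines of plain Java. This is an illustrative example, not production code; the class and event names (OrderEvents, onOrderPlaced) are invented for the sketch. It shows a listener registering for events and a synchronized block guarding the shared listener list:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event/listener sketch: listeners register for order events,
// and access to the shared listener list is synchronized.
class OrderEvents {
    interface OrderListener {
        void onOrderPlaced(String orderId);
    }

    private final List<OrderListener> listeners = new ArrayList<>();

    // synchronized guards the shared list against concurrent modification
    synchronized void addListener(OrderListener l) {
        listeners.add(l);
    }

    void placeOrder(String orderId) {
        List<OrderListener> snapshot;
        synchronized (this) {                 // copy under the lock...
            snapshot = new ArrayList<>(listeners);
        }
        for (OrderListener l : snapshot) {    // ...then notify outside it
            l.onOrderPlaced(orderId);
        }
    }
}
```

    Notifying listeners from a snapshot, outside the lock, is a common idiom: it keeps the critical section short and avoids holding the lock while running arbitrary listener code.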

    The asynchronous model can certainly increase throughput and reduce latency (response times) even on single-processor systems, as different threads can simultaneously consume network, I/O, and processor capacity. On multiprocessor machines other related processes (including database shadow processes) can also execute in parallel.

    Figure 1 shows some typical interactions between layers in a J2EE architecture. The gold bars highlight periods where synchronous activities may be blocked, waiting for return of information from other layers. If you can shorten the critical path of the overall process by either returning control faster or moving work into the gold sections, you can reduce the overall response time. You can also offload work into longer-running background threads; for example, in principle the Place Order EJB could choose not to wait for the database update to complete before returning control to the servlet.
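    That offloading idea can be sketched with the standard java.util.concurrent executor framework. The names here (OrderService, persistOrder) are invented for illustration; the point is simply that the caller gets control back before the slow work completes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: return control to the caller immediately and let a background
// thread finish the slow work (a stand-in for the database update).
class OrderService {
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    String placeOrder(String orderId) {
        background.submit(() -> persistOrder(orderId)); // don't wait for the write
        return "ACCEPTED:" + orderId;                   // respond right away
    }

    void persistOrder(String orderId) {
        // stand-in for the real (slow) database update
    }

    void shutdown() throws InterruptedException {
        background.shutdown();
        background.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

    The trade-off, of course, is that the caller now holds an acknowledgment, not a guarantee that the write has happened; error handling moves into the background path.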


    Program threads interact through shared data (in process or external) or through passed-in parameters. The more data the different threads share, the tighter the coupling, and the harder it becomes to maintain the components separately. Shared data also raises the possibility of data corruption: locks or semaphores (synchronization) must be used to prevent two threads from simultaneously updating data objects. This complicates the programming model and tends to reduce performance, as threads serialize (block and wait) while queuing up for access to locked data items.

    When to Adopt Messaging
    You've seen some of the advantages of an asynchronous model, and at the same time some of the limitations that a single-process, multithreaded model can impose. Let's look at how moving to a multiprocess, message-based asynchronous model can add significant benefits to your applications.

    Document-Centric Processing
    Figure 2 shows the most basic reason for executing a business transaction using multiple processes. In this simple example, three actors are involved: the Traveler, the Agent, and the Airline. Each has control over its own activities - but no one is in charge of the whole "conversation." Just as in the real world, the various parties exchange documents (business messages, if you like) to trigger each stage of the overall business transaction. This is because there's no single database they can all share. This style of interaction can be called document-centric processing to distinguish it from the database-centric model commonly used to build internal systems such as ERP and CRM. The document (the message passed from system to system) contains all necessary data items and takes the same role as input parameters in a well-structured function call.
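    A reservation request in this style might look like the fragment below. The element names are invented for illustration, but the principle holds: the document carries everything the Airline needs, just as a well-structured function call carries all of its parameters.

```xml
<FlightReservationRequest>
  <Traveler>
    <Name>J. Smith</Name>
  </Traveler>
  <Itinerary>
    <Flight carrier="XX" number="123" date="2003-05-01"
            from="LHR" to="JFK"/>
  </Itinerary>
  <Payment card="VISA" expires="2004-06"/>
</FlightReservationRequest>
```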

    Transactional Islands
    No actor in a value chain like this can safely make assumptions about specific vendor or technology-platform decisions made by his partners. Each actor is a "transaction island," and interaction has to be loosely coupled.

    In document-centric process models (Web services are a prime example), business conversations are characterized by the different, maybe interlocking or interlaced, transaction scopes at work. In this type of system, messaging is used to safely pass control between the different actors without tying together their individual physical transaction islands.

    In the flight reservation example, any flow across the lanes (between actors) is implemented as a message. Flows within a lane could also be messages - that depends on the scale of each separate subsystem.

    An inevitable consequence of the separate development of these autonomous systems is that system A (the Agent) can't know how long it will take for system B (the Airline) to complete its part of the conversation (reserving seats and billing a credit card) before replying with a flight confirmation message. In these circumstances, a tightly coupled two-phase commit transaction simply isn't feasible - both technology and organizational politics rule it out.

    Compensating Transactions
    To be able to reverse the effect of any exceptions that may arise, you have to approach errors and corrections using compensating transactions.

    Think of using your credit card in a store. You buy a sweater, pay with your credit card, then notice that you've picked up the wrong size. The sales assistant doesn't simply roll back your transaction. Instead, he or she performs a second, compensating transaction that reverses the effect of the mistake. You get both credit card slips, and you should see both the debit and the credit on your monthly statement. Any nontrivial business conversation will anticipate the possibility of errors and the need for corrections.
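    The sweater example can be captured in a few lines. This is a deliberately simplified sketch (CardStatement is an invented name, amounts are in cents): the mistaken debit is never deleted; a matching credit is appended, and both entries stay on the statement.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of compensation: errors are reversed by appending a
// compensating entry, never by deleting history.
class CardStatement {
    // signed amounts in cents: positive = debit, negative = credit
    private final List<Integer> entries = new ArrayList<>();

    void debit(int cents)      { entries.add(cents); }
    void compensate(int cents) { entries.add(-cents); } // reversing entry

    int balance()    { return entries.stream().mapToInt(Integer::intValue).sum(); }
    int entryCount() { return entries.size(); }
}
```

    After a debit and its compensating credit, the balance is back to zero, but the statement shows two entries - the audit trail survives, which is exactly what a rollback would destroy.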

    Figure 3, a RosettaNet model of a purchase order, shows how both the buyer and the seller can compensate for errors, production problems, or lack of end-user demand by varying the details of the order over a period of hours, days, weeks, or even months. It also highlights that the purchase order is just one part of a bigger buyer/seller relationship - loosely integrated with other conversations like Quote, Forecast, Shipment, and Billing/Payment.


    The Benefits of Messaging
    As before, you can benefit from improvements in throughput and latency - but now in a wider range of cases. You can also introduce scalability, as you can load balance not just between threads on a single processor but also across servers in clustered and distributed systems.

    Messaging also offers increased reliability and availability. Because the message server itself can be clustered, message consumers (such as J2EE MDBs) can be spread over multiple (redundant) host servers; in the event of a failure of one part of the message-server cluster, message clients can seamlessly reconnect and retransmit any unacknowledged messages.
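    The retransmit behavior rests on the acknowledge-and-redeliver contract: a message stays pending until the consumer acknowledges it. The sketch below is not the JMS API - it's a toy, invented ReliableQueue - but it illustrates why an unacknowledged message can safely be delivered again after a failure:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of at-least-once delivery: a message remains pending until the
// consumer acknowledges it, so an unacknowledged message is redelivered.
class ReliableQueue {
    private final Deque<String> pending = new ArrayDeque<>();

    void send(String msg)        { pending.addLast(msg); }
    String receive()             { return pending.peekFirst(); } // deliver, keep pending
    void acknowledge(String msg) { pending.remove(msg); }        // only now remove
}
```

    In real JMS terms this corresponds to acknowledgment modes such as CLIENT_ACKNOWLEDGE: the broker, not the consumer, decides when a message is truly gone.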

    Finally, it's easy to reconfigure message flows by intervening at the message layer. Messages can be filtered, enriched, transformed, and rerouted using simple standards-based tools such as XSLT, or published on a topic to multiple subscribers. Any message broker can be used to achieve this kind of adaptability. Try doing that with a remote procedure call!
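    A transformation step of this kind can be written against the JDK's built-in TrAX API (javax.xml.transform) with no third-party code. The stylesheet and element names below are invented for the example; it simply renames an <order> payload to <purchaseOrder>:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch: transform a message payload in flight with XSLT (JDK TrAX).
class MessageTransformer {
    static final String XSLT =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output omit-xml-declaration='yes'/>"
      + "<xsl:template match='order'>"
      + "<purchaseOrder><xsl:value-of select='.'/></purchaseOrder>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    static String transform(String payload) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(payload)),
                    new StreamResult(out));
        return out.toString();
    }
}
```

    In a broker deployment the same transform would typically sit in an intermediary, so neither producer nor consumer needs to change when the message format evolves.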

    Separation of Concerns
    Perhaps the most important benefit of a document-centric approach is the clean separation of application components - each of which can be owned, developed, and deployed autonomously - either because different actors (as in our flight reservations example) are involved, or simply because the best-of-breed application components used to construct a single business application may use different technology platforms. When you have millions of dollars invested in software assets, the last thing you want is to trash them just because they don't match your current preferred platform.

    Different document-centric components can have different development life cycles, languages, and platforms. You can mix-and-match your 30-year-old mainframe systems - using MQSeries to kick off CICS transactions - with your J2EE/UNIX, Windows/.NET, and any other legacy processing.

    Using message-oriented middleware, it's easy to plug these components together. Because it's easier, it's also cheaper. Interfaces tend to be simpler and cleaner, which discourages the spaghetti interconnections typical of multithreaded programming and drives toward a truly loosely coupled approach. You can assemble systems from best-of-breed pieces, which may be hosted in different departments or even companies. Management of each component is autonomous; each can be separately scaled, replicated, or replaced. Easier integration also promotes more frequent reuse rather than redevelopment of components.

    Typical Applications
    So what kinds of applications are we talking about, in which asynchronous messaging patterns can be applied? System-to-system workflow, which is a catch-all for any kind of fully automated integration; in-house enterprise application integration (EAI); and external integration between businesses - what we used to call B2B, then e-business integration, and now Web services.

    The integration may involve multiple deployment platforms - J2EE, .NET, or any number of legacy platforms, hardware, and software, from many different vendors. CIOs want to be able to plan their infrastructure to be completely uniform, but however hard they try, along comes a merger, acquisition, new technology, or just an opinionated CEO to thwart their tidy ideas.

    Even if you've managed to stick to J2EE, you're probably looking at more than one vendor's product stack. Maybe you're developing on JBoss but deploying on BEA WebLogic, with a nice helping of IBM WebSphere and SonicMQ on the side, and perhaps a sprinkling of Tibco Rendezvous as well.

    In financial services, message-oriented systems have been in place for the past 10-15 years. Over the past five years Java, J2EE, and JMS have brought increasing productivity benefits as the institutions reach toward straight-through processing and next-day/same-day settlement.

    In the telco market, in spite of the sector's deep recession, JMS is a key component of the OSS/J (Operational Support Systems for Java) initiative. A message-based approach is used to simplify the integration of ordering, provisioning, billing, and network management systems. With industry-wide support for messaging and formatting standards, equipment manufacturers, service providers, and network operators can easily plug in new products and services.

    In manufacturing, we're seeing RosettaNet and (more slowly) ebXML evolve to define and coordinate complex supply-chain processes; and now we're beginning to see Web services technology, with SOAP as the transport, adopted to support all kinds of loosely coupled B2B integration.

    Process description and choreography standards like BPML (Business Process Modeling Language), WSFL (Web Services Flow Language), XLANG (the choreography language used by Microsoft's BizTalk), and WSCI (Web Services Choreography Interface) are being used to represent how individual collaborating processes are tied together by message flows, both in XML and in intuitive graphical notations.

    It all boils down to this: a message-based, document-centric approach is appropriate for any integration of autonomous components in a business process flow - providing the technology underpinnings you need to connect your functional systems to each other and then to the rest of your organization and your employees, customers, partners, suppliers, and regulators. Most developers today are faced with the need to connect heterogeneous applications and services across the inherently unreliable and unpredictable Internet; they can make it easy on themselves by simply using asynchronous Java messaging.


    References

  • Thomas, N. "When Should I Use JMS?" Java Developer's Journal, Vol. 7, issue 1.
  • Ross-Talbot, G. and Brown, G. "Scalable Web Services Using JMS & JCache." Web Services Journal, Vol. 2, issue 3.
  • Ross-Talbot, S. "Building to Scale." Java Developer's Journal, Vol. 7, issue 2.

    About the Author

    Nigel Thomas offers independent product marketing consultancy in the application infrastructure software market place, and can be contacted at [email protected]

    Nigel recently spent two years as Director of Product Management for SpiritSoft's Java messaging, caching and integration products. Prior to that, he spent five years with EAI pioneer Constellar as product architect and then director of product management for the flagship Constellar Hub product. Nigel spent over eight years at Oracle Corporation, architecting and delivering Oracle's Accounting products and then moving on to worldwide performance consulting and CASE development assignments.

