Application Server Architecture and BPEL

Promises and challenges

In recent years the application server has greatly evolved, expanding the set of core services provided by the infrastructure. The current Java platform supports XML data handling, scalability, load balancing, and other capabilities that allow application-level services to be developed more easily and deployed more reliably. This progression must now address developers' latest concerns regarding security, distributed transactions, and reliable messaging because applications no longer stand alone - they're deployed into a technology ecosystem that can span departmental and organizational boundaries.

In this environment, a well-behaved application not only needs to interact with external systems and consume services from them, but also needs to be a service provider. This is driven by a need for reuse and adaptability and fuels the current push toward service-oriented architecture (SOA).

However, this leads to the question: How do we get all these services to work together in a heterogeneous, networked environment? The answer is BPEL, the Business Process Execution Language for Web Services. BPEL provides a standard, portable language for orchestrating services into end-to-end business processes and builds upon a decade of progress in the areas of business process management, workflow, and integration technologies. It's built from the ground up around XML and Web services and is supported on both the Microsoft .NET and Java platforms.

What does BPEL add to the existing Web services standards and Java platform? Given the support BPEL has received from nearly every major technology vendor in the past year, the industry was clearly hungry for it - but why? The first driving force is the new class of connected applications, which makes implementing business processes a mainstream problem that most developers must tackle. A second factor was the alphabet soup of earlier proprietary workflow languages, which slowed their adoption and created a standards vacuum that BPEL fit perfectly. And finally, Web services have accelerated the process by providing a standard interface for publishing services and requiring a shift in the way service composition is done. BPEL, then, is just what the doctor ordered. The emergence of BPEL as a standard for describing business processes is a step in the evolution of the application platform (see Figure 1).

Aligning with this shift, many of the major technology vendors in both the Java and .NET camps have announced that they will ship BPEL engines, so why are so few commercially available today? It turns out that BPEL's richness as a process language complicates the task of building a scalable, reliable BPEL engine. For example, BPEL is designed with asynchronous services at its core, which means servers must deal effectively with persisting state for long-running flows, correlating asynchronous messages, and reliably handling the case where an outbound message has been sent but the server crashes before the response is received. The rest of this article examines how these requirements, and the new standards to support them, naturally extend both the Web services standards and the current Java platform.
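
To make the persistence requirement concrete, here is a minimal sketch of how an engine might "dehydrate" a long-running process before an asynchronous invoke so the flow survives a restart. All names here (DehydrationSketch, ProcessState, the in-memory store) are hypothetical illustrations, not a real engine's API; a production engine would persist to a database rather than a map.

```java
import java.io.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: persist process state under a correlation key
// before an asynchronous invoke, and restore it when the callback arrives.
public class DehydrationSketch {

    // Minimal serializable process state.
    static class ProcessState implements Serializable {
        final String orderId;
        final String nextActivity;
        ProcessState(String orderId, String nextActivity) {
            this.orderId = orderId;
            this.nextActivity = nextActivity;
        }
    }

    // Stand-in for the engine's durable store (a real engine uses a database).
    private final Map<String, byte[]> store = new ConcurrentHashMap<>();

    // Persist state under a correlation key before sending the async request.
    public void dehydrate(String correlationId, ProcessState state) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        }
        store.put(correlationId, bytes.toByteArray());
    }

    // When the callback message arrives, restore state and resume the flow.
    public ProcessState rehydrate(String correlationId) throws IOException, ClassNotFoundException {
        byte[] data = store.remove(correlationId);
        if (data == null) throw new IllegalStateException("unknown correlation id: " + correlationId);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (ProcessState) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        DehydrationSketch engine = new DehydrationSketch();
        engine.dehydrate("order-42", new ProcessState("order-42", "awaitInvoice"));
        // ...the server may crash and restart here; a durable store survives...
        ProcessState resumed = engine.rehydrate("order-42");
        System.out.println("resumed at: " + resumed.nextActivity);
    }
}
```

The point of the sketch is the shape of the problem, not the mechanism: the engine must be able to reconstruct exactly where a flow was before the outbound message went out.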

To make this discussion more interesting - and more comprehensible - let's consider a real-world example: an order management process at a large hardware manufacturer. This manufacturer accepts wholesale orders from many different sources and responds immediately with an order tracking number, but has a long-running flow in the back end to process and track the order and call the client back when an invoice is ready.

As shown in Figure 2, this flow needs to invoke synchronous services, such as looking up payment terms in an Oracle Financials package, as well as asynchronous services, such as submitting the order to a mainframe system, which will compute the invoice as part of a batch process. XML data is exchanged between all the systems, and the manufacturer must process millions of these transactions a day at peak loads - tracking them, reporting on them, and handling exceptions, notifications, and manual processing steps as needed.
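
The order flow described above can be sketched in a few lines of Java. This is an illustrative toy, not the manufacturer's actual code: submission returns a tracking number at once, a simulated back-end batch step produces the invoice later, and the callback is correlated by that tracking number.

```java
import java.util.concurrent.*;

// Toy model of the order flow: synchronous submit, asynchronous invoice callback.
public class OrderFlowSketch {

    // Callback the client registers to receive the invoice.
    interface InvoiceCallback {
        void invoiceReady(String trackingNumber, double amount);
    }

    private final ExecutorService backEnd = Executors.newSingleThreadExecutor();
    private long nextTracking = 1000;

    // Synchronous entry point: respond immediately with a tracking number.
    public synchronized String submitOrder(String sku, int quantity, InvoiceCallback callback) {
        String trackingNumber = "TRK-" + (nextTracking++);
        // Hand the long-running work to the "mainframe batch" stand-in.
        backEnd.submit(() -> {
            double amount = quantity * 19.95;              // pretend pricing step
            callback.invoiceReady(trackingNumber, amount); // asynchronous callback
        });
        return trackingNumber;
    }

    public void shutdown() { backEnd.shutdown(); }

    public static void main(String[] args) throws Exception {
        OrderFlowSketch service = new OrderFlowSketch();
        CountDownLatch done = new CountDownLatch(1);
        String[] result = new String[1];
        String tracking = service.submitOrder("WIDGET-7", 10, (trk, amount) -> {
            result[0] = trk + " invoiced at " + amount;
            done.countDown();
        });
        System.out.println("got tracking number immediately: " + tracking);
        done.await(5, TimeUnit.SECONDS);
        System.out.println(result[0]);
        service.shutdown();
    }
}
```

Even in this simplified form, the caller must manage the callback registration and correlation by hand - exactly the plumbing that a BPEL engine takes on.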

This process is a typical example of the new class of requirements that developers must address when developing SOA-based applications, including:

  • Bindings to heterogeneous back-end systems
  • Asynchronous interactions
  • XML data transformation
  • Flow coordination

These requirements are transforming the application server from a container for presentation and tightly coupled business logic to an infrastructure that equally supports asynchronous messages and flow coordination. This transformation is enabled and accelerated by emerging standards that will extend the boundary of the Java platform as we know it (see Figure 3).

Key standards now being implemented in J2EE application servers to address these requirements include the following.

Extensible WSDL Binding Framework (JSR-208)
Web services are clearly demonstrating their value as an integration standard; however, not all back-end systems are SOAP or Web service enabled. The JSR-208 working group and existing frameworks such as Apache's WSIF (Web Services Invocation Framework) focus on letting the Java platform support Web services messaging without requiring every system to be wrapped with a Web service. In this way, our hardware manufacturer can use BPEL to orchestrate JMS messages sent to and received from the mainframe.
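
The idea behind such binding frameworks can be sketched as a registry of pluggable transports behind a transport-neutral interface. All names below (ServiceBinding, SoapBinding, JmsBinding) are hypothetical stand-ins; the real WSIF API is different, and no actual SOAP or JMS traffic is sent here.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a WSIF/JSR-208-style binding framework: the caller invokes
// an abstract operation, and a pluggable binding chooses the transport.
public class BindingSketch {

    // Transport-neutral view of a service invocation.
    interface ServiceBinding {
        String invoke(String operation, String payload);
    }

    // Stand-in for a SOAP/HTTP binding.
    static class SoapBinding implements ServiceBinding {
        public String invoke(String operation, String payload) {
            return "soap:" + operation + ":" + payload;
        }
    }

    // Stand-in for a JMS binding to, say, a mainframe queue.
    static class JmsBinding implements ServiceBinding {
        public String invoke(String operation, String payload) {
            return "jms:" + operation + ":" + payload;
        }
    }

    private final Map<String, ServiceBinding> bindings = new HashMap<>();

    public void register(String endpoint, ServiceBinding binding) {
        bindings.put(endpoint, binding);
    }

    // The orchestration layer never sees the transport, only the endpoint name.
    public String call(String endpoint, String operation, String payload) {
        return bindings.get(endpoint).invoke(operation, payload);
    }

    public static void main(String[] args) {
        BindingSketch registry = new BindingSketch();
        registry.register("financials", new SoapBinding());
        registry.register("mainframe", new JmsBinding());
        System.out.println(registry.call("mainframe", "submitOrder", "<order/>"));
    }
}
```

The orchestration code stays identical whether the endpoint is reached over SOAP, JMS, or anything else - which is precisely what lets BPEL drive non-SOAP back ends.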

Process Flow Coordination (BPEL)
Asynchrony, parallelism, sophisticated exception handling, long-running processes, and a need for compensating transactions change the fundamental nature of what we think of as an application. While the order management flow looks simple, the long-running and asynchronous message exchanges alone would make it complex to implement in Java today. Add parallelism and compensation logic and things get downright ugly.
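
A deliberately simplified illustration of that ugliness: two services invoked in parallel, where a failure in either branch means the other's completed work must be compensated. The service and method names here are invented for the example; BPEL expresses the same logic declaratively with a flow activity and compensation handlers.

```java
import java.util.concurrent.*;

// Hand-coded parallel invocation with compensation on partial failure.
public class ParallelFlowSketch {

    static String reserveInventory(boolean fail) {
        if (fail) throw new RuntimeException("inventory unavailable");
        return "inventory-reserved";
    }

    static String authorizePayment(boolean fail) {
        if (fail) throw new RuntimeException("payment declined");
        return "payment-authorized";
    }

    // Returns the outcome, compensating completed work on partial failure.
    public static String runFlow(boolean inventoryFails, boolean paymentFails) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> inventory = pool.submit(() -> reserveInventory(inventoryFails));
        Future<String> payment = pool.submit(() -> authorizePayment(paymentFails));
        try {
            String a = inventory.get(5, TimeUnit.SECONDS);
            String b = payment.get(5, TimeUnit.SECONDS);
            return a + "," + b;
        } catch (ExecutionException e) {
            // Compensation: undo whichever branch did complete.
            if (!inventoryFails) System.out.println("compensating: release inventory");
            if (!paymentFails) System.out.println("compensating: void payment");
            return "compensated:" + e.getCause().getMessage();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runFlow(false, false));
        System.out.println(runFlow(false, true));
    }
}
```

Even this tiny flow needs thread pools, futures, timeouts, and manual undo logic; real flows with more branches, longer lifetimes, and crash recovery multiply that burden, which is the gap BPEL fills.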

Reliable Web Messaging (WS-Reliability)
One of the challenges of SOA-based applications is that you can't assume that all the end-points are available at the same time, all the time. The WS-Reliability Web messaging standard lets the infrastructure guarantee the order and delivery of messages across service end-points.
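The in-order delivery guarantee can be sketched as a resequencer: buffer out-of-order messages and release each one only when every earlier sequence number has arrived. This is a hypothetical illustration of the pattern, not WS-Reliability's actual wire protocol, which also covers acknowledgments and retransmission.

```java
import java.util.*;

// Resequencer sketch: deliver messages strictly in sequence-number order.
public class ResequencerSketch {

    private final Map<Long, String> pending = new HashMap<>();
    private long nextToDeliver = 1;
    private final List<String> delivered = new ArrayList<>();

    // Accept a message with its sequence number, possibly out of order.
    public void receive(long sequence, String body) {
        pending.put(sequence, body);
        // Release the longest contiguous run starting at nextToDeliver.
        while (pending.containsKey(nextToDeliver)) {
            delivered.add(pending.remove(nextToDeliver));
            nextToDeliver++;
        }
    }

    public List<String> delivered() { return delivered; }

    public static void main(String[] args) {
        ResequencerSketch r = new ResequencerSketch();
        r.receive(2, "second");
        r.receive(3, "third");
        System.out.println(r.delivered()); // nothing yet: message #1 is missing
        r.receive(1, "first");
        System.out.println(r.delivered()); // all three, in order
    }
}
```

Pushing this bookkeeping into the infrastructure is exactly what lets application code assume ordered, guaranteed delivery.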

Security (WS-Security)
In addition, security requirements are obvious when exchanging text-based messages over (and across) insecure networks. Addressing this problem with infrastructure-level standards provides security without sacrificing interoperability.

XML Data and Transformation (JAXB, XQuery)
SOA-based applications need to access and manipulate XML documents flowing into and out of each service. New Java facilities like JAXB and languages like XQuery simplify these tasks.
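As a dependency-free taste of that task, here is a minimal example of pulling data out of an order document with the JDK's built-in DOM parser; the XML shape and element names are invented for the example. JAXB goes a step further and binds the XML to typed Java objects, and XQuery adds declarative query and transformation.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Minimal XML data access with the JDK's standard DOM parser.
public class OrderXmlSketch {

    // Extract the <sku> value from an order document.
    public static String extractSku(String xml) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName("sku").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String order = "<order><sku>WIDGET-7</sku><quantity>10</quantity></order>";
        System.out.println(extractSku(order)); // prints WIDGET-7
    }
}
```

With JAXB the same document would unmarshal into an Order object with typed fields, removing the string-level navigation shown here.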

User Interactivity (WSRP)
Most business processes incorporate user interactions at many levels, such as portals to initiate and inspect the state of processes, manual approval tasks, and exception handling. The Web Services for Remote Portlets (WSRP) standard enables the next generation of application servers to support user interactions in composite processes as robustly as they're supported for Web applications today.

Choreography and Contracts (WS-CDL)
As the Web services standards reduce the barrier for trading partners to interact, a formalism is required to describe the contracts involved in richer business collaborations.

Summary
The trend towards building SOA-based applications marks a fundamental shift in the way that applications are built. Applications today are triggered by events and orchestrate services from both existing and new applications, and integration must be asynchronous and loosely coupled to be reliable. Not coincidentally, a new set of standards has emerged to address these requirements, and vendor adoption of these open standards is increasing confidence and accelerating adoption in the IT community. This is clearly promising and will offer enterprises more seamless interoperability between heterogeneous systems and services than was previously possible. All the applications deployed on this standards-based infrastructure benefit from the inherent capabilities of the underlying platform.

Of course, some challenges remain. Several standards in areas such as reliability and connectivity are less mature. Also, because this new architecture doesn't fully address information quality, vendors must provide data-quality services that offer profiling and cleansing features. Still, it's clear that these developments are going to dramatically change how we build applications - and that the application server and BPEL are at the core of this new wave.

More Stories By Amlan Debnath

Oracle Integration Guru and vice president of Server Technology, Amlan Debnath joined Oracle from TIBCO Software Inc., where he was vice president of Integration Products. While at TIBCO, Amlan drove TIBCO’s shift from messaging vendor to major player in the enterprise integration market, working actively with many world-leading customers to understand their requirements and develop products that better satisfied their needs.

