ESB Integration Patterns

An insider's look into SOA's implementation backbone

The past several years have seen some significant technology trends, such as service-oriented architecture (SOA), enterprise application integration (EAI), business-to-business (B2B), and Web services. These technologies have attempted to address the challenges of improving the results and increasing the value of integrated business processes, and have garnered the widespread attention of IT leaders, vendors, and industry analysts. The enterprise service bus (ESB) draws the best traits from these and other technology trends to form a new architecture for integration. The ESB concept is a new approach to integration that can provide the underpinnings for a loosely coupled integration network that can scale beyond the limits of a hub-and-spoke EAI broker.

An ESB is a highly distributed, event-driven, enterprise SOA that is geared toward integration. It is a standards-based integration platform that combines messaging, Web services, data transformation, and intelligent routing to reliably connect and coordinate the interaction of significant numbers of diverse applications across extended enterprises with transactional integrity. An extended enterprise represents an organization and its business partners, which are separated by both business boundaries and physical boundaries. In an extended enterprise, even the applications that are under the control of a single corporation may be separated by geographic dispersion, corporate firewalls, and interdepartmental security policies.

An ESB is designed to be pervasive, meaning that it is capable of spanning the extended enterprise. But an ESB is also pervasive in the sense that it is capable of being used as a general-purpose integration environment that is suitable for any project, no matter how large or how small.

The SOA of the ESB
An ESB is the implementation backbone for a loosely coupled, event-driven SOA that enables a highly distributed universe of named routing destinations across a multi-protocol message bus.

An SOA provides an integration architect with a broad abstract view of applications and integration components to be dealt with as high-level services. Service components in an ESB expose coarse-grained, message-driven interfaces for the purpose of sharing data between applications, both synchronously and asynchronously. In an ESB, applications and event-driven services are connected through the bus as abstract endpoints. These abstract endpoints are tied together in a loosely coupled SOA, which allows them to operate independently from one another. An integration architect uses an ESB to tie together assemblies of abstract endpoints that form composite business processes, or process flows (see Figure 1).

What the endpoints actually represent can be very diverse. For example, an endpoint may represent a discrete operation, like a specialized service for calculating sales tax. The underlying implementation of the endpoint could represent a local binding to an application adaptor, or a callout to an external Web service. The applications and services can be physically located anywhere that is accessible by the bus.

Itinerary-Based Routing
In an ESB, data is passed between endpoints using messages. The coordination of the message passing is done using an ESB concept known as itinerary-based routing. A message itinerary is metadata that gets carried with a message that provides a list of forwarding addresses. The itinerary is a set of instructions telling the ESB invocation framework which endpoints the message needs to be delivered to as it travels from endpoint to endpoint across the bus. Itineraries contribute to the distributed nature of the ESB architecture by eliminating the dependency on a centralized routing engine, which could potentially be a single point of failure. They are intended for relatively finite microflows of messages. Simple branching and merging of routing paths can be achieved through integration patterns that take advantage of specialized splitter and aggregator services. More sophisticated process orchestrations are also possible using specialized orchestration engines that can be layered onto the bus as additional services.
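To make the idea concrete, the following Java sketch shows one way an itinerary might be represented: an ordered list of forwarding addresses that travels with the message. The Itinerary class and the endpoint names used here are illustrative assumptions, not the API of any particular ESB product.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch of a message itinerary: an ordered list of
    // forwarding addresses (endpoint names) carried along with the message.
    public class Itinerary {
        private final Deque<String> remainingStops = new ArrayDeque<>();

        public Itinerary(String... endpointNames) {
            for (String name : endpointNames) {
                remainingStops.addLast(name);
            }
        }

        // The invocation framework asks the itinerary for the next destination
        // after each service finishes; no central routing engine is consulted.
        public String nextDestination() {
            return remainingStops.pollFirst();   // null when the journey is complete
        }

        public boolean isComplete() {
            return remainingStops.isEmpty();
        }
    }

Under those assumptions, a simple flow could be configured as new Itinerary("validate", "transform", "orderEntry"), with each container forwarding the message to nextDestination() once its service completes.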

Configuration, Not Coding
The mantra of the ESB is "configuration rather than coding." In an ESB, abstract endpoints, which are accessible through application adapters, message queues, Web services invocations, and a variety of other protocols, are configured through a tool interface rather than coded into applications. It's not that there's anything wrong with writing code, but there's plenty of code to be written elsewhere that doesn't have to do with hard-wiring interdependencies between applications and services.

With its distributed deployment infrastructure, an ESB can efficiently provide central configuration, deployment, and management of services that are distributed across the extended enterprise. Artifacts that affect the behavior of an integration service, such as an XSLT stylesheet that can be used by a data transformation service, are also configurable in an ESB.

The ESB Service Container
The ESB's highly distributed nature and its mantra of "configuration rather than coding" are largely due to the traits of the ESB service container. A service container is the physical manifestation of the abstract endpoint, and provides the implementation of the service interface. A service container is a remote process that can host software components.

A service container is simple and lightweight, but it can have many discrete functions. As shown in Figure 2, service containers take on different roles as they are deployed across an ESB.

In its simplest form, a service container is an operating system process that can be managed by the ESB's invocation and management framework. A service container provides a number of facilities for the service implementation such as event dispatch, thread management, security (encryption, authentication, and access control), and QoS via reliable message delivery. Unlike its distant cousins, the J2EE application server container and the EAI broker, the ESB service container allows the selective deployment of integration functionality exactly when and where you need it, and nothing more than what you need.

A service container can host a single service, or can combine multiple services in a single container environment (see Figure 3).

An ESB service is also scalable in a fashion that is independent of all other ESB services. A service container may manage multiple instances of a service within a container. Several containers may also be distributed across multiple machines for the purposes of scaling up to handle increased message volume (see Figure 4).
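As a rough illustration of the container's role, the following Java sketch shows a hypothetical container that hosts one or more services and dispatches incoming messages to them on a small thread pool. The Service interface, the endpoint names, and the pool size are all assumptions chosen to keep the sketch self-contained; they do not represent a specific ESB product.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical service container: hosts one or more services, manages a
    // thread pool, and dispatches each incoming message to the named service.
    public class ServiceContainer {
        public interface Service { void onMessage(String message); }

        private final Map<String, Service> hostedServices = new ConcurrentHashMap<>();
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        // Services are deployed into the container by configuration,
        // not compiled into the applications that use them.
        public void deploy(String endpointName, Service service) {
            hostedServices.put(endpointName, service);
        }

        // Called by the (hypothetical) invocation framework when a message
        // arrives for an endpoint hosted in this container.
        public void dispatch(String endpointName, String message) {
            Service service = hostedServices.get(endpointName);
            if (service != null) {
                workers.submit(() -> service.onMessage(message));
            }
        }

        public void shutdown() { workers.shutdown(); }
    }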

The ESB Service Interface
The ESB container provides the message flow in and out of a service. It also handles a number of facilities, such as service life cycle and itinerary management. As shown in Figure 5, the container manages an entry endpoint and an exit endpoint, which are used by the container to dispatch a message to and from the service.

Messages are received by the service from a configurable entry endpoint. Upon completion of its task, the service implementation simply places its output message in the exit endpoint to be carried to its next destination. That destination may be a reply to the original sender of the message or, more often, the next leg of the message's journey, reached using a forwarding address. The output message may be the same message that was received, a version of it modified by the service, or a completely new message created by the service as a "response" to the incoming message and placed in the exit endpoint.
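The contract between a service implementation and its container is product-specific, but it might look roughly like the Java sketch below. The Endpoint and EsbService types are illustrative assumptions made for this article, not an actual ESB API.

    // Hypothetical view of the entry/exit endpoint contract: the container
    // pulls a message from the entry endpoint, hands it to the service, and
    // forwards whatever the service places in the exit endpoint.
    public class EsbServiceContract {

        public interface Endpoint {
            void put(String message);
            String take();
        }

        public interface EsbService {
            // The service never addresses other applications directly; it only
            // reads from its entry endpoint and writes to its exit endpoint.
            void process(Endpoint entry, Endpoint exit);
        }

        // A trivial pass-through service: the output is the unchanged input
        // message, and the forwarding addresses decide what happens next.
        public static class PassThroughService implements EsbService {
            public void process(Endpoint entry, Endpoint exit) {
                exit.put(entry.take());
            }
        }
    }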

What is placed in the exit endpoint depends on the context of the situation and the message being processed. In the case of a content-based routing (CBR) service, the message content will be unchanged, with new forwarding addresses set in the message header.

In more sophisticated cases, one input message can transform into many outputs, each with its own routing information. For example, a splitter service can receive a purchase order document, split it into multiple output messages, and send out the purchase order and its individual line items as separate messages to an inventory or order fulfillment service. The service implementation in this case does not have to be written using traditional coding practices; it can be implemented as a specialized transformation service that applies an XSLT stylesheet to the purchase order document to produce the multiple outputs.
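Written in plain Java rather than as a configured XSLT transformation, a splitter might look roughly like the sketch below. The purchase-order structure and the lineItem element name are assumptions made purely for illustration; the point is simply that one input message fans out into several output messages.

    import java.io.StringReader;
    import java.io.StringWriter;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    // Hypothetical splitter service: one purchase-order message in,
    // one message per <lineItem> element out.
    public class PurchaseOrderSplitter {

        public List<String> split(String purchaseOrderXml) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(purchaseOrderXml)));

            NodeList items = doc.getElementsByTagName("lineItem");
            Transformer serializer = TransformerFactory.newInstance().newTransformer();
            serializer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

            List<String> outputMessages = new ArrayList<>();
            for (int i = 0; i < items.getLength(); i++) {
                StringWriter out = new StringWriter();
                serializer.transform(new DOMSource(items.item(i)), new StreamResult(out));
                outputMessages.add(out.toString());   // each line item becomes its own message
            }
            return outputMessages;
        }
    }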

Process Tracking and Error Handling
In addition to a normal exit endpoint to handle the outgoing flow of a message, additional destinations are available to the service for auditing the message and for reporting errors. The tracking endpoint can be used to monitor the progression of a message as it travels through a business process. Tracking can be handled at both the individual service level and the business-process level. From the service implementation's point of view, it simply places data into the tracking endpoint or fault endpoint, and the surrounding ESB invocation and management framework takes care of the tracking and error reporting. This approach provides a separation between the implementation of the service and the details of the surrounding fault handling. The implementer of a service need only be concerned that it has a place to put such information, whether it is information concerning the successful processing of good data, or the reporting of errors and bad data.
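In code, that separation might look like the following hypothetical sketch: the service only knows that it has tracking and fault endpoints to write to, and what the framework then does with those messages is invisible to it. The Endpoint interface and doBusinessWork placeholder are assumptions made for illustration.

    // Hypothetical sketch of tracking and fault endpoints: the service reports
    // progress and errors by writing to endpoints; the surrounding framework
    // owns the actual auditing and error handling.
    public class AuditedService {
        public interface Endpoint { void put(String message); }

        private final Endpoint exit;
        private final Endpoint tracking;
        private final Endpoint fault;

        public AuditedService(Endpoint exit, Endpoint tracking, Endpoint fault) {
            this.exit = exit;
            this.tracking = tracking;
            this.fault = fault;
        }

        public void onMessage(String message) {
            try {
                String result = doBusinessWork(message);   // illustrative placeholder
                tracking.put("processed: " + message);     // audit trail for the process
                exit.put(result);                          // continue along the itinerary
            } catch (Exception e) {
                fault.put("failed: " + message + " reason: " + e.getMessage());
            }
        }

        private String doBusinessWork(String message) { return message; }
    }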

Integration Patterns
One of the many benefits of using itinerary-based routing to coordinate the interactions between discrete integration services is the ease with which integration patterns can be created and reused to solve common integration challenges. A message itinerary can be a powerful and flexible tool for intercepting the path of a message and performing operations on it, thus adding value to the integration environment. Through configuration and management tools, additional processing steps can be inserted into a business process definition as event-driven services in an XML processing pipeline. The following describes two of the common integration patterns in use today: the "VETO" pattern, and a variation known as the "VETRO" pattern.

The VETO Pattern
VETO is a common integration pattern that stands for Validate, Enrich, Transform, Operate (see Figure 6). The VETO pattern and its variations can ensure that consistent, validated data will be routed throughout the ESB.
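Expressed in terms of the hypothetical Itinerary sketch shown earlier, a VETO flow is simply four endpoints visited in order; the endpoint names below are assumptions chosen for illustration.

    // A VETO process as an itinerary of four configured endpoints
    // (hypothetical names, reusing the Itinerary sketch from above).
    Itinerary vetoFlow = new Itinerary("validateOrder", "enrichOrder",
                                       "transformToTarget", "submitToOrderEntry");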

Validate
The "Validate" step is usually the first part of any ESB process and can be accomplished in a number of ways. It's important that if possible, this step happen independently; this removes the burden of validation from all of the downstream service implementations and promotes reuse. Building validation directly into the first service of a process makes it difficult to insert an additional service in front of it without requiring that the new service also provide its own validation.

An example of validation is to simply verify that an incoming message contains a well-formed XML document and conforms to a particular schema or WSDL document that describes the message. This requires that the service always have available the up-to-date XML schema for a particular message type. The schema and WSDL can be kept in the directory service and managed remotely by the management infrastructure of the ESB. A service may also have scripting associated with it, which can be made available to the service as a configuration parameter.

If the target data is not in XML format, or if there is no schema or WSDL available, then a custom service can be used to validate the incoming message.
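For the common case of XML schema validation, the Validate step can be little more than a thin wrapper around the standard javax.xml.validation API, as in this sketch. Fetching the schema from the ESB's directory service is assumed and is represented here by a plain file reference.

    import java.io.File;
    import java.io.StringReader;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import org.xml.sax.SAXException;

    // Hypothetical Validate service: checks that the incoming message is
    // well-formed XML and conforms to a configured schema. Where the schema
    // comes from (directory service, local file) is a deployment detail.
    public class ValidateService {
        private final Schema schema;

        public ValidateService(File schemaFile) throws SAXException {
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            this.schema = factory.newSchema(schemaFile);
        }

        // Returns true if the message may continue along its itinerary;
        // a real service would route failures to a fault endpoint instead.
        public boolean isValid(String messageXml) {
            try {
                Validator validator = schema.newValidator();
                validator.validate(new StreamSource(new StringReader(messageXml)));
                return true;
            } catch (Exception e) {
                return false;
            }
        }
    }

Keeping this logic in its own service, configured with the schema as an artifact, is what lets validation be inserted in front of any process without touching the downstream services.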

Enrich
The "Enrich" step involves adding additional data to a message to make it more meaningful and useful to a target service or application. The Enrich service could be implemented to invoke another service to look up additional data, or it could access a database to get what it needs.

Transform
The "Transform" step converts the message to a target format. This often involves converting the data structure to an internal canonical format, or converting from the canonical format to the target format of the "Operate" step. The target system may have its own built-in validation rules requiring that the transformation step modify the incoming data in order to prevent the target system from rejecting the message. In this sense, the transformation step is also providing pre-validation protection in a separate service that can be separately managed. While this may mean redundant logic in the short term, it provides more flexibility in the long term, because it allows the "Operate" step to focus on business logic.

Operate
The "Operate" step is the invocation of the target service or an interaction with the target application. If the target operation is an enterprise application that requires its own data format, then the previous transformation step converts the message to the target format required by the application.

Variations: The VETRO Pattern
The VETO pattern has many variations. One such variation is the VETRO pattern, which includes a "Route" step such as a content-based router service (Figure 7).

In some cases the validate, enrich, and transform steps can be accomplished in one service implementation. For example, a CBR service may use a script-based validation directly in the service itself, rather than using a separate service. This may provide some convenience, particularly if the context of validation can't easily be applied to other uses. However, keeping them as separate services further promotes loose coupling and service reuse, and allows the validation to be separately defined and managed. Through the flexibility of configuration and deployment, that choice can be revisited over time without affecting all of the application endpoints that use the pattern. The stages of the VETO pattern can be implemented as separate services that can be configured, reused, and independently swapped out for alternate implementations.
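The Route step itself inspects the message content and picks the next destination. The sketch below shows a hypothetical content-based router that chooses a forwarding address based on an order total; the threshold and the endpoint names are assumed configuration values, not part of any real product.

    // Hypothetical content-based router for the VETRO pattern's Route step:
    // the message content is unchanged, only the forwarding address differs.
    public class ContentBasedRouter {
        private final double approvalThreshold;

        public ContentBasedRouter(double approvalThreshold) {
            this.approvalThreshold = approvalThreshold;   // a configured value, not hard-coded
        }

        // Returns the name of the next endpoint on the message's journey.
        public String route(double orderTotal) {
            return (orderTotal > approvalThreshold) ? "manualApproval" : "orderFulfillment";
        }
    }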

The VETO concept is profoundly simple, yet is at the heart of what an integration architect does regularly with an ESB. An ESB provides an event-driven SOA for applications in an integration fabric. Regardless of the process routing and orchestration method being used - whether itineraries or the more sophisticated process modeling using an orchestration service - it is the use of integration patterns such as VETO and its variations that provide the overall value and flexibility to the integration fabric.

Summary
I hope that this brief introduction to the ESB and its use of integration patterns has provided you with insight into the internal workings of the ESB, and given you a sense of how an integration architect can use event-driven components as services to construct reusable integration patterns in an enterprise SOA. The VETO pattern is one of many being used in ESB-based integrations.

I encourage you to learn more about the ESB as a technology concept, for it is already rapidly changing the way integration is being done across a variety of industries. So get reading, and get on the bus!

About the Author

David Chappell is vice president and chief technologist for SOA at Oracle Corporation, and is driving the vision for Oracle’s SOA on App Grid initiative.
