
Service Platforms Emerge as the Foundation for SOA

SCA, domain-specific languages and XML processing capabilities - the next generation

Enterprise software architectures are shifting from collections of applications designed around user interfaces to assemblies of reusable services. The first step in the evolution toward service-based applications was the definition and publication of services encapsulating discrete business functions.

The second wave combined services in point-to-point fashion, communicating over protocols aimed at system interoperability. The next wave of SOA adoption will focus on enabling composite service definitions that combine domain-specific languages for process orchestration, XML transformations, message routing, and business rules.

In this article, we'll look at how SOA platforms are evolving to meet these requirements. Specifically, we'll examine three related themes:
1.  The nature and role of service platforms that are designed to host composite services and complex business processes;
2.  The changes in how applications are described and designed in the new SOA platforms;
3.  The importance of key standards in simplifying and commoditizing the integration of services and applications.

In particular, we'll look at a series of emerging industry standards that describe how to design composite services implemented using many different implementation languages and protocols. These standards are defined in the Service Component Architecture (SCA) framework.

Why Services?
Interest in architectures based on services is driven by three distinct yet complementary technical goals. These goals include the need to:
1.  Integrate software functions across the data center to provide a consistent and rationalized approach to dealing with enterprise integration scenarios. Data and business functionality are often bound up in silos that need to be bridged to create new business functions that use multiple back-end systems. While initial approaches to Enterprise Application Integration (EAI) often included ad hoc designs and layers of proprietary infrastructure, this is changing: a model based on the service paradigm and open standards is becoming the norm.
2.  Expose software assets as stateless services based on open Internet standards. Traditional systems integration required a uniform substrate to connect endpoints. We have reached a point in the industry where virtually any software asset can be exposed as a SOAP-based Web Service and described in WSDL.
3.  Leverage discrete services to build new business processes bridging many systems together. The standard language for building business processes is BPEL. However, effectively building a composite service out of multiple existing systems often requires additional functionality, including business rules, declarative XML processing capabilities, and asynchronous business events. The key requirement for service platforms is to support the composition of multiple services using multiple technologies required to implement composite services.
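To make the second goal concrete, consider how a back-end employee-creation function might be described in WSDL. The names, namespaces, and message shapes below are hypothetical; the point is simply that any such asset can be given a standard, self-describing SOAP contract:

```xml
<!-- Hypothetical WSDL 1.1 description of an employee-creation service -->
<definitions name="EmployeeService"
             targetNamespace="http://example.com/hr"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:tns="http://example.com/hr">
  <message name="createEmployeeRequest">
    <part name="employee" type="xsd:string"/>
  </message>
  <message name="createEmployeeResponse">
    <part name="employeeId" type="xsd:string"/>
  </message>
  <portType name="EmployeePortType">
    <operation name="createEmployee">
      <input message="tns:createEmployeeRequest"/>
      <output message="tns:createEmployeeResponse"/>
    </operation>
  </portType>
  <binding name="EmployeeSoapBinding" type="tns:EmployeePortType">
    <soap:binding style="rpc"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="createEmployee">
      <soap:operation soapAction=""/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
  <service name="EmployeeService">
    <port name="EmployeePort" binding="tns:EmployeeSoapBinding">
      <soap:address location="http://example.com/hr/EmployeeService"/>
    </port>
  </service>
</definitions>
```

Once a contract like this is published, any WS-capable client or orchestration engine can consume the service without knowledge of the implementation behind it.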

These technical goals are mirrored by specific business benefits that can be traced directly to SOA adoption. The business benefits start with increased flexibility to leverage and maximize the value of an organization's IT assets. SOA also increases productivity by using standards and reducing the effort that is required to get services communicating with each other.

Lastly, the SOA approach is comprehensive: it's a model that can tie together many technologies and describe ways in which services are related to each other. All these factors lead to cost savings.

Figure 1 shows a fictional HR system architected as a Service Oriented Architecture. This system provides an end-to-end solution for employee provisioning, including a front-end to capture employee details, BPEL processes to orchestrate the provisioning of the employee assets, and a human workflow to route and gather manager approvals through e-mail.

An HR representative enters the new employee details on the HR Web site. This results in the publication of a message containing these details on the enterprise service bus. The ESB then looks at the new employee's country of hire and routes the message to the appropriate BPEL process, since HR regulations and practical details such as office space differ greatly from one location to another.
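The country-based routing step might be expressed as a content-based routing rule on the bus. The concrete syntax varies widely by ESB product, so the fragment below is only an illustrative sketch, using XPath conditions against a hypothetical employee message schema:

```xml
<!-- Illustrative content-based routing rule; real ESB syntax is
     product-specific, and the message schema here is hypothetical -->
<routingRule name="RouteByCountryOfHire">
  <choice>
    <when test="/employee/countryOfHire = 'US'">
      <route to="EmployeeCreationProcess_US"/>
    </when>
    <when test="/employee/countryOfHire = 'FR'">
      <route to="EmployeeCreationProcess_FR"/>
    </when>
    <otherwise>
      <route to="EmployeeCreationProcess_Default"/>
    </otherwise>
  </choice>
</routingRule>
```

Keeping this decision in a declarative routing rule, rather than inside any one process, is what allows new countries to be added without touching the existing processes.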

The "Employee Creation" BPEL process involves various other systems. First, it creates a new entry in the HR database. Then, it invokes an external rules engine to find the level of approvals required for a new employee based on such criteria as grade and department.

The next step is to gather the appropriate approvals, and this task is orchestrated with a human workflow that will take care of e-mailing managers. Finally, once all approvals have been received, the BPEL process publishes an event on the bus to trigger various other provisioning processes: IT provides the employee with all required internal accounts such as e-mail, payroll sets up the employee in Oracle Financials, and facilities allocates office space.
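The steps just described map naturally onto a BPEL orchestration. The skeleton below is a sketch in WS-BPEL 2.0; partner link, operation, and variable names are hypothetical, and the partner link and variable declarations are omitted for brevity:

```xml
<!-- Skeleton of the "Employee Creation" process; partner links,
     variables, and fault handling omitted for brevity -->
<process name="EmployeeCreation"
         targetNamespace="http://example.com/hr/bpel"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- message arrives from the ESB after country-based routing -->
    <receive partnerLink="client" operation="createEmployee"
             variable="employeeDetails" createInstance="yes"/>
    <!-- step 1: create the entry in the HR database -->
    <invoke partnerLink="hrDatabase" operation="insertEmployee"
            inputVariable="employeeDetails"/>
    <!-- step 2: ask the external rules engine for required approvals -->
    <invoke partnerLink="rulesEngine" operation="getApprovalLevels"
            inputVariable="employeeDetails" outputVariable="approvals"/>
    <!-- step 3: human workflow e-mails managers and gathers approvals -->
    <invoke partnerLink="approvalWorkflow" operation="gatherApprovals"
            inputVariable="approvals" outputVariable="approvalResult"/>
    <!-- step 4: publish an event to trigger downstream provisioning -->
    <invoke partnerLink="eventBus" operation="publishEmployeeCreated"
            inputVariable="employeeDetails"/>
  </sequence>
</process>
```

Note that the approval logic itself never appears in the process: the rules engine and the human workflow are simply partners the process invokes, which is precisely what makes them independently changeable.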

The benefits of designing employee provisioning in this decoupled fashion are:
• Because there is a separate core employee-creation BPEL process per country, each process is greatly simplified while still accommodating vastly different local regulations. A single application or process that tried to overlay all these variations would result in extremely complex logic.
• Because there is one process per country, adding a new location is a much less risky task: there's no need to touch the running processes for the existing locations.
• Because the approval levels are externalized to the rules engine, HR representatives can easily update the rules as hiring guidelines evolve, without having to modify the core BPEL employee creation process - a task that would require developers and testers to be involved.

But these benefits come at the price of having to manage many different types of artifacts: BPEL processes, rules definitions, and ESB flows to name a few. In many cases, this means running many different kinds of middleware that are packaged, deployed, and administered separately. The cost of the flexibility in the design comes in the form of much greater management, administration, and governance. The new SOA platforms and standards are designed to eliminate these costs by providing one model for building and running composite services.
In the following sections we'll see how SCA provides a model that lets us combine these different technologies into a single composite service definition - a key requirement for SOA solutions.

Service Component Architecture

Until recently, the development of composite services was hampered by a reliance on proprietary models for building new services and business processes. The development and acceptance of BPEL as a standard for service orchestration was a major step forward for the industry. The Service Component Architecture (SCA) is the next step in that evolution. In fact, we believe that the SCA will be viewed in retrospect as the key enabling technology for the widespread, successful adoption of SOA.

The SCA is a family of specifications developed by a group of leading vendors and platform providers in the integration and applications spaces. In February 2007 an initial set of SCA 1.0 specifications completed incubation and were published on www.osoa.org. The authors announced their intention to submit the specifications to OASIS' open standards process. In addition, a new OASIS member section, the Open Composite Services Architecture (OpenCSA) Member Section (http://www.oasis-opencsa.org), was created to coordinate the several technical committees that will begin work on these specifications later this year.
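To give a flavor of the assembly model, the sketch below shows how the provisioning solution from Figure 1 might be expressed as a single SCA 1.0 composite. Component names and implementation details are hypothetical, and rules-engine implementation types in particular are vendor extensions:

```xml
<!-- Sketch of an SCA 1.0 composite assembling the provisioning
     solution; names and implementations are hypothetical -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:hr="http://example.com/hr/bpel"
           name="EmployeeProvisioning">
  <component name="EmployeeCreation">
    <!-- a BPEL process is one supported implementation type -->
    <implementation.bpel process="hr:EmployeeCreation"/>
    <reference name="approvalRules" target="ApprovalRules"/>
  </component>
  <component name="ApprovalRules">
    <!-- shown here as a Java component; a real rules engine would
         use a vendor-specific implementation type -->
    <implementation.java class="com.example.hr.ApprovalRulesComponent"/>
  </component>
  <!-- expose the process to the outside world as a Web service -->
  <service name="EmployeeCreationService" promote="EmployeeCreation">
    <binding.ws/>
  </service>
</composite>
```

The key point is that heterogeneous implementation technologies - BPEL, rules, Java - appear as uniform components wired together in one artifact, which is exactly the consolidation of packaging, deployment, and administration the previous section called for.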

More Stories By Greg Pavlik

Greg Pavlik is an architect at Oracle. In this role he works on a combination of technology strategy, product development, and standards. He is currently responsible for Oracle’s SOA and Web services offerings. Greg is also the author of Java Transaction Processing (Prentice Hall, 2004).

More Stories By Demed L'Her

Demed L'Her is a senior principal product manager at Oracle. His focus is on enterprise service buses, JMS and next-generation SOA platforms. He has been involved in messaging and integration projects worldwide for 10 years.

