Extending Your SOA for Intercompany Integration

Find your own value

Service-oriented architecture, or SOA, is the modern notion of connecting systems together at both the information and service levels. Indeed, enterprises are racing to enable their existing applications to externalize services, as well as to build the appropriate integration infrastructure around them.

However, extending your SOA to automate your business means you must work and play well with other organizations. Managing services, orchestration layers, and connections between companies is not as easy as one might imagine.

Truth be told, while we've understood the value of SOA for some time now, the concept is still new to most enterprises. Not until the advent of Web services did we have a widely accepted standard and enabling technology that allows us to access all types of systems through a common services interface. In fact, we may be at a point in time where more is understood about the technology than the ways in which it fits into the enterprise or value chain. Organizations seem to adopt Web services without thought of strategic fit and function. Adoption is only half the battle.

To address strategic concerns, many enterprises are attempting to figure out how to best leverage SOA within their firewalls, as well as between organizations that are, or should be, part of their business processes. This notion of extending SOA to externalize and share processes and services is really the ultimate destination for SOA; certainly it was the vision for Web services.

Value of Services
Today most interorganization electronic business is conducted using traditional information-oriented mechanisms such as EDI or simple FTP exchanges. These exchanges deal with simple information, and they tend to occur in nightly or weekly batch transfers, meaning that latency is a real consideration.

While traditional information-oriented exchanges are the way business gets done today, electronic business has two basic needs: access to information at near-zero latency, and the ability to view external customer or supplier systems as sets of remote services as well as clusters of information sinks and sources.

The ability to leverage services will provide organizations with a clear advantage. In addition to the ability to see information in real time, they can abstract application behavior and leverage many remote services inside their enterprise systems as if they were local. This is the basic notion of Web services, so we won't get too deeply into it here.

Access to services implies that business processes existing in and between companies can be coupled at the services layer, meaning that services are shareable (if allowed) among the partner organizations. For instance, the service that defines how inventory is allocated supply chain-wide is shareable, and thus the service is not only consistent, but does not require reinvention within each organization. Moreover, since these services are always visible, information bound to these services is produced and consumed in real time. In essence, you're creating a virtual set of applications that exists between trading partners and allows those trading partners to function like a single entity, and thus service common business processes as if they existed in a single company.
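
To make the idea concrete, here is a minimal sketch, in Python, of what a single shared inventory-allocation service consumed by two trading partners might look like. The class, method, and partner names are invented for illustration and are not part of any particular product or standard.

```python
# Minimal sketch: one shared inventory-allocation service, two consumers.
# All names (InventoryAllocationService, allocate, partner names) are hypothetical.

class InventoryAllocationService:
    """A single, shared service that every trading partner calls."""

    def __init__(self, stock):
        self.stock = dict(stock)  # sku -> units on hand

    def allocate(self, partner, sku, units):
        """Allocate units to a partner if stock allows; the result is visible to all callers."""
        available = self.stock.get(sku, 0)
        granted = min(available, units)
        self.stock[sku] = available - granted
        return {"partner": partner, "sku": sku, "granted": granted}


# The same service instance stands in for the "virtual application"
# shared across the trading community.
shared = InventoryAllocationService({"WIDGET-42": 100})
print(shared.allocate("RetailerA", "WIDGET-42", 60))      # consumes 60 units
print(shared.allocate("ManufacturerB", "WIDGET-42", 60))  # only 40 remain to grant
```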

By leveraging this type of architecture, businesses have the opportunity to reduce inventory costs dramatically. For example, all manufacturer systems would have service- and information-level visibility into all retail systems, all parts suppliers would have the same access, and perhaps their raw materials providers as well. With everyone sharing both information and services, business processes are fully automated, and inefficiencies such as overstocking, understocking, and manufacturing delays drop sharply. What's more, and perhaps more important, customer satisfaction goes up, since the items customers demand are available, and at the best possible price.

Functional Components
So, how do you begin sharing your SOA? You must first break the SOA down into several basic components before attempting integration. These include:

  • Private and public services
  • Public and private processes
  • Data and abstract data
  • Monitoring and event management
  • Points of integration
  • Directory services
  • Identity management and security services
  • Semantic management
Private and public services refer to services that you create for use within your organization (private) and services that you create to share with your partner organizations (public). These concepts are simple enough, although understanding which services to make public and which to make private requires a bit of analysis.

Public services are those that are redundant within your trading community, such as logistics, inventory, or billing. By exposing these services to outside organizations, you allow them to share the service and thus avoid their own development cost, and also allow them to leverage a shared service as a point of integration and a binding point for common processes.

There are a few key criteria for selecting the services to make public, that is, exposed to trading partners. First, the service should address a need that is redundant across two or more entities; in other words, you solve the same problem for several partners. Second, the service should be unique within the trading community; otherwise it makes sense to look for an existing public service to solve the problem. Finally, the service should offer ease of integration, including the ability to discover its semantics as well as its interfaces.
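
As a rough illustration, the following sketch encodes those three criteria as a simple screening function; the field names and the example service record are hypothetical.

```python
# Sketch of the three selection criteria as a simple screening function.
# The criteria come from the discussion above; the record fields are invented.

def is_public_candidate(service):
    redundant = service["partners_with_same_need"] >= 2        # criterion 1: shared need
    unique = not service["equivalent_public_service_exists"]   # criterion 2: unique to the community
    integrable = service["publishes_interface_and_semantics"]  # criterion 3: easy to integrate
    return redundant and unique and integrable

billing = {
    "partners_with_same_need": 4,
    "equivalent_public_service_exists": False,
    "publishes_interface_and_semantics": True,
}
print(is_public_candidate(billing))  # True -> worth exposing to the trading community
```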

In order to make services public, you must create or leverage an existing shared directory service that allows those outside your organization to locate, discover, and leverage the services you deem public. Directories may be proprietary, LDAP-based, or UDDI-based (or mixed and matched). Typically these directories are public, but they support the notion of public and private services, processes, and semantics.
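
The sketch below stands in for such a directory using a plain in-memory structure rather than a real LDAP or UDDI registry; the entry fields, endpoints, and visibility rule are assumptions made for illustration.

```python
# In-memory stand-in for a shared directory; a production directory would be
# LDAP- or UDDI-based. Entry fields and endpoints are hypothetical.

DIRECTORY = {
    "inventory-allocation": {
        "visibility": "public",
        "endpoint": "https://partner-hub.example.com/services/inventory",
        "semantics": "inventory-v1",
    },
    "employee-onboarding": {
        "visibility": "private",
        "endpoint": "https://internal.example.com/services/hr",
        "semantics": "hr-v2",
    },
}

def discover(name, caller_is_external):
    """Return a directory entry only if the caller is allowed to see it."""
    entry = DIRECTORY.get(name)
    if entry is None:
        return None
    if caller_is_external and entry["visibility"] != "public":
        return None  # private services stay invisible to trading partners
    return entry

print(discover("inventory-allocation", caller_is_external=True))
print(discover("employee-onboarding", caller_is_external=True))  # None
```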

Public and private processes provide orchestration of services, binding them together into a business process to drive information movement and invocation of services. You may consider processes or orchestrations as a group of services gathered together to solve a particular business problem, an overriding control mechanism, if you will (see Figure 1).

There are three types of processes to consider when visualizing enterprise and cross-enterprise activity: private, public, and specialized processes.

  • Private processes exist at the intra-company level, allowing the business user to define common processes that span only systems that are within the enterprise and not visible to the trading partners or to community-wide processes. For example, the process of hiring an employee may span several systems within the enterprise, but should not be visible to processes that span an enterprise or trading community or other organizations.
  • Public processes exist between companies and consist of a set of agreed-upon procedures for exchanging information and automating business processes within a community. This is the core notion of intercompany SOA, since this is where intercompany orchestrations are created (a minimal orchestration sketch follows this list).
  • Specialized processes are created for a special requirement, such as collaboration on a common product development effort that only exists between two companies and has a limited life span.
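
The following is a minimal sketch of the orchestration idea: a public order-to-ship process composed from individual services. The service names and steps are invented, and a real deployment would run this kind of composition in a process engine rather than in plain Python.

```python
# Minimal sketch of an orchestration layer: a public order-to-ship process
# composed from individual services. Service names and steps are hypothetical.

def check_credit(order):
    order["credit_ok"] = order["amount"] < 10_000
    return order

def allocate_inventory(order):
    order["allocated"] = order["credit_ok"]
    return order

def schedule_shipment(order):
    order["shipped"] = order["allocated"]
    return order

# The "process" is simply the ordered composition of services.
PUBLIC_ORDER_PROCESS = [check_credit, allocate_inventory, schedule_shipment]

def run_process(process, order):
    for service in process:
        order = service(order)
    return order

print(run_process(PUBLIC_ORDER_PROCESS, {"amount": 4_200}))
```
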
Of course there are standards. BPEL (Business Process Execution Language) focuses on the creation of complex processes by joining together local and remote services, thus leveraging the notion of process integration as well as service-oriented Web services.

In the world of BPEL, process is one of two things:

  • Executable business processes that model actual behavior of a participant in a business interaction.
  • Business protocols that use process descriptions to specify the mutually visible message-exchange behavior of each party leveraging the protocol, without revealing internal behavior.
Process descriptions for business protocols are known as abstract processes, and BPEL models behavior for both abstract and executable processes.

To this end, BPEL leverages a well-defined language to define and execute business processes and business interaction protocols, thus extending the Web services interaction model by providing a mechanism to create meta-applications - process models, really - above the services that exist inside or outside the company. What's both different and compelling about BPEL is the use of a common syntax that is designed to be transferable from process engine to process engine. This is in contrast to other process-integration efforts, such as those of BPMI and the WfMC, which are more about approaches than a common language. There is growing momentum behind BPEL, and most technology vendors are declaring support for it.
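
Since BPEL itself is expressed in XML, the sketch below is only an analogy for the abstract-versus-executable distinction: an abstract base class plays the role of the mutually visible business protocol, and a concrete subclass plays the role of one participant's executable process. All names are hypothetical.

```python
# Analogy only: BPEL is XML-based, not Python. The abstract class stands in for
# a business protocol (visible message exchange); the subclass stands in for one
# participant's executable process. All names are hypothetical.
from abc import ABC, abstractmethod

class PurchaseProtocol(ABC):
    """Abstract process: what every participant agrees to exchange."""

    @abstractmethod
    def receive_purchase_order(self, po): ...

    @abstractmethod
    def send_invoice(self, po): ...

class SupplierProcess(PurchaseProtocol):
    """Executable process: one participant's internal behavior."""

    def receive_purchase_order(self, po):
        # Internal steps (pricing, stock checks) stay invisible to partners.
        po["price"] = po["quantity"] * 9.99
        return po

    def send_invoice(self, po):
        return {"invoice_for": po["id"], "total": po["price"]}

supplier = SupplierProcess()
po = supplier.receive_purchase_order({"id": "PO-7", "quantity": 3})
print(supplier.send_invoice(po))
```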

The notion of data and data abstraction, in terms of intercompany SOA, lets us think about collections of data or services as abstract entities, represented in a form that is most useful to the integration server or the application integration architect. It's this notion that provides for the grouping of related pieces of information, independent of their physical location and structure, as well as for defining and understanding what meaningful operations can be performed on the data or services. Thus, we can create any representation needed for data that exists anywhere and bind it to any service.

What's more, we need to separate the implementation from the abstraction itself. This allows us to change the internal representation and/or implementation without changing the abstract behavior, and allows people to use the abstraction in terms of intercompany SOA without needing to understand the internal implementation.
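
A minimal sketch of that separation: one abstraction consumed by partners, backed by two different physical representations that can change without affecting consumers. The schemas and names are invented for this example.

```python
# Sketch: one abstraction ("Customer") over two different physical
# representations. The underlying representation can change without changing
# how intercompany consumers use the abstraction. All names are hypothetical.

class Customer:
    """Abstract view consumed by partners."""
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

def from_internal_db_row(row):
    # Internal system stores a tuple like (CUST_ID, CUST_NM, ...)
    return Customer(customer_id=row[0], name=row[1])

def from_partner_feed(record):
    # A partner feed uses different field names and structure.
    return Customer(customer_id=record["id"], name=record["fullName"])

a = from_internal_db_row((1001, "Acme Industrial"))
b = from_partner_feed({"id": 2002, "fullName": "Globex Corp"})
print(a.name, b.name)  # consumers see the same abstraction either way
```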

Monitoring and event management encompass the ability to analyze all aspects of the business and the enterprise or trading community to determine the current state of a process in real time, and to adjust those processes as needed. Optimization, or the ability to redefine the process at any given time in support of the business and thus make it more efficient, is an aspect of event management (see Figure 2).
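
As a simple illustration, the sketch below models event-driven monitoring: services emit events, and a monitor maintains the live state of each process instance and reacts immediately rather than waiting for a batch cycle. The event fields are assumptions made for this example.

```python
# Sketch of event-driven monitoring: services emit events, a monitor keeps
# the current state of each process instance. Event fields are hypothetical.
from collections import defaultdict

STATE = defaultdict(dict)

def on_event(event):
    """Update the live view of a process instance as each event arrives."""
    instance = STATE[event["process_id"]]
    instance[event["step"]] = event["status"]
    # React in real time rather than in a nightly batch.
    if event["status"] == "failed":
        print(f"Alert: {event['step']} failed for {event['process_id']}")

on_event({"process_id": "ORD-1", "step": "allocate_inventory", "status": "ok"})
on_event({"process_id": "ORD-1", "step": "schedule_shipment", "status": "failed"})
print(dict(STATE["ORD-1"]))
```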

Points of integration allow services to interact with other services, or perhaps an orchestration layer. Services, especially those built for intercompany SOA, need to be designed to interact with other systems. For instance, they should provide more robust discovery of metadata and management of connections. Thus, the service developer needs to architect a service as a point of integration, not simply as a point of abstracted functional behavior.
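
The sketch below shows one way a service might be built as a point of integration, exposing its own metadata alongside its operations so partners can discover how to connect. The metadata format is invented and only loosely analogous to an interface description such as WSDL.

```python
# Sketch of a service designed as a point of integration: it exposes its own
# metadata (operations, message shapes, semantics) so partners can discover
# how to connect. The metadata format and rate formula are hypothetical.

class ShippingRateService:
    def describe(self):
        """Discoverable metadata, loosely analogous to an interface description plus semantics."""
        return {
            "operations": {"quote": {"input": ["weight_kg", "destination"],
                                     "output": ["rate_usd"]}},
            "semantics": "shipping-v1",
        }

    def quote(self, weight_kg, destination):
        return {"rate_usd": round(4.0 + 1.25 * weight_kg, 2)}

svc = ShippingRateService()
print(svc.describe())
print(svc.quote(weight_kg=12, destination="US"))
```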

Identity management and security services seem to go without saying when you think of intercompany SOA, given the exposures that naturally occur, and a detailed discussion is beyond the scope of this article. However, it's worth mentioning that there are three A's of security you need to consider: authentication, authorization, and audit.

In identity management, especially in the world of intercompany SOA, you must pay close attention to authentication. However, rights and permissions are identity-based attributes and should also play a very important role in identity management.
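
A minimal sketch of the three A's wrapped around a public service call follows; the token store, permission model, and audit log are all hypothetical.

```python
# Sketch of the three A's around a public service call: authentication
# (who is calling), authorization (what they may do), and audit (what they
# did). Tokens, roles, and the audit log are hypothetical.

TOKENS = {"tok-123": "RetailerA"}                 # authentication store
PERMISSIONS = {"RetailerA": {"inventory:read"}}   # authorization store
AUDIT_LOG = []                                    # audit trail

def call_service(token, permission, action):
    caller = TOKENS.get(token)
    if caller is None:
        raise PermissionError("authentication failed")
    if permission not in PERMISSIONS.get(caller, set()):
        raise PermissionError("authorization failed")
    result = action()
    AUDIT_LOG.append({"caller": caller, "permission": permission})
    return result

print(call_service("tok-123", "inventory:read", lambda: {"WIDGET-42": 40}))
print(AUDIT_LOG)
```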

Finally, we need to manage semantics between any number of systems that have very different application semantics and ontologies. We typically do this through a semantic repository, where the semantics are understood and persisted, and a semantic mapping layer that understands the semantics of the source or target systems and can account for the differences during runtime.
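
The sketch below illustrates the idea with a tiny semantic "repository" of field correspondences and a mapper that applies them at runtime; the schema names and field mappings are invented for this example.

```python
# Sketch of a semantic mapping layer: a small "repository" records how one
# system's fields correspond to another's, and a mapper applies the mapping
# at runtime. Schemas and field names are hypothetical.

SEMANTIC_REPOSITORY = {
    ("supplier-v1", "retailer-v2"): {
        "itemNo": "sku",
        "qty":    "quantity_ordered",
        "shipDt": "expected_ship_date",
    }
}

def translate(message, source_schema, target_schema):
    mapping = SEMANTIC_REPOSITORY[(source_schema, target_schema)]
    return {mapping[k]: v for k, v in message.items() if k in mapping}

supplier_msg = {"itemNo": "WIDGET-42", "qty": 40, "shipDt": "2006-05-01"}
print(translate(supplier_msg, "supplier-v1", "retailer-v2"))
```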

Share and Share Alike
It's not a matter of if or when intercompany SOA will become a reality; the evolution is under way today, as organizations attempt to extend their SOAs to partner organizations and want service-level visibility in return. The key words here are real-time automation and visibility. You can either get good at it now and create a more competitive and responsive business, or play catch-up later.

However, like any other new way of doing old things, you have to consider the architectures and how they mesh together over time. The new dynamic here is that you're dealing with many IT departments and many approaches to building applications. The best approach is to understand your own value, and then work directly with your partners to ensure that everyone is adjusting their way of thinking. The goal is to build sharable services and processes that automate your business, as well as processes between businesses.

About the Author

Dave Linthicum is Sr. VP at Cloud Technology Partners, and an internationally known cloud computing and SOA expert. He is a sought-after consultant, speaker, and blogger. In his career, Dave has formed or enhanced many of the ideas behind modern distributed computing including EAI, B2B Application Integration, and SOA, approaches and technologies in wide use today. In addition, he is the Editor-in-Chief of SYS-CON's Virtualization Journal.

For the last 10 years, he has focused on the technology and strategies around cloud computing, including working with several cloud computing startups. His industry experience includes tenure as CTO and CEO of several successful software and cloud computing companies, and upper-level management positions in Fortune 500 companies. In addition, he was an associate professor of computer science for eight years, and continues to lecture at major technical colleges and universities, including the University of Virginia and Arizona State University. He keynotes at many leading technology conferences, and has several well-read columns and blogs. Linthicum has authored 10 books, including the ground-breaking "Enterprise Application Integration" and "B2B Application Integration." You can reach him at [email protected], follow him on Twitter, or view his profile on LinkedIn.
