Managing the Reach and Range of Your Business Processes


Business processes reach across enterprises and partners, and require a range of complex functions. As the reach and range of your business processes increase, consider (a) moving these functions into an integration network, such as an enterprise service bus (ESB); and (b) recursively encapsulating your business processes as services. The resulting architecture is agile without redundant and confusing technology.

'Reach' and 'Range'
Reach is measured by distance: how far the business process must reach to interact with the entities it orchestrates. Distance matters for any two resources or services orchestrated by a business process management (BPM) tool. Consider what is important when a business process is deployed, managed, or updated.

These activities are performed by company employees and systems in a coordinated fashion, so distance correlates with organizational boundaries and with increasingly divergent infrastructure, management, and operations.

Figure 1 summarizes how reach is defined using organizational boundaries as the metric. As the distance increases, the challenges to achieving the goals of deploying, managing, and updating increase.

[Figure 1: reach, measured by organizational boundaries]

Range is measured by complexity: how sophisticated the business process is. Complexity matters for the resources orchestrated by a business process. Consider how a business process starts, stops, attempts to undo work, and runs over a long period.

Transactional support enables activities to be orchestrated in a coordinated fashion, but it also makes units of work complex:

  • What is the unit of work?
  • Can it be undone?
  • How long does it run?
Figure 2 summarizes how range is defined using transactional complexity as the metric. As the complexity increases, the challenges to completing units of work increase.

[Figure 2: range, measured by transactional complexity]

Messaging is the simplest level of complexity for business process implementation. The transactional complexity is low, since sending and responding to messages are separate transactions. The business process is expressed in separate pieces of code.
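As a minimal sketch of this level, the snippet below uses in-memory Python queues to stand in for message-oriented middleware; the names (`order_queue`, `submit_order`, and so on) are illustrative, not part of any real product. Sending and responding are separate steps in separate pieces of code, with no shared process state:

```python
import queue

# In-memory queues stand in for MOM (illustrative only).
order_queue = queue.Queue()
reply_queue = queue.Queue()

def submit_order(order_id):
    # Transaction 1: the sender enqueues and commits; it does not wait.
    order_queue.put({"order_id": order_id})

def process_next_order():
    # Transaction 2: the receiver dequeues, does its work, and replies.
    msg = order_queue.get()
    reply_queue.put({"order_id": msg["order_id"], "status": "accepted"})

submit_order("A-100")
process_next_order()
print(reply_queue.get())  # the overall "process" is split across two pieces of code
```

Because the two sides never share a transaction, failure handling and progress tracking are left entirely to the application code at each end.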

Content-based message routing is the next level of complexity. The overall process is stateless, but messages have guaranteed delivery for recoverability. The core content of these messages is typically documents.
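A hypothetical sketch of content-based routing: the router inspects each document and picks a destination from its content, holding no process state of its own. The route table and service names here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Illustrative routing table: document type -> destination service.
ROUTES = {"invoice": "billing-service", "order": "fulfillment-service"}

def route(message_xml):
    doc = ET.fromstring(message_xml)
    # In this sketch the root element's tag is the routing key.
    return ROUTES.get(doc.tag, "dead-letter-queue")

print(route("<invoice><amount>42</amount></invoice>"))  # billing-service
```

The router itself is stateless; recoverability comes from the underlying guaranteed delivery, not from anything the router remembers.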

BPM is the next level of complexity. The "Ten Pillars of Business Process Management" (McDaniel) is an excellent summary of BPM tools. A BPM tool executes a business process described in a model. As the BPM tool executes an instance of the process and records its progress, we have state and can recover from failures. A BPM tool can execute long-running business processes and is capable of rich exception handling.

We measure the progress of business processes by collecting business events. For messaging and content-based implementations, all of the business events must be correlated after the fact. For BPM, the business events are naturally correlated to the process instance.
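For messaging and routing implementations, that correlation step can be sketched as grouping loose events by a correlation key; the `process_id` and `step` fields below are hypothetical event attributes.

```python
from collections import defaultdict

def correlate(events, key="process_id"):
    # Group uncorrelated business events back into per-instance histories.
    by_instance = defaultdict(list)
    for event in events:
        by_instance[event[key]].append(event["step"])
    return dict(by_instance)

events = [
    {"process_id": "p1", "step": "received"},
    {"process_id": "p2", "step": "received"},
    {"process_id": "p1", "step": "shipped"},
]
print(correlate(events))  # {'p1': ['received', 'shipped'], 'p2': ['received']}
```

A BPM tool makes this step unnecessary: events are recorded against the process instance as it executes.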

The highest level of complexity requires a combined view of business events and business object data. This is a powerful view of the operations of the enterprise: it reveals hotspots and trends, and suggests how to optimize your business processes. This is operational awareness, which feeds suggested optimizations back into the business processes themselves.

Why Are There Data and Applications Everywhere?
An application encapsulates domain-specific procedures and policies. Each application covers only a subset of what an organization needs, since no single product has all domain knowledge. Each application saves state, persisting its data in an associated database. Thus we have multiple databases for multiple applications, for reasons of domain knowledge. Within an enterprise, people with domain knowledge cluster into departments and business units, roughly along the lines of operational responsibility and authority to change.

A database supports transactions with ACID properties: atomic, consistent, isolated, and durable. An ACID transaction requires locking relevant and related data for the duration of the transaction. The wider the reach of a transaction, the more database resources are locked. This limits the practical scope of a database: otherwise every transaction would lock all of our data, all the time.

Thus, an enterprise uses many applications and databases. The databases may be able to participate in distributed ACID transactions, but the degree of locking and resource contention will restrict this on an enterprise scale.

Business processes need a different approach for transactions. Business processes span the enterprise, are long-running, and frequently have complex exception handling. A business process is a series of activities to update applications and databases, such that:

  • Forward progress is always made
  • Appropriate systems of record are updated
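The two bullets above describe the long-running transaction style often implemented with compensating actions (a saga-like approach; the article does not name a specific technique). A hedged sketch, with purely illustrative activity names: each activity commits locally in its own system of record, and on failure the completed activities are compensated in reverse order rather than rolled back in one distributed transaction.

```python
def run_process(activities):
    # Each activity is a (do, undo) pair; do() is a local transaction.
    completed = []
    for do, undo in activities:
        try:
            do()
            completed.append(undo)
        except Exception:
            # Compensate in reverse order instead of a distributed rollback;
            # forward progress already made is recorded, then undone explicitly.
            for compensate in reversed(completed):
                compensate()
            return "compensated"
    return "completed"

log = []
ok = lambda name: (lambda: log.append(name), lambda: log.append("undo-" + name))

print(run_process([ok("reserve-stock"), ok("charge-card")]))  # completed
```

No locks are held across activities, so the process can span applications, databases, and long periods of time.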
We will always have islands of domain knowledge embedded in applications and databases. For an organization to create enterprise-wide business processes, these islands have to be integrated and participate in orchestration.

Use Cases
Two typical use cases for applying BPM in your business are to orchestrate:

  • Business processes that reach across your enterprise, including franchises
  • Business processes that reach out to your partners

In the first use case, there are many resources (applications, databases, and people) to integrate and orchestrate. These resources are managed by different organizations, and are typically "paper-driven." There are usually two high-level, dominant business processes, CreateProduct and SellProduct. The implementations reach all resources of the enterprise, and pose significant challenges in negotiating agreements between departments and business units of a large enterprise.

In the second case, there are fewer resources to integrate and orchestrate. However, your degree of control over how a partner interfaces to your enterprise is low, and these resources range in sophistication from file transfer to Web services. The business processes between partner and enterprise collaborate with the enterprise's CreateProduct or SellProduct business process.

An enterprise BPM tool executes a business process across a wide reach and range. We measure to remove operational blind spots and to determine business process improvements. And we change a business process to adopt improvements and to leverage high-change areas of the business. This is the foundation of agility. Next, we'll see how reach and range impact the ability to execute, measure, and change a business process.

Reach and Range Impact
Reach and range require complex functions to keep your business processes agile across your enterprise, partners, and ecosystem.

When a business process executes, it depends on many complex functions to work over the entire reach and range:

  • Addressability: Required for even the smallest reach and simplest range of complexity. There are challenges, such as namespaces, for resources that are far apart and hosted on disparate servers. For some integration techniques, such as clustered app servers, namespaces pose a significant challenge.
  • Messaging: Base requirement for low range of complexity. Message-oriented middleware (MOM) provides asynchronous messages and durability, as well as Quality-of-Service semantics of "delivered at most once," "delivered exactly once," "delivered at least once."
  • Enterprise and Web services: Standards-based service-oriented architecture avoids point-to-point proprietary connections. Service-oriented standards leverage XML for a common form of message content.
  • Transformation: This is a fundamental function. It allows two services to communicate even if they speak two dialects of XML.
  • Security: A service must be secured against use by unintended users and, just as important, must allow appropriate users access.
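The transformation function in the list above can be sketched as a mapping from one service's XML dialect onto a common form. The tag names and field map below are hypothetical, and a production ESB would typically use XSLT or a similar engine rather than hand-written code:

```python
import xml.etree.ElementTree as ET

# Hypothetical dialect-to-common-form field mapping.
FIELD_MAP = {"custNum": "customer_id", "amt": "amount"}

def to_common_form(message_xml):
    # Rewrite a source dialect's elements into the common "order" form.
    source = ET.fromstring(message_xml)
    common = ET.Element("order")
    for child in source:
        ET.SubElement(common, FIELD_MAP.get(child.tag, child.tag)).text = child.text
    return ET.tostring(common, encoding="unicode")

print(to_common_form("<ord><custNum>17</custNum><amt>99</amt></ord>"))
# <order><customer_id>17</customer_id><amount>99</amount></order>
```

With every service transformed to the common form, any two services can communicate without point-to-point translation code between each pair.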

When the business process is measured, it depends on this complex function working over the entire reach and range:

  • XML data collection: Collect business events of your business processes in one place to enable XQuery reports against it. To achieve the most sophisticated range, you need to correlate your business events against enterprise business object data that is throughout the reach of your enterprise. This is a prerequisite to achieving operational awareness.
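As a sketch of querying such a collection point, one operational-awareness question might be the average latency of a process step. The event shape and repository below are assumptions for illustration; in the article's context the events would be XML documents queried with XQuery.

```python
# Illustrative central repository of collected business events.
repository = [
    {"process": "SellProduct", "step": "credit-check", "ms": 420},
    {"process": "SellProduct", "step": "credit-check", "ms": 380},
    {"process": "SellProduct", "step": "ship", "ms": 90},
]

def average_latency(events, step):
    # Report a simple operational metric over the collected events.
    times = [e["ms"] for e in events if e["step"] == step]
    return sum(times) / len(times)

print(average_latency(repository, "credit-check"))  # 400.0
```

Because the events sit in one place, this kind of report spans the entire reach of the enterprise without touching the individual systems.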

When the business process is changed, it depends on many complex functions to work over the entire reach and range:

  • Transparent management: Implement once and manage forever. Your ability to manage orchestrated resources may be limited to subsets or clusters of resources, where you are forced to manually coordinate separate management efforts. Some enterprise integration software is limited in this capability due to clustering limitations. In particular, consider the investment a typical enterprise makes in these management activities:
    - Configuration: Whenever anything changes, configuration must follow. There is a big difference in operations between managing configuration from one point and manually coordinating several separate acts of configuration.
    - Deployment: The promise of business processes is that they can be measured, optimized, and redeployed. A large enterprise is likely to have multiple implementations of business processes. Coordinating rollouts, tests, and rollbacks of business process implementations is onerous without transparent management.

Reach and range over orchestrated resources in a confined area, such as a single Web service, are easy to achieve. Reach and range over the enterprise and ecosystem, with multiple owners of systems, are hard. The reach of crossing multiple organizations geometrically increases the difficulty of achieving a sophisticated range. The difficulty is similar to doing business between companies in countries that lack trading agreements: with differing legal and monetary structures, there is much manual intervention, translation is imperfect, and you hope for the best.

The BPM tool is not the best place to implement reach and range functions. A business process implementation requires the reach and range functions above, but the appropriate division of labor is to put those functions into an integration network. The work of orchestrating resources in a business process then becomes simple, and the promise of reuse is realized.

The Integration Network
The contemporary integration network is best typified by the enterprise service bus. In its 2003 Predictions series, industry analyst firm Gartner, Inc., said:

A new category of integration middleware called the enterprise service bus (ESB) has emerged to support the proliferation of service-oriented interactions between enterprise applications. An ESB is a standards-based integration backbone that combines messaging, Web services, transformation and intelligent routing to reliably connect and coordinate the interaction of hundreds of application endpoints spanning a global organization.

In the report, Gartner predicts that "a majority of large enterprises will have an ESB running by YE05."

Using an ESB leverages its inherent reach and range functions, saving implementation and management costs. The ESB is a new product concept, similar in spirit to the modern office building: a person arrives at work and goes to their office, which already has fresh air, a desk, a phone, data connections, and so on. To talk to a colleague, they don't install a custom phone and run new wires to each person they expect to talk to; they just pick up the phone, dial a standard extension number, and talk to anyone.

Prior to the ESB, resources were manually glued together one by one, by plugging them into either an integration broker hub or an application server hub. If those hubs themselves need to integrate, the process is repeated, yielding a brittle hierarchical structure.

Using an ESB, the implementation of the business process is minimal, saving on development and QA costs. This is true whether the business process is basic, as with messaging, or sophisticated, as with a BPM tool.

The ESB approach is the cleanest way to implement a business process as a service, as described in ZapThink's April 2003 report Service Oriented Process. Your inventory of applications and databases is your set of atomic services. Business processes are built, encapsulated as services, and installed on the ESB as resources. Top-level business processes, such as CreateProduct, are then composed of sub-processes, perhaps DesignProduct, ProcureParts, and BuildProduct. Each of these sub-processes can use the atomic resources, or be further decomposed.
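The recursive encapsulation described above can be sketched with a toy service registry: a top-level process registers on the "bus" exactly like an atomic service, so it can in turn be composed. The registry mechanism and the bodies of the services are illustrative only; the process names come from the article.

```python
# Toy registry standing in for services deployed on an ESB.
registry = {}

def service(name):
    # Register a callable under a service name on the "bus".
    def register(fn):
        registry[name] = fn
        return fn
    return register

def call(name, *args):
    # Invoke any registered service uniformly, by name.
    return registry[name](*args)

@service("DesignProduct")
def design(spec):
    return f"design({spec})"

@service("BuildProduct")
def build(design_doc):
    return f"built[{design_doc}]"

@service("CreateProduct")  # a top-level process, itself just another service
def create(spec):
    return call("BuildProduct", call("DesignProduct", spec))

print(call("CreateProduct", "widget"))  # built[design(widget)]
```

Callers of CreateProduct cannot tell whether it is atomic or composed, which is exactly what makes further composition possible.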

The end result is a clean hierarchical grouping and usage of business processes, as services, without the redundant and confusing technology of prior approaches. In Figure 3 we see one service on the left, which is transparent: it is composed of other services orchestrated by BPM, all on the ESB. This larger process is itself encapsulated as a service and is available on the ESB, within the reach of the enterprise. Further, some processes are opaque, as on the right of Figure 3. This can be the situation when integrating with partners; there, the interactions between services are via collaboration, for example as expressed in ebXML.

[Figure 3: business processes composed and encapsulated as services on the ESB]

Using BPM in an ESB
The ESB provides all reach functions, and the basic range functions of messaging and content-based routing. The BPM tool must provide facilities to:

  • Model a business process: In an ESB context, modeling is simplified, since the BPM tool only needs to know a resource's service name and its API messages. The BPM tool uses the ESB to:
    - Connect to a service
    - Transform each service's XML to a common form
  • Execute the business process: In an ESB context, execution is simplified because the BPM tool can uniformly interact with any service on the ESB. An executing business process can be encapsulated as a service on the ESB. Thus, a business process can be composed of other business processes, all without additional plumbing or coding.
  • Monitor the business process: In an ESB context, we capture business events into a business event repository service to be queried anywhere on the ESB.

Can we use multiple BPM tools in an ESB? Yes! By factoring the reach and range functions out of the BPM tool and into the ESB, we encapsulate executing business processes as services deployed on the ESB. The actual BPM tool that executes the business process is hidden. We can have one or several BPM tools spread over the enterprise. We can have BPM tools from different vendors, and different BPM tools from the same vendor.

The ESB highlights the difference between the two use cases. In the first use case, the ESB spans the enterprise, including franchises; spanning franchises is attractive because of the inherently cost-effective nature of the ESB. In the second use case, we can't assume tight ESB integration with the partner, so we use collaboration, as defined, for example, in the ebXML standard. This is a radically different style of business process and may require a different BPM tool than the one used within the enterprise.

Summary
To summarize, when implementing a business process, ask:

  • What is the reach of the business process?
  • What is the range of complexity of the business process?
  • Are we ready to factor out the reach and range complexity of our processes into an ESB?

Then implement the business processes that are right for you:

  • Within your enterprise
  • Reaching out to your partners
  • Decomposed into smaller business sub-processes, each encapsulated as a service

References

  • ZapThink, LLC. "Service Oriented Process." Report #ZTR-WS108, April 14, 2003.
  • McDaniel, Tyler. "Ten Pillars of Business Process Management." eAI Journal, November 2001: 30-34.
  • Gartner, Inc. "Predicts 2003: Enterprise Service Buses Emerge." December 9, 2002 (DF-18-7304).
About the Author

Harvey Reed is a Technical Product Manager with Sonic Software. Harvey joined Sonic through the acquisition of eXcelon, where he was the Product Manager of the eXcelon BPM. Prior to eXcelon he worked as Chief Architect for several release-1 software products, and as a Practice Director for a medium-sized systems integrator. Before software development, he worked in aerospace and electronics.
