Future-Proofing Solutions with Coarse-Grained Service Oriented Architecture

Web services and service-oriented architectures are transforming application construction. The ubiquity of Web services support among all leading platform vendors brings the promise of a flexible application environment with simplified interface techniques, location transparency, and platform-neutral interoperability. This dynamic infrastructure enables a new implementation approach: the service-oriented architecture.

However, to date most Web services projects have really only created simplified communication mechanisms for invoking the same old complicated legacy interfaces we have always had. To truly realize service-based components, a new design approach is needed, one that produces simple, straightforward coarse-grained service interfaces that conceal the ugliness of the legacy low-level interfaces. Designing coarse-grained interfaces is not as easy as it sounds. This article discusses how to recognize good coarse-grained interfaces and how to design them for maximum flexibility and longevity.

When you use a service, be it a bank, a restaurant, or a store, you expect to interact with it easily. What would the restaurant-goer's experience be if there were a different process for ordering each part of the meal, or if the ATM presented a hundred menu options for withdrawing cash? Service interfaces are expected to be simple and intuitive; that is what makes a service successful. New services should be easily discoverable and the consumption of their interfaces undemanding. ATMs are ubiquitous because everyone knows, or can easily learn, how to use one.

The proper design of a service should take an "outside in" approach to constructing the interface, focusing on the client application's perspective and how the component plays within the larger business process context. Sadly, many interface designs take an "inside out" approach, basing the interface on the dirty details of the existing implementation rather than on the specific requirements of client utilization. The interface should operate on a "need-to-know" basis, exposing only details that are meaningful to the semantics of the larger interaction with client applications.

Ongoing software maintenance is a constant burden for business applications. Today's development is tomorrow's legacy. Loosely coupled service-oriented architectures create an opportunity to reverse this escalating legacy cycle. Proper design of the interfaces can minimize changes as requirements evolve, making these services future-proof.

The Problem
Most poor interfaces are the result of allowing a convoluted implementation to bleed through into the interface. Even a good implementation that is transparently presented as a Web service becomes a thorny service interface. A component often contains dozens of classes and hundreds of methods. Exposing this detail as a service is analogous to an ATM with hundreds of menu options. Complicated interface designs impose unnecessary responsibilities on the client application. For example:

  • They result in excessive chattiness over the Web services interface.
  • They overburden the client application with the maintenance of the service component's context.
  • They force the client application to become co-dependent with the service component, requiring dual development for the life of the solution.

    Such poor interface design signs a death sentence for both the component and the service-oriented implementation: it tightly couples the client application to the component and requires ongoing maintenance of intricate interaction semantics and proprietary interaction context.

    Interaction Semantics
    Part of the problem is that the mechanism for interacting with a component is different - often in very bizarre ways - from that of any other component. The client application needs a costly custom implementation to invoke each component's interface. This is reminiscent of Indiana Jones navigating hidden passageways to find the lost treasure, with a secret incantation required to enter each chamber. In their book Business Process Management: The Third Wave (Meghan-Kiffer Press, 2003), Howard Smith and Peter Fingar expanded this thought: "Imagine a world where people speak a language that brilliantly describes the molecular structure of a large object but can't tell you what that object is - or that it's about to fall on you." From this perspective, the reason component interoperability fails is self-evident.

    We remember a recent engagement to integrate physician practice applications with a hospital information system that had outlandish interface semantics. Ignoring reasonable practice and embracing the quirks of its implementation, the system required that the patient's first and middle name fields be left blank and that the last name field contain the patient's full name. This is just one of many examples of the strange system interfaces that litter the IT landscape.

    Burden of Context
    Another problem with low-level proprietary interfaces is that the client application is pulled into the internal context of the called component. With multiple method invocations required for any coarse-grained operation, the client application must maintain implementation context in order to provide it on subsequent method calls. One system from a recent project required special codes indicating proprietary state and numeric status (e.g., code="F", Status="989"), which had no relevance to the client application, to drive the invocation of each operation. Implementation-specific state information was forced onto the client application by the component. The delineation between the relevant context of the interaction (that which is meaningful to both the client and the component) and irrelevant context (the exclusive state information of the component) is often not considered during interface design.

    Context in the form of resource references, processing state, and method parameter values is unnecessarily forced into the interface, and as the interface grows, so does the interaction context. The client application has its own context to maintain and should not be burdened with the component's implementation context as well.
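
    To make this burden concrete, the sketch below shows the kind of call sequence such an interface forces on a client. The component, method names, and code values here are hypothetical stand-ins, not taken from the actual system:

    // Illustrative only: a fine-grained interface that leaks its internal state
    // (e.g., code="F", Status="989") into every call the client must make.
    interface OrderComponent {
        String getStateCode();    // component-private state, e.g. "F"
        String getStatusCode();   // component-private state, e.g. "989"
        void priceLineItems(String stateCode, String statusCode);
        void commitOrder(String stateCode, String statusCode);
    }

    class OrderClient {
        // The client must fetch, hold, and replay state that has no meaning
        // in its own business context just to drive each operation.
        void placeOrder(OrderComponent order) {
            String code = order.getStateCode();
            String status = order.getStatusCode();
            order.priceLineItems(code, status);
            order.commitOrder(code, status);
        }
    }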

    Example: Customer Management System Interface
    To illustrate the difficulty presented by exposed low-level interfaces, consider a customer-service application with a customer component. With separate objects for each atomic element of the customer information, the client application's implementation might look like the following:

    // Each piece of customer information requires another fine-grained call,
    // and the client must carry the intermediate context between the calls.
    customer = CustomerManage.findCustomer("123456789");
    customerID = customer.getCustomerID();
    addressVector = AddressManager.findAddresses(customerID);
    homeAddress = addressVector.findAddress("Home");
    homePhone = homeAddress.getPhone();
    shipTo = addressVector.findAddress("Ship To");
    shipToZip = shipTo.getZip();

    This demonstrates how the client application is pulled into an interaction context that should be of no concern to it.

    Designing for Serviceability
    Creating a coarse-grained interface that truly embodies a service must be a conscious part of the interface design process. Serviceability comes from an interface that is straightforward, easy to exploit, and built to last beyond the first version. To discuss these characteristics of coarse-grained service design, we'll borrow the ACID acronym from the transactional processing domain:

  • Atomic: Any one business operation can be completed through one service interface. The coarse-grained interface is close to a 1:1 ratio of business operations to service interfaces. In document-oriented Web service interfaces the interface is simplified to the semantics of document exchange.
  • Consistent: The interface behaves consistently within a domain, which makes new services in that domain easily recognizable and understood. Consistency extends beyond the interfaces of one component to a common interface format for all components within the domain. Locating service instances, establishing context, retrieving data, performing updates, and executing business operations all follow a consistent interaction pattern. One of the best ways to do this is with a FCRUD semantic, where service resource objects are manipulated through the straightforward operations Find resource, Create resource, Retrieve resource information, Update resource information, and Delete resource (see the sketch after this list). Consistent interaction semantics empower client application developers to easily utilize any of the service components once they have experience developing against one.
  • Isolated: Any one interface can be invoked independent of other service interfaces. Component implementation context and detailed invocation sequencing are not forced onto the client application. Loose coupling exists rather than tight semantic coupling. Predecessor and successor invocation requirements are not part of the interaction beyond the degree that they are part of the shared process between the client and the component. Isolation of interface design enables the client application to invoke any service interface with a minimum of preconditional interaction, perhaps only a Find resource call.
  • Durable: This interface has been designed with a vision of the future and has longevity built into it. The interface envisions the broad range of usage scenarios and future implementation enhancements and has been designed with an eye towards ease of migration. A durable interface is future-proof, not that the interface will never change, but it has the capability to easily incorporate future enhancements. Document-centric Web service interfaces and other loosely coupled interface techniques minimize the impact of extensions to the interface.
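
    As a sketch of what that consistency might look like, the interface below applies a FCRUD pattern to a customer resource. The operation names, parameters, and the CustomerDocument type are assumptions made for illustration, not a prescribed standard:

    import java.util.List;

    // Placeholder for the customer data document exchanged over the interface.
    class CustomerDocument { }

    // Hypothetical FCRUD-style service interface. The same five verbs repeat
    // for every resource in the domain (orders, accounts, ...), so a developer
    // who has consumed one service can navigate the next one on sight.
    interface CustomerService {
        List<String> findCustomers(String searchCriteria);                // Find matching customer IDs
        String createCustomer(CustomerDocument newCustomer);              // Create, returning the new ID
        CustomerDocument retrieveCustomer(String customerId);             // Retrieve the customer document
        void updateCustomer(String customerId, CustomerDocument changes); // Update
        void deleteCustomer(String customerId);                           // Delete
    }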

    How do you design ACID characteristics into your service component interfaces to best ensure successful exploitation in a service-oriented architecture? The best way is by proper modeling of the component interface. Modeling often looks inward. It is easy for programmers to immediately concern themselves with the implementation while the interface and its usage become a background activity. Taking an "outside in" approach to the modeling ensures that consideration is properly given to the continuum of utilization.

    Correct modeling takes a top-down approach, beginning with the business processes and requirements of the solutions in which the component service is expected to be exploited. It should begin at the domain level by identifying the business context of the component's usage and the business processes that would exploit the component. For each business process, identify the use cases or scenarios that demonstrate the various ways the process could be executed. These process use cases lead to specific use cases of component interaction.

    This top-down approach ensures that the context of the process is firmly established before design looks inward at the interface and implementation of the component. Although it may seem like overkill, establishing use cases at the domain level ensures that a correct specification of the service is created and future-proofs the component by anticipating the full range of usage scenarios.

    Proper Service Interface - Customer Management System Interface
    Returning to our customer component example, you can see how a proper coarse-grained interface results in a simplified client implementation without the burden of component context or proprietary interaction semantics:

    // One coarse-grained call returns a document containing everything the client needs.
    customerDocument = CustomerManageService.getCustomerDocument("123456789");
    homePhone = customerDocument.getHomePhone();
    shipToZip = customerDocument.getShipToZip();

    This document-centric interface requires only one service invocation, which returns a document containing all the necessary information the client application requires without the burden of implementation details.
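
    One possible shape for that contract, sketched here with assumed type and field names, is a single coarse-grained operation that returns an aggregate value object:

    // Minimal sketch of the document-centric contract assumed by the client
    // snippet above: one coarse-grained operation, one aggregate document.
    interface CustomerManageService {
        CustomerDocument getCustomerDocument(String customerNumber);
    }

    class CustomerDocument {
        private final String homePhone;
        private final String shipToZip;

        public CustomerDocument(String homePhone, String shipToZip) {
            this.homePhone = homePhone;
            this.shipToZip = shipToZip;
        }

        public String getHomePhone() { return homePhone; }
        public String getShipToZip() { return shipToZip; }
    }

    Because the document travels as one unit, adding new customer information later extends the document rather than the interface, which is what gives the design its durability.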

    Creating Coarse-Grained Implementations
    Typically a Web services project focuses primarily on migrating existing application functionality to a Web services interface. The preexisting application can be anything from a legacy system to J2EE Enterprise JavaBeans (EJBs). How do you map this to a coarse-grained service interface? Very seldom will traditional low-level interfaces naturally map to a proper coarse-grained structure. An abstraction layer is required to hide the details of the implementation from the user behind a facade. This abstraction layer encapsulates:

  • Multiple low-level interfaces that comprise the business operation
  • Multiple data sources that need to be aggregated for the service
  • Legacy system interaction
  • The sequencing of low-level calls
  • Maintaining context for the low-level implementations
  • Transactional coordination of updates to multiple low-level interfaces

    The abstraction layer can be constructed using either of two approaches:
    1.   Build to integrate: Using application development techniques, a facade or mediator is implemented to provide the coarse-grained interface by aggregating the lower-level interfaces (a minimal sketch of such a facade follows this list). Coarse-grained components are created that broker the interactions with multiple classes. Traditional application development tools and techniques can be employed to develop this facade. Component development environments such as J2EE or Microsoft's .NET provide application environments to host both the facade component and the implementation.
    2.   Enterprise application integration (EAI)/business process integration (BPI): EAI tools exist for the purpose of integrating applications. They provide rich tooling and functionality for rapidly integrating all types of applications, including legacy, Web, and packaged software applications. BPI tools extend this capability to provide choreography of the business process and application flow outside the application. Web services-based integration is now a component of almost all EAI/BPI tools, which means these tools can expose their integration flows through coarse-grained Web services interfaces.
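
    To illustrate the first approach, here is a minimal sketch of a facade that implements the coarse-grained getCustomerDocument operation by sequencing the fine-grained calls from the earlier customer example. The low-level names (CustomerManage, AddressManager, and the address and document types) are assumptions carried over from the earlier sketches, not a definitive implementation:

    // Hypothetical "build to integrate" facade: the coarse-grained operation hides
    // the sequencing of low-level calls and the intermediate context (customer ID,
    // address list) that the client should never have to manage.
    class CustomerManageFacade implements CustomerManageService {

        public CustomerDocument getCustomerDocument(String customerNumber) {
            // Low-level interactions and their context stay behind the facade.
            Customer customer = CustomerManage.findCustomer(customerNumber);
            String customerId = customer.getCustomerID();

            AddressVector addresses = AddressManager.findAddresses(customerId);
            Address home = addresses.findAddress("Home");
            Address shipTo = addresses.findAddress("Ship To");

            // Aggregate the fine-grained results into the single document
            // returned to the client in one coarse-grained response.
            return new CustomerDocument(home.getPhone(), shipTo.getZip());
        }
    }

    With the facade in place, the sequencing, context maintenance, and transactional coordination listed above have a single natural home, and the client code from the earlier example remains unchanged as the low-level implementations evolve.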

    Which is the better choice depends on a number of factors. Things to consider include whether the implementations use similar technologies, what interfaces the lower-level components currently expose, and whether legacy systems are part of the equation. Often it comes down to whether the primary focus of the project is application development or business integration.

    Conclusion
    The success of service-oriented architectures depends on a rich universe of available services that are easily located, understood, and utilized by a diverse community of users. These interfaces must have a life span beyond the first implementation, which can only be achieved by designing interfaces that are truly coarse-grained in nature and not just a weak veneer on top of existing, tortuously complicated interfaces. By taking an "outside-in" approach to modeling the service component interface, it is possible to identify the full spectrum of usage of the component.

    As you design services, remember the ACID acronym and ask yourself: Does the interface model a full atomic business operation? Is the interface consistent across the family of components? Can any one interface be invoked reasonably independently of the other interfaces? And has the interface been designed with a view toward future usage scenarios? This perspective will lead to components that can truly become ubiquitous services in the Web services world.

  • More Stories By John Medicke

    John Medicke is the chief architect of the On Demand Solution Center
    in Research Triangle Park, NC. He has designed solutions for various
    industries including financial services, retail, healthcare, industrial,
    and government. John has worked extensively on the exploitation of
    business integration, business process management, and business
    intelligence within an integrated solution context. He is author of the
    book Integrated Solutions with DB2, as well as several articles.
