Future-Proofing Solutions with Coarse-Grained Service Oriented Architecture

Web services and service-oriented architectures are transforming application construction. The ubiquity of Web services support among all leading platform vendors brings the promise of a flexible application environment with simplified interface techniques, location transparency, and platform-neutral interoperability. This dynamic infrastructure enables a new implementation approach: the service-oriented architecture.

However, to date most Web services projects have really only created simplified communication mechanisms for invoking the same old complicated legacy interfaces we have always had. To truly realize service-based components, a new design approach is needed, one that produces simple, straightforward coarse-grained service interfaces that conceal the ugliness of the low-level legacy interfaces. Designing coarse-grained interfaces is not as easy as it sounds. This article discusses how to recognize good coarse-grained interfaces and how to design them for maximum flexibility and longevity.

When you use a service, be it a bank, a restaurant, or a store, you expect to interact with it easily. What would the restaurant-goer's experience be if there were a different process for ordering each part of the meal, or if withdrawing cash at an ATM meant wading through a hundred menu options? Service interfaces are expected to be simple and intuitive; this is what makes a service successful. New services should be easily discoverable, and consuming their interfaces should be undemanding. ATMs are ubiquitous because everyone knows, or can easily learn, how to use one.

The proper design of a service takes an "outside in" approach to constructing the interface, focusing on the client application's perspective and on how the component plays within the larger business process context. Sadly, many interface designs take an "inside out" approach, basing the interface on the dirty details of the existing implementation rather than on the specific requirements of client utilization. The interface should operate on a "need-to-know" basis, exposing only those details that are meaningful to the semantics of the larger interaction with client applications.

 

Ongoing software maintenance is a constant burden for business applications. Today's development is tomorrow's legacy. Loosely coupled service-oriented architectures create an opportunity to reverse this escalating legacy cycle. Proper design of the interfaces can minimize changes as requirements evolve, making these services future-proof.

The Problem
Most poor interfaces are the result of allowing a convoluted implementation to bleed through into the interface. Even a good implementation, when transparently exposed as a Web service, becomes a thorny service interface. A component often contains dozens of classes and hundreds of methods; exposing this detail as a service is analogous to an ATM with hundreds of menu options. Complicated interface designs impose unnecessary responsibilities on the client application. For example:

  • They result in excessive chattiness over the Web services interface.
  • They overburden the client application with the maintenance of the service component's context.
  • They force the client application to become co-dependent with the service component, requiring dual development for the life of the solution.

Such poor interface design signs a death sentence for the component and for the service-oriented implementation by tightly coupling the client application to the component, requiring future maintenance of intricate interaction semantics and proprietary interaction context.

Interaction Semantics
Part of the problem is that the mechanism for interacting with each component differs, often in very bizarre ways, from that of every other component. The client application needs costly custom implementation to invoke each component's interface. This is reminiscent of Indiana Jones navigating hidden passageways to find the lost treasure, with a secret incantation required to enter each hidden chamber. In their book BPM: The Third Wave (Meghan-Kiffer Press, 2003), Howard Smith and Peter Fingar expanded this thought: "Imagine a world where people speak a language that brilliantly describes the molecular structure of a large object but can't tell you what that object is - or that it's about to fall on you." From this perspective, the reason component interoperability fails is self-evident.

We remember a recent engagement to integrate physician practice applications with a hospital information system that had outlandish interface semantics. Ignoring reasonable practice and embracing the quirks of its implementation, the system required that the patient's first and middle name fields be left blank and that the last name field contain the patient's full name. This is just one of the many urban legends of strange system interfaces that litter the IT landscape.

Burden of Context
Another problem with low-level proprietary interfaces is that the client application is pulled into the internal context of the called component. When a coarse-grained operation requires multiple method invocations, the client application must maintain implementation context so it can supply that context on subsequent calls. One system from a recent project required special codes indicating proprietary state and numeric status (e.g., code="F", Status="989"), which had no relevance to the client application, to drive the invocation of each operation. Implementation-specific state information was forced onto the client application by the component. The delineation between the relevant context of the interaction (that which is meaningful to both the client and the component) and irrelevant context (the exclusive state information of the component) is often not considered during interface design.

Context in the form of resource references, processing state, and method parameter values is unnecessarily forced into the interface. As the interface grows, so does the interaction context. The client application has its own context to maintain and does not need to be burdened with the component's implementation context as well.
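
As a hypothetical illustration (the component, class, and method names below are invented for this sketch, not taken from the system described above), a client saddled with the component's implementation context might look like this:

    // Hypothetical fine-grained interface: the client must fetch the component's
    // internal state and status codes and echo them back on every later call,
    // even though they mean nothing in the business interaction.
    OrderSession session = OrderComponent.openSession("123456789");
    String stateCode = session.getStateCode();    // e.g., "F"   - internal bookkeeping
    String statusCode = session.getStatusCode();  // e.g., "989" - internal bookkeeping

    OrderComponent.priceOrder(session.getSessionID(), stateCode, statusCode);
    OrderComponent.submitOrder(session.getSessionID(), stateCode, statusCode);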

Example: Customer Management System Interface
To illustrate the difficulty presented by exposed low-level interfaces, consider a customer-service application that has a customer component. With a separate object exposed for each atomic element of the customer information, the client application's implementation might look like the following:

    customer = CustomerManage.findCustomer("123456789");
    customerID = customer.getCustomerID();
    addressVector = AddressList.findAddresses(customerID);
    homeAddress = addressVector.findAddress("Home");
    homePhone = homeAddress.getPhone();
    shipTo = addressVector.findAddress("Ship To");
    shipToZip = shipTo.getZip();

This demonstrates how the client application is pulled into an unnecessary interaction context that is of no concern to it.

Designing for Serviceability
Creating a coarse-grained interface that truly embodies a service must be a conscious part of the interface design process. Serviceability comes from an interface that is easy to exploit, is straightforward, and has a life span beyond the first version. To discuss these characteristics of coarse-grained service design, we'll borrow the ACID acronym from the transaction processing domain:

  • Atomic: Any one business operation can be completed through one service interface. The coarse-grained interface approaches a 1:1 ratio of business operations to service interfaces. In document-oriented Web service interfaces, the interface is simplified to the semantics of document exchange.
  • Consistent: There is a consistent behavior to the interface within a domain that makes new services in that domain easily recognizable and understood. Consistency extends beyond the interfaces of one component to a common interface format for all components within a domain. Locating service instances, establishing context, retrieving data, performing updates, and executing business operations all follow consistent interaction patterns. One of the best ways to achieve this is with FCRUD semantics, in which service resource objects are operated on through the straightforward operations Find, Create, Retrieve, Update, and Delete (a minimal sketch follows this list). Consistent interaction semantics empower client application developers to easily utilize any of the service components once they have developed an interface to one.
  • Isolated: Any one interface can be invoked independently of other service interfaces. Component implementation context and detailed invocation sequencing are not forced onto the client application; loose coupling exists rather than tight semantic coupling. Predecessor and successor invocation requirements are not part of the interaction beyond the degree to which they are part of the shared process between the client and the component. Isolation in interface design enables the client application to invoke any service interface with a minimum of preconditional interaction, perhaps only a Find resource call.
  • Durable: The interface has been designed with a vision of the future and has longevity built into it. It anticipates a broad range of usage scenarios and future implementation enhancements and has been designed with an eye toward ease of migration. A durable interface is future-proof, not in the sense that it will never change, but in that it can easily incorporate future enhancements. Document-centric Web service interfaces and other loosely coupled interface techniques minimize the impact of extensions to the interface.
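
As a minimal sketch of what such FCRUD consistency might look like (the interface and type names here are assumptions for illustration, not part of any particular product), every service component in a domain could expose the same shape:

    import java.util.List;

    // Hypothetical contract shared by every service component in a domain.
    // A developer who has learned one component can immediately use the rest.
    public interface ResourceService<D> {
        List<String> find(String criteria);          // Find resource instances, returning their identifiers
        String create(D document);                   // Create a resource, returning its identifier
        D retrieve(String resourceId);               // Retrieve resource information as a document
        void update(String resourceId, D document);  // Update resource information
        void delete(String resourceId);              // Delete the resource
    }

    // Each component then extends the same contract with its own document type, e.g.:
    //   public interface CustomerService extends ResourceService<CustomerDocument> {}
    //   public interface OrderService extends ResourceService<OrderDocument> {}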

How do you design ACID characteristics into your service component interfaces to best ensure successful exploitation in a service-oriented architecture? The best way is through proper modeling of the component interface. Modeling often looks inward: it is easy for programmers to concern themselves immediately with the implementation while the interface and its usage become a background activity. Taking an "outside in" approach to modeling ensures that proper consideration is given to the full continuum of utilization.

Correct modeling takes a top-down approach, beginning with the business processes and requirements of the solutions in which the component service is expected to be exploited. It should begin at the domain level by identifying the business context of the component's usage and the business processes that would exploit the component. For each business process, identify the use cases or scenarios that demonstrate the various ways the process could be executed. These process use cases then lead to specific use cases of component interaction.

This top-down approach ensures that the context of the process is firmly established before design looks inward at the interface and implementation of the component. Although it may seem like overkill, establishing use cases at the domain level ensures that a correct specification of the service is created and future-proofs the component by anticipating the full range of usage scenarios.

Proper Service Interface - Customer Management System Interface
Returning to our customer component example, you can see how a proper coarse-grained interface results in a simplified client implementation, free of the burden of component context and proprietary interaction semantics:

    customerDocument = CustomerManageService.getCustomerDocument("123456789");
    homePhone = customerDocument.getHomePhone();
    shipToZip = customerDocument.getShipToZip();

This document-centric interface requires only one service invocation, which returns a document containing all the information the client application needs, without the burden of implementation details.
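
The document returned by such a call is simply a data holder for the aggregated customer information. A minimal sketch, assuming a plain Java value object carrying only the fields used in the example above:

    // Hypothetical value object returned by getCustomerDocument().
    // It carries aggregated customer data and holds no component state.
    public class CustomerDocument {
        private final String customerID;
        private final String homePhone;
        private final String shipToZip;

        public CustomerDocument(String customerID, String homePhone, String shipToZip) {
            this.customerID = customerID;
            this.homePhone = homePhone;
            this.shipToZip = shipToZip;
        }

        public String getCustomerID() { return customerID; }
        public String getHomePhone()  { return homePhone; }
        public String getShipToZip()  { return shipToZip; }
    }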

Creating Coarse-Grained Implementations
Typically, a Web services project focuses primarily on migrating existing application functionality to a Web services interface. The preexisting application can be anything from a legacy system to J2EE Enterprise JavaBeans (EJBs). How do you map this to a coarse-grained service interface? Very seldom will traditional low-level interfaces naturally map to a proper coarse-grained structure. An abstraction layer is required to hide the details of the implementation from the user behind a facade. This abstraction layer encapsulates:

  • Multiple low-level interfaces that comprise the business operation
  • Multiple data sources that need to be aggregated for the service
  • Legacy system interaction
  • The sequencing of low-level calls
  • Maintenance of context for the low-level implementations
  • Transactional coordination of updates to multiple low-level interfaces

The abstraction layer can be constructed using either of two approaches:
1.   Build to integrate: Using application development techniques, a facade or mediator is implemented to provide an interface that aggregates the lower-level interfaces. Coarse-grained components are created that broker the interactions with multiple classes (a minimal facade sketch follows this list). Traditional application development tools and techniques can be employed to develop this facade, and component development environments such as J2EE or Microsoft's .NET can host both the facade component and the implementation.
2.   Enterprise application integration (EAI)/business process integration (BPI): EAI tools exist for the purpose of integrating applications. They provide rich tooling for rapidly integrating all types of applications, including legacy, Web, and packaged software. BPI tools extend this capability to choreograph the business process and application flow outside the applications themselves. Web services-based integration is now a component of almost all EAI/BPI tools, which means these tools can expose their integration flows through coarse-grained Web services interfaces.
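
To make the "build to integrate" approach concrete, here is a minimal facade sketch. It reuses the fine-grained customer calls from the earlier example and the CustomerDocument value object sketched above; the intermediate types (Customer, AddressVector, Address) are assumptions for illustration, not a prescribed API:

    // Hypothetical facade: one coarse-grained, document-centric operation that
    // hides the fine-grained calls, their sequencing, and their context.
    public class CustomerManageService {

        public static CustomerDocument getCustomerDocument(String customerNumber) {
            // Broker the low-level interfaces on the client's behalf.
            Customer customer = CustomerManage.findCustomer(customerNumber);
            String customerID = customer.getCustomerID();

            AddressVector addresses = AddressList.findAddresses(customerID);
            Address home = addresses.findAddress("Home");
            Address shipTo = addresses.findAddress("Ship To");

            // Return only the information that is meaningful to the client.
            return new CustomerDocument(customerID, home.getPhone(), shipTo.getZip());
        }
    }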

     

Which approach is the better choice depends on a number of factors. Things to consider include whether the implementations are of similar technologies, what interfaces the lower-level components currently support, and whether legacy systems are part of the equation. Often it comes down to whether the primary focus of the project is application development or business integration.

Conclusion
The success of service-oriented architectures depends on a rich universe of available services that are easily located, understood, and utilized by a diverse community of users. These interfaces must have a life span beyond the first implementation, which can only be achieved through proper design of interfaces that are truly coarse-grained in nature and not just a weak veneer over an existing, tortuously complicated interface. By taking an "outside in" approach to modeling the service component interface, it is possible to identify the full spectrum of the component's usage.

As you design services, remember the ACID acronym and ask yourself: Does the interface model a full atomic business operation? Is there consistency to the interface across the family of components? Can any one interface be invoked reasonably independently of the other interfaces? Has the interface been designed with a view toward future usage scenarios? This perspective will lead to components that can truly become ubiquitous services in the Web services world.

More Stories By John Medicke

John Medicke is the chief architect of the On Demand Solution Center in Research Triangle Park, NC. He has designed solutions for various industries including financial services, retail, healthcare, industrial, and government. John has worked extensively on the exploitation of business integration, business process management, and business intelligence within an integrated solution context. He is the author of the book Integrated Solutions with DB2, as well as several articles.


