Future-Proofing Solutions with Coarse-Grained Service Oriented Architecture

Web services and service-oriented architectures are transforming application construction. The ubiquity of Web services support among all leading platform vendors brings the promise of a flexible application environment with simplified interface techniques, location transparency, and platform-neutral interoperability. This dynamic infrastructure enables a new implementation approach: the service-oriented architecture.

However, to date most Web services projects have really only created simplified communication mechanisms for invoking the same old complicated legacy interfaces we have always had. To truly create service-based components, a new design approach is needed, one that produces simple, straightforward coarse-grained service interfaces that conceal the ugliness of the low-level legacy interfaces. Designing coarse-grained interfaces is not as easy as it sounds. This article discusses how to recognize good coarse-grained interfaces and how to design them for maximum flexibility and longevity.

When you use a service, be it a bank, a restaurant, or a store, you expect to interact with it easily. What would the restaurant-goer's experience be if there were a different process for ordering each part of the meal, or if an ATM presented a hundred different menu options? Service interfaces are expected to be simple and intuitive; that is what makes a service successful. New services should be easily discoverable, and consuming their interfaces should be undemanding. ATMs are ubiquitous because everyone knows, or can easily learn, how to use one.

The proper design of a service takes an "outside in" approach to constructing the interface, focusing on the client application's perspective and how the component plays within the larger business process context. Sadly, many interface designs take an "inside out" approach, basing the interface on the dirty details of the existing implementation rather than on the specific requirements of client utilization. The interface should operate on a "need-to-know" basis, exposing only details that are meaningful to the semantics of the larger interaction with client applications.

Ongoing software maintenance is a constant burden for business applications. Today's development is tomorrow's legacy. Loosely coupled service-oriented architectures create an opportunity to reverse this escalating legacy cycle. Proper interface design can minimize changes as requirements evolve, making these services future-proof.

The Problem
Most poor interfaces are the result of allowing a convoluted implementation to bleed through into the interface. Even a good implementation, transparently presented as a Web service, becomes a thorny service interface. A component often contains dozens of classes and hundreds of methods. Exposing this detail as a service is analogous to an ATM with hundreds of menu options. Complicated interface designs impose unnecessary responsibilities on the client application, for example:

  • They result in excessive chattiness over the Web services interface.
  • They overburden the client application with the maintenance of the service component's context.
  • They force the client application to become co-dependent with the service component, requiring dual development for the life of the solution.

    This poor interface design signs a death sentence for the component - and for the service-oriented implementation - by tightly coupling the client application to the component, requiring future maintenance of intricate interaction semantics and proprietary interaction context.

    Interaction Semantics
    Part of the problem is that the mechanism for interacting with a component is different - often in some very bizarre ways - from that of any other component. The client application needs a costly custom implementation to invoke the interface of each component. This is reminiscent of Indiana Jones navigating hidden passageways to find the lost treasure, with secret incantations required for entrance to each chamber. In their book Business Process Management: The Third Wave (Meghan-Kiffer Press, 2003), Howard Smith and Peter Fingar expanded this thought: "Imagine a world where people speak a language that brilliantly describes the molecular structure of a large object but can't tell you what that object is - or that it's about to fall on you." From this perspective, the failure of component interoperability is self-evident.

    We remember a recent engagement to integrate physician practice applications with a hospital information system that had outlandish interface semantics. Ignoring reasonable practice and embracing the quirks of its implementation, the system required that the patient's first and middle name fields be left blank and that the last name field contain the patient's full name. This is just one of the many urban legends of strange system interfaces that litter the IT landscape.

    Burden of Context
    Another problem with low-level proprietary interfaces is that the client application is pulled into the internal context of the called component. With multiple method invocations required for any coarse-grained operation, the client application must maintain implementation context in order to provide that context on subsequent method calls. One system from a recent project required special codes indicating proprietary state and numeric status (e.g., code="F", Status="989") - values with no relevance to the client application - to drive the invocation of each operation. Implementation-specific state information was forced onto the client application by the component. The delineation between the relevant context of the interaction (that which is meaningful to both the client and the component) and irrelevant context (the exclusive state information of the component) is often not considered during interface design.

    Context in the form of resource references, processing state, and method parameter values is unnecessarily forced into the interface. As the interface grows, so does the interaction context. The client application has its own context to maintain and does not need to be burdened with the component's implementation context as well.
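
    To make this burden concrete, here is a minimal hypothetical sketch - all names invented for illustration - of a client forced to echo the component's private codes back on every call:

    // Hypothetical sketch: none of these names come from a real system.
    session = orderComponent.openSession("123456789");
    stateCode = session.getStateCode();     // e.g., "F" - internal component state
    statusCode = session.getStatusCode();   // e.g., "989" - internal status, meaningless to the client
    // Every later call must hand the component's own context back to it.
    orderComponent.updateShipping(session.getSessionID(), stateCode, statusCode, newAddress);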

    Example: Customer Management System Interface
    To illustrate the difficulty presented by exposed low-level interfaces, look at a customer-service application that has a customer component. With separate objects exposed for each atomic element of the customer information, the client application implementation might look like the following:

    customer = CustomerManage.findCustomer("123456789");
    customerID = customer.getCustomerID();
    addressVector = AddressManage.findAddresses(customerID);
    homeAddress = addressVector.findAddress("Home");
    homePhone = homeAddress.getPhone();
    shipTo = addressVector.findAddress("Ship To");
    shipToZip = shipTo.getZip();

    This demonstrates how the client application is pulled into an unnecessary interaction context that does not concern it: seven calls across two low-level components, with an intermediate customerID it must carry between them.

    Designing for Serviceability
    Creating a coarse-grained interface that truly embodies a service must be a conscious part of the interface design process. Serviceability comes from an interface that is straightforward, easy to exploit, and built to last beyond the first version. To discuss these characteristics of coarse-grained service design, we'll borrow the ACID acronym from the transaction processing domain:

  • Atomic: Any one business operation can be completed through one service interface; the coarse-grained interface approaches a 1:1 ratio of business operations to service interfaces. With document-oriented Web services, the interface is simplified to the semantics of document exchange.
  • Consistent: There is a consistent behavior to the interface within a domain that makes new services in that domain easily recognizable and understood. Consistency extends beyond the interfaces of one component to a common interface format for all components within a domain. Locating service instances, establishing context, retrieving data, performing updates, and executing business operations all have consistent interaction. One of the best ways to achieve this is with FCRUD semantics, where service resource objects are operated on with the straightforward semantics of Find, Create, Retrieve, Update, and Delete (see the interface sketch after this list). Consistent interaction semantics empower the client application developer to easily utilize any of the service components once he or she has developed an interface to one.
  • Isolated: Any one interface can be invoked independently of other service interfaces. Component implementation context and detailed invocation sequencing are not forced onto the client application: loose coupling exists rather than tight semantic coupling. Predecessor and successor invocation requirements are not part of the interaction beyond the degree to which they are part of the shared process between the client and the component. Isolation of interface design enables the client application to invoke any service interface with a minimum of preconditional interaction, perhaps only a Find resource call.
  • Durable: The interface has been designed with a vision of the future and has longevity built into it. It envisions the broad range of usage scenarios and future implementation enhancements and has been designed with an eye toward ease of migration. A durable interface is future-proof: not that it will never change, but that it can easily incorporate future enhancements. Document-centric Web service interfaces and other loosely coupled interface techniques minimize the impact of extensions to the interface.
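
    To make the Consistent property concrete, here is a minimal sketch, under assumed names, of the common FCRUD shape that every resource component in a domain might share (supporting types such as CustomerDocument are elided):

    // A sketch under assumed names - not taken from any particular product.
    // Every resource component in the domain exposes this same FCRUD shape,
    // so a developer who has learned one component can immediately use the others.
    public interface CustomerManageService {
        String           findCustomer(String searchCriteria);                         // Find: returns a customer ID
        String           createCustomer(CustomerDocument initial);                    // Create: returns the new ID
        CustomerDocument getCustomerDocument(String customerID);                      // Retrieve
        void             updateCustomer(String customerID, CustomerDocument changes); // Update
        void             deleteCustomer(String customerID);                           // Delete
    }

    An AddressManageService or OrderManageService in the same domain would repeat the identical pattern, with only the resource type changing.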

    How do you design ACID characteristics into your service component interfaces to best ensure successful exploitation in a service-oriented architecture? The best way is proper modeling of the component interface. Modeling, however, often looks inward: it is easy for programmers to immediately concern themselves with the implementation while the interface and its usage become a background activity. Taking an "outside in" approach to the modeling ensures that proper consideration is given to the full continuum of utilization.

    Correct modeling takes a top-down approach, beginning with the business processes and requirements of the solutions in which the component service is expected to be exploited. It should begin at the domain level by identifying the business context of the component's usage and which business processes would exploit the component. For each business process, identify the use cases or scenarios that demonstrate the various ways the process could be executed. These process use cases lead to specific use cases of component interaction.

    This top-down approach ensures that the context of the process is firmly established before design looks inward at the interface and implementation of the component. Although it may seem like overkill, establishing use cases at the domain level ensures that a correct specification of the service is created and future-proofs the component by anticipating the full range of usage scenarios.

    Proper Service Interface - Customer Management System Interface
    Returning to our customer component example, you can see how a proper coarse-grained interface results in a simplified client implementation, without the burden of component context or proprietary interaction semantics:

    customerdocument = CustomerManageService.getCustomerDocument("123456789");
    homePhone = customerdocument.getHomePhone();
    shipToZip = customerdocument.getShipToZip();

    This document-centric interface requires only one service invocation, which returns a document containing all the information the client application needs, without the burden of implementation details.

    Creating Coarse-Grained Implementations
    Typically a Web services project focuses primarily on migrating existing application functionality to a Web services interface. The preexisting application can be anything from legacy systems to J2EE Enterprise JavaBeans (EJBs). How do you map these to a coarse-grained service interface? Very seldom will traditional low-level interfaces naturally map to a proper coarse-grained structure. An abstraction layer is required to hide the details of the implementation from the user behind a facade; a sketch of such a facade follows the list below. This abstraction layer encapsulates:

  • Multiple low-level interfaces that comprise the business operation
  • Multiple data sources that need to be aggregated for the service
  • Legacy system interaction
  • The sequencing of low-level calls
  • Maintenance of context for the low-level implementations
  • Transactional coordination of updates to multiple low-level interfaces
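
    The following is a minimal, hypothetical sketch of such a facade for the customer example - all class names are assumed for illustration, and supporting types (Customer, AddressVector, CustomerDocument) are elided. One coarse-grained operation hides the chatty low-level sequence shown earlier and keeps the component context internal:

    // Hypothetical facade sketch; names are assumed for illustration only.
    public class CustomerManageFacade {

        private final CustomerManage customerManage;  // existing low-level component
        private final AddressManage  addressManage;   // existing low-level component

        public CustomerManageFacade(CustomerManage cm, AddressManage am) {
            this.customerManage = cm;
            this.addressManage  = am;
        }

        // One coarse-grained operation: aggregates two data sources, sequences
        // the low-level calls, and keeps implementation context (customerID,
        // address lookups) entirely inside the facade.
        public CustomerDocument getCustomerDocument(String customerNumber) {
            Customer customer = customerManage.findCustomer(customerNumber);
            AddressVector addresses = addressManage.findAddresses(customer.getCustomerID());

            CustomerDocument document = new CustomerDocument();
            document.setHomePhone(addresses.findAddress("Home").getPhone());
            document.setShipToZip(addresses.findAddress("Ship To").getZip());
            return document;
        }
    }

    Exposed as a Web service, this single operation is what the client invoked in the previous section; the seven chatty calls now live behind the facade.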

    The abstraction layer can be constructed using either of two approaches:
    1.   Build to integrate: Using application development techniques, a facade or mediator is implemented to provide the interface that aggregates the lower-level interfaces. Coarse-grained components are created that broker the interactions with multiple classes. Traditional application development tools and techniques can be employed for development of this facade. Component development environments like J2EE or Microsoft's .NET provide application environments to host both the facade component and the implementation.
    2.   Enterprise application integration (EAI)/business process integration (BPI): EAI tools exist for the purpose of integrating applications. They provide rich tools and functionality for rapidly integrating all types of applications, including legacy, Web, and packaged software applications. BPI tools extend this capability to provide choreography of the business process and application flow outside the application. Web services-based integration is now a component of almost all EAI/BPI tools, which means these tools can expose their integration flows through coarse-grained Web services interfaces.

    Which is the better choice depends on a number of factors: whether the implementations use similar technologies, what interfaces the lower-level components currently support, and whether legacy systems are part of the equation. Often it comes down to whether the primary focus of the project is application development or business integration.

    Conclusion
    The success of service-oriented architectures depends on a rich universe of available services that are easily located, understood, and utilized by a diverse community of users. These interfaces must have a life span beyond the first implementation, which can only be achieved by proper design of interfaces that are truly coarse-grained in nature and not just a weak veneer on top of existing, tortuously complicated interfaces. By taking an "outside in" approach to modeling the service component interface, it is possible to identify the full spectrum of usage of the component.

    As you design services, remember the ACID acronym and ask yourself: does the interface model a full atomic business operation; is there consistency to the interface across the family of components; can any one interface be invoked reasonably independently of the others; and has the interface been designed with a view toward future usage scenarios? This perspective will lead to components that can truly become ubiquitous services in the Web services world.

    About the Author
    John Medicke is the chief architect of the On Demand Solution Center in Research Triangle Park, NC. He has designed solutions for various industries, including financial services, retail, healthcare, industrial, and government. John has worked extensively on the exploitation of business integration, business process management, and business intelligence within an integrated solution context. He is the author of the book Integrated Solutions with DB2, as well as several articles.

