Juggling Multiple Data Models with Services

Guidelines for Success

Recently, a client approached me with a quandary. When designing XML schemas for Web services, how do you balance the desire to use industry standards such as UBL (Universal Business Language) or CICA (Context Inspired Component Architecture) to support data interoperability against the unique needs of particular domains and sub-systems within the enterprise? The client’s service design team is rightfully concerned with the competing interests of internal enterprise standardization, interoperability with external entities, and the unique needs of local domains and process constraints. How can these competing needs be effectively managed when designing the schema for a given service?

The Standard Answer

Naturally, you start by giving the standard answer: “it depends.” This is an essential and carefully crafted phrase that all consultants are taught to give, according to page 17 of How to Win Clients and Influence Budgets. Of course, the standard answer is rarely sufficient, but it is not without merit. It is certainly true that your data model design depends significantly upon a host of factors:

  • Which use cases are high priority?
  • Which use cases are high impact (to stakeholders and/or to customers)?
  • Which systems and/or processes are mission critical?
  • Are you including industry standard data models in your enterprise out of necessity (i.e. ONLY when you are forced to interact with other entities), or is it part of a broader modernization effort within your organization?
  • Do interoperability challenges or integration challenges currently contribute most significantly to your development and/or maintenance costs?
  • Which interaction scenarios are higher priority: internal or external service calls?
  • Would aligning more closely with one of the available models lead to a significant reduction in data mapping activities, or is the environment simply too fragmented to support this?
  • Is there one particular system or business process that is unique and is skewing perceptions regarding your service data models?

While all of these questions (and many more) are important and help facilitate a thorough examination of the problem, we rarely have the time to examine each problem from all possible angles. Consequently, we must look toward guidelines and rules of thumb.

Guidelines for Managing Multiple Service Schemas

1) Your business processes should work with a single data model if at all possible. Business processes are best designed to be data-model agnostic and to operate on an internal, process-centric model (see the sketches after this list). I explained the importance of process-centric data models in a previous post.

2) Your services should work with as few data models as possible (one model being preferable). This is just good common sense. For each data model that is added to the mix, your development time and long-term maintenance costs increase at a non-linear rate. The pain will intensify, and it will do so rapidly, with each new model you add.

3) If your services are going to work with multiple models, you should put a taxonomy/categorization scheme in place to distinguish the data models used by services. For example, services that are outward facing might use a data model accepted more broadly by the industry. Services used by one LOB might use a certain data model, and services used by another LOB might use a different one. Another distinction could be infrastructure services vs. data services vs. application services. Regardless of the approach, there needs to be an objective, governable methodology for deciding when one data model is used versus another (one of the sketches below shows a way to make that policy explicit).

4) Data model transformation is a necessary evil. It should be done only when necessary, and you should contain the transformation to a designated component (a sketch of such a component appears below). Transformation activities should be handled by intermediaries (data services, an ESB, a network appliance, or another mediation framework) when possible, rather than built into the internals of a service or process. This keeps your services clean, provides a reusable transformation mechanism, keeps your integrations loosely coupled, and provides for agility and extensibility in the future.
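To make guideline 1 concrete, here is a minimal Java sketch. The PurchaseOrder record, the UBL-style field names, and the adapter method are all invented for illustration (this is not UBL tooling or any standard API); the point is simply that the process logic touches only the internal, process-centric model, while an edge adapter absorbs whatever external schema arrives.

```java
import java.util.Map;

// Hypothetical sketch: the fulfillment process depends only on the internal,
// process-centric model; external schemas are mapped at the edge.
public class OrderProcessingSketch {

    // Internal model: only the fields the fulfillment process actually needs.
    record PurchaseOrder(String orderId, String customerId, double totalAmount) {}

    // Edge adapter: maps an external (UBL-style) representation into the
    // internal model. A real adapter would parse XML; a Map of invented
    // field names stands in for the parsed external document here.
    static PurchaseOrder fromExternalOrder(Map<String, String> ublFields) {
        return new PurchaseOrder(
                ublFields.get("cbc:ID"),
                ublFields.get("cac:BuyerCustomerParty"),
                Double.parseDouble(ublFields.get("cbc:PayableAmount")));
    }

    // The business process is data-model agnostic: it never sees the external schema.
    static void fulfill(PurchaseOrder order) {
        System.out.printf("Fulfilling %s for customer %s (total %.2f)%n",
                order.orderId(), order.customerId(), order.totalAmount());
    }

    public static void main(String[] args) {
        var external = Map.of(
                "cbc:ID", "PO-1001",
                "cac:BuyerCustomerParty", "ACME",
                "cbc:PayableAmount", "2499.00");
        fulfill(fromExternalOrder(external));
    }
}
```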
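Guideline 3 is easier to govern when the taxonomy lives somewhere explicit rather than in tribal knowledge. Below is one hypothetical way to record the policy in Java; the category and model names are made up for illustration and would naturally come out of your own governance process.

```java
import java.util.Map;

// Hypothetical taxonomy sketch: each service category is bound to exactly one
// data model, so the choice is a recorded, reviewable policy decision.
public class DataModelTaxonomy {

    enum DataModel { UBL_2_1, INTERNAL_CANONICAL, LOB_FINANCE }

    enum ServiceCategory { EXTERNAL_FACING, INTERNAL_APPLICATION, FINANCE_LOB }

    // The policy table: changing a binding is a governance event,
    // not a quiet edit buried inside one service's codebase.
    private static final Map<ServiceCategory, DataModel> POLICY = Map.of(
            ServiceCategory.EXTERNAL_FACING, DataModel.UBL_2_1,
            ServiceCategory.INTERNAL_APPLICATION, DataModel.INTERNAL_CANONICAL,
            ServiceCategory.FINANCE_LOB, DataModel.LOB_FINANCE);

    static DataModel modelFor(ServiceCategory category) {
        return POLICY.get(category);
    }

    public static void main(String[] args) {
        System.out.println(modelFor(ServiceCategory.EXTERNAL_FACING)); // UBL_2_1
    }
}
```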
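For guideline 4, the standard JAXP/XSLT API (javax.xml.transform) is one common way to build a designated transformation component in Java. The class below is a sketch under that assumption, not a prescribed design; in many environments the same role is played by an ESB or a data service rather than custom code.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// Hypothetical mediation component: all model-to-model transformation is
// funneled through one place instead of being scattered across services.
public class ModelTransformer {

    private final Transformer transformer;

    // stylesheet: an XSLT document describing the mapping between two models.
    public ModelTransformer(String stylesheet) throws Exception {
        transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(stylesheet)));
    }

    // Note: javax.xml.transform.Transformer is not thread-safe; in production
    // you would create one per request or pool instances.
    public String transform(String sourceXml) throws Exception {
        StringWriter out = new StringWriter();
        transformer.transform(
                new StreamSource(new StringReader(sourceXml)),
                new StreamResult(out));
        return out.toString();
    }
}
```

Because the mapping lives in a stylesheet handed to one component, swapping or versioning a data model becomes a configuration change at the intermediary rather than a code change inside every service.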

Juggling multiple data models within a service oriented environment is no one’s idea of fun. When possible, aim for a more comprehensive and strategic analysis of the environment (see the ‘Standard Answer’ outlined above). When this is not realistic, try to use the above guidelines and rules of thumb to help you tactically navigate the murky waters of data model incongruity. Service design isn’t easy, but it doesn’t have to be rocket surgery either. Best of luck!

More Stories By Kyle Gabhart

Kyle Gabhart is a subject matter expert specializing in strategic planning and tactical delivery of enterprise technology solutions, blending EA, BPM, SOA, Cloud Computing, and other emerging technologies. Kyle currently serves as a director for Web Age Solutions, a premier provider of technology education and mentoring. Since 2001 he has contributed extensively to the IT community as an author, speaker, consultant, and open source contributor.
