
Data Services for Next-Generation SOAs

A shared data layer can meet a critical business need

This article discusses the advantages of implementing shared "data services" to deliver on the true promise of service-oriented architectures - rapid application development through reusable components without sacrificing fast, accurate enterprise data access.

With a shared data layer, you can avoid integrity, performance, scalability, and availability issues that might otherwise occur.

We have entered an exciting period in the evolution of enterprise system design. More than ever, standards influence the way architects define and plan new projects. The component approach to development focuses on building blocks and provides a structure for solving complex problems. Sophisticated development tools relieve engineers of "nuts and bolts" work and allow them to concentrate more on business requirements.

I've had the opportunity to work closely with our customers as they transition into component-based technologies such as Web services and service-oriented architectures (SOAs). Their experiences highlight the importance of planning ahead for an efficient and robust data access strategy.

Because data access is such a basic requirement for enterprise development, the tendency is to pick standards such as Enterprise JavaBeans (EJB). The underlying data access may be performed with ADO, JDBC, or ODBC APIs, but it is common to leave the responsibility for database performance with database administrators. However, moving to a new architecture often means exponential growth in the demands placed on the data infrastructure - demands for increased volume and data integrity that cannot be solved in the database layer alone.
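To make this concrete, here is a minimal, hypothetical sketch of the kind of low-level JDBC lookup logic that each application team typically ends up writing and maintaining on its own when data access is not shared. The class, table, and column names (PriceLookup, product, price) are illustrative assumptions, not taken from any system described here.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical per-application data access code: every team writes a slight
// variation of this, and every call consumes a scarce database connection.
public class PriceLookup {
    private final String jdbcUrl;
    private final String user;
    private final String password;

    public PriceLookup(String jdbcUrl, String user, String password) {
        this.jdbcUrl = jdbcUrl;
        this.user = user;
        this.password = password;
    }

    // Opens a connection, runs one query, and returns the current price.
    public BigDecimal lookupPrice(String productId) throws SQLException {
        String sql = "SELECT price FROM product WHERE product_id = ?";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, productId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getBigDecimal("price") : null;
            }
        }
    }
}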

Data Access Challenges
Data access logic consumes a high percentage of development resources and plays a significant role in the success or failure of a development project. An R.B. Webber study concluded that coding and configuring object/relational (O-R) data access typically accounts for 30-40% of total project effort. Ultimately, data access logic often determines whether the resulting systems meet performance and scalability requirements.

The typical implementation of a component-based architecture makes each functional component responsible for its own data access logic. In the September 2004 issue of WSJ (Vol. 4, issue 9), Dr. Adam Kolawa confirmed this in his article's definition of application logic:

Application logic (or business logic): Handles requests from customers and agents, makes necessary connection to the database, and returns responses to customers and agents.

This is a concise description of the common architecture illustrated in Figure 1. In such systems, a request to the order application might require a database lookup of a price. In separate billing and shipping transactions, each of those applications again makes its own database request. This architecture poses problems in three areas:

  1. The team writing each application implements similar, but slightly different, data access logic. Even when the data access is standards based, this low-level coding is tedious, error prone, and inefficient. The costs multiply when you add redundant testing and maintenance over an application's life cycle.
  2. Requests that require database access are expensive. Each application's performance degrades when more requests come in than can be handled by the number of database connections available.
  3. This architecture often gives more individuals access to data, opening the floodgates and creating even greater demands on the database. Every application handles its own data access, even when multiple applications need the same data. Databases are an expensive and finite resource. You don't want critical business functions waiting on a queue to update the database while less important requests clog the network.
As shown in Figure 2, separating data access from the application logic avoids these problems. These shared "data services" reduce the number of database connections required and support a stateful architecture. By caching frequently requested data, more requests can be satisfied without querying the database, which improves performance and increases scalability and reliability. (It should be noted that the effectiveness of caching varies across systems, but typical CRUD applications benefit substantially.) In addition, the reusability and flexibility of data services allow new services to be developed and rolled out more quickly.
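The caching idea can be pictured with a small sketch, under assumed names rather than any implementation described here: a shared data service fronts the database with an in-memory cache so that repeated reads are served without opening a database connection. ProductDao stands in for a hypothetical JDBC-backed lower layer.

import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a shared data service: one component owns data access and a cache
// that all consuming applications share, instead of each application querying
// the database directly.
public class PriceDataService {

    // Hypothetical data-access interface behind the service.
    public interface ProductDao {
        BigDecimal loadPrice(String productId);
        void storePrice(String productId, BigDecimal price);
    }

    private final ProductDao dao;
    private final Map<String, BigDecimal> cache = new ConcurrentHashMap<>();

    public PriceDataService(ProductDao dao) {
        this.dao = dao;
    }

    // Repeated requests are answered from the cache; only misses hit the database.
    public BigDecimal getPrice(String productId) {
        return cache.computeIfAbsent(productId, dao::loadPrice);
    }

    // Writes go through to the database and keep the shared cache current,
    // so every application sees the same value.
    public void updatePrice(String productId, BigDecimal newPrice) {
        dao.storePrice(productId, newPrice);
        cache.put(productId, newPrice);
    }
}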

However, most enterprise systems are much more complex than this example, and data integrity can become a concern. Traditional applications often have their own database "silo," which contains a copy of business reference data such as customer information, product information, and inventory levels. Typically, each database is synchronized only once a day, so each application operates with slightly different data. When applications are redistributed as enterprise services without integrating the data silos, these data inconsistencies can create unanticipated business errors.

Figure 3 illustrates the inconsistencies that can arise when silo applications are exposed as services, each with different inventory data. In this example, the "show_status" service thinks the inventory level is 27, while the "check_avail" service thinks the inventory level is 0.

Shared Data Services Enable SOA Success
An increasing number of enterprises recognize the need for a shared data service that offers domain-specific data classes used by multiple applications. Each application might use only a subset of the data classes managed by the data service. The data service manages relationships between the data classes and serves data changes to each application, regardless of the source of change.

Using the SOA paradigm, it is preferable to implement a credit card authorization, for example, as a single service that can be reused by many applications. Similarly, it is preferable to implement a single customer data service to retrieve current customer information for a set of related applications.
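As a sketch of what such a single, reusable customer data service contract might look like (the interface and method names are assumptions for illustration, not a published API):

import java.util.Optional;

// Hypothetical contract for a shared customer data service reused by a set of
// related applications; each caller sees the same, current customer record.
public interface CustomerDataService {

    Optional<Customer> findCustomer(String customerId);

    void updateCustomer(Customer customer);

    // Simple illustrative customer record.
    record Customer(String customerId, String name, String creditRating) {}
}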

To be successful, an SOA initiative requires data access infrastructure software specifically designed to provide consistent performance and highly available data across distributed computing environments. Ideally, system architects should seek cross-platform data access products that are capable of meeting requirements across the project life cycle - from development through tuning and deployment.

From our customers' experience, we've found that many organizations implement Web services or an SOA without realizing how it can increase the load on their back-end database and create data bottlenecks. The three main concerns are performance, scalability, and data integrity. High-volume, complex systems require careful design of their data access to be successful.

Case Study: An SOA in Financial Services
A leading financial services firm implemented more than 40 equity trading applications on top of a shared data services layer. With rigorous requirements for reliability, performance, and scalability - up to $7 billion per day in trades and thousands of transactions per second at peak volume - they gave careful consideration to data access.

In equity trading, a single data consistency error can result in business-breaking consequences. They expected their shared data services layer to protect data integrity, deliver immediate response to end users, have the ability to scale to meet the growing needs of their businesses, and finally, to ensure 24x7 availability.

In their architecture, the data service layer provides caching, optimized updates, distributed cache synchronization, load balancing, failover, and client notification. These capabilities are far more robust than the homegrown data persistence layer used in the previous generation of applications.
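The client-notification piece can be pictured with a small sketch (assumed names, not the firm's actual design): applications register listeners with the data services layer, which pushes change events to every subscriber whenever data is updated, regardless of which application made the change.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of distributed change notification: the data services layer publishes
// committed updates so each application can refresh or invalidate its cache.
public class DataChangePublisher {

    // Callback implemented by each application that caches shared data.
    public interface DataChangeListener {
        void onChange(String dataClass, String key);
    }

    private final List<DataChangeListener> listeners = new CopyOnWriteArrayList<>();

    public void register(DataChangeListener listener) {
        listeners.add(listener);
    }

    // Called after a committed update, regardless of which application made it.
    public void publish(String dataClass, String key) {
        for (DataChangeListener listener : listeners) {
            listener.onChange(dataClass, key);
        }
    }
}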

Figure 4 illustrates the structure of this SOA deployment. The data services layer provides data management for relational data and real-time market data feeds. Because the applications are related and share a common data model and common data, data services deliver up-to-date business information to each server and application.

The economic benefits that they realized include:

  • Doubled developer productivity: Shared functional and data services account for more than 50% of new application functionality
  • Tripled maintenance productivity: Systems deployed using SOA can be maintained with 75% fewer resources
  • Dramatically higher availability: Fault tolerance within the data services layer eliminates application failures due to intermittent database or network failures
  • Significant infrastructure and operational savings: Distributed application deployment with centralized data storage can achieve 40% capital cost savings and 30% annual operating cost savings over traditional data centers
Conclusion
Handling data access and updates accounts for the lion's share of enterprise application development efforts. Most IT groups today use ad hoc data access solutions, such as ADO.NET, that work well within a silo architecture but are unable to support data consistency enterprise wide. A shared data layer can meet critical business needs while supplying consistent data across all applications. When designed and implemented with the appropriate development tools, a shared data layer delivers the following benefits:
  • Increases developer productivity by allowing developers to focus more on business-critical logic
  • Maintains data integrity when migrating existing data and application silos to enterprise services
  • Ensures the performance and scalability of the deployed system

More Stories By Christopher Keene

Christopher Keene is Chairman and CEO of WaveMaker (formerly ActiveGrid). He was the founder, in 1991, of Persistence Software, a San Mateo, CA-based company that created a new approach for managing data in high-transaction banking and communications systems. Persistence Software investors included Cisco, Intel, Reuters, and Sun Microsystems. The company went public in 1999 on the NASDAQ exchange and was sold in 2004 to Progress Software.

After leaving Persistence Software in 2005, Chris spent a year in France as chairman of Reportive Software, a Paris-based maker of business-intelligence tools, and as an adjunct professor and entrepreneur-in-residence at INSEAD, a leading graduate business school.



Most Recent Comments
Peter Chang 12/13/04 01:57:42 PM EST

A shared data access service across enterprise data sources could solve fundamental problems like consistency. However, the big questions seem to be flexibility and performance. Can the service concisely provide data in the form required by the application, or does a developer need to programmatically filter and massage the output? Can the service query underlying data sources and translate the data into the abstract format (implemented by the shared data access layer) fast enough to meet usage requirements? Without both flexibility and performance, a shared data service would either cost more in terms of developer productivity or be too slow for other applications. In either case, it would limit the applications that can use the service and therefore limit the consistency and reuse benefits of building a shared data access service.

Peter Chang
