Data Services for Next-Generation SOAs

A shared data layer can meet a critical business need

This article discusses the advantages of implementing shared "data services" to deliver on the true promise of service-oriented architectures - rapid application development through reusable components without sacrificing fast, accurate enterprise data access.

With a shared data layer, you can avoid integrity, performance, scalability, and availability issues that might otherwise occur.

We have entered an exciting period in the evolution of enterprise system design. More than ever, standards influence the way architects define and plan new projects. The component approach to development focuses on building blocks and provides a structure for solving complex problems. Sophisticated development tools relieve engineers of "nuts and bolts" work and allow them to concentrate more on business requirements.

I've had the opportunity to work closely with our customers as they transition into component-based technologies such as Web services and service-oriented architectures (SOAs). Their experiences highlight the importance of planning ahead for an efficient and robust data access strategy.

Because data access is such a basic requirement for enterprise development, the tendency is to pick standards such as Enterprise JavaBeans (EJB). The underlying data access may be performed with ADO, JDBC, or ODBC APIs, but it is common to leave responsibility for database performance with database administrators. However, moving to a new architecture often means exponential growth in the demands placed on the data infrastructure - demands for increased volume and data integrity that cannot be met in the database layer alone.

Data Access Challenges
Data access logic consumes a high percentage of development resources and plays a significant role in the success or failure of a development project. An R.B. Webber study concluded that coding and configuring object/relational (O-R) data access typically accounts for 30-40% of total project effort. Ultimately, data access logic often determines whether the resulting systems meet performance and scalability requirements.

The typical implementation of a component-based architecture makes each functional component responsible for its own data access logic. In the September 2004 issue of WSJ (Vol. 4, issue 9), Dr. Adam Kolawa confirmed this in his article's definition of application logic:

Application logic (or business logic): Handles requests from customers and agents, makes necessary connection to the database, and returns responses to customers and agents.

This is a concise description of the common architecture illustrated in Figure 1. In such systems, a request to the order application might require a database lookup of a price. In separate billing and shipping transactions, each of those applications again makes its own database request. This architecture poses problems in three areas:

  1. The team writing each application implements similar, but slightly different, data access logic (see the sketch after this list). Even when the data access is standards based, this low-level coding is tedious, error-prone, and inefficient. The costs multiply when you add redundant testing and maintenance over an application's life cycle.
  2. Requests that require database access are expensive. Each application's performance degrades when incoming requests outnumber the available database connections.
  3. This architecture often gives more individuals access to data, opening the floodgates and creating even greater demands on the database. Every application handles its own data access, even when the applications need the same data. Databases are an expensive and finite resource. You don't want critical business functions waiting in a queue to update the database while less important requests clog the network.
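
To make the first problem concrete, here is a minimal sketch of the duplication in hypothetical Java/JDBC code (the table, column, and class names are illustrative, not taken from any system described here). Each application embeds nearly identical lookup logic and opens its own database connection:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical sketch: the order application's copy of the price lookup...
    class OrderApp {
        BigDecimal lookupPrice(String jdbcUrl, String productId) throws SQLException {
            try (Connection c = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = c.prepareStatement(
                         "SELECT price FROM product WHERE product_id = ?")) {
                ps.setString(1, productId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBigDecimal(1) : null;
                }
            }
        }
    }

    // ...and the billing application's slightly different copy of the same
    // logic, opening its own connection to fetch the same data.
    class BillingApp {
        BigDecimal priceFor(String jdbcUrl, String sku) throws SQLException {
            try (Connection c = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = c.prepareStatement(
                         "SELECT price FROM product WHERE product_id = ?")) {
                ps.setString(1, sku);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBigDecimal(1) : null;
                }
            }
        }
    }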

As shown in Figure 2, separating data access from the application logic avoids these problems. These shared "data services" reduce the number of database connections required and support a stateful architecture. By caching frequently requested data, more requests can be satisfied without querying the database, which improves performance and increases scalability and reliability. (The effectiveness of caching varies across systems, but a typical CRUD application benefits considerably.) In addition, the reusability and flexibility of data services allow new services to be developed and rolled out more quickly.
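
A read-through cache is the heart of such a service. The following is a minimal sketch, assuming a single shared service fronting the product database (ProductDataService and ProductDatabase are hypothetical names for illustration, not a vendor API):

    import java.math.BigDecimal;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of a shared data service: one connection pool,
    // one cache, used by the order, billing, and shipping applications alike.
    class ProductDataService {
        private final Map<String, BigDecimal> priceCache = new ConcurrentHashMap<>();
        private final ProductDatabase db; // wraps pooled JDBC access

        ProductDataService(ProductDatabase db) {
            this.db = db;
        }

        // Read-through: frequently requested prices are served from memory,
        // so most requests never reach the database.
        BigDecimal getPrice(String productId) {
            return priceCache.computeIfAbsent(productId, db::loadPrice);
        }

        // Updates flow through the service, which keeps the cache consistent.
        void updatePrice(String productId, BigDecimal newPrice) {
            db.storePrice(productId, newPrice);
            priceCache.put(productId, newPrice);
        }
    }

    // Assumed abstraction over the actual database access.
    interface ProductDatabase {
        BigDecimal loadPrice(String productId);
        void storePrice(String productId, BigDecimal price);
    }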

However, most enterprise systems are much more complex than this example, and data integrity can become a concern. Traditional applications often have their own database "silo," which contains a copy of business reference data such as customer information, product information, and inventory levels. Typically, each database is synchronized only once a day, so each application operates with slightly different data. When applications are redistributed as enterprise services without integrating the data silos, these data inconsistencies can create unanticipated business errors.

Figure 3 illustrates the inconsistencies that can arise when silo applications are exposed as services, each with different inventory data. In this example, the "show_status" service thinks the inventory level is 27, while the "check_avail" service thinks the inventory level is 0.

Shared Data Services Enable SOA Success
An increasing number of enterprises recognize the need for a shared data service that offers domain-specific data classes used by multiple applications. Each application might use only a subset of the data classes managed by the data service. The data service manages relationships between the data classes and serves data changes to each application, regardless of the source of change.

Using the SOA paradigm, it is preferable to implement a credit card authorization, for example, as a single service that can be reused by many applications. Similarly, it is preferable to implement a single customer data service to retrieve current customer information for a set of related applications.
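
As a sketch, such single, reusable services might be expressed as contracts like the following (hypothetical interfaces for illustration; they do not represent any particular product's API):

    import java.math.BigDecimal;

    // Hypothetical contracts, for illustration only.
    // One authorization service, reused by every application that takes payments.
    interface CreditCardAuthorizationService {
        AuthorizationResult authorize(String cardNumber, BigDecimal amount);
    }

    // One customer data service: the single source of current customer data.
    interface CustomerDataService {
        Customer findCustomer(String customerId);
        void updateCustomer(Customer customer);
    }

    record AuthorizationResult(boolean approved, String authCode) {}
    record Customer(String customerId, String name, String creditStatus) {}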

To be successful, an SOA initiative requires data access infrastructure software specifically designed to provide consistent performance and highly available data across distributed computing environments. Ideally, system architects should seek cross-platform data access products that are capable of meeting requirements across the project life cycle - from development through tuning and deployment.

From our customers' experience, we've found that many organizations implement Web services or an SOA without realizing how much this can increase the load on their back-end databases and create data bottlenecks. Three concerns dominate: performance, scalability, and data integrity. High-volume, complex systems require carefully designed data access to succeed.

Case Study: An SOA in Financial Services
A leading financial services firm implemented more than 40 equity trading applications on top of a shared data services layer. With rigorous requirements for reliability, performance, and scalability - up to $7 billion per day in trades and thousands of transactions per second at peak volume - they gave careful consideration to data access.

In equity trading, a single data consistency error can have business-breaking consequences. They expected their shared data services layer to protect data integrity, deliver immediate responses to end users, scale to meet the growing needs of their businesses, and ensure 24x7 availability.

In their architecture, the data service layer provides caching, optimized updates, distributed cache synchronization, load balancing, failover, and client notification. These capabilities are far more robust than the homegrown data persistence layer used in the previous generation of applications.
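
Client notification, for instance, can be sketched as a simple publish/subscribe contract between the data service layer and its client applications (the names below are hypothetical; this is a sketch of the pattern, not the firm's actual implementation):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Hypothetical sketch: applications subscribe to the data classes they
    // care about, and the data service layer pushes each change to them,
    // regardless of which client or market data feed caused it.
    interface DataChangeListener {
        void onChange(String dataClass, String key, Object newValue);
    }

    class NotifyingDataService {
        private final List<DataChangeListener> listeners = new CopyOnWriteArrayList<>();

        void subscribe(DataChangeListener listener) {
            listeners.add(listener);
        }

        // Called after every successful update, keeping each client current.
        void publishChange(String dataClass, String key, Object newValue) {
            for (DataChangeListener l : listeners) {
                l.onChange(dataClass, key, newValue);
            }
        }
    }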

Figure 4 illustrates the structure of this SOA deployment. The data services layer provides data management for relational data and real-time market data feeds. Because the applications are related and share a common data model and common data, data services deliver up-to-date business information to each server and application.

The economic benefits that they realized include:

  • Doubled developer productivity: Shared functional and data services account for more than 50% of new application functionality
  • Tripled maintenance productivity: Systems deployed using SOA can be maintained with 75% fewer resources
  • Dramatically higher availability: Fault tolerance within the data services layer eliminates application failures due to intermittent database or network failures
  • Significant infrastructure and operational savings: Distributed application deployment with centralized data storage can achieve 40% capital cost savings and 30% annual operating cost savings over traditional data centers

Conclusion
Handling data access and updates accounts for the lion's share of enterprise application development effort. Most IT groups today use ad hoc data access solutions, such as ADO.NET, that work well within a silo architecture but cannot support data consistency enterprise-wide. A shared data layer can meet critical business needs while supplying consistent data across all applications. When designed and implemented with the appropriate development tools, a shared data layer delivers the following benefits:
  • Increases developer productivity by allowing developers to focus more on business-critical logic
  • Maintains data integrity when migrating existing data and application silos to enterprise services
  • Ensures the performance and scalability of the deployed system

More Stories By Christopher Keene

Christopher Keene is Chairman and CEO of WaveMaker (formerly ActiveGrid). In 1991 he founded Persistence Software, a San Mateo, CA-based company that created a new approach to managing data in high-transaction banking and communications systems. Persistence Software's investors included Cisco, Intel, Reuters, and Sun Microsystems. The company went public on the NASDAQ exchange in 1999 and was sold to Progress Software in 2004.

After leaving Persistence Software in 2005, Chris spent a year in France as chairman of Reportive Software, a Paris-based maker of business-intelligence tools, and as an adjunct professor and entrepreneur-in-residence at INSEAD, a leading graduate business school.
