Data Services for Next-Generation SOAs

A shared data layer can meet a critical business need

This article discusses the advantages of implementing shared "data services" to deliver on the true promise of service-oriented architectures - rapid application development through reusable components without sacrificing fast, accurate enterprise data access.

With a shared data layer, you can avoid integrity, performance, scalability, and availability issues that might otherwise occur.

We have entered an exciting period in the evolution of enterprise system design. More than ever, standards influence the way architects define and plan new projects. The component approach to development focuses on building blocks and provides a structure for solving complex problems. Sophisticated development tools relieve engineers of "nuts and bolts" work and allow them to concentrate more on business requirements.

I've had the opportunity to work closely with our customers as they transition into component-based technologies such as Web services and service-oriented architectures (SOAs). Their experiences highlight the importance of planning ahead for an efficient and robust data access strategy.

Because data access is such a basic requirement for enterprise development, the tendency is to pick standards such as Enterprise JavaBeans (EJB). The underlying data access may be performed with ADO, JDBC, or ODBC APIs, but it is common to leave responsibility for database performance with database administrators. However, moving to a new architecture often means exponential growth in the demands placed on the data infrastructure - demands for increased volume and data integrity that cannot be met in the database layer alone.
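
To make that cost concrete, here is a minimal sketch of the kind of low-level JDBC lookup each component ends up owning under this approach. The table and column names are hypothetical, and production code would also need to handle connection pooling and transactions.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PriceLookup {
    // Hypothetical schema: a "product" table with a "unit_price" column.
    // Each application team tends to re-implement a slight variant of
    // this method, with its own error handling and its own bugs.
    public BigDecimal getPrice(Connection conn, String productId) throws SQLException {
        String sql = "SELECT unit_price FROM product WHERE product_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, productId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new SQLException("No such product: " + productId);
                }
                return rs.getBigDecimal("unit_price");
            }
        }
    }
}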

Data Access Challenges
Data access logic consumes a high percentage of development resources and plays a significant role in the success or failure of a development project. An R.B. Webber study concluded that coding and configuring object/relational (O-R) data access typically accounts for 30-40% of total project effort. Ultimately, data access logic often determines whether the resulting systems meet performance and scalability requirements.

The typical implementation of a component-based architecture makes each functional component responsible for its own data access logic. In the September 2004 issue of WSJ (Vol. 4, issue 9), Dr. Adam Kolawa confirmed this in his article's definition of application logic:

Application logic (or business logic): Handles requests from customers and agents, makes necessary connection to the database, and returns responses to customers and agents.

This is a concise description of the common architecture illustrated in Figure 1. In such systems, a request to the order application might require a database lookup of a price. In separate billing and shipping transactions, each of those applications again makes its own database request. This architecture poses problems in three areas:

  1. The team writing each application implements similar, but slightly different, data access logic. Even when the data access is standards based, this low-level coding is tedious, error prone, and inefficient. The costs multiply when you add redundant testing and maintenance over an application's life cycle.
  2. Requests that require database access are expensive. Each application's performance degrades when more requests come in than can be handled by the number of database connections available.
  3. This architecture often gives more individuals access to data, opening the floodgates and creating even greater demands on the database. Each application handles its own data access, even when several applications need the same data. Databases are an expensive and finite resource. You don't want critical business functions waiting in a queue to update the database while less important requests clog the network.

As shown in Figure 2, by separating the data access out of the application logic, you can avoid these problems. These shared "data services" reduce the number of database connections required and support a stateful architecture. By caching frequently requested data, more requests can be satisfied without querying the database, which improves performance and increases scalability and reliability. (The effectiveness of caching varies across systems, but a typical CRUD application benefits substantially.) In addition, the reusability and flexibility of data services allow new services to be developed and rolled out more quickly.
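
As a rough illustration of the caching idea, the sketch below puts a read-through cache in front of the database lookup so that all applications share one copy of the data. The class name and the single-process map are assumptions made for brevity; a production data service would add expiry, invalidation on writes, and synchronization across service instances.

import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CachingPriceService {
    private final Map<String, BigDecimal> cache = new ConcurrentHashMap<>();
    private final Function<String, BigDecimal> databaseLookup;

    public CachingPriceService(Function<String, BigDecimal> databaseLookup) {
        this.databaseLookup = databaseLookup;
    }

    // Order, billing, and shipping all call this one service. The database
    // is queried only on a cache miss; repeat requests are served from
    // memory, which cuts the load on the finite pool of connections.
    public BigDecimal getPrice(String productId) {
        return cache.computeIfAbsent(productId, databaseLookup);
    }
}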

However, most enterprise systems are much more complex than this example, and data integrity can become a concern. Traditional applications often have their own database "silo," which contains a copy of business reference data such as customer information, product information, and inventory levels. Typically, each database is synchronized only once a day, so each application operates with slightly different data. When applications are redistributed as enterprise services without integrating the data silos, these data inconsistencies can create unanticipated business errors.

Figure 3 illustrates the inconsistencies that can arise when silo applications are exposed as services, each with different inventory data. In this example, the "show_status" service thinks the inventory level is 27, while the "check_avail" service thinks the inventory level is 0.

Shared Data Services Enable SOA Success
An increasing number of enterprises recognize the need for a shared data service that offers domain-specific data classes used by multiple applications. Each application might use only a subset of the data classes managed by the data service. The data service manages relationships between the data classes and serves data changes to each application, regardless of the source of change.

Using the SOA paradigm, it is preferable to implement a credit card authorization, for example, as a single service that can be reused by many applications. Similarly, it is preferable to implement a single customer data service to retrieve current customer information for a set of related applications.
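
A contract for such a shared service might look like the hypothetical interface below. The operations and the Customer fields are illustrative only, not drawn from any particular product.

public interface CustomerDataService {
    // Illustrative customer view; a real system would carry many more fields.
    record Customer(String id, String name, String creditStatus) {}

    // Every application retrieves customer data through this one service
    // instead of querying its own database silo.
    Customer findCustomer(String customerId);

    // Updates flow through the same service, so every consumer sees the
    // change immediately rather than after a nightly silo synchronization.
    void updateCustomer(Customer customer);
}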

To be successful, an SOA initiative requires data access infrastructure software specifically designed to provide consistent performance and highly available data across distributed computing environments. Ideally, system architects should seek cross-platform data access products that are capable of meeting requirements across the project life cycle - from development through tuning and deployment.

From our customers' experience, we've found that many organizations implement Web services or an SOA without realizing how this can increase the load on their back-end database and create data bottlenecks. There are three main concerns: performance, scalability, and data integrity. High-volume, complex systems require carefully designed data access to succeed.

Case Study: An SOA in Financial Services
A leading financial services firm implemented more than 40 equity trading applications on top of a shared data services layer. With rigorous requirements for reliability, performance, and scalability - up to $7 billion per day in trades and thousands of transactions per second at peak volume - they gave careful consideration to data access.

In equity trading, a single data consistency error can result in business-breaking consequences. They expected their shared data services layer to protect data integrity, deliver immediate response to end users, have the ability to scale to meet the growing needs of their businesses, and finally, to ensure 24x7 availability.

In their architecture, the data services layer provides caching, optimized updates, distributed cache synchronization, load balancing, failover, and client notification. These capabilities are far more robust than the homegrown data persistence layers used in previous generations of applications.
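
As a rough sketch of the client notification idea, a plain observer pattern suffices; the class and method names below are hypothetical and not those of any particular product.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class DataChangeNotifier {
    public interface Listener {
        // Fired after a write commits, identifying the changed data class
        // and key so subscribers can refresh or invalidate cached copies.
        void onUpdate(String dataClass, Object key);
    }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    public void subscribe(Listener listener) {
        listeners.add(listener);
    }

    // Called by the data services layer after a successful update; every
    // subscribed application sees the change regardless of which
    // application made it.
    public void publishUpdate(String dataClass, Object key) {
        for (Listener l : listeners) {
            l.onUpdate(dataClass, key);
        }
    }
}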

Figure 4 illustrates the structure of this SOA deployment. The data services layer provides data management for relational data and real-time market data feeds. Because the applications are related and share a common data model and common data, data services deliver up-to-date business information to each server and application.

The economic benefits that they realized include:

  • Doubled developer productivity: Shared functional and data services account for more than 50% of new application functionality
  • Tripled maintenance productivity: Systems deployed using SOA can be maintained with 75% fewer resources
  • Dramatically higher availability: Fault tolerance within the data services layer eliminates application failures due to intermittent database or network failures
  • Significant infrastructure and operational savings: Distributed application deployment with centralized data storage can achieve 40% capital cost savings and 30% annual operating cost savings over traditional data centers

Conclusion
Handling data access and updates accounts for the lion's share of enterprise application development effort. Most IT groups today use ad hoc data access solutions, such as ADO.NET, that work well within a silo architecture but cannot support data consistency enterprise-wide. A shared data layer can meet critical business needs while supplying consistent data across all applications. When designed and implemented with the appropriate development tools, a shared data layer delivers the following benefits:
  • Increases developer productivity by allowing developers to focus more on business-critical logic
  • Maintains data integrity when migrating existing data and application silos to enterprise services
  • Ensures the performance and scalability of the deployed system

More Stories By Christopher Keene

Christopher Keene is Chairman and CEO of WaveMaker (formerly ActiveGrid). He was the founder, in 1991, of Persistence Software, a San Mateo, CA-based company that created a new approach for managing data in high-transaction banking and communications systems. Persistence Software's investors included Cisco, Intel, Reuters, and Sun Microsystems. The company went public in 1999 on the NASDAQ exchange and was sold in 2004 to Progress Software.

After leaving Persistence Software in 2005, Chris spent a year in France as chairman of Reportive Software, a Paris-based maker of business-intelligence tools, and as an adjunct professor and entrepreneur-in-residence at INSEAD, a leading graduate business school.
