Improving the Efficiency of SOA-Based Applications

Using an Application Grid with large XML documents to build SOA applications that scale linearly and predictably

According to Moore's Law [1], the transistor counts that drive processing speed and storage capacity have been doubling roughly every two years since the invention of the integrated circuit in 1958.

Yet it seems that our propensity for building larger, more complex software systems that anticipate these improvements inevitably outpaces the exponential growth in capacity to support them. SOA is being adopted more broadly, along with the practice of using XML to communicate data between services and the push to scale applications to Internet volumes. With your application's success staring you in the face, the potential to overwhelm your systems is very real, and it may arrive when you least expect it.

How do we get ahead of this trend? Even though memory and storage keep increasing in the realm of enterprise computing, software needs to keep pace. We need to architect from the beginning for linear scalability with predictable latency. Data files and feeds are growing in size, requiring more processing, and becoming more cumbersome to manage with software designed to materialize entire files before consuming them. In some cases, the operation to be performed requires multiple input sources to be consumed before processing can even begin.

Those who are building the eXtreme Transaction Processing (XTP) style of applications - such as Telco call setup and billing, online gaming, securities trading, risk management, and online travel booking - understand this challenge well. The broader use case that is applicable across more industries is web applications that need to scale up to Internet volumes, against backend systems that were never designed to handle that kind of traffic.

Boundary Costs
In discussions with customers about scaling a SOA with predictable latency, the term that often comes up is "Boundary Costs." To put this in context, consider the following scenario: an XML document that may have originated from an internal application, a database, an external business partner, or perhaps a converted EDI document needs to be processed by a number of services, which are coordinated by a BPEL process or an ESB process pipeline. The common approach is to place the XML document on the bus and have the bus invoke the services in accordance with the process definition, passing the XML document as part of the service request payload. Each service that needs to process that data will access the XML accordingly. Interaction with a database may also occur. This approach, as illustrated in Figure 1, sounds simple enough.

Figure 1: Calling services using BPEL process or Service Bus pipeline

However, in practice there are challenges to scalability with this approach. What is the cost of crossing the boundary from one service to the next? How many times is that cost incurred in the course of invoking a simple business process? What if the XML document is really large, in the multi-megabyte range, or there are lots of them, numbering in the thousands, or both?

Compounding this challenge is the reality that most IT environments are a mixture of platforms and technologies. Regardless of how efficient your process engine or service bus might be, the processing at the service endpoint can still become a bottleneck. A recent conversation at a customer site revealed a 15-step business process that normally takes 15 seconds to run, but that has lately been violating its 30-second SLA under peak loads. The developers had spent the better part of the past two years tuning every last bit of performance out of each of those 15 services, and the remaining culprit for the poor end-to-end latency is the boundary cost between the services. A detailed examination revealed that each of the 15 service calls was spending 1-2 seconds in an open source web service toolkit parsing and marshaling the XML payload. This is not intended as a disparaging comment about open source web services toolkits; it simply illustrates the point that parsing and marshaling XML at the endpoints can introduce latency that adds up quickly.

As illustrated in Figure 2, each service invoked needs to read the XML payload from its on-the-wire serialization form and parse the XML into a native Java or .NET object to be processed by the business logic. In addition, if database interaction is required, an object-to-relational mapping must also occur. Finally, the inverse of those steps must happen to generate a response to the service request and send it along to the next downstream service, in accordance with the business process that coordinates the interaction between the services.

Figure 2: Service request boundary cost between XML to Object to Relational and back again for each invocation
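
To make that round trip concrete, here is a minimal sketch of what a service endpoint does on every request, assuming a simple JAXB-annotated payload; the Order class and the placeholder business logic are illustrative and not taken from any particular customer system.

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical payload type, reused by the later sketches in this article.
@XmlRootElement
class Order {
    public String id;
    public double amount;
}

// Every invocation pays this XML -> object -> XML round trip, whether or not
// the business logic in the middle is expensive.
public class OrderEndpoint {

    private final JAXBContext context;

    public OrderEndpoint() throws Exception {
        // Building the JAXBContext is itself costly; toolkits normally cache it,
        // but the per-request parse and serialize below are paid on every call.
        context = JAXBContext.newInstance(Order.class);
    }

    public void handle(InputStream request, OutputStream response) throws Exception {
        Unmarshaller in = context.createUnmarshaller();
        Order order = (Order) in.unmarshal(request);   // parse: wire XML -> Java object

        order.amount = order.amount * 1.05;            // stand-in for the real business logic

        Marshaller out = context.createMarshaller();
        out.marshal(order, response);                  // serialize: Java object -> wire XML
    }
}

Multiply that parse-and-serialize pair across 15 service hops and the 1-2 seconds per call described above adds up quickly.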

A popular approach for dealing with XML in a SOA is to use web services and XMLBeans. With XMLBeans, objects are typically created by fully materializing the inputs and outputs, since this allows maximum flexibility in processing. In-memory processing may include sorting, filtering, or aggregation operations, all of which increase the overall memory required to handle each call. This strategy does not scale and cannot be applied to many of the use cases in this area. Many products support streaming of XML, but streaming alone may limit the ability to do anything meaningful without first putting the data somewhere else.

What if there were a way to take this information and store it in an application grid, a place where the size of the data and the processing capability can far exceed those of any single machine or process? The application grid can utilize the combined memory and processing power of multiple machines to complete an operation, such as applying a complex formula or filter across an enormous data set. The application grid also provides the ability to hold the data well beyond the cycle of a single service request, survive server restarts, and even work across network boundaries.

If we could combine the power of the grid for data storage and manipulation with the efficiency of streaming, the result would be a highly scalable system capable of processing much more information than before. Using a combination of complementary technologies here, we achieve our goal of spreading compute operations across a distributed network of machines, and we lessen the processing and memory requirements of our data consumers - SOA services, application servers, and client applications. We also remove the need to use a database for intermediate storage of data while it is (or simply so it can be) processed. By using an application grid we can also implement patterns where we pass around references to data, rather than the data, resulting in huge efficiency gains in the communications layer, and dramatically reducing or eliminating the boundary cost.
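
Here is a minimal sketch of that pass-by-reference pattern, using a plain ConcurrentHashMap as a local stand-in for the grid's distributed map and the hypothetical Order type from the earlier sketch; in a real deployment only the returned key would travel in the service request payload.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Pass-by-reference between services: the payload is parsed and stored once,
// and only a small key travels through the BPEL or ESB process steps.
public class PassByReference {

    // Local stand-in for the grid; a real application grid distributes this
    // map across many machines.
    private final Map<String, Order> grid = new ConcurrentHashMap<String, Order>();

    // Step 1: the first service puts the parsed document (or fragment) into the grid.
    public String publish(Order order) {
        String key = "order:" + UUID.randomUUID();
        grid.put(key, order);
        return key;   // this key, not the XML, goes into the service request payload
    }

    // Step N: any downstream service dereferences the key instead of re-parsing XML.
    public Order resolve(String key) {
        return grid.get(key);
    }
}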

This article includes a code example that covers the use case of processing large XML files in an application grid. In a typical XML file, there are usually elements that repeat without any predetermined limit. Using a StAX parser to handle streaming XML, and JAXB to handle conversion between XML and Java objects, we can extract these repeating elements from the XML stream and put them on the application grid as individual objects. The implementation can populate the grid with these objects with a limited amount of memory consumption. Once populated, the grid can process the data across the multiple machines that constitute the grid. Each grid member processes an operation or a filter and passes intermediate results to the grid client, which then assembles them into a final result set.
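
A minimal sketch of that loading step follows, assuming an input document whose repeating <order> elements map onto the hypothetical Order type used earlier and a Map-style grid interface; the element names and the grid stand-in are illustrative rather than the article's exact listing.

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.InputStream;
import java.util.Map;

// Streams a large document and puts each repeating <order> element on the grid
// as its own small object, so the full document is never materialized in memory.
public class GridLoader {

    public static void load(InputStream xml, Map<String, Order> grid) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(xml);
        Unmarshaller unmarshaller = JAXBContext.newInstance(Order.class).createUnmarshaller();

        int event = reader.next();
        while (reader.hasNext()) {
            if (event == XMLStreamConstants.START_ELEMENT
                    && "order".equals(reader.getLocalName())) {
                // JAXB consumes exactly one <order> subtree from the StAX cursor...
                Order order = unmarshaller.unmarshal(reader, Order.class).getValue();
                grid.put(order.id, order);       // ...and one small object goes to the grid
                event = reader.getEventType();   // the cursor has already advanced past the element
            } else {
                event = reader.next();
            }
        }
        reader.close();
    }
}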

What Is an Application Grid?
An application grid is a horizontally scalable, agent-based, in-memory storage engine for application state data. It effectively provides a distributed shared memory pool that can be scaled linearly across a heterogeneous grid of machines consisting of any combination of high-end and lower-cost commodity hardware. Using an application grid gives an application performance, scalability, and reliability for its in-memory data all at once.

One way that an application utilizes an application grid is through API-level interfaces that mimic the Java HashMap, .NET Dictionary, or JPA interfaces. An alternate approach is to use a service-level interface from a SOA environment. As applications or services place data into the application grid, a group of constantly cooperating caching servers coordinates updates to data objects, as well as their backups, using cluster-wide concurrency control.
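
As a small illustration of that Map-style contract, the sketch below writes a service against the plain java.util.Map interface; the same code can be handed a local HashMap in a unit test or a grid-backed distributed map in production. The ClaimService name and the XML fragment are purely illustrative.

import java.util.HashMap;
import java.util.Map;

// The grid is consumed through the familiar java.util.Map contract, so only
// the Map handed to the constructor changes between a local test and a
// grid-backed production deployment.
public class ClaimService {

    private final Map<String, String> claims;

    public ClaimService(Map<String, String> claims) {   // HashMap locally, grid map in production
        this.claims = claims;
    }

    public void store(String claimId, String xmlFragment) {
        claims.put(claimId, xmlFragment);   // the grid handles primary/backup placement transparently
    }

    public String fetch(String claimId) {
        return claims.get(claimId);
    }

    public static void main(String[] args) {
        ClaimService service = new ClaimService(new HashMap<String, String>());
        service.store("claim:42", "<claim id=\"42\"/>");
        System.out.println(service.fetch("claim:42"));
    }
}

Programming to the Map contract is what makes the swap possible; nothing in the service code needs to know where the grid chose to put the data.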

As shown in Figure 3, the request to put data to the map is taken over by the application grid and transported over a highly efficient network protocol to grid node P, which owns the primary copy of the data. The primary node in turn copies the updated value to secondary node B for backup, and then returns control to the service.

Figure 3: Application grid clustering ensures primary / backup of in-memory data on separate machines.

The application grid stores data across multiple machines, with complete location transparency, as it sees fit. A unique hash key value is all that is necessary to retrieve the stored data at a future point, regardless of where the application grid chose to store it. This keeps the application logic free of complex location dependencies and manual partitioning schemes. If one or more nodes in the grid fail, or can't be reached due to network failure, the application grid immediately reacts to the failure and rebalances the data across the remaining healthy nodes. This can happen even if the failing node had been participating in an autonomous update operation. In Figure 4, the primary owner 'P' of a piece of data fails while in the midst of retrieving data for the service. The get() request is immediately routed to the backup node and a new primary / backup pair is allocated.

Figure 4:  Application grid provides continual failover of in-memory state data

The data stored in the grid can be anything from simple variables to complex objects or even large XML documents. In our case we chose to fragment what would have been very large XML documents into smaller parts and store those XML fragments as Java objects in the application grid. This allows us to run parallel queries against the data using the Java APIs.
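
The sketch below approximates such a parallel query locally, again using the hypothetical Order fragments and plain in-memory collections; in a real application grid the filter would be shipped to the storage nodes, each node would evaluate it against its own slice of the data, and only the matching fragments would come back to the client.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Local approximation of a parallel grid query: each "member" filters its own
// slice of the data and the client assembles the partial results.
public class ParallelQuery {

    public static List<Order> largeOrders(List<List<Order>> slices, final double threshold)
            throws Exception {
        ExecutorService members = Executors.newFixedThreadPool(Math.max(1, slices.size()));
        List<Future<List<Order>>> partials = new ArrayList<Future<List<Order>>>();

        for (final List<Order> slice : slices) {
            partials.add(members.submit(new Callable<List<Order>>() {
                public List<Order> call() {
                    // The "filter" that a real grid would evaluate on the storage node.
                    List<Order> matches = new ArrayList<Order>();
                    for (Order order : slice) {
                        if (order.amount > threshold) {
                            matches.add(order);
                        }
                    }
                    return matches;
                }
            }));
        }

        // The grid client gathers the intermediate results into the final result set.
        List<Order> result = new ArrayList<Order>();
        for (Future<List<Order>> partial : partials) {
            result.addAll(partial.get());
        }
        members.shutdown();
        return result;
    }
}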

The application grid supports a range of operations including parallel processing of queries, events, and transactions. For large datasets, an entire collection of data may be put to the grid as a single operation, and the grid can disperse the contents of the collection across multiple primary and backup nodes in order to scale. In more advanced applications, the grid may even execute business logic directly and in parallel on data storage nodes, and do so with data and logic affinity such that the logic executes on the same machine that is storing the data that the logic is operating on.
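
Here is a small sketch of two of those ideas, bulk loading with a single putAll() and a grid-side aggregation, again against a local Map stand-in; in a real application grid the putAll() contents would be dispersed across primary and backup nodes, and the summing logic would run on the storage nodes that hold the data rather than on the client.

import java.util.Map;

// Bulk load plus an aggregation, approximated against a local Map. On a real
// grid, putAll() spreads the collection across primary and backup nodes in one
// operation, and the summing loop below would execute on the storage nodes,
// next to the data, with only partial sums returned to the client.
public class GridBulkAndAggregate {

    public static double totalAmount(Map<String, Order> grid, Map<String, Order> batch) {
        grid.putAll(batch);   // one call disperses an entire collection of objects

        double total = 0.0;
        for (Order order : grid.values()) {
            total += order.amount;   // in a real grid, computed where the data lives
        }
        return total;
    }
}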

More Stories By Dave Chappell

David Chappell is vice president and chief technologist for SOA at Oracle Corporation, and is driving the vision for Oracle’s SOA on App Grid initiative.

More Stories By Andrew Gregory

Andrew Gregory is currently a Sales Consultant at Oracle Corporation. He has worked in Development, Product Support, Infrastructure, and Sales over his 13 years in the industry.

