Why an Application Grid?

Doing more with less infrastructure

Application servers, those dependable workhorses that run most enterprise Java applications, are rarely a hot topic of conversation these days. As a technology category, the application server appears to be fairly "established," and the focus seems to have moved elsewhere in the stack. But appearances can be deceiving.

In fact, much remains to be done at the application server layer. One area ripe for innovation is the ability for application server instances to work together to enable more rapid deployment of new applications and hardware while at the same time improving the utilization of underlying physical resources. In contrast to the traditional one-app/one-app-server/one-OS/one-machine architecture, a new approach has emerged with multiple application servers pooling and sharing lower-cost compute resources, while dynamically reallocating these resources across applications as needs evolve.

Grid computing refers to the aggregation of multiple, distributed computing resources, making them function as a single computing resource with respect to a particular computational task. Grid is a form of virtualization in the sense that it hides the details of the underlying resources and presents them as something other than what they physically are. Application grid applies the same concept to application servers and describes an architecture in which multiple application server instances work together to provide a shared, dynamically allocatable pool of resources to a set of applications.

Why an Application Grid?
Before delving into what it takes to make this concept work, let's look at the motivation for seeking an alternative approach in the first place. What is the primary infrastructure challenge as it stands today? An issue widely discussed in the pages of Java Developer's Journal is that of stove-piped architectures, whereby applications are structured as monolithic entities that are difficult to integrate and reuse. Industry adoption of SOA has gone a long way toward breaking down stovepipes at the application level; SOA achieves this by decomposing applications into finer-grained services that can be connected and reused in a more flexible way. However, stove-piped resources typically remain underneath each of these SOA services: machines that are statically allocated to the entities they run. Because each stovepipe (stack) is statically configured, bringing new stacks online takes a lot of effort and a big investment in hardware that will likely be underutilized.

With an application grid, the allocation of machines to applications is dynamic, and it becomes easier to bring both new machines and new applications into service. With a stovepipe under an application, increasing capacity typically means adding another app server/OS/machine stack and then putting a mechanism in place to load-balance across the stacks. This is inefficient because you don't get linear scaling - doubling the number of servers doesn't double the number of transactions per second or concurrent users - as other bottlenecks come into play. By contrast, application grid-enabled application servers support clustering that scales to much higher levels.

An application grid also helps improve hardware efficiency because excess capacity can be redirected to applications that need it most. By sharing and pooling resources, an application grid allows the total compute resources required to be less than the sum of all the applications' peak demands. Since few applications hit their peak loads at the same time in most environments, shared resources can be moved from lower-demand applications to those with higher demand. Continuous, automated, dynamic adjustment of resources is one of the primary capabilities of the application grid architecture.
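
To make the arithmetic concrete, here is a toy calculation (the numbers are purely illustrative and not drawn from any benchmark) comparing capacity sized silo-by-silo for each application's peak against capacity sized for the combined peak of a shared pool:

```java
// Hypothetical numbers purely for illustration: three applications whose peaks
// do not coincide. Sizing each silo for its own peak needs far more hardware
// than sizing a shared pool for the busiest combined period.
public class PoolingExample {
    public static void main(String[] args) {
        // peak demand (in machines) of each application across three periods of the day
        int[][] demand = {
            {8, 2, 2},   // app A peaks in the morning
            {2, 8, 2},   // app B peaks at midday
            {2, 2, 8},   // app C peaks in the evening
        };

        int siloed = 0;                      // each app sized for its own peak
        for (int[] app : demand) {
            int peak = 0;
            for (int d : app) peak = Math.max(peak, d);
            siloed += peak;
        }

        int pooled = 0;                      // shared pool sized for the busiest period overall
        for (int period = 0; period < 3; period++) {
            int total = 0;
            for (int[] app : demand) total += app[period];
            pooled = Math.max(pooled, total);
        }

        System.out.println("Siloed capacity: " + siloed + " machines");  // prints 24
        System.out.println("Pooled capacity: " + pooled + " machines");  // prints 12
    }
}
```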

Finally, an application grid enables a higher quality of service. Faster response times and more computation per unit of time come from the application grid's ability to parallelize computation, replicate data across distributed nodes, and reduce interruptions from network problems or Java garbage collection; higher reliability comes from eliminating single points of failure and automating failover. An application grid also provides tools to manage a collection of machines in an aggregated way, enabling faster administrative response and reducing human error.

Creating an Application Grid
Sounds great, but can this be achieved with current technologies? There is certainly more work for vendors in future product releases, but much can be done today. There are four fundamental capabilities that must be in place at the application server level: clustering, adjusting, metering, and automating.

Clustering is supported by most application servers, though with varying levels of reliability and administration. It is most often used for availability and failover: instances in a cluster divide work and replicate data, such as Web user sessions, and each instance is paired with another member of the cluster that serves as its backup. The backup server automatically takes over responsibilities in the event of the primary's failure. Clustering also allows horizontal scale-out, since work is distributed (load-balanced) across the cluster.
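
As a rough sketch of the pattern (purely illustrative classes, not the clustering API of any particular application server), each node replicates session state to a designated backup that can serve it after a failure:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of primary/backup session replication within a cluster.
// Real application servers replicate over the network with failure detection;
// here both "nodes" live in one JVM purely to show the shape of the idea.
public class ClusterNode {

    private final String name;
    private final Map<String, String> sessions = new ConcurrentHashMap<>();
    private ClusterNode backup;               // the cluster member that backs this node up
    private volatile boolean alive = true;

    public ClusterNode(String name) { this.name = name; }

    public void setBackup(ClusterNode backup) { this.backup = backup; }

    public boolean isAlive() { return alive; }

    public void fail() { alive = false; }     // simulate a crash

    /** Stores a session locally and replicates it to the backup node. */
    public void putSession(String sessionId, String state) {
        sessions.put(sessionId, state);
        if (backup != null) {
            backup.sessions.put(sessionId, state);    // replication
        }
    }

    public String getSession(String sessionId) {
        return sessions.get(sessionId);
    }

    public static void main(String[] args) {
        ClusterNode primary = new ClusterNode("node1");
        ClusterNode standby = new ClusterNode("node2");
        primary.setBackup(standby);

        primary.putSession("user-42", "cart=3 items");   // state is copied to node2
        primary.fail();

        // Failover: requests are redirected to the backup, which already holds the state.
        ClusterNode target = primary.isAlive() ? primary : standby;
        System.out.println(target.name + " serves session: " + target.getSession("user-42"));
    }
}
```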

Adjustment capability coupled with scale-out clustering is a key element of application grids. It's one thing to statically set up a set of application server instances (nodes) as a cluster and put load balancing in front of it. But when nodes can be added to or removed from the cluster while the application is running, we have the basis for dynamic scaling.
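
A minimal sketch of what that implies in code (again with illustrative classes only) is a cluster whose membership list can change safely while requests are still being routed:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a cluster whose membership can change while requests
// are still being dispatched, which is the precondition for dynamic scaling.
public class ElasticCluster {

    private final List<String> nodes = new CopyOnWriteArrayList<>();
    private final AtomicLong counter = new AtomicLong();

    /** Brings a new node into the cluster without stopping the application. */
    public void addNode(String nodeUrl) { nodes.add(nodeUrl); }

    /** Drains a node out of the cluster, freeing its machine for other work. */
    public void removeNode(String nodeUrl) { nodes.remove(nodeUrl); }

    /** Routes a request across whatever nodes are members right now. */
    public String route() {
        Object[] snapshot = nodes.toArray();              // stable view of current membership
        int index = (int) (counter.getAndIncrement() % snapshot.length);
        return (String) snapshot[index];
    }

    public static void main(String[] args) {
        ElasticCluster cluster = new ElasticCluster();
        cluster.addNode("http://app1:7001");
        System.out.println(cluster.route());

        cluster.addNode("http://app2:7001");              // capacity added at runtime
        System.out.println(cluster.route());

        cluster.removeNode("http://app1:7001");           // capacity handed back to the pool
        System.out.println(cluster.route());
    }
}
```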

Metering, or instrumentation, complements adjustment. To adjust clusters intelligently, we need visibility into what's happening inside them. Are any computing resources near critical thresholds? Are application service levels in danger? In short, the application server, the Java Virtual Machine, and other resources must provide the right kind of information about things like memory use and latency.
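
As one small, concrete example, the standard java.lang.management API already exposes JVM-level measurements such as heap usage and system load; an application grid would gather readings like these from every node and combine them with application server and service-level metrics:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

// Minimal sketch: reading JVM-level metrics through the standard
// java.lang.management API. The 80% threshold below is an arbitrary
// illustration, not a recommended setting.
public class NodeMetrics {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        double heapUtilization = (double) heap.getUsed() / heap.getMax();

        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        double loadAverage = os.getSystemLoadAverage();   // -1.0 if unavailable on this platform

        System.out.printf("heap used: %d of %d bytes (%.0f%%)%n",
                heap.getUsed(), heap.getMax(), heapUtilization * 100);
        System.out.println("system load average: " + loadAverage);

        // A grid controller might treat sustained readings above a threshold
        // as a signal that this node's cluster needs more capacity.
        if (heapUtilization > 0.80) {
            System.out.println("WARNING: approaching critical memory threshold");
        }
    }
}
```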

Once we have dynamically adjustable clustering with good instrumentation, the linchpin of the application grid is automation. This meta-level controller plugs into the adjustment controls and metering instruments of the clusters, creating an automated feedback loop of observations and adjustments. The mechanism adds nodes to clusters in need of capacity and removes nodes from clusters with reduced need. Because each cluster is unaware of the other clusters competing for the same resources, it falls to the application grid controller to make allocation decisions that are optimal for the grid overall, taking into account demands, resources, and policies.
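
A bare-bones sketch of that feedback loop, written against hypothetical Cluster and MachinePool interfaces rather than any vendor's actual API, might look like this:

```java
import java.util.List;

// Hypothetical sketch of the grid controller's observe/decide/adjust loop.
// Cluster and MachinePool are illustrative interfaces, not a real product API;
// a production controller would also enforce minimum cluster sizes, damp
// oscillation, and honor per-application policies.
public class GridController implements Runnable {

    interface Cluster {
        double utilization();              // 0.0 .. 1.0, supplied by the metering layer
        void addNode(String machine);      // the "adjusting" capability
        String removeNode();               // returns the machine that was freed
    }

    interface MachinePool {
        String acquire();                  // null if no spare machine is available
        void release(String machine);
    }

    private static final double HIGH_WATER = 0.80;
    private static final double LOW_WATER  = 0.30;

    private final List<Cluster> clusters;
    private final MachinePool pool;

    public GridController(List<Cluster> clusters, MachinePool pool) {
        this.clusters = clusters;
        this.pool = pool;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            for (Cluster cluster : clusters) {
                double load = cluster.utilization();
                if (load > HIGH_WATER) {
                    String machine = pool.acquire();
                    if (machine != null) {
                        cluster.addNode(machine);           // grow the busy cluster
                    }
                } else if (load < LOW_WATER) {
                    pool.release(cluster.removeNode());     // shrink the idle cluster
                }
            }
            try {
                Thread.sleep(10_000);                       // observation interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```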

Getting Started
Many enterprises have already started down the path to an application grid by using the clustering mechanisms in contemporary application servers for horizontal scale-out and by using scripting to partially automate the addition and removal of nodes.

State-of-the-art distributed caching technologies complement these early steps by creating an even more dynamic in-memory data grid with extreme scalability. Real-time JVM technology provides the predictability and additional instrumentation needed by applications with microsecond latency demands. And finally, as understanding and practice around the application grid mature, management technologies with increasingly sophisticated mechanisms for cross-grid optimization will continue to evolve.
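
The core idea behind such an in-memory data grid can be suggested with a toy sketch (hypothetical classes, no product API): keys are hashed to partitions so that data, and therefore load, spreads across nodes as more are added:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the core idea behind an in-memory data grid:
// data is partitioned across nodes by hashing the key, so capacity and
// throughput grow as nodes are added. Real products layer replication,
// rebalancing, and near caches on top of this.
public class PartitionedCache {

    private final List<Map<String, String>> partitions = new ArrayList<>();

    public PartitionedCache(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) {
            partitions.add(new ConcurrentHashMap<>());    // stands in for one node's storage
        }
    }

    private Map<String, String> ownerOf(String key) {
        return partitions.get(Math.floorMod(key.hashCode(), partitions.size()));
    }

    public void put(String key, String value) { ownerOf(key).put(key, value); }

    public String get(String key) { return ownerOf(key).get(key); }

    public static void main(String[] args) {
        PartitionedCache cache = new PartitionedCache(4);
        cache.put("order:1001", "status=SHIPPED");
        System.out.println(cache.get("order:1001"));      // read from the owning partition
    }
}
```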

The combination of accelerating business change and the agility enabled by SOA imposes increasingly volatile demands on infrastructure. At the same time, the economic climate is driving the need for greater resource efficiency. It's time for a new approach to application resourcing: application grid.

More Stories By Adam Messinger

Adam Messinger is Vice President of Development in the Fusion Middleware group at Oracle. He is responsible for managing the Oracle Coherence, Oracle JRockit, Oracle WebLogic Operations Control, and other web tier products. Prior to joining Oracle, he worked as a venture capitalist at Smartforest Ventures and O'Reilly AlphaTech Ventures. Adam is a graduate of the Stanford Graduate School of Business where he was a Sloan Fellow and of Willamette University where he was a G. Herbert Smith Scholar.

More Stories By Mike Piech

Mike Piech is senior director at Oracle with responsibility for Oracle Fusion Middleware products: Oracle WebLogic, Oracle Coherence, Oracle JRockit, and Oracle Tuxedo. He joined Oracle as part of the BEA acquisition, prior to which he spent seven years running product marketing at Dorado Corporation, which builds a WebLogic-based cloud solution for mortgage banking.
