Why an Application Grid?

Doing more with less infrastructure

Application servers, those dependable workhorses that run most enterprise Java applications, are rarely a hot topic of conversation these days. As a technology category, the application server appears fairly "established," and the focus seems to have moved elsewhere in the stack, but appearances can be deceiving.

In fact, much remains to be done at the application server layer. One area ripe for innovation is the ability for application server instances to work together to enable more rapid deployment of new applications and hardware while at the same time improving the utilization of underlying physical resources. In contrast to the traditional one-app/one-app-server/one-OS/one-machine architecture, a new approach has emerged with multiple application servers pooling and sharing lower-cost compute resources, while dynamically reallocating these resources across applications as needs evolve.

Grid computing refers to the aggregation of multiple, distributed computing resources, making them function as a single computing resource with respect to a particular computational task. Grid is a form of virtualization in the sense that it hides the details of the underlying resources and presents them as something different. Application grid applies the same concept to application servers: it describes an architecture in which multiple application server instances work together to provide a shared, dynamically allocatable pool of resources to a set of applications.

Why an Application Grid?
Before delving into what it takes to make this concept work, let's look at the motivation for seeking an alternative approach in the first place. What is the primary infrastructure challenge as it stands today? An issue widely discussed in the pages of Java Developer's Journal is that of stove-piped architecture, whereby applications are structured as monolithic entities that are difficult to integrate and reuse. Industry adoption of SOA has gone a long way toward breaking down stovepipes at the application level. SOA achieves this by decomposing applications into finer-grained services that can be connected and reused in a more flexible way. Yet stove-piped resources typically remain underneath each of these SOA services - machines that are statically allocated to the entities they run. Because each stovepipe (stack) is statically configured, bringing new stacks online takes a lot of effort and a big investment in hardware that will likely be underutilized.

With an application grid, the allocation of machines to applications is dynamic, so it becomes easier to bring both new machines and new applications into service. With a stovepipe under an application, increasing capacity typically means adding another app server/OS/machine stack and then putting a mechanism in place to load-balance. This is inefficient because scaling isn't linear: doubling the number of servers doesn't double the number of transactions per second or concurrent users, because other bottlenecks come into play. By contrast, new application grid-enabled application servers support clustering that scales to much higher levels.
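
As a back-of-the-envelope illustration of why scaling is sub-linear (our addition; the article itself names no particular model), Amdahl's Law says that if a fraction s of each unit of work is serialized - a shared database, say, or coordination overhead - then the speedup from N servers is bounded:

```latex
\text{Speedup}(N) = \frac{1}{s + \frac{1-s}{N}} \le \frac{1}{s}
```

With even 5% serialized work, going from 8 to 16 servers yields only about a 1.5x gain, and no number of servers can push throughput past 20x.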

An application grid also helps improve hardware efficiency because excess capacity can be redirected to applications that need it most. By sharing and pooling resources, an application grid allows the total compute resources required to be less than the sum of all the applications' peak demands. Since few applications hit their peak loads at the same time in most environments, shared resources can be moved from lower-demand applications to those with higher demand. Continuous, automated, dynamic adjustment of resources is one of the primary capabilities of the application grid architecture.
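
To make the pooling arithmetic concrete, here is a toy Java sketch; the hourly demand numbers are invented for illustration, and the class is ours, not part of any product:

```java
import java.util.stream.IntStream;

/** Toy comparison of stovepiped vs. pooled capacity (illustrative numbers only). */
public class PoolingExample {
    public static void main(String[] args) {
        // Hourly demand profiles (arbitrary units) for three hypothetical apps
        // whose peaks fall at different times of day.
        int[][] demand = {
            {2, 3, 9, 4, 2, 1},   // app A peaks mid-morning
            {1, 1, 2, 3, 8, 2},   // app B peaks in the afternoon
            {7, 2, 1, 1, 2, 3},   // app C peaks overnight
        };

        // Stovepiped: each app is provisioned for its own peak.
        int stovepiped = 0;
        for (int[] app : demand) {
            stovepiped += IntStream.of(app).max().getAsInt();
        }

        // Pooled: the grid is provisioned for the peak of the *combined* load.
        int hours = demand[0].length;
        int pooled = IntStream.range(0, hours)
                .map(h -> demand[0][h] + demand[1][h] + demand[2][h])
                .max().getAsInt();

        System.out.println("Stovepiped capacity: " + stovepiped); // 9 + 8 + 7 = 24
        System.out.println("Pooled capacity:     " + pooled);     // max hourly sum = 12
    }
}
```

Because the three peaks don't coincide, the shared pool needs half the capacity of the stovepipes.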

Finally, an application grid enables a higher quality of service. By parallelizing computation, replicating data across distributed nodes, and reducing interruptions from network problems or Java garbage collection, an application grid allows more computation per unit of time and delivers faster response times; by eliminating single points of failure and automating failover, it improves resiliency. An application grid also provides tools to manage a collection of machines in an aggregated way, enabling faster administrative response and reducing human error.

Creating an Application Grid
Sounds great, but can this be achieved with current technologies? There is certainly more work for vendors in future product releases, but much can be done today. There are four fundamental capabilities that must be in place at the application server level: clustering, adjusting, metering, and automating.

Clustering is supported by most application servers, though with varying levels of reliability and ease of administration. It is most often used for availability and failover: instances in a cluster divide work and replicate data, such as Web user sessions, and each instance has another member of the cluster that serves as its backup. The backup server automatically takes over the primary's responsibilities in the event of a failure. Clustering also allows horizontal scale-out, since work is distributed (load-balanced) across the cluster.
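
As a rough sketch of the primary/backup pattern just described - the class, the ring-style backup assignment, and the method names are ours; real application servers handle replication transparently:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal illustration of primary/backup session clustering. */
class SessionCluster {
    private final List<String> nodes = new ArrayList<>();
    // sessionId -> primary node; the backup is the next node in the ring.
    private final Map<String, String> primaryOf = new HashMap<>();

    void join(String node) { nodes.add(node); }

    String backupFor(String node) {
        int i = nodes.indexOf(node);
        return nodes.get((i + 1) % nodes.size()); // ring-style backup assignment
    }

    void placeSession(String sessionId) {
        // Naive placement: hash the session onto a node.
        String primary = nodes.get(Math.floorMod(sessionId.hashCode(), nodes.size()));
        primaryOf.put(sessionId, primary);
        // A real server would now replicate session state to backupFor(primary).
    }

    void failover(String deadNode) {
        // Sessions whose primary died are taken over by that node's backup.
        String backup = backupFor(deadNode);
        primaryOf.replaceAll((id, node) -> node.equals(deadNode) ? backup : node);
        nodes.remove(deadNode);
    }
}
```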

Adjustment capability coupled with scale-out clustering is a key element of the application grid. It's one thing to statically configure a set of application server instances (nodes) as a cluster and put load balancing in front of it. But when nodes can be added to or removed from the cluster while the application is running, we have the basis for dynamic scaling.
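
A minimal sketch of that adjustment capability, assuming a hypothetical cluster type whose membership can change safely while requests are still being routed:

```java
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

/** A cluster whose node set can grow or shrink under live traffic. */
class ElasticCluster {
    private final CopyOnWriteArrayList<String> nodes = new CopyOnWriteArrayList<>();
    private final AtomicLong counter = new AtomicLong();

    void addNode(String node)    { nodes.add(node); }    // scale out at runtime
    void removeNode(String node) { nodes.remove(node); } // scale in at runtime

    /** Round-robin over whatever nodes are live right now. */
    String route() {
        Object[] snapshot = nodes.toArray(); // consistent view of current membership
        if (snapshot.length == 0) throw new IllegalStateException("no nodes available");
        return (String) snapshot[(int) (counter.getAndIncrement() % snapshot.length)];
    }
}
```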

Metering, or instrumentation, complements adjustment. To adjust clusters intelligently, we need visibility into what's happening inside them. Are any computing resources near critical thresholds? Are application service levels in danger? In short, the application server, the Java Virtual Machine, and other resources must provide the right kind of information about things like memory use and latency.
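
The JVM already exposes some of this instrumentation through standard JMX management beans. Here is a small sketch of reading heap utilization and timing a unit of work (the workload itself is a placeholder):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/** Reading the kinds of signals a grid controller would watch. */
public class Metering {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // Note: getMax() can return -1 if the maximum is undefined; we assume it is set.
        double heapUtilization = (double) heap.getUsed() / heap.getMax();
        System.out.printf("Heap utilization: %.1f%%%n", heapUtilization * 100);

        // Latency is typically measured around a unit of work:
        long start = System.nanoTime();
        doWork(); // stand-in for servicing a request
        long latencyMicros = (System.nanoTime() - start) / 1_000;
        System.out.println("Request latency: " + latencyMicros + " us");
    }

    private static void doWork() { /* placeholder workload */ }
}
```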

Once we have dynamically adjustable clustering with good instrumentation, the linchpin of the application grid is automation. This meta-level controller plugs into the adjustment controls and metering instruments of the clusters, creating an automated feedback loop of observations and adjustments. The mechanism adds nodes to clusters in need of capacity and removes nodes from clusters with reduced need. Whereas each individual cluster is ignorant of the surrounding clusters competing for resources, the application grid controller makes allocation decisions that are optimal for the grid overall, taking into account demands, resources, and policies.
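
A skeleton of such a controller might look like the following; the interface, the thresholds, and the one-node-at-a-time rebalancing policy are all invented for illustration:

```java
import java.util.Comparator;
import java.util.List;

/** One pass of the grid-level feedback loop: observe, compare, rebalance. */
class GridController implements Runnable {
    /** What each cluster must expose: metering plus adjustment hooks. */
    interface ManagedCluster {
        double utilization();      // from the metering layer
        void addNode(String node); // from the adjustment layer
        String releaseNode();      // a node the cluster can spare, or null
    }

    private final List<ManagedCluster> clusters;
    private static final double HIGH = 0.80, LOW = 0.30; // example policy thresholds

    GridController(List<ManagedCluster> clusters) { this.clusters = clusters; }

    @Override
    public void run() {
        ManagedCluster hottest = clusters.stream()
                .max(Comparator.comparingDouble(ManagedCluster::utilization)).get();
        ManagedCluster coolest = clusters.stream()
                .min(Comparator.comparingDouble(ManagedCluster::utilization)).get();

        // Rebalance only when policy says both ends justify the move.
        if (hottest.utilization() > HIGH && coolest.utilization() < LOW) {
            String node = coolest.releaseNode();
            if (node != null) hottest.addNode(node);
        }
        // In practice this would run on a schedule, e.g. a ScheduledExecutorService.
    }
}
```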

Getting Started
Many enterprises have already started down the path to an application grid by using the clustering mechanisms in contemporary application servers for horizontal scale-out and by using scripting to partially automate the addition and removal of nodes.

State-of-the-art distributed caching technologies complement these early steps by creating an even more dynamic in-memory data grid with extreme scalability. Real-time JVM technology provides the predictability and additional instrumentation needed by applications with microsecond latency demands. And finally, as understanding and practices around application grids mature, management technologies with increasingly sophisticated mechanisms for cross-grid optimization will continue to evolve.

The combination of accelerating business change and the agility enabled by SOA imposes increasingly volatile demands on infrastructure. At the same time, the economic climate is driving the need for greater resource efficiency. It's time for a new approach to application resourcing: application grid.

More Stories By Adam Messinger

Adam Messinger is Vice President of Development in the Fusion Middleware group at Oracle. He is responsible for managing the Oracle Coherence, Oracle JRockit, Oracle WebLogic Operations Control, and other web tier products. Prior to joining Oracle, he worked as a venture capitalist at Smartforest Ventures and O'Reilly AlphaTech Ventures. Adam is a graduate of the Stanford Graduate School of Business where he was a Sloan Fellow and of Willamette University where he was a G. Herbert Smith Scholar.

More Stories By Mike Piech

Mike Piech is a senior director at Oracle with responsibility for Oracle Fusion Middleware products: Oracle WebLogic, Oracle Coherence, Oracle JRockit, and Oracle Tuxedo. He joined Oracle as part of the BEA acquisition, prior to which he spent seven years running product marketing at Dorado Corporation, which builds a WebLogic-based cloud solution for mortgage banking.


Most Recent Comments
jabailo 05/14/09 01:39:00 PM EDT

I think application server developers should switch from trying to build something that has everything but the kitchen sink to building moderately functional, very robust, and specifically useful servers.

The application grid is a great idea. Instead of repeating clusters of overloaded database and application servers, build a heterogeneous network of servers that offer a blend of specific functionality and some degree of programmability inside the box.
