In-Memory Data Grids and Cloud Computing

The promise of the cloud is a reduction in total cost of ownership

The use of in-memory data grids (IMDGs) for scaling application performance has rapidly increased in recent years as firms have seen their application workloads explode. This trend runs across nearly every vertical market, touching online applications for financial services, ecommerce, travel, manufacturing, social media, mobile, and more. At the same time, many firms are also looking to leverage cloud computing to meet the challenge of ever-increasing workloads. One of the fundamental promises of the cloud is elastic, transparent, on-demand scalability -- a key capability that has become practical with the use of in-memory data grid technology. As such, IMDGs are becoming a vital factor in the cloud, just as they have been for on-premise applications.

What makes IMDGs such a good fit with cloud computing? The promise of the cloud is a reduction in total cost of ownership. Part of that reduction comes from the ability to quickly provision and use new server capacity (without having to own the hardware). The essential synergy between IMDGs and the cloud derives from their common elasticity. IMDGs can scale out their memory-based storage and performance linearly as servers are added to the grid, and they can gracefully scale back when fewer servers are needed. IMDGs take full advantage of the cloud's ability to easily spin up or remove servers, enabling cloud-hosted applications to be quickly and easily deployed on an elastic pool of cloud servers and to maintain fast data access even as workloads increase. This is an ideal solution for fast-growing companies and for applications whose workloads create widely varying demands (such as ordering flowers online for Mother's Day or buying concert tickets). These companies no longer need to provision space, power, and cooling for new hardware to meet fluctuating workloads. Instead, with a few button clicks, they can start up an IMDG-enabled cloud architecture that transparently meets their performance demands at a cost based solely on usage.
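To make the elasticity concrete, here is a minimal sketch using the open-source Hazelcast IMDG purely as an illustrative stand-in (the map name and sample data are invented for this example). Each server that runs this code joins the grid, and the distributed map's contents are automatically repartitioned across all members, so storage and throughput grow as servers are added and shrink as they are removed:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class GridNode {
        public static void main(String[] args) {
            // Start a grid member; launching this on additional cloud servers
            // makes them join the cluster and transparently rebalances the data.
            HazelcastInstance grid = Hazelcast.newHazelcastInstance();

            // A distributed map whose entries are partitioned across all members.
            IMap<String, String> carts = grid.getMap("shopping-carts");
            carts.put("cart-42", "{\"items\": 3}");

            // Reads are served from memory, whichever member owns the key.
            System.out.println(carts.get("cart-42"));
        }
    }

Note that the application code is identical whether the grid has two members or twenty; scaling becomes a deployment decision rather than a code change.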

Expanding on the promise of the cloud, some in-memory data grids can span both on-premise and cloud environments to provide seamless "cloud bursting" for handling high workloads. Let's say your e-commerce application stores shopping carts in an IMDG to give customers fast response times. To spur sales, your marketing group plans to run a special online sales event. Because traffic is projected to double during this event, additional web servers will be needed to handle the workload. Of course, maintaining fast response times as the workload increases is essential to success. By deploying your web app in the cloud and connecting it to your on-premise server farm with an IMDG, you can seamlessly double your traffic-handling capacity without interrupting current shopping activity on your site. You don't even need to make changes to your application. The combined deployments transparently work together to serve web traffic, and data freely flows between them within the IMDGs at both sites.
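As a rough sketch of what the web tier sees in such a deployment (again using Hazelcast's client API only for illustration; the cluster name and member addresses below are hypothetical), web servers in both the on-premise farm and the cloud connect to the grid the same way, so bursting adds capacity without touching application logic:

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class WebTier {
        public static void main(String[] args) {
            ClientConfig config = new ClientConfig();
            config.setClusterName("carts-grid");  // hypothetical cluster name
            // Seed addresses of on-premise members; cloud members spun up for
            // the sales event join the same cluster and are discovered automatically.
            config.getNetworkConfig().addAddress("10.0.0.11:5701", "10.0.0.12:5701");

            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
            IMap<String, String> carts = client.getMap("shopping-carts");
            carts.set("cart-42", "{\"items\": 4}");  // update a shopping cart
        }
    }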

These synergies form a solid basis for making 2014 a watershed year for IMDGs in the cloud. But there's another big trend that will further drive adoption. As the discussion around "Big Data" analysis heats up, the emerging combination of Big Data and cloud computing - cloud-based analytics - promises to fundamentally change the technology of data mining, machine learning, and many other analytics use cases. In 2014, we expect to see the trend toward in-memory, predictive analytics sharply increase, and cloud computing will be a fundamental enabler of that trend.

IMDGs integrate memory-based data storage and computing to make real-time data analysis easily accessible to users and help extend a company's competitive edge. IMDGs automatically take full advantage of the cloud's elasticity to run analytics in parallel across cloud servers with lightning-fast performance. It is now possible to host a real-time analytics engine in the cloud and provide on-demand analytics to a wide range of users, from SaaS services for mobile devices to business simulations for corporate users. Or, maybe you want to spin up servers with, say, a terabyte of memory, load the grid, run analytics across that data, and then release the resources. In an extreme example, chemistry researchers recently used Amazon Web Services to achieve a "petaflop" of computing power running an analysis of 205,000 molecules in just one week. The elasticity of the cloud again makes the difference by providing the equivalent of a parallel-processing supercomputer at your fingertips, without the huge capital investment (the entire run cost about $33,000).
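As a hedged sketch of how grid-parallel analytics looks in code (Hazelcast's aggregation API again serves as the illustration; the map name and data are invented), the aggregation below executes on every member against its locally stored partitions, and only the small partial results cross the network:

    import com.hazelcast.aggregation.Aggregators;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class GridAnalytics {
        public static void main(String[] args) {
            HazelcastInstance grid = Hazelcast.newHazelcastInstance();

            // Populate a distributed map; entries are spread across all members.
            IMap<String, Double> trades = grid.getMap("trade-values");
            for (int i = 0; i < 10_000; i++) {
                trades.put("trade-" + i, Math.random() * 1_000);
            }

            // Each member averages its local entries in parallel; only the
            // per-member partial results are combined to produce the answer.
            Double avg = trades.aggregate(Aggregators.doubleAvg());
            System.out.println("Average trade value: " + avg);
        }
    }

The same pattern scales from a laptop to a large cloud cluster: more members simply mean more partitions analyzed concurrently.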

To sum up, in 2014 we expect firms to adopt cloud computing and cloud-hosted IMDGs at a rapid rate, and the trends of in-memory computing and data analytics will converge to enable fast adoption of in-memory data grid technology in public, private, and hybrid cloud environments. Enterprises that take advantage of this convergence can expect a quantum leap in the value of their data without breaking their IT budgets.

More Stories By William Bain

Dr. William L. Bain is founder and CEO of ScaleOut Software, Inc. Bill has a Ph.D. in electrical engineering/parallel computing from Rice University, and he has worked at Bell Labs, Intel, and Microsoft. Bill founded and ran three start-up companies prior to joining Microsoft. At the most recent company (Valence Research), he developed a distributed web load-balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system. Dr. Bain holds several patents in computer architecture and distributed computing. As a member of the Seattle-based Alliance of Angels, Dr. Bain is actively involved in entrepreneurship and the angel community.
