In-Memory Computing: In Plain English

Explaining in-memory computing and defining what in-memory computing is really about

After five days (and eleven meetings) with new customers in Europe, Russia, and the Middle East, I think the time is right for another refinement of the definition of in-memory computing. To me it is clear that our industry is lagging when it comes to explaining in-memory computing to potential customers and defining what it is really about. We struggle to come up with a simple, understandable definition of what in-memory computing is, what problems it solves, and what uses are a good fit for the technology.

In-Memory Computing: What Is It?
In-memory computing means using a type of middleware software that allows you to store data in RAM across a cluster of computers and process it in parallel. Think of the operational datasets typically stored in a centralized database, which you can now keep in "connected" RAM across multiple computers. RAM is, roughly, 5,000 times faster than a traditional spinning disk. Add to the mix native support for parallel processing, and things get very fast. Really, really fast.
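
To make that definition concrete, here is a deliberately tiny, single-JVM sketch of the two ingredients it names: data kept in RAM and processed in parallel. It uses only the standard Java library, and the class and field names are mine, purely for illustration; in-memory computing middleware applies the same idea across the combined RAM of many machines rather than one process.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryMini {
    public static void main(String[] args) {
        // "Storage" pillar: the working dataset lives entirely in RAM.
        Map<Long, Double> tradeRisk = new ConcurrentHashMap<>();
        for (long id = 0; id < 1_000_000; id++) {
            tradeRisk.put(id, Math.random());
        }

        // "Processing" pillar: the computation runs in parallel over that in-RAM data.
        double totalExposure = tradeRisk.values()
                .parallelStream()
                .mapToDouble(Double::doubleValue)
                .sum();

        System.out.printf("Aggregated %,d in-memory records, total exposure = %.2f%n",
                tradeRisk.size(), totalExposure);
    }
}
```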

RAM storage and parallel distributed processing are the two fundamental pillars of in-memory computing. In-memory data storage is what everyone expects from the technology; the parallelization and distribution of data processing, which is just as integral to in-memory computing, is the part that calls for an explanation.

The parallel distributed processing capabilities of in-memory computing are... a technical necessity. Consider this: a single modern computer can hardly have enough RAM to hold a significant dataset. In fact, a typical x86 server today (mid-2014) has somewhere between 32GB and 256GB of RAM. Although that can be a significant amount of memory for a single computer, it is not enough to store many of today's operational datasets, which easily measure in terabytes.
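
A quick back-of-the-envelope calculation, using the server sizes quoted above, shows roughly how many machines a terabyte-scale dataset needs. The dataset size, per-node RAM, and headroom factor below are illustrative assumptions, not a sizing recommendation:

```java
public class ClusterSizing {
    public static void main(String[] args) {
        double datasetTb = 3.0;      // assumed operational dataset, in terabytes
        double ramPerNodeGb = 128;   // a mid-range 2014 x86 server (32GB-256GB quoted above)
        double usableFraction = 0.7; // assumed headroom for OS, indexes, and backup copies

        double usablePerNodeGb = ramPerNodeGb * usableFraction;
        int nodes = (int) Math.ceil(datasetTb * 1024 / usablePerNodeGb);

        System.out.printf("~%d nodes to hold %.1f TB in RAM at %.0f GB usable per node%n",
                nodes, datasetTb, usablePerNodeGb);
    }
}
```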

To overcome this problem, in-memory computing software is designed from the ground up to store data in a distributed fashion: the entire dataset is divided across the memory of individual computers, each storing only a portion of the overall dataset. Once the data is partitioned, parallel distributed processing becomes a technical necessity simply because the data is stored this way.
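
The sketch below shows that pattern in miniature, using only standard Java: the dataset is split across several simulated "nodes" by hashing each key, and the computation then runs against every partition in parallel, with only small partial results brought back together. The node count, key type, and hash-based partitioning are my illustrative assumptions, not any particular product's implementation.

```java
import java.util.*;
import java.util.concurrent.*;

public class PartitionedCompute {
    public static void main(String[] args) throws Exception {
        int nodeCount = 4;

        // Each "node" holds only its share of the overall dataset in RAM.
        List<Map<Long, Double>> nodes = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());

        // Partition by key hash: every record lands on exactly one node.
        for (long key = 0; key < 1_000_000; key++) {
            int owner = ((Long.hashCode(key) % nodeCount) + nodeCount) % nodeCount;
            nodes.get(owner).put(key, Math.random());
        }

        // Because the data is partitioned, the work must be too:
        // each node sums its own partition; only the partial sums travel.
        ExecutorService pool = Executors.newFixedThreadPool(nodeCount);
        List<Future<Double>> partials = new ArrayList<>();
        for (Map<Long, Double> partition : nodes) {
            partials.add(pool.submit(() ->
                    partition.values().stream().mapToDouble(Double::doubleValue).sum()));
        }

        double total = 0;
        for (Future<Double> f : partials) total += f.get();
        pool.shutdown();

        System.out.printf("Total over %d partitions = %.2f%n", nodeCount, total);
    }
}
```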

And while this makes the development of in-memory computing software challenging (literally fewer than 10 companies in the world have mastered this type of software development), end users of in-memory computing who seek dramatic performance and scalability increases benefit greatly from the technology.

In-Memory Computing: What Is It Good For?
Let's get this out of the way first: if you want a 2-3x performance or scalability improvement, flash storage (SSD, Flash on PCI-E, Memory Channel Storage, etc.) can do the job. It is relatively cheap and can provide that kind of modest performance boost.

To see, however, what a difference in-memory computing can make, consider this real-life example...

Last year GridGain won an open tender for one of the largest banks in the world. The tender was for a risk analytics system providing real-time analysis of risk for the bank's trading desk (a common use case for in-memory computing in the financial industry). In this tender GridGain software demonstrated one billion (!) business transactions per second on 10 commodity servers with a total of 1TB of RAM. The total cost of those 10 commodity servers? Less than $25K.

Now, read the previous paragraph again: one billion financial transactions per second on $25K worth of hardware. That is the in-memory computing difference - not just 2-3x faster, but more than 100x faster than is theoretically possible even with the most expensive flash-based storage available on today's market (forget about spinning disks). And 1TB of flash-based storage alone would cost 10x the entire hardware setup mentioned.
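
For readers who like to check the arithmetic, the headline numbers from that tender reduce to a couple of divisions. The figures are the ones quoted above; the per-server breakdown simply assumes an even split across the 10 machines:

```java
public class TenderMath {
    public static void main(String[] args) {
        double txPerSecond = 1_000_000_000d; // one billion transactions per second (as quoted)
        int servers = 10;                    // commodity servers in the cluster
        double totalRamTb = 1.0;             // 1TB of RAM across the cluster
        double hardwareCostUsd = 25_000;     // quoted total hardware cost

        System.out.printf("~%,.0f transactions/sec per server%n", txPerSecond / servers);
        System.out.printf("~%.0f GB of RAM per server%n", totalRamTb * 1024 / servers);
        System.out.printf("~$%,.0f of hardware per server%n", hardwareCostUsd / servers);
    }
}
```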

Importantly, that performance translates directly into clear business value:

  • you can use less hardware to meet your required performance and throughput SLAs, achieve better data center consolidation, and significantly reduce capital costs as well as operational and infrastructure overhead, and
  • you can significantly extend the lifetime of your existing hardware and software by getting more performance out of it, improving its ROI by keeping what you already have in service longer and making it go faster.

And that's what makes in-memory computing such a hot topic these days: the demand to process ever-growing datasets in real time can now be met with the extraordinary performance and scale of in-memory computing, with economics so compelling that the business case becomes clear and obvious.

In-Memory Computing: What Are the Best Use Cases?
I can only speak for GridGain here, but our user base is big enough to be statistically significant. GridGain has production customers in a wide variety of industries:

  • Investment banking
  • Insurance claim processing & modeling
  • Real-time ad platforms
  • Real-time sentiment analysis
  • Merchant platform for online games
  • Hyper-local advertising
  • Geospatial/GIS processing
  • Medical imaging processing
  • Natural language processing & cognitive computing
  • Real-time machine learning
  • Complex event processing of streaming sensor data

And we're also seeing our solutions deployed for more mundane use cases, like speeding the response time of a student registration system from 45 seconds to under a half-second.

Looking at this list, it becomes pretty obvious that the best use cases are defined not by a specific industry but by the underlying technical need, i.e., the need for the best possible, uncompromised performance and scalability for a given task.

In many of these real-life deployments in-memory computing was an enabling technology, the technology that made these particular systems possible to consider and ultimately possible to implement.

The bottom line is that in-memory computing is beginning to unleash a wave of innovation that's not built on Big Data per se, but on Big Ideas, ideas that are suddenly attainable. It's blowing up the costly economics of traditional computing that frankly can't keep up with either the growth of information or the scale of demand.

As the Internet expands from connecting people to connecting things, devices like refrigerators, thermostats, light bulbs, jet engines and even heart rate monitors are producing streams of information that will not just inform us, but also protect us, make us healthier and help us live richer lives. We'll begin to enjoy conveniences and experiences that only existed in science fiction novels. The technology to support this transformation exists today - and it's called in-memory computing.

More Stories By Nikita Ivanov

Nikita Ivanov is founder and CEO of GridGain Systems, started in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain to develop advanced and distributed in-memory data processing technologies; GridGain, the top Java in-memory computing platform, is started every 10 seconds around the world today.

Nikita has over 20 years of experience in software application development, building HPC and middleware platforms and contributing to the efforts of other startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996.

He is an active member of the Java middleware community, a contributor to the Java specification, and holds a Master's degree in Electro Mechanics from Baltic State Technical University, Saint Petersburg, Russia.
