In-Memory Computing: In Plain English

Explaining in-memory computing and defining what in-memory computing is really about

After five days (and eleven meetings) with new customers in Europe, Russia, and the Middle East, I think the time is right for another refinement of in-memory computing's definition. To me, it is clear that our industry is lagging when it comes to explaining in-memory computing to potential customers and defining what in-memory computing is really about. We struggle to come up with a simple, understandable definition of what in-memory computing is all about, what problems it solves, and what uses are a good fit for the technology.

In-Memory Computing: What Is It?
In-memory computing means using a type of middleware software that allows one to store data in RAM, across a cluster of computers, and process it in parallel. Consider operational datasets typically stored in a centralized database, which you can now store in "connected" RAM across multiple computers. RAM is, roughly, 5,000 times faster than a traditional spinning disk. Add to the mix native support for parallel processing, and things get very fast. Really, really fast.
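
To make that concrete, here is a minimal sketch of the developer-facing surface such middleware typically exposes. The `ClusterCache` and `ClusterCompute` interfaces below are hypothetical, vendor-neutral stand-ins for illustration only, not GridGain's (or any other product's) actual API:

```java
// Hypothetical, vendor-neutral view of an in-memory data grid from application code.
// These interfaces are illustrative stand-ins, not a specific product's API.
import java.util.Collection;
import java.util.concurrent.Callable;

interface ClusterCache<K, V> {
    void put(K key, V value); // value lives in RAM on whichever node owns the key
    V get(K key);             // transparently fetched from the owning node
}

interface ClusterCompute {
    // Runs the task on every node in the cluster, each against the data held
    // in that node's local RAM, and collects one partial result per node.
    <R> Collection<R> broadcast(Callable<R> task);
}
```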

RAM storage and parallel distributed processing are two fundamental pillars of in-memory computing. While in-memory data storage is expected of in-memory technology, the parallelization and distribution of data processing, which is an integral part of in-memory computing, calls for an explanation.

Parallel distributed processing capabilities of in-memory computing are... a technical necessity. Consider this: a single modern computer can hardly have enough RAM to hold a significant dataset. In fact, a typical x86 server today (mid-2014) has somewhere between 32GB and 256GB of RAM. Although this can be a significant amount of memory for a single computer, it's not enough to store many of today's operational datasets, which easily measure in terabytes.

To overcome this problem, in-memory computing software is designed from the ground up to store data in a distributed fashion: the entire dataset is divided across the individual computers' memory, with each computer storing only a portion of the overall dataset. Once the data is partitioned this way, parallel distributed processing becomes a technical necessity simply because that is how the data is stored.
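
As a toy illustration of that partitioning, here is a single-JVM sketch in which each "node" is just an in-memory map holding the slice of the dataset that hashes to it. The `PartitionedStore` class is hypothetical; real data grids add replication, rebalancing, and smarter affinity functions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of partitioned in-memory storage: each "node" holds only
// the portion of the overall dataset that hashes to it.
public class PartitionedStore<K, V> {
    private final List<Map<K, V>> nodes = new ArrayList<>();

    public PartitionedStore(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) {
            nodes.add(new HashMap<>());
        }
    }

    private int nodeFor(K key) {
        // Deterministic mapping from a key to the node that owns it.
        return Math.floorMod(key.hashCode(), nodes.size());
    }

    public void put(K key, V value) {
        nodes.get(nodeFor(key)).put(key, value);
    }

    public V get(K key) {
        return nodes.get(nodeFor(key)).get(key);
    }
}
```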

And while this makes the development of in-memory computing software challenging (literally fewer than 10 companies in the world have mastered this type of software development), end users of in-memory computing seeking dramatic performance and scalability increases benefit greatly from the technology.
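
Once the data is spread out this way, the computation has to follow it: each node aggregates its local partition, and only the small partial results travel over the network. Below is a single-JVM sketch of that map/reduce-style pattern, with threads standing in for cluster nodes; the `ParallelAggregation` class is illustrative, not a product API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of "send the computation to the data": every partition is aggregated
// in parallel where it sits, then the per-node partial results are merged.
public class ParallelAggregation {
    static long total(List<Map<String, Long>> partitions)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        try {
            List<Future<Long>> partials = new ArrayList<>();
            for (Map<String, Long> partition : partitions) {
                // "Map" phase: each node scans only its local slice of the dataset.
                partials.add(pool.submit(
                        () -> partition.values().stream().mapToLong(Long::longValue).sum()));
            }
            long sum = 0;
            for (Future<Long> partial : partials) {
                sum += partial.get(); // "Reduce" phase: merge the per-node results.
            }
            return sum;
        } finally {
            pool.shutdown();
        }
    }
}
```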

In-Memory Computing: What Is It Good For?
Let's get this out of the way first: if one wants a 2-3x performance or scalability improvement, flash storage (SSD, Flash on PCI-E, Memory Channel Storage, etc.) can do the job. It is relatively cheap and can provide that kind of modest performance boost.

To see, however, what a difference in-memory computing can make, consider this real-life example...

Last year GridGain won an open tender for one of the largest banks in the world. The tender was for a risk analytics system to provide real-time analysis of risk for the bank's trading desk (a common use case for in-memory computing in the financial industry). In this tender GridGain software demonstrated one billion (!) business transactions per second on 10 commodity servers with a total of 1TB of RAM. The total cost of these 10 commodity servers? Less than $25K.

Now, read the previous paragraph again: one billion financial transactions per second on $25K worth of hardware. That is the in-memory computing difference - not just 2-3x faster, but more than 100x faster than is theoretically possible even with the most expensive flash-based storage available on today's market (forget about spinning disks). And 1TB of flash-based storage alone would cost 10x the entire hardware setup mentioned above.

Importantly, that performance translates directly into clear business value:

  • you can use less hardware to meet the required performance and throughput SLAs, achieve better data center consolidation, and significantly reduce capital costs as well as operational and infrastructure overhead, and
  • you can significantly extend the lifetime of your existing hardware and software, improving its ROI by making what you already have go faster and last longer.

And that's what makes in-memory computing such a hot topic these days: the demand to process ever-growing datasets in real time can now be met with the extraordinary performance and scale of in-memory computing, with economics so compelling that the business case becomes clear and obvious.

In-Memory Computing: What Are the Best Use Cases?
I can only speak for GridGain here, but our user base is big enough to be statistically significant. GridGain has production customers in a wide variety of industries:

  • Investment banking
  • Insurance claim processing & modeling
  • Real-time ad platforms
  • Real-time sentiment analysis
  • Merchant platform for online games
  • Hyper-local advertising
  • Geospatial/GIS processing
  • Medical imaging processing
  • Natural language processing & cognitive computing
  • Real-time machine learning
  • Complex event processing of streaming sensor data

And we're also seeing our solutions deployed for more mundane use cases, like cutting the response time of a student registration system from 45 seconds to under half a second.

Looking at this list, it becomes pretty obvious that the best use cases are defined not by a specific industry but by the underlying technical need, i.e., the need for the best possible, uncompromised performance and scalability for a given task.

In many of these real-life deployments, in-memory computing was an enabling technology - the technology that made these particular systems possible to consider and ultimately possible to implement.

The bottom line is that in-memory computing is beginning to unleash a wave of innovation that's not built on Big Data per se, but on Big Ideas, ideas that are suddenly attainable. It's blowing up the costly economics of traditional computing that frankly can't keep up with either the growth of information or the scale of demand.

As the Internet expands from connecting people to connecting things, devices like refrigerators, thermostats, light bulbs, jet engines and even heart rate monitors are producing streams of information that will not just inform us, but also protect us, make us healthier and help us live richer lives. We'll begin to enjoy conveniences and experiences that only existed in science fiction novels. The technology to support this transformation exists today - and it's called in-memory computing.

More Stories By Nikita Ivanov

Nikita Ivanov is founder and CEO of GridGain Systems, started in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain in developing advanced distributed in-memory data processing technologies - the top Java in-memory computing platform, starting up every 10 seconds around the world today. Nikita has over 20 years of experience in software application development, building HPC and middleware platforms, and contributing to the efforts of other startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996. He is an active member of the Java middleware community, a contributor to the Java specification, and holds a Master's degree in Electro Mechanics from Baltic State Technical University, Saint Petersburg, Russia.
