The In-Memory Technologies Behind Business Intelligence Software

Understanding the in-memory technologies that are used in Business Intelligence software

If you follow trends in the business intelligence (BI) space, you'll notice that many analysts, independent bloggers and BI vendors talk about in-memory technology.

There are technical differences that separate one in-memory technology from another, some of which are listed on Boris Evelson's blog.

Some of the items on Boris' list are just as applicable to BI technologies that are not in-memory ('Incremental updates', for example), but there is one item that merits much deeper discussion. Boris calls this characteristic 'Memory Swapping' and describes it as "what the (BI) vendor's approach is for handling models that are larger than what fits into a single memory space."

Understanding Memory Swapping
The fundamental idea of in-memory BI technology is the ability to perform real-time calculations without having to perform slow disk operations during the execution of a query. For more details on this, visit my article describing how in-memory technology works.

Obviously, in order to perform calculations on data completely in memory, all the relevant data must reside in memory, i.e., in the computer's RAM. So the questions are: 1) how does the data get there? and 2) how long does it stay there?

These are probably the most important aspects of in-memory technology, as they have great implications on the BI solution as a whole.

Pure In-Memory Technology
Pure in-memory technologies are those that load the entire data model into RAM before users can execute a single query. An example of a BI product that utilizes such a technology is QlikView.

QlikView's technology is described as "associative technology." That is a fancy way of saying that QlikView uses a simple tabular data model which is stored entirely in memory. For QlikView, as for any other pure in-memory technology, compression is very important: compressing the data well makes it possible to hold more data inside a fixed amount of RAM.
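
To make the compression point concrete, here is a minimal Python sketch of dictionary encoding, one common way to pack a repetitive column into far less RAM than storing every value in full. The column and sizes are invented for illustration; this is not QlikView's actual implementation:

    # Dictionary encoding: keep each distinct value once and store compact
    # integer codes per row, so a repetitive column occupies far less RAM.
    from array import array

    def dictionary_encode(values):
        """Map each distinct value to a small integer code."""
        dictionary = {}      # value -> code
        codes = array("i")   # compact integer array instead of a list of strings
        for v in values:
            codes.append(dictionary.setdefault(v, len(dictionary)))
        return dictionary, codes

    # Hypothetical column: one million rows, only four distinct countries.
    column = ["Germany", "France", "Spain", "Italy"] * 250_000
    dictionary, codes = dictionary_encode(column)
    print(len(dictionary), len(codes))  # 4 distinct values, 1,000,000 small codes

The better the column compresses, the more of the data model fits into a fixed amount of RAM before queries can run.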

Pure in-memory technologies that do not compress the data they store in memory are usually quite useless for BI: they either handle amounts of data too small to extract interesting information from, or they break too often.

With or without compression, the fact remains that a pure in-memory BI solution becomes unusable once the entire data model no longer fits in RAM, even if you only need to work with limited portions of it at any one time.

Just-In-Time In-Memory Technology
Just-In-Time In-Memory (or JIT In-Memory) technology loads into RAM, on demand, only the portion of the data required for a particular query. An example of a BI product that utilizes this type of technology is SiSense.

Note: The term JIT is borrowed from Just-In-Time compilation, which is a method to improve the runtime performance of computer programs.

JIT in-memory technology involves a smart caching engine that loads selected data into RAM and releases it according to usage patterns.
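
A minimal Python sketch of such a caching engine is shown below. The loader, column names and cache size are hypothetical, and real engines track much richer usage statistics than simple recency:

    # Just-in-time column cache: load a column from disk only when a query
    # first needs it, and evict the least recently used column when the
    # RAM budget is exhausted.
    from collections import OrderedDict

    class ColumnCache:
        def __init__(self, load_column, max_columns=3):
            self.load_column = load_column   # function that reads a column from disk
            self.max_columns = max_columns   # stand-in for a RAM budget
            self.cache = OrderedDict()       # column name -> in-memory values

        def get(self, name):
            if name in self.cache:
                self.cache.move_to_end(name)      # mark as recently used
                return self.cache[name]
            if len(self.cache) >= self.max_columns:
                self.cache.popitem(last=False)    # release the least recently used column
            self.cache[name] = self.load_column(name)  # the only disk read
            return self.cache[name]

    # Hypothetical loader standing in for a real on-disk column store.
    def load_from_disk(name):
        return list(range(1000))

    cache = ColumnCache(load_from_disk)
    cache.get("revenue")   # first access: read from disk, then kept in RAM
    cache.get("revenue")   # second access: served straight from RAM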

This approach has obvious advantages:

  1. You have access to far more data than can fit in RAM at any one time
  2. It is easier to have a shared cache for multiple users
  3. It is easier to build solutions that are distributed across several machines

However, since JIT In-Memory loads data on demand, an obvious question arises: Won't the disk reads introduce unbearable performance issues?

The answer would be yes if the data model used is tabular (as it is in RDBMSs such as SQL Server and Oracle, or in pure in-memory technologies such as QlikView), but scalable JIT In-Memory solutions rely on a columnar database instead of a tabular one.

A columnar database stores each field's values contiguously, so a query that touches only a few fields reads just those columns from disk rather than entire rows. This fundamental ability of columnar databases to access only particular fields, or parts of fields, is what makes JIT In-Memory so powerful. In fact, the impact of columnar database technology on in-memory technology is so great that many confuse the two.
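
As a rough illustration of that difference, the sketch below contrasts a row-oriented layout, where answering a query drags every field of every row off disk, with a columnar layout, where only the one field the query touches has to be read. The table, fields and in-memory stand-in for disk are invented for illustration:

    # Why columnar layouts suit JIT in-memory engines: a row store must read
    # whole rows to answer a query, while a column store reads only the
    # fields the query actually touches.
    import csv, io

    rows = [{"order_id": i, "customer": f"c{i % 50}", "amount": i * 1.5}
            for i in range(1000)]

    # Row-oriented layout: every field of every row sits together on "disk".
    row_store = io.StringIO()
    writer = csv.DictWriter(row_store, fieldnames=["order_id", "customer", "amount"])
    writer.writeheader()
    writer.writerows(rows)

    # Column-oriented layout: each field is stored contiguously on its own.
    column_store = {name: [r[name] for r in rows] for name in rows[0]}

    # Query: total amount. The row store forces us past every field of every row...
    row_store.seek(0)
    total_from_rows = sum(float(r["amount"]) for r in csv.DictReader(row_store))

    # ...while the column store lets us touch only the single column we need.
    total_from_columns = sum(column_store["amount"])

    print(total_from_rows, total_from_columns)  # same answer, far less data scanned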

The combination of JIT In-Memory technology and a columnar database structure delivers the performance of pure in-memory BI technology with the scalability of disk-based models, and is thus an ideal technological basis for large-scale and/or rapidly-growing BI data stores.


The ElastiCube Chronicles - Business Intelligence Blog

More Stories By Elad Israeli

Elad Israeli is co-founder of business intelligence software company, SiSense. SiSense has developed Prism, a next-generation business intelligence platform based on its own, unique ElastiCube BI technology. Elad is responsible for driving the vision and strategy of SiSense’s unique BI products. Before co-founding SiSense, Elad served as a Product Manager at global IT services firm Ness Technologies (NASDAQ: NSTC). Previously, Elad was a Product Manager at Anysoft and, before that, he co-founded and led technology development at BiSense, a BI technology company.
