Big Data Good, Fast Big Data Better

Speed has become an integral part of the Big Data ethos, yet it is mentioned with comparative scarcity

This post is sponsored by The Business Value Exchange and HP Enterprise Services

The IT industry is nothing if not a breeding ground for an infinite variety of acronyms and neologisms. Alongside cloud computing today sits the term Big Data, which of course we understand to mean "that amount" of data which a traditional database would find hard to process as a matter of normal job processing.

Neo-neologisms
But what is a neologism if you can't turn it into a neo-neologism? Big Data in its own right is a term that we are just about getting used to, but the sooner we move towards an appreciation of 'fast Big Data' the better.

Technology analysts have been fond of the standard 'four Vs' definition used to describe the shape of Big Data, i.e., volume, velocity, variety and variability - but it is the 'velocity' factor that sits somewhat incongruously among its V-shaped bedfellows: it is the only one that describes speed or motion. Without a velocity layer, Big Data lies in a state of inertia.

In the new world of data 2.0 we find that the velocity factor is extremely important. More of our computing channels are described as real-time or near real-time (both definitions are important) as users demand applications that rely upon ubiquitous connections to the Internet, other users, other data events and other application services.

Suddenly speed has become an integral part of the Big Data ethos, yet it is mentioned with comparative scarcity. Press and analyst (and vendor) comment pieces talk up the zany, incomprehensible world of petabytes, zettabytes and yottabytes. These are low-hanging fruit and easy to comment on. Forget terabytes; they are so 2009.
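For a sense of the scale being talked up, here is a throwaway sketch (plain Python, decimal SI prefixes; purely illustrative, not from any vendor's toolkit) that prints the magnitudes involved:

    # Print the decimal (SI) byte units the commentariat likes to talk up.
    UNITS = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

    for power, name in enumerate(UNITS, start=1):
        print(f"1 {name}byte = 10^{3 * power} bytes = {10 ** (3 * power):,} bytes")

A yottabyte is a trillion terabytes, which is why the volume story writes itself; velocity, by contrast, has to be engineered.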

Speed Is the Unloved Second Cousin of Big Data
If speed is the unloved second cousin of Big Data, it shouldn't be. Major enterprise players (the vendors, not the customers, in the first instance) are spending their hard-earned acquisition and development dollars on the technology positioned as the antidote to our Big Data woes, namely "analytics" - and analytics without real-time analytics is like a car at full throttle without a steering wheel: we need to be able to react to data in the real world and navigate through it without crashing.
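To make the steering-wheel point concrete, consider a minimal velocity layer: a sliding-window average computed as each event arrives, rather than in an overnight batch. This is a toy sketch under assumed names (sensor_feed and the window size are invented for illustration, not any particular product's API):

    from collections import deque
    import random
    import time

    def rolling_average(stream, window_seconds=5.0):
        # Keep (timestamp, value) pairs that fall inside the time window.
        window = deque()
        for value in stream:
            now = time.monotonic()
            window.append((now, value))
            # Evict readings that have aged out of the window.
            while now - window[0][0] > window_seconds:
                window.popleft()
            # React per event: the caller sees an up-to-date figure immediately.
            yield sum(v for _, v in window) / len(window)

    def sensor_feed(n=20):
        # Hypothetical event source standing in for a real data stream.
        for _ in range(n):
            time.sleep(0.1)
            yield random.gauss(100.0, 15.0)

    for avg in rolling_average(sensor_feed()):
        print(f"rolling average: {avg:.1f}")

The point is architectural rather than arithmetic: in a batch-only world the same average exists, but it arrives after the moment to steer has passed.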

Of course, the fact of the matter is that Big Data should be considered for its size, girth and overall hugeness as much as for its speed of movement. To contemplate an analysis of one without the other is fallacious and foolhardy. These two factors form mutually interdependent sides of the contemporary data-balancing equation that props up the Big Data economic model.

Software requirements in terms of compute capacity and depth of storage (okay, that's hardware, we know) both increase as the unit cost of data and the acceptable latency approach zero. As fast, real-time Big Data comes of age, we need more back-office technology to support it.

None of this happens without layers of management technology and this is where much of the industry discussion is focused today with regard to Big Data. The trouble is, people aren't calling it fast Big Data yet. It will happen, but it needs to happen in real time and that means today.

More Stories By Adrian Bridgwater

Adrian Bridgwater is a freelance journalist and corporate content creation specialist focusing on cross-platform software application development as well as all related aspects of software engineering, project management and technology as a whole.
