Archiving the Big Data Old Tail

At any point in time, half of your Big Data are more than two years old

Scenario #1: out of the blue, your boss calls, looking for some long-forgotten entry in a spreadsheet from 1989. Where do you look? Or consider scenario #2: said boss calls again, only this time she wants you to analyze customer purchasing behavior...going back to 1980. Similar problem, only instead of finding a single datum, you must find years of ancient information and prepare it for analysis with a modern business intelligence tool.

The answer, of course, is archiving. Fortunately, you (or your predecessor, or your predecessor's predecessor) have been archiving important, or potentially important, corporate data since your organization first started using computers back in the 1960s. So all you have to do to keep your boss happy is find the appropriate archives, recover the necessary data, and you're good to go, right?

Not so fast. There are a number of gotchas to this story, some more obvious than others. Cloud to the rescue? Perhaps, but many archiving challenges remain, and the Cloud actually introduces some new speed bumps as well. Now factor in Big Data. Sure, Big Data are big, so archiving Big Data requires a big archive. Lucky you: vendors have already been knocking on your door peddling Big Data archiving solutions. Now can you finally breathe easy? Maybe, maybe not. Here's why.

Archiving: The Long View
So much of our digital lives has taken place over the last twenty years or so that we forget that digital computing dates back to the 1940s, and furthermore, that this sixty-odd year lifetime of the Information Age is really only the first act of perhaps centuries of computing before humankind either evolves past zeroes and ones altogether or kills itself off in the process. Our technologies for archiving information, however, are woefully shortsighted, for several reasons:

  • Hardware obsolescence (three to five years) - Using a hard drive or tape drive for archiving? It won't be long until the hardware is obsolete. You may get more life out of the gear you own, but once it wears out, you'll be stuck. Anyone who archived to laser disk in the 1980s has been down this road.
  • File format obsolescence (five to ten years) - True, today's Office products can probably read that file originally saved in the Microsoft Excel version 1 file format back in the day, but what about those VisiCalc or Lotus 1-2-3 files? Tools that convert such files to their modern equivalents will grow increasingly scarce, and you always run the risk that they won't handle the conversion properly, leading to data corruption. If your data are encrypted, then your encryption format falls into the file format obsolescence bucket as well. And what about the programs themselves? From simple spreadsheet formulas to complex legacy spaghetti code, how do you archive algorithms in an obsolescence-proof format?
  • Media obsolescence (ten to fifteen years) - CD-ROMs and digital backup tapes have an expected lifetime. Keeping them cool and dry can extend their life, but actually using them will shorten it. Do you really want to rely upon a fifteen-year-old backup tape for critical information?
  • Computing paradigm obsolescence (fifty years perhaps; it's anybody's guess) - Will quantum computing or biological processors or some other futuristic gear drive binary digital technologies into the Stone Age? Only time will tell. But if you are forward thinking enough to archive information for the 22nd century, there's no telling what you'll need to do to maintain the viability of your archives in a post-binary world.

Cloud to the Rescue?
On the surface, letting your Cloud Service Provider (CSP) archive your data solves many of these issues. Not only are the new archiving services like Amazon Glacier impressively cost-effective, but we can feel reasonably comfortable counting on today's CSPs to migrate our data from one hardware/media platform to the next over time as technology advances. So, can Cloud solve all your archiving issues?
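
For a sense of how little effort the storage half of the problem now takes, here is a minimal sketch of pushing an archive into Amazon Glacier with Python's boto3 library. The vault name and file name are hypothetical, and a real deployment would durably record the returned archive ID, since Glacier only produces vault inventories on a long delay.

```python
import boto3

# Hypothetical vault name and archive file; substitute your own.
VAULT_NAME = "corporate-archive-1989"
ARCHIVE_FILE = "spreadsheets-1989.tar.gz"

glacier = boto3.client("glacier")

# Create the vault if it does not already exist (the call is idempotent).
glacier.create_vault(vaultName=VAULT_NAME)

# Upload the archive; Glacier returns an archiveId that you must store
# somewhere durable, because you cannot browse a vault interactively.
with open(ARCHIVE_FILE, "rb") as f:
    response = glacier.upload_archive(
        vaultName=VAULT_NAME,
        archiveDescription="1989 spreadsheet archive",
        body=f,
    )

print("Archive ID:", response["archiveId"])
```

Note that a sketch like this only addresses the hardware and media side of the equation; the file format problem described above travels into the Cloud along with the bytes.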

At some point the answer may be yes, but Cloud Computing is still far too immature to jump to such a conclusion. Will your CSP still be in business decades from now? As the CSP market undergoes its inevitable consolidation phase, will the new CSP who bought out your old CSP handle your archive properly? Only time will tell.

But even if the CSPs rise to the archiving challenge, you may still have the file format challenge. Sure, archiving those old Lotus 1-2-3 files in the Cloud is a piece of cake, but that doesn't mean that your CSP will return them in Excel version 21.3 format ten years hence, an unfortunate and unintentional example of garbage in the Cloud.

The Big Data Old Tail
You might think that the challenges inherent in archiving Big Data are simply a matter of degree: bigger storage for bigger data sets, right? But thinking of Big Data as little more than extra-large data sets misses what makes Big Data important in the first place.

The point of Big Data is that the data sets in question continue to grow on an ongoing basis, continually pushing the limits of existing technology. The more capacity available for storage and processing, the larger the data sets we end up with. In other words, Big Data are by definition a moving target.

One familiar estimate states that the quantity of data in the world doubles every two years. Your organization's Big Data may grow somewhat faster or slower than this convenient benchmark, but in any case, the point is that Big Data growth is exponential. So, taking the two-year doubling factor as a rule of thumb, we can safely say that at any point in time, half of your Big Data are less than two years old, while the other half of your Big Data are more than two years old. And of course, this ZapFlash is concerned with the older half.

The Big Data archiving challenge, therefore, lies in breaking down the more-than-two-years-old half of your Big Data. Remember that this two-year split holds at any point in time. Thinking about the problem mathematically, then, you can conclude that a quarter of your Big Data are more than four years old, an eighth are more than six years old, and so on.
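
Under that two-year doubling rule, the fraction of today's data older than t years is simply 2^(-t/2). A back-of-the-envelope sketch in Python (purely illustrative, assuming the doubling rate holds steady) walks down the tail:

```python
# Back-of-the-envelope sketch: fraction of today's data that is older than
# t years, assuming total data volume doubles every two years.
DOUBLING_PERIOD_YEARS = 2

def fraction_older_than(years: float) -> float:
    """Fraction of the current data set that is more than `years` old."""
    return 2 ** (-years / DOUBLING_PERIOD_YEARS)

for age in (2, 4, 6, 8, 10):
    print(f"older than {age:2d} years: {fraction_older_than(age):.4f}")
# older than  2 years: 0.5000
# older than  4 years: 0.2500
# older than  6 years: 0.1250
# ...
```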

Combine this math with the lesson of the first part of this ZapFlash, and a critical point emerges: byte for byte, the cost of maintaining usable archives increases the older those archives become. And yet, the size of those archives is vanishingly small relative to today's and tomorrow's Big Data. Furthermore, this problem will only get worse over time, because the quantity of old data continues to grow exponentially.

We call this Big Data archiving problem the Big Data Old Tail. Similar to the Long Tail argument, which focuses on the value inherent in summing up the Long Tail of customer demand for niche products, the Big Data Old Tail focuses on the costs inherent in maintaining ever smaller, yet ever more expensive, archives of older and older information. True, the small size of the Old Tail data sets from any particular time period may partly compensate for the fact that they are costly to archive, but remember that the Old Tail continues to grow over time. Unless we deal with the Old Tail, it threatens to overwhelm us.
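
To see how the two effects trade off, consider a toy model with frankly made-up numbers: each older two-year slice holds half as much data as the one before it, while the per-byte cost of keeping a slice usable is assumed to grow by some factor per year as formats, media, and skills age.

```python
# Toy model of Old Tail archiving cost (illustrative numbers only).
# Slice k holds data between 2k and 2k+2 years old; under the two-year
# doubling rule it contains 2**-(k+1) of today's total volume.

TOTAL_VOLUME_TB = 1000.0       # today's total data volume (hypothetical)
BASE_COST_PER_TB = 1.0         # upkeep cost per TB for fresh data (hypothetical)
COST_GROWTH_PER_YEAR = 1.5     # assumed growth in per-TB upkeep cost with age

def slice_cost(k: int) -> float:
    """Annual cost of keeping slice k usable, under the assumptions above."""
    size_tb = TOTAL_VOLUME_TB * 2 ** -(k + 1)
    age_years = 2 * (k + 1)    # age at the old edge of the slice
    per_tb = BASE_COST_PER_TB * COST_GROWTH_PER_YEAR ** age_years
    return size_tb * per_tb

for k in range(6):
    print(f"slice {k} ({2*k}-{2*k + 2} years old): {slice_cost(k):8.1f} cost units")
```

With the hypothetical 50% annual cost growth shown here, each older slice actually costs more to maintain than the newer one beside it, even though it holds half as much data; with slower cost growth, the tail stays bounded. Which regime your archives fall into is exactly the Old Tail question.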

The ZapThink Take
The obvious question that comes to mind is whether we need to save all those old data sets anyway. After all, who cares about, say, purchasing data from 1982? And of course, you may have a business reason for deleting old information. Since information you preserve may be subject to lawsuits or other unpleasantness, you may wish to delete data once it's legal to do so.

Fair enough. But there are perhaps far more examples of Big Data sets that your organization will wish to preserve indefinitely than data sets you're happy to delete. From scientific data to information on market behavior to social trends, the richness of our Big Data does not simply depend on the information from the last year or two or even ten. After all, if we forget the mistakes of the past, then we are doomed to repeat them. Crunching today's Big Data can give us business intelligence, but only by crunching yesterday's Big Data as well can we ever expect to glean wisdom from our information.

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
