The Data Explosion

Is data growing out of control?

The data explosion is one of the biggest issues facing IT today. The amount of data organizations store has grown exponentially over the last 10 years. According to Gartner research director April Adams, enterprise data capacity grows at 40 to 60 percent year over year on average.

Data is the lifeblood of any business, and companies of all sizes are struggling with the growing volume of data stored on their networks. Because storage capacity has increased and costs have declined, many IT administrators have become lax about what they allow users to store on the corporate network, and for how long. The ability to store ever more data empowers organizations, but it also presents the challenge of managing all of that information. As network storage grows, users add another layer of complexity through their dependence on ubiquitous access: they want to reach their data from wherever they are, on a variety of devices, including smartphones, tablets and laptops.

One approach is to simply back everything up, but this tactic actually impedes your ability to get operations back up and running when a failure occurs. Sifting through mounds of unorganized data isn't feasible and wastes valuable time during a disaster. Businesses can't afford to treat all data equally; prioritization is key. Companies that indiscriminately dump huge amounts of data onto tape or into the cloud are setting themselves up for serious trouble.

In sum, tougher recovery demands compound the problem of growing data. Organizations are intolerant of any data loss or downtime, putting a lot of pressure on IT managers, who are working in environments in flux thanks to evolving technologies and a growing variety of endpoints that need to be protected.

The 10 Percent Rule
Not all data is created equal. Some data is so critical that losing it will bring a business to a halt. "Critical" means a file that is in active use or changes frequently: the items employees access daily and need immediately when a disaster strikes. On average, such data makes up only about 10 percent of an organization's information. What counts as critical varies from organization to organization, but every minute spent recovering it means lost productivity and lost revenue.

Of course, this doesn't mean the other 90 percent goes unprotected. It means you should prioritize. Arguably, all data is important, but organizations need a structured, tiered approach that brings critical applications and systems back online first after a failure. They should plan and prioritize their information in advance, ideally with the help of professional data support personnel, so that they can recover it efficiently when disaster strikes.

This approach reduces downtime in the event of a widespread failure. If data is not prioritized, time is squandered recovering non-critical data, prolonging the outage.
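
To make the idea concrete, here is a minimal sketch of how a first pass at tiering might look, assuming recency of modification as the proxy for "active use." The scan root is hypothetical, and a real classification would also weigh business value, not just timestamps:

```python
# Hypothetical sketch: tier files by how recently they changed.
# "Critical" here means modified within the last 90 days -- one
# possible proxy for "in active use or changes frequently".
import os
import time

CRITICAL_WINDOW_DAYS = 90  # assumption; tune to the business


def tier_files(root):
    """Walk a directory tree and split files into critical/non-critical."""
    cutoff = time.time() - CRITICAL_WINDOW_DAYS * 86400
    critical, non_critical = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            (critical if mtime >= cutoff else non_critical).append(path)
    return critical, non_critical


if __name__ == "__main__":
    hot, cold = tier_files("/srv/fileserver")  # hypothetical share path
    print(f"{len(hot)} critical files, {len(cold)} non-critical files")
```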

A Real-Life Example
The benefits of a well-planned recovery strategy are best illustrated with a real-world scenario. Consider a management consulting firm with over one terabyte of data: some of it Microsoft Exchange email, some on a file server, and some from a proprietary line-of-business application that runs on a SCO UNIX server.

Using the 10 percent rule as a guide, the firm determines that after a server crash or other disaster it would need to recover the last three months of email, the last year of file server data and the last three months of UNIX data to get the business back up and running immediately. The rest could be restored a day or two later without interrupting productivity.

Armed with this information in advance, the organization works with a cloud-based backup vendor to design the backup and construct archiving rules to reflect its recovery time objective (RTO):

  1. Local Storage for Instant Recovery

This firm has a dedicated network storage location, so their cloud vendor pushes a copy of the backups to this location while simultaneously sending encrypted data to its data center facility. Using local storage, the organization can restore files from the local copy over its local area network, making recovery as fast as a file transfer.
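
A rough sketch of this dual-destination pattern follows: one plaintext copy goes to local network storage for LAN-speed restores, and one encrypted copy heads for the provider's data center. `upload_to_vault` is a hypothetical stand-in for the vendor's transfer mechanism, and the NAS path and key handling are assumptions:

```python
# Illustrative sketch: back up to local storage and, in the same pass,
# send an encrypted copy off-site. Not any particular vendor's API.
import shutil
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography


def upload_to_vault(blob: bytes, name: str) -> None:
    """Placeholder for the cloud vendor's upload call."""
    print(f"would upload {len(blob)} encrypted bytes as {name}")


def back_up(src: Path, local_store: Path, key: bytes) -> None:
    # Fast local copy: restores run at LAN speed from here.
    shutil.copy2(src, local_store / src.name)
    # Encrypted copy bound for the off-site vault.
    token = Fernet(key).encrypt(src.read_bytes())
    upload_to_vault(token, src.name)


if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, a managed, persistent key
    back_up(Path("report.xlsx"), Path("/mnt/nas/backups"), key)  # hypothetical paths
```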

  2. Time-Based Archiving Rules

To control the amount of critical data that remains in the cloud vendor's online vault, and to manage costs, the firm creates rules that automatically push older data to archive after a specified period of time.
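
In spirit, such a rule is a simple sweep over a backup catalog. A minimal sketch, where the 90-day window, tier names and manifest shape are all assumptions rather than any vendor's actual schema:

```python
# Hypothetical time-based archiving rule: backups older than the
# retention window move from the (pricier) online vault to archive.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # keep 90 days "hot"; assumption


def apply_archive_rule(manifest, now=None):
    """manifest: dicts like {"name": ..., "backed_up": datetime, "tier": ...}."""
    now = now or datetime.utcnow()
    for entry in manifest:
        if entry["tier"] == "online" and now - entry["backed_up"] > RETENTION:
            entry["tier"] = "archive"  # cheaper storage, slower restore
    return manifest


manifest = [
    {"name": "mail-2013-01.bak", "backed_up": datetime(2013, 1, 31), "tier": "online"},
    {"name": "mail-2013-06.bak", "backed_up": datetime(2013, 6, 30), "tier": "online"},
]
for entry in apply_archive_rule(manifest, now=datetime(2013, 7, 15)):
    print(entry["name"], "->", entry["tier"])
```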

  3. Delta Blocking for Short Backup Windows

Although the cloud vendor protects over 1 TB of data for the firm, nightly backups usually run in under an hour, sometimes in as little as 20 minutes. The reason is delta-blocking technology, which identifies the changes made to a file and backs up only those changes rather than the entire file.
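
The core of delta blocking can be sketched in a few lines: split a file into fixed-size blocks, hash each block, and ship only the blocks whose hashes changed since the last run. The block size, the file name and the idea of keeping a persisted hash index are illustrative assumptions, not the vendor's implementation:

```python
# Minimal delta-blocking sketch: upload only blocks whose hash changed.
import hashlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks; real products tune this


def block_hashes(path):
    """Return a list of SHA-256 digests, one per fixed-size block."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes


def changed_blocks(path, previous_hashes):
    """Yield (index, digest) for blocks that differ from the last backup."""
    for i, digest in enumerate(block_hashes(path)):
        if i >= len(previous_hashes) or digest != previous_hashes[i]:
            yield i, digest  # only these blocks go over the wire


# First run: everything is "changed" relative to an empty index.
previous = []  # in practice, loaded from the last backup's hash index
deltas = list(changed_blocks("exchange.edb", previous))  # hypothetical file
print(f"{len(deltas)} blocks to upload")
```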

By designating which data needs to be restored immediately and which does not, the organization receives a customized backup and recovery strategy that fits their recovery objectives and cost requirements.

Conclusion
Putting together a comprehensive recovery strategy like the one outlined above requires expertise and plenty of upfront planning. The "set it and forget it" mentality is attractive, but data is growing too quickly and technology is changing too rapidly for companies to entrust their backups to just any cloud provider. When you need help restoring critical data, you may have access only to a written Q&A or a junior technology staff member reading from a script, and recovery can take a long time if you try to bring back all of your data at once. That's why advance prioritization of data is so essential.

When disaster strikes, the last thing an IT administrator wants is to fill out online forms or talk to someone who's reading from a script. Companies need competent providers who know their data environment, understand their business needs and can help walk them through the process.

More Stories By Jennifer Walzer

Jennifer Walzer, CEO and Founder of BUMI (www.BUMI.com), has an extensive background in technology and business strategy consulting. Prior to founding BUMI, she spent her career helping organizations of all sizes (from startups to Fortune 1000 companies) with their back-office systems and web presence. She also successfully launched and sold a software development company focused on interactive voice response systems for multi-employer benefit funds. She has been invited to speak on topics such as disaster recovery and data security at major conferences across the country.

Jennifer is a 2011 graduate of The Entrepreneurial Masters Program (EMP), an executive educational program jointly hosted by the MIT Enterprise Forum and Entrepreneurs’ Organization (EO).
