The Data Explosion

Is data growing out of control?

The data explosion is one of the biggest issues facing IT today. The amount of data that organizations store has grown exponentially over the last 10 years. According to Gartner research director April Adams, enterprise data capacity grows, on average, by 40 to 60 percent year over year.

Data is the lifeblood of any business, and companies of all sizes are struggling with the increasing amount of data stored on their networks. Because storage capacity has increased and costs have declined, many IT administrators have become more lax about what they allow their users to store on the corporate network and for how long. While the ability to store increasing amounts of data empowers organizations, it also presents them with the challenge of managing all of that information. As network storage grows, users are also adding an additional layer of complexity as they become increasingly dependent on ubiquitous access: they want to be able to access their data from wherever they are and from a variety of devices, including smartphones, tablets and laptops.

One approach is to simply back everything up, but this tactic actually impedes your ability to get operations back up and running when a failure takes place. Going through mounds of unorganized data just isn't feasible and can cause companies to waste valuable time during a disaster. Businesses simply can't afford to treat all data equally, and prioritization is key. Companies may encounter serious issues if they indiscriminately store huge amounts of data on tape or in the cloud.

In sum, tougher recovery demands compound the problem of growing data. Organizations are intolerant of any data loss or downtime, putting a lot of pressure on IT managers, who are working in environments in flux thanks to evolving technologies and a growing variety of endpoints that need to be protected.

The 10 Percent Rule
Not all data is created equal. Some critical data, when lost, will bring a business to a halt. "Critical" here means a file that is in active use or changes frequently: the items a company accesses daily and needs immediately when disaster strikes. On average, such critical data makes up only about 10 percent of an organization's information. What counts as critical varies from organization to organization, but every minute spent recovering this data means lost productivity and lost revenue.

Of course, this doesn't mean that you don't need to protect the other 90 percent. It just means that you should prioritize. Arguably, all data is important, but organizations need a structured, tiered approach to ensure critical applications and systems are operational first in the event of a failure. They should plan and prioritize their information in advance, ideally with the help of professional data support personnel, so that they can recover information efficiently in the event of a disaster.

This approach will reduce downtime in the event of a widespread failure. If data is not prioritized, much time will be squandered recovering non-critical data, extending the length of a down period.
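
To make the tiering concrete, here is a minimal sketch, in Python, of how files might be bucketed into recovery tiers by recent activity. The 30-day window, the tier names and the path are hypothetical placeholders; as noted above, what counts as critical varies from organization to organization.

    import os
    import time

    # Hypothetical rule: files touched within the last 30 days are
    # "critical" (the ~10 percent needed immediately after a disaster);
    # everything else falls into a lower-priority tier.
    CRITICAL_WINDOW_DAYS = 30

    def classify_by_activity(root):
        """Bucket files under `root` into recovery tiers by last-modified time."""
        now = time.time()
        tiers = {"critical": [], "standard": []}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                age_days = (now - os.path.getmtime(path)) / 86400
                tier = "critical" if age_days <= CRITICAL_WINDOW_DAYS else "standard"
                tiers[tier].append(path)
        return tiers

    tiers = classify_by_activity("/srv/fileserver")  # hypothetical path
    print(f"{len(tiers['critical'])} critical files to restore first")

A real inventory would also weigh business importance, not just file age, but even this crude split tells you which restores to run first.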

A Real Life Example
The benefits of a well-planned recovery strategy are best illustrated using a real world scenario. Let's consider a management consulting firm that has over one terabyte of data. Some of that data is Microsoft Exchange email, some resides on a file server and some of it is from a proprietary application for their business, which runs on a SCO UNIX server.

Using the 10 percent rule as a guide, the firm determines that if it were to experience data loss as the result of a server crash or other disaster, it would need to recover the last three months of its email, the last year of its file server data and the last three months of its UNIX data in order to get the business back up and running immediately. The rest of the data could be restored a day or two later without interrupting productivity.

Armed with this information in advance, the organization works with a cloud-based backup vendor to design the backup plan and construct archiving rules that reflect its recovery time objective (RTO):

  1. Local Storage for Instant Recovery

This firm has a dedicated network storage location, so the cloud vendor pushes a copy of the backups there while simultaneously sending encrypted data to its own data center facility. Using the local copy, the organization can restore files over its local area network, making recovery as fast as a file transfer.
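
A rough sketch of that dual-destination pattern is below, in Python, using the third-party cryptography package for encryption. The file paths, the key handling and the stubbed-out "upload" are hypothetical illustrations, not the vendor's actual mechanism.

    import shutil
    from cryptography.fernet import Fernet  # pip install cryptography

    def backup(source, local_copy, offsite_path, key):
        # 1. Plain copy to the dedicated local storage location:
        #    restores run over the LAN, as fast as a file transfer.
        shutil.copy2(source, local_copy)

        # 2. Encrypt before the data leaves the building, then send the
        #    ciphertext to the vendor's data center (upload stubbed here
        #    as a local write).
        with open(source, "rb") as f:
            ciphertext = Fernet(key).encrypt(f.read())
        with open(offsite_path, "wb") as f:
            f.write(ciphertext)

    key = Fernet.generate_key()  # in practice the vendor manages keys
    backup("finance.db", "/mnt/nas/finance.db", "/tmp/finance.db.enc", key)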

  2. Time-Based Archiving Rules

To control the amount of critical data that remains in the cloud vendor's online vault, and to manage costs, the firm creates rules that automatically push older data to archive after a specified period of time.
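
As an illustration, such a rule can be little more than an age check. This Python sketch uses a made-up 90-day threshold; in practice the retention period is tuned per client.

    import datetime

    # Hypothetical rule: anything unmodified for 90 days moves from the
    # online vault (fast, costlier) to archive (slower, cheaper).
    ARCHIVE_AFTER = datetime.timedelta(days=90)

    def apply_archiving_rule(catalog, now=None):
        """Split a backup catalog into online and archive sets by age.

        `catalog` maps each file path to its last-modified datetime.
        """
        now = now or datetime.datetime.now()
        online, archive = {}, {}
        for path, modified in catalog.items():
            (archive if now - modified > ARCHIVE_AFTER else online)[path] = modified
        return online, archive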

  3. Delta Blocking for Short Backup Windows

Although the cloud vendor is protecting over 1TB of data for them, nightly backups usually run in under one hour, sometimes as fast as 20 minutes. This is due to delta-blocking technology, which identifies changes made to a file and backs up only those changes, rather than the entire file.
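
The technique can be sketched in a few lines: split each file into blocks, hash them, and send only the blocks whose hashes differ from the previous run. The Python illustration below assumes fixed-size blocks for simplicity; production tools typically use more sophisticated schemes such as rolling checksums.

    import hashlib

    BLOCK_SIZE = 4096  # fixed-size blocks, chosen arbitrarily here

    def block_hashes(path):
        """Return an ordered list of SHA-256 digests, one per block."""
        hashes = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def changed_blocks(path, previous_hashes):
        """Indexes of blocks that differ from the previous backup."""
        current = block_hashes(path)
        return [i for i, h in enumerate(current)
                if i >= len(previous_hashes) or h != previous_hashes[i]]

    # Only the blocks identified here need to go over the wire tonight:
    # deltas = changed_blocks("mailstore.edb", last_night_hashes)

This is why a nightly run over 1TB can finish in minutes: on a typical night only a small fraction of blocks have changed.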

By designating in advance which data needs to be restored immediately and which does not, the organization receives a customized backup and recovery strategy that fits its recovery objectives and cost requirements.

Conclusion
Putting together a comprehensive recovery strategy like the one outlined above requires a certain amount of expertise and plenty of upfront planning. While the "set it and forget it" mentality is attractive, data is growing too quickly and technology is changing too rapidly for companies to simply entrust their backups to just any cloud provider. When you need help restoring critical data, you may have access only to a written Q&A or a junior technology staff member reading from a script. And recovery can take a long time if you try to bring back all of your data at once. That's why prioritizing data in advance is so essential.

When disaster strikes, the last thing an IT administrator wants is to fill out online forms or talk to someone who's reading from a script. Companies need competent providers who know their data environment, understand their business needs and can help walk them through the process.

More Stories By Jennifer Walzer

Jennifer Walzer, CEO and Founder of BUMI (www.BUMI.com), has an extensive background in technology and business strategy consulting. Prior to founding BUMI, she spent her career helping organizations of all sizes (from startups to Fortune 1000 companies) with their back-office systems and online web presence. She also successfully launched and sold a software development company focused on developing interactive voice response systems for multi-employer benefit funds. She has been invited to speak on topics such as disaster recovery and data security at major conferences across the country.

Jennifer is a 2011 graduate of The Entrepreneurial Masters Program (EMP), an executive educational program jointly hosted by the MIT Enterprise Forum and Entrepreneurs’ Organization (EO).
