A New Era of Post-Production Data Management Software

Solving legacy problems of backup to tape

By Jerome M. Wendt

Nearly every small, medium or large organization is heading down the path of adopting disk-based data protection as a way to solve the legacy problems of backup to tape. But what many of these organizations have yet to recognize is that, as they adopt disk to store these post-production copies of data, a new opportunity presents itself: they can now manage and leverage post-production data in ways that were never possible on tape. What they still lack are the tools to do so.

Nearly every organization is somewhere on the journey toward implementing disk-based backup. But that does not mean every organization is implementing it in the same way or using the same technologies. Some are introducing disk in lieu of tape as a backup target. Others are using disk-based snapshots, while still others are using some form of replication. Some are even using a combination of all of these forms of disk-based data protection.

Then, as they store this post-production data on disk, they look to optimize how it is stored. This leads most organizations to deduplicate the data whenever possible and then use replication to move it off-site.
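
As a rough illustration of why deduplication conserves so much capacity when successive backups barely change, here is a toy Python sketch of fixed-block deduplication. The block size, class and names are mine for illustration only; shipping products generally use more sophisticated, variable-length chunking.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size; real products vary

class DedupStore:
    """Store each unique block once, keyed by its SHA-256 fingerprint."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> block bytes (stored only once)
        self.recipes = {}   # copy name -> ordered list of fingerprints

    def ingest(self, name, data):
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # a duplicate block costs nothing
            recipe.append(fp)
        self.recipes[name] = recipe

    def restore(self, name):
        return b"".join(self.blocks[fp] for fp in self.recipes[name])

store = DedupStore()
store.ingest("monday-backup", b"A" * 8192 + b"B" * 4096)
store.ingest("tuesday-backup", b"A" * 8192 + b"C" * 4096)  # mostly unchanged data
print(len(store.blocks))  # -> 3 unique blocks backing 6 logical blocks
assert store.restore("monday-backup") == b"A" * 8192 + b"B" * 4096
```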

But now that organizations are storing all of this data to disk, they have both a new opportunity and a new challenge in front of them. The opportunity is that they now have near-real-time copies of production data on disk.

These post-production copies of data are at most 24 hours old and may be only hours, minutes or seconds removed from the production copy from which they were derived. As such, they are excellent candidates for testing, development and disaster recovery.

At the same time, the challenge is finding software that can do this "post-production data management." After all, it is one thing to create a backup, take a snapshot or set up a replication job for a particular set of production data. It is quite another to then manage the tens, hundreds or even thousands of these post-production copies in such a way that they remain usable and easy to manage.

These requirements make it evident that a new category of software, which I refer to as "post-production data management software," needs to emerge. This software needs to give organizations the ability to manage post-production data in such a way that they can perform tasks like the following (sketched in code after the list):

  • Minimizing or eliminating backup and recovery windows by ensuring that data is on the right tier of disk at the right time
  • Ensuring that post-production copies of data (backups, snapshots, replicas) are deduplicated whenever possible to conserve storage capacity
  • Replicating data locally or remotely so that business continuity and disaster recovery can occur automatically with minimal setup or ongoing management
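
A minimal sketch of how those three tasks might collapse into one declarative policy follows; every class, field and tier name here is hypothetical rather than taken from any shipping product.

```python
from dataclasses import dataclass, field

@dataclass
class PostProductionPolicy:
    """Hypothetical policy record; all field names are illustrative."""
    name: str
    recovery_tier: str = "fast-disk"   # keep recent copies where restores are quick
    demote_after_days: int = 7         # then age them out to capacity-optimized disk
    deduplicate: bool = True           # dedupe backups/snapshots/replicas when possible
    replicate_to: list = field(default_factory=list)  # local and remote DR targets

def place_copy(policy: PostProductionPolicy, copy_age_days: int) -> str:
    """Pick the disk tier for a copy based on its age, per the policy."""
    if copy_age_days < policy.demote_after_days:
        return policy.recovery_tier
    return "capacity-disk"

erp = PostProductionPolicy("erp-nightly", replicate_to=["local-array", "dr-site"])
print(place_copy(erp, 1))   # fast-disk: inside the recovery window
print(place_copy(erp, 30))  # capacity-disk: aged out of the window
```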

Already we are seeing these next-generation post-production data management capabilities begin to emerge. Just this last week FalconStor announced its new RecoverTrac technology, now part of its Continuous Data Protector (CDP) software. RecoverTrac's purpose is to help CDP's snapshot and replication features deliver real-world, turnkey business continuity and disaster recovery functionality without requiring organizations to spend tens or hundreds of thousands of dollars and countless hours to deploy, test, implement and manage it.

InMage is another company that has been doing something similar for some time. In a series of two blog entries I wrote a couple of years ago, I covered how Dr. James Tu, an Information Security Officer at a real estate company, was using InMage Scout's replication capabilities to replicate data between two sites. Within Scout he then created virtual mount points that he could use to perform recoveries at the secondary site.
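
That recovery pattern can be sketched as a simple write journal plus a point-in-time view; this is my illustration of the concept only, not InMage's actual design or API.

```python
class ReplicaJournal:
    """Toy write journal: every production write is shipped to the secondary
    site in order, so any earlier point in time can be rebuilt on demand."""

    def __init__(self):
        self.seq = 0
        self.entries = []  # (sequence number, block offset, data)

    def replicate_write(self, offset, data):
        self.seq += 1
        self.entries.append((self.seq, offset, data))
        return self.seq  # bookmark usable for point-in-time recovery

    def virtual_mount(self, as_of_seq):
        """Materialize a view of the volume as of a bookmark without copying
        the whole data set -- the idea behind a virtual mount point."""
        image = {}
        for seq, offset, data in self.entries:
            if seq > as_of_seq:
                break
            image[offset] = data
        return image

journal = ReplicaJournal()
bookmark = journal.replicate_write(0, b"v1")   # healthy state
journal.replicate_write(0, b"v2")              # later, unwanted change
view = journal.virtual_mount(bookmark)         # recover at the secondary site
assert view[0] == b"v1"
```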

Yet possibly the most interesting entrant in this emerging space is a new company called Actifio. It has built its entire platform around this concept of post-production data management and has gone beyond just acting as a backup target, replicating production data or creating snapshots of it. While it can perform all three of these functions (it can act as either a backup target or as a production file system that replicates or snapshots production data), its ultimate objective is to optimally manage the post-production data under its control.

Once it has these post-production copies of data in whatever form they take (backup, snapshot or replica), Actifio can then deduplicate the data, place it on the appropriate tier of disk for recoveries, or replicate it off-site for DR.

What specifically makes Actifio unique is its virtualization technology, which re-purposes a single copy of all post-production data for each of the different operational requirements. Instead of the complexity and cost of multiple tools for backup, snapshots, deduplication, disaster recovery or business continuity, Actifio is a single solution that manages data through its entire life cycle.
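
This idea, often described as copy data virtualization, can be sketched in a few lines: one physical "golden" copy, with cheap copy-on-write views layered on top for each consumer. The sketch below is my illustration of the concept, not Actifio's code.

```python
class GoldenCopy:
    """One physical post-production copy of a data set."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))

class VirtualClone:
    """A writable view for test/dev or DR that shares the golden copy's
    blocks and stores only its own changes (copy-on-write)."""
    def __init__(self, golden):
        self.golden = golden
        self.overlay = {}  # block index -> private modified data

    def read(self, i):
        return self.overlay.get(i, self.golden.blocks[i])

    def write(self, i, data):
        self.overlay[i] = data  # the golden copy is never touched

golden = GoldenCopy([b"cfg", b"db1", b"db2"])  # stored once on disk
testdev = VirtualClone(golden)   # one view for developers...
dr = VirtualClone(golden)        # ...another for the DR runbook
testdev.write(1, b"scratch")
assert dr.read(1) == b"db1"      # clones stay isolated; only one full copy exists
```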

In the next few years I expect companies of almost every size to complete the transition from tape-based backup to some form of disk-based data protection. However, once that transition is complete, these organizations are going to recognize that merely having all of their post-production data stored on disk adds too little value on its own.

Instead, they will want the option to easily and centrally manage where this post-production data is placed in order to facilitate business continuity, disaster recovery and even operational testing and development of production applications. It is for this reason that I believe companies will rapidly move beyond just implementing disk-based data protection, and that a new era is dawning in which organizations want and need software that automates the management and placement of their post-production data.
