Disaster Recovery Ascends to the Cloud | Part 2: Deployment Considerations

Realizing an economical alternative to traditional DR

As mentioned in Part I of this series, cloud technology has introduced a viable alternative to the practice of building secondary sites for disaster recovery (DR), promising to save IT organizations hundreds of thousands or even millions of dollars in infrastructure and maintenance. While the cost reduction from replacing dedicated DR infrastructure is intuitive, the ability of cloud solutions to meet the recovery time and recovery point objectives (RTOs and RPOs) dictated by the business is often less well understood.

Part I suggested that the two key considerations in recovering IT operations from a disaster are (1) regaining access to data and (2) regaining access to applications. Today's cloud-integrated storage and cloud storage gateways can push backups or live data sets to the cloud easily and securely, enabling the first element of a cloud DR solution. With this in mind, let's examine two strategies for application recovery using cloud-based DR, starting with a quick sketch of the data-push step that both strategies share.
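
This is a minimal, illustrative version of that push, assuming an S3-compatible object store and the boto3 SDK; the bucket name, object key, and local path are hypothetical placeholders rather than details from the original post.

```python
# Hedged sketch: pushing a backup image to cloud object storage.
# Assumes an S3-compatible store reachable via boto3; names are placeholders.
import boto3

s3 = boto3.client("s3")

def push_backup(local_path: str, bucket: str, key: str) -> None:
    """Upload one backup file, encrypted at rest on the provider side."""
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )

push_backup("/backups/db-nightly.img", "example-dr-backups", "nightly/db-nightly.img")
```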

Strategy 1: Data copies in-cloud, application recovery off-cloud
One of the simpler approaches to cloud-based DR stores data copies in the cloud and allows external, off-cloud access by applications in the case of a primary site outage. With data in the cloud accessible from nearly anywhere, applications may be recovered at a secondary site if they cannot be recovered at the primary site.

The advantage of this approach is the elimination of dedicated secondary storage infrastructure for DR. The disadvantage is the requirement for a secondary site for application recovery.
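
To make the off-cloud restore step concrete, here is a minimal sketch under the same assumptions (boto3 against an S3-compatible store, with illustrative bucket and prefix names): from the secondary site, find the most recent backup object and pull it down for the application rebuild.

```python
# Hedged sketch: from a secondary site, restore the newest cloud backup.
import boto3

s3 = boto3.client("s3")

def restore_latest(bucket: str, prefix: str, dest: str) -> str:
    """Download the most recently written backup object to a local path."""
    objs = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
    if not objs:
        raise RuntimeError("no backup objects found under prefix")
    latest = max(objs, key=lambda o: o["LastModified"])  # newest copy wins
    s3.download_file(bucket, latest["Key"], dest)
    return latest["Key"]

print(restore_latest("example-dr-backups", "nightly/", "/restore/db.img"))
```

In practice, a restore script would also verify checksums before applications are rebuilt on top of the recovered data.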

An improvement to this approach involves leveraging a hosting provider as the application recovery site, where new application servers can be provisioned on demand in case of a disaster. Using a hosted recovery site can be considerably faster than restoring and rebuilding the original application environment, and more economical than maintaining a dedicated secondary site. However, recovery times depend on how quickly the hosting provider can provision new servers.
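
On-demand provisioning is typically driven through the provider's API. The sketch below uses EC2's run_instances call as one concrete stand-in; a given hosting provider would expose its own equivalent, and the image ID, instance type, and tag values are hypothetical.

```python
# Hedged sketch: provision a recovery server on demand via a provider API.
# EC2 stands in for whichever API the hosting provider actually exposes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_recovery_server(image_id: str) -> str:
    """Launch one server from a prepared image and tag it for DR tracking."""
    resp = ec2.run_instances(
        ImageId=image_id,          # placeholder image of the app server
        InstanceType="m5.large",   # assumed sizing, not from the article
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "dr-recovery"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]
```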

[Figure: Application recovery off-cloud versus in-cloud]

Strategy 2: Data copies in-cloud, application recovery in-cloud
A more complete approach to cloud-based DR enables both data and application recovery in the cloud, without the need for a secondary site for either applications or storage. Cloud compute-as-a-service offers an attractive environment for recovering applications by rapidly spinning up new virtual servers.

When using a cloud storage gateway to replicate data to the cloud, consider a gateway that can also run in the cloud; cloud servers can then attach to it to facilitate application recovery.

The process of application recovery may involve activating servers and applications from a cloud provider's catalog. Although this is much faster than provisioning new physical hardware, it can still be time-consuming, particularly when attempting to recover tens or hundreds of servers.
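
As a rough illustration of where that time goes, the sketch below launches every entry in a small, invented recovery catalog and then blocks until all instances report running; the image IDs, instance types, and counts are placeholders, not a real runbook.

```python
# Hedged sketch: bulk in-cloud recovery from a hypothetical server catalog.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Invented tier -> image mapping; a real catalog would come from inventory.
CATALOG = [
    {"ami": "ami-00000000", "type": "m5.large",  "count": 4},  # web tier
    {"ami": "ami-11111111", "type": "r5.xlarge", "count": 2},  # db tier
]

def recover_all() -> list[str]:
    """Launch the whole catalog, then wait for every instance to be running."""
    instance_ids = []
    for entry in CATALOG:
        resp = ec2.run_instances(
            ImageId=entry["ami"],
            InstanceType=entry["type"],
            MinCount=entry["count"],
            MaxCount=entry["count"],
        )
        instance_ids += [i["InstanceId"] for i in resp["Instances"]]
    # This wait dominates recovery time as the server count grows.
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
    return instance_ids
```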

Alternatively, virtual machines (VMs) that resided on-premise can be reinstantiated in the cloud, much like failing over VMs between hypervisors. This works when the same hypervisor runs on-premise and in the cloud, since moving VM images between like hypervisors is generally straightforward. However, many cloud providers do not offer sufficient administrative privilege in their virtual compute environments, or do not run a hypervisor compatible with the on-premise one.

To get around these limitations and incompatibilities, an emerging option involves importing on-premise VMs into the cloud via conversion scripts and tools. An important consideration is ensuring that these conversion scripts and tools operate bidirectionally, meaning they allow a way to eventually export VMs back to the on-premise environment.
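
One concrete example of such tooling is a cloud provider's VM import/export API; the sketch below uses EC2's version via boto3 as a stand-in for whatever conversion tool is actually chosen, with placeholder bucket names and a hypothetical VMDK key.

```python
# Hedged sketch: bidirectional VM conversion via a provider import/export API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def import_vm(bucket: str, key: str) -> str:
    """Bring an on-premise VMDK image into the cloud as a machine image."""
    task = ec2.import_image(
        DiskContainers=[{
            "Format": "vmdk",
            "UserBucket": {"S3Bucket": bucket, "S3Key": key},
        }]
    )
    return task["ImportTaskId"]

def export_vm(image_id: str, bucket: str) -> str:
    """The reverse path: export a cloud image back toward on-premise."""
    task = ec2.export_image(
        ImageId=image_id,
        DiskImageFormat="VMDK",
        S3ExportLocation={"S3Bucket": bucket, "S3Prefix": "exports/"},
    )
    return task["ExportImageTaskId"]
```

The export path is what makes the workflow bidirectional: VMs brought into the cloud during a disaster can eventually be returned to the rebuilt on-premise environment.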

The keys to success are testing and working with a partner you trust
While there are a variety of ways to deploy DR in the cloud, each carries its own subtleties; not surprisingly, the devil is in the details.

Keep in mind that regular testing and validation are an essential part of any DR strategy. Additionally, working with technology partners who understand the advantages and tradeoffs of DR in the cloud can be particularly helpful.
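
Part of that validation can be automated. As a minimal sketch under assumed details (the bucket, prefix, and four-hour RPO are all illustrative), the check below confirms that the newest cloud backup is fresh enough to meet a target recovery point objective.

```python
# Hedged sketch: automated check that the latest backup meets a target RPO.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

def meets_rpo(bucket: str, prefix: str, rpo: timedelta) -> bool:
    """True if the newest backup object is no older than the RPO window."""
    objs = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
    if not objs:
        return False  # no backups at all is an automatic failure
    newest = max(o["LastModified"] for o in objs)  # tz-aware UTC timestamp
    return datetime.now(timezone.utc) - newest <= rpo

assert meets_rpo("example-dr-backups", "nightly/", timedelta(hours=4)), \
    "DR validation failed: latest backup is outside the RPO window"
```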

Like any major IT undertaking, DR in the cloud requires significant planning — but the payoff can be substantial if reducing disaster recovery costs and improving availability are important to your business.


