Using Cloud for Disaster Recovery

Business Case - Best Practices and Lessons Learned

Using the cloud for DR solutions is becoming more common; even organizations that are not yet running mission-critical production applications in the cloud are moving toward using it for application DR.

Business Case for Using the Cloud for DR

  1. Faster Recovery Time Objective (RTO): DR typically requires lengthy manual processes to fully restore business applications at the DR site.  Keeping backup data and servers at the DR site is easy; restoring the entire application or service takes time.  For example, a full application restoration requires starting services in a specified order, updating DNS and other configuration, and so on.  In the cloud, IaaS APIs make it possible to use automation solutions like Kaavo IMOD to restore business applications fully, without manual intervention.  As a result, organizations get predictable recovery and a reduced RTO.  Automating service or application recovery can cut RTO from hours or days to minutes (see the sketch after this list).

  2. Shorter Recovery Point Objective (RPO): Instead of relying on offsite tape backups, organizations can reduce their RPO to minutes by maintaining near real-time data backups in the cloud (a minimal sketch also follows this list).  For faster transfer of large data sets, dedicated lines can be established between the customer datacenter and the cloud; the cost of a dedicated line depends on the distance between the customer datacenter and the cloud provider's peering point.  For most use cases, VPN connections over the internet are sufficient for transferring data between the customer datacenter and the cloud.

  3. Lower Costs: Organizations typically pay a high price for standby infrastructure at the DR site, especially servers.  With the cloud, there is no need to pay for DR servers when they are not in use.  A pay-as-you-use infrastructure model significantly reduces DR costs without compromising service levels.
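
The following is a minimal sketch of the kind of ordered, API-driven restoration described in point 1, written against the AWS SDK for Python (boto3).  The instance IDs, tiers, and region are placeholders; this illustrates the general idea rather than how Kaavo IMOD itself works.

    # Sketch: ordered restore of a two-tier application (database first,
    # then application servers) using IaaS APIs via boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

    RESTORE_ORDER = [
        ["i-0db0000000000000a"],                          # 1. database tier (placeholder IDs)
        ["i-0app00000000000b", "i-0app00000000000c"],     # 2. application tier
    ]

    def start_and_wait(instance_ids):
        """Start a group of instances and block until they are running."""
        ec2.start_instances(InstanceIds=instance_ids)
        ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)

    for tier in RESTORE_ORDER:
        start_and_wait(tier)
    # Service start-up and DNS updates would follow in the same ordered fashion.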
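
For point 2, a near real-time backup can be as simple as shipping each incremental database dump to cloud storage on a short schedule.  The bucket name and dump path below are hypothetical, and the dump itself would be produced by the database's own tooling; this is only a sketch of the idea.

    # Sketch: push the latest incremental dump to S3 every few minutes (e.g. from cron)
    # so the recovery point stays within minutes of production.
    import boto3
    from datetime import datetime, timezone

    s3 = boto3.client("s3")
    dump_path = "/var/backups/app_incremental.dump"       # placeholder path
    key = ("dr/app/incremental/"
           + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ") + ".dump")
    s3.upload_file(dump_path, "example-dr-backups", key)  # hypothetical bucket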

Following are some of the best practices and lessons learned from the Cloud DR solutions we have implemented so far:

Cloud DR Is Different from Traditional DR
Unlike traditional DR solutions, which rely on backup infrastructure for the entire datacenter and therefore require a large, costly implementation, cloud DR can be implemented incrementally, application by application.  For example, it is common for organizations to have a large shared database with multiple schemas supporting various applications.  In the majority of cases this sharing is driven by server consolidation to increase the utilization of internal infrastructure.  Not all applications using a shared database have the same service-level requirements; some applications are more critical than others.  As long as the schemas and application data are separate, it is better to remove the dependency on the shared database by giving each application a right-sized database in the cloud.  This allows optimal prioritization and incremental delivery of the DR project based on the service levels of the individual applications.

Migration of Applications Using Single Sign-on with LDAP
When planning DR for individual applications it is important to identify the dependent services and to make sure they will be available as part of the DR solution.  Enterprise customers typically use single sign-on with LDAP to manage authentication, so a best practice is to treat the single sign-on service as a critical application and bring it up first during the DR process.  An automation solution like Kaavo IMOD enables customers to restore applications and services in the specified order automatically, without manual intervention (see the sketch below).  During a real DR scenario many things are going on at once, and it is easy to make mistakes under pressure if the application restoration process is not fully automated.  To prevent surprises during an actual DR event, it is important to have a fully automated solution for restoring applications and services.
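
As a sketch of that ordering, the snippet below brings up the single sign-on service first and only moves on once its health check passes.  The service names, health-check URLs, and the start-up step are placeholders; the article's own automation for this is Kaavo IMOD, not this script.

    # Sketch: restore services in dependency order, SSO before the applications
    # that authenticate against it.
    import time
    import urllib.request

    RESTORE_PLAN = [
        ("sso",     "https://sso.dr.example.com/health"),   # placeholder endpoints
        ("app-api", "https://api.dr.example.com/health"),
        ("app-web", "https://www.dr.example.com/health"),
    ]

    def wait_until_healthy(url, timeout=600, interval=15):
        """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if urllib.request.urlopen(url, timeout=5).status == 200:
                    return True
            except OSError:
                pass
            time.sleep(interval)
        raise RuntimeError(f"{url} did not become healthy in time")

    for name, health_url in RESTORE_PLAN:
        print(f"restoring {name} ...")   # the actual start/restore step goes here
        wait_until_healthy(health_url)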

Restoring Back to Normal Operations after DR
This is one area that is often overlooked or under-planned in DR projects.  For companies using their own datacenters for production applications and the cloud for DR, processes and automation must be in place to fully restore the applications in the production datacenter, using the latest data from the cloud DR environment, once the primary datacenter is back online.  This step is not required for applications that use the cloud as their primary site.  For example, if an application is running in one cloud zone and after DR it is running in a different cloud zone, there is no need to move it back to the first zone as long as the service levels of both zones are the same.  If you are deploying new applications, it is best to design for failure: a distributed application running across multiple regions and cloud providers eliminates the need for traditional DR planning, because handling the failure of individual components is built into the design and deployment model of the application.
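
One piece of that fail-back automation, sketched below, is pulling the most recent backup written during the DR period from cloud storage back to the primary datacenter before traffic is switched back.  The bucket, prefix, and local path are hypothetical.

    # Sketch: fetch the newest incremental backup produced while running in the cloud,
    # so the primary datacenter can be brought up to date before fail-back.
    import boto3

    s3 = boto3.client("s3")
    bucket, prefix = "example-dr-backups", "dr/app/incremental/"   # placeholders

    objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
    latest = max(objects, key=lambda o: o["LastModified"])
    s3.download_file(bucket, latest["Key"], "/var/restore/latest.dump")
    # The merge into the on-premises production database is then done with the
    # database's own import tooling.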

Handling Compliance in the Cloud (e.g., HIPAA, PCI, SOX, SAS-70)
Using available security technologies and processes, several companies have implemented applications in the cloud that comply with various standards such as HIPAA, PCI, SOX, and SAS-70.  Each compliance standard has its own nuances, but with proper planning you can address compliance-related issues.  This is a big topic on its own, so please contact us if you have specific questions.  Cloud providers have published various case studies and best practices, e.g., Amazon's white paper on HIPAA compliance.

Handling Public and Private DNS
A common pattern for enterprise applications is a public DNS name for external access and a private DNS name, resolvable only on the internal network, for reaching backend services and databases.  In these situations it is best to use a virtual private cloud such as AWS VPC, or to overlay a private network with the same IP address range as the internal datacenter on any public cloud using open-source tools (see the blog post "Building a Private Cloud within a Public Cloud" for details on implementing a secure private network on any public cloud).  For updating the public DNS entries of the restored application in the cloud, we use DNS automation services such as AWS Route 53 or EasyDNS.  Leveraging these services, Kaavo IMOD automatically updates the public DNS for the applications as part of the restoration during DR.
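
As an illustration of that last step, the snippet below upserts a public A record with Amazon Route 53 so the application's hostname points at the restored deployment.  The hosted zone ID, record name, and IP address are placeholders.

    # Sketch: point the public hostname at the DR deployment after restoration.
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE",                       # placeholder hosted zone
        ChangeBatch={
            "Comment": "Point production hostname at the DR deployment",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],   # restored app's address
                },
            }],
        },
    )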

Keeping Application Database Up-To-Date
It is common for applications to have large databases.  Moving the data to the cloud and keeping it current requires first loading the entire database into the cloud, then sending and merging incremental data into it.  To address this use case, instead of maintaining a hot backup we use Kaavo IMOD to bring up the database servers in the cloud automatically whenever a new incremental backup is available, merge the incremental backup, save the merged database, and shut the servers down again.  This way, in a DR event the latest merged database is always available for restoring the application.  This approach provides a reasonable RTO without the additional cost of maintaining a hot database backup.
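
A rough outline of that merge-and-shut-down cycle, expressed against the AWS APIs, is shown below.  The instance ID is a placeholder, the actual merge step is only indicated by a comment, and this illustrates the pattern rather than Kaavo IMOD's implementation.

    # Sketch: wake the dormant DR database server, apply the new incremental
    # backup, then stop it again so no compute is billed between backups.
    import boto3

    ec2 = boto3.client("ec2")
    DB_INSTANCE = "i-0db0000000000000a"   # placeholder: stopped DB server kept in the cloud

    def merge_incremental_backup(backup_key):
        ec2.start_instances(InstanceIds=[DB_INSTANCE])
        ec2.get_waiter("instance_running").wait(InstanceIds=[DB_INSTANCE])

        # Apply the incremental backup here (e.g. via SSM, SSH, or configuration
        # management); shown only as a placeholder step.
        print(f"applying {backup_key} to the DR database ...")

        ec2.stop_instances(InstanceIds=[DB_INSTANCE])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[DB_INSTANCE])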

Applying and Maintaining Patches
A typical application requires the following two types of updates during its lifecycle:

  1. Updating Application Code: This is straightforward; using Kaavo IMOD we set up automation to pick up the latest code and configuration for the application from the production deployment.  This automation ensures that the code and configuration changes for each new release of the application or service are available in the cloud for DR (a minimal sketch follows this list).

  2. OS Patches and Third-Party Software Updates: Sometimes custom patches or updates to third-party software or the OS are required.  For these types of changes it is best to include them in the change-control process, requiring sign-off from the team owning the DR process.  The DR team can review the change and, if required, make and test the corresponding changes to the DR automation for the application.
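
For the first type of update, a minimal sketch of such a sync job is shown below: a scheduled task that copies the current release artifact and its configuration from the production deployment into cloud storage so the DR automation always restores the latest version.  The file paths and bucket name are hypothetical.

    # Sketch: keep the latest application release and configuration available
    # in cloud storage for the DR restore.
    import boto3

    s3 = boto3.client("s3")
    ARTIFACTS = [
        ("/opt/releases/app-current.tar.gz", "dr/app/releases/app-current.tar.gz"),
        ("/etc/app/app.conf",                "dr/app/config/app.conf"),
    ]

    for local_path, key in ARTIFACTS:
        s3.upload_file(local_path, "example-dr-backups", key)   # hypothetical bucket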


More Stories By Jamal Mazhar

Jamal Mazhar is Founder & CEO of Kaavo. He possesses more than 15 years of experience in technology, engineering and consulting with a range of Fortune 500 companies including GE and ING. He established ING's "Center of Excellence for B2B," which streamlined $2 billion per month in electronic money transfer operations. As Lead Architect on the GE Capital e-Business team, Jamal directed analysis and implementation efforts and improved the performance of a website generating more than $1 billion in annual lease revenues. At Trilogy he provided technical and managerial expertise for several large-scale e-business implementation projects for companies such as Boeing, NCR, Gartner, British Airways, Qantas Airways and Alltel. Jamal has a BS in Electrical and Computer Engineering from the University of Texas at Austin and an MBA from NYU Stern School of Business.
