


Tips for Disaster Recovery Planning

(Part 1 of 2)

1. Getting Started
Typically, the first step in business continuity planning (BCP) is to organize the BCP stakeholders and get executive buy-in. How much work this takes depends on the level of executive support you already have and how hard you must sell the concept. Be prepared to cost-justify the program: present a figure for the cost of downtime and for how much company revenue is at risk if business systems become unavailable for an extended period.
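As a back-of-the-envelope starting point, that downtime figure can be as simple as lost hourly revenue plus idle labor. A minimal sketch, with entirely hypothetical numbers:

```python
# Hypothetical downtime-cost estimate used to justify a BCP budget.
# All figures are illustrative placeholders, not benchmarks.

def downtime_cost_per_hour(annual_revenue, revenue_at_risk_pct,
                           employees_idled, loaded_hourly_rate):
    """Revenue lost plus idle-labor cost for one hour of outage."""
    hourly_revenue = annual_revenue / (365 * 24)
    revenue_loss = hourly_revenue * revenue_at_risk_pct
    labor_loss = employees_idled * loaded_hourly_rate
    return revenue_loss + labor_loss

cost = downtime_cost_per_hour(
    annual_revenue=50_000_000,   # $50M/year company (made up)
    revenue_at_risk_pct=0.60,    # 60% of revenue depends on the system
    employees_idled=200,
    loaded_hourly_rate=45.0,
)
print(f"Estimated cost per hour of downtime: ${cost:,.0f}")
```

Even a rough number like this, multiplied by a plausible outage duration, usually makes the executive conversation much shorter.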

2. Why You Need a Plan
This is really the easy part. We all know why we need a business continuity plan: to prevent extended outages that will cost the company money. The number one priority of any business continuity plan is protecting the most valuable asset, the health and safety of employees. The second priority, equally important, is the rapid recovery and/or restoration of business-critical systems. If your messaging system has ever gone down for any length of time, you likely received a call from an executive within minutes wondering why they weren't receiving BlackBerry messages.

3. Defining the Right Plan
This starts with understanding what keeps your business running and prioritizing the recovery of the most critical systems. That prioritization usually comes out of the risk analysis and business impact study, and you don't need to be a rocket scientist to pull it together; it is highly likely you already know the answer and could create the list in your sleep.
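That prioritized list can be as simple as a table sorted by recovery-time objective (RTO) and hourly business impact. A minimal sketch, with made-up systems and numbers:

```python
# Sketch of a business-impact prioritization: rank systems for recovery
# by recovery-time objective (RTO) and hourly impact. Names and numbers
# are hypothetical examples, not recommendations.

systems = [
    {"name": "email/messaging",  "rto_hours": 4,  "impact_per_hour": 8_000},
    {"name": "order processing", "rto_hours": 1,  "impact_per_hour": 25_000},
    {"name": "intranet wiki",    "rto_hours": 48, "impact_per_hour": 500},
]

# Recover the tightest-RTO, highest-impact systems first.
recovery_order = sorted(systems,
                        key=lambda s: (s["rto_hours"], -s["impact_per_hour"]))
for rank, s in enumerate(recovery_order, 1):
    print(rank, s["name"])
```

The sort key encodes the policy: RTO is the hard constraint, so it sorts first; hourly impact breaks ties among systems with the same recovery window.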

4. Top Mistakes Made
Many mistakes are made in business continuity planning, and the most common is not allowing enough time to identify, plan, and prepare for the design, implementation, and exercise of the system. Failing to regularly exercise, or test, the business continuity plan can be, and often is, the most costly mistake. Just because you have successfully implemented recovery and restoration procedures doesn't mean you are done: every time a system update or change control process is initiated, the business continuity plan should be re-tested to confirm it still functions as designed. Do not skimp on exercising your plan just because you can't seem to find the downtime. This is where a virtualization platform such as Microsoft Hyper-V is extremely helpful: you can spin up a virtual disaster recovery target and test without impacting the production system, because the virtual machines can be segmented from the production network to create a virtual DR test bed.
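The re-test trigger described above reduces to a simple rule: the plan is stale whenever any change-control event postdates the last successful exercise. A toy sketch of that rule, with hypothetical dates:

```python
# Toy staleness check for a business continuity plan: any change-control
# event dated after the last successful DR exercise means the plan must
# be re-tested. Dates are hypothetical.
from datetime import date

def bcp_needs_retest(last_exercise, change_events):
    """True if any system change happened after the last DR exercise."""
    return any(change > last_exercise for change in change_events)

last_dr_exercise = date(2008, 1, 15)
changes = [date(2008, 2, 3), date(2008, 3, 10)]  # change-control events

print(bcp_needs_retest(last_dr_exercise, changes))  # True
```

In practice this check would be wired into the change-control workflow itself, so a stale plan is flagged automatically rather than discovered during an outage.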

5. Real Life Lessons

Over the past nine years I have been directly or indirectly involved in more than 1,600 business continuity implementations, and every scenario had something to teach. One lesson was planning a backup for the backup. During a disaster recovery implementation covering more than 70 virtual servers, the batteries of the UPS that served as the datacenter's backup power supply exploded. Because the main power supply ran through the UPS, the failure took out power to the entire datacenter and knocked about 40 servers offline. Luckily we had just finished the implementation, but we hadn't yet completed the exercise training, so we had to run it as a live test. Thanks to the brilliant engineers I work with, and the fact that we had implemented these solutions a few hundred times before, we were able to bring up all the business-critical systems at a disaster recovery facility within fifteen minutes. Hazmat was called in to clean up the contents of the exploded batteries, and we were able to restore all operations to the original datacenter about five days later. It shows that even though you have a backup plan, you don't necessarily have a backup.

More Stories By Mike Talon

Mike Talon is a technology professional living and working in New York City. Having worked for companies ranging from individual consulting firms to Fortune 500 organizations, he has had the opportunity to design systems from all over the technological spectrum, from day-to-day systems solutions engineering to advanced Disaster Recovery and Business Continuity Planning work. Currently a Subject Matter Expert in Microsoft Exchange technologies for Double-Take Software, Mike is constantly learning to live life well in these very interesting times.
