Should Cloud Be Part of Your Backup and Disaster Recovery Plan?

How Cloud enables a fast, agile and cost-effective recovery process

Recent times have witnessed a major shift in the data storage paradigm for backup and recovery. As the legendary Steve Jobs reportedly said, "The truth lies in the Cloud" - the introduction of the Cloud has enabled a fast and agile data recovery process that can be more efficient, flexible and cost-effective than the standard practice of restoring data or systems from physical drives or tapes.

Cloud backup is a newer approach to data storage and backup that allows users to store a copy of their data on an offsite server, accessible via the network. The network that hosts the server may be private or public, and is often managed by a third-party service provider. Providing cloud-based data recovery services is therefore a flourishing market, in which the service provider charges users for server access, storage space, bandwidth and so on.
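To make the model concrete, here is a minimal sketch of pushing a backup archive to an S3-compatible object store using Python and boto3. The bucket name, key layout and credential setup are placeholder assumptions, not any particular provider's recommended configuration; a real deployment would add encryption, retries and lifecycle policies.

```python
# Minimal sketch: copy a local backup archive to an S3-compatible object
# store. Bucket name and key prefix are hypothetical placeholders.
import boto3
from datetime import datetime, timezone

def upload_backup(archive_path: str, bucket: str = "example-backup-bucket") -> str:
    """Upload one backup archive; returns the object key used."""
    s3 = boto3.client("s3")  # credentials come from the environment or an IAM role
    key = f"backups/{datetime.now(timezone.utc):%Y-%m-%dT%H%M%SZ}-{archive_path.rsplit('/', 1)[-1]}"
    s3.upload_file(archive_path, bucket, key)  # boto3 handles multipart upload for large files
    return key

if __name__ == "__main__":
    print(upload_backup("/var/backups/nightly.tar.gz"))
```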

Online backup systems are typically schedule-based, though continuous backup is also possible. Depending on the requirements of the system and application, the backup is updated at preset intervals, with the aim of using time and bandwidth efficiently. The popularity of the Cloud backup (or managed backup service) business lies in the convenience it offers: costs drop because physical resources such as hard disks are eliminated, with the added benefit that backups execute automatically.
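The scheduling piece is simple in principle. The sketch below runs a backup job at a fixed interval using only the Python standard library; run_backup() is a stand-in for whatever actually snapshots and uploads the data, and in production you would more likely use cron or a systemd timer than a long-running loop.

```python
# Minimal sketch of schedule-based backup: invoke a backup job at a
# preset interval. run_backup() is a hypothetical placeholder.
import time

BACKUP_INTERVAL_SECONDS = 6 * 60 * 60  # every six hours; tune to your recovery point objective

def run_backup() -> None:
    print("backing up...")  # placeholder: snapshot, archive, upload

def main() -> None:
    while True:
        started = time.monotonic()
        run_backup()
        # Sleep only for the remainder of the interval so jobs stay on schedule
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, BACKUP_INTERVAL_SECONDS - elapsed))

if __name__ == "__main__":
    main()
```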

Cloud-based disaster recovery is a highly viable and useful approach to ensuring business continuity. Using a completely virtualized environment and techniques such as data replication, LAN Doctors, Inc., a New Jersey-based managed backup service, was able to provide 100% uptime when one of its largest clients - a major processor of insurance claims - was hit by a hurricane, lost internet connectivity and was unable to process claims.

This kind of near-real-time "off-site" disaster recovery capability is now available to organizations of all sizes - not just those large enough to afford redundant data centers with high-speed network connections.

The use of Cloud for backup and disaster recovery will grow - demand for cloud storage is driven mainly by the exponential growth in organizations' critical data over time. Increasingly, organizations are replicating not only data but entire virtual systems to the Cloud. Adding to the Cloud's advantages are its lower price, the flexibility of repeated testing, and an elastic structure that lets you scale up or down as your requirements change. The ability to restore from a physical machine to a Cloud-based virtual machine adds to the attraction.

Why Cloud Is Better
The most common traditional backup mechanism is to store the data backup offsite. For small business owners, sometimes that means putting a tape or disk drive in the computer bag and bringing it home. For others, tapes or disks are sent overnight to a secure location. The most common problems with this approach are that either the data is not actually stored offsite (due to human or procedural error), or the data and systems are not backed up frequently enough. Furthermore, when a recovery is necessary, the media typically need to be transported back on-site. And if the data backup is stored locally, there is the chance of a regional problem impacting the ability to recover. In contrast, the Cloud offers a regionally immune mechanism for online data recovery: it creates a backup online at a remote site and enables prompt data recovery when required. Backups can be done as often as required.
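That "prompt data recovery" is just the reverse trip: pull the newest backup object back down from the remote store. A minimal sketch, using the same hypothetical bucket as the upload example above (note that list_objects_v2 returns at most 1,000 keys per call, which is plenty for a sketch):

```python
# Minimal restore sketch: fetch the most recently written backup object
# from the (hypothetical) bucket and write it to local disk.
import boto3

def restore_latest(bucket: str = "example-backup-bucket",
                   prefix: str = "backups/",
                   dest: str = "/var/restore/latest.tar.gz") -> str:
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    # Assumes at least one backup exists; pick the newest by timestamp
    newest = max(resp["Contents"], key=lambda obj: obj["LastModified"])
    s3.download_file(bucket, newest["Key"], dest)
    return newest["Key"]
```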

Other Cloud-based recovery services include fail-over servers: in the event of a server failure, a virtualized server and all its data can be spun up while the failed server is recovered.
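The fail-over pattern boils down to a health check plus a trigger. Below is a hypothetical monitor that probes the primary server's TCP port and, after several consecutive failures, promotes a standby virtual server; promote_standby() stands in for a provider-specific API call, and the hostname and thresholds are illustrative assumptions.

```python
# Hypothetical fail-over monitor: probe the primary, promote a standby
# after repeated failures. Hostname and thresholds are placeholders.
import socket
import time

PRIMARY = ("primary.example.com", 443)
FAILURES_BEFORE_FAILOVER = 3

def primary_is_up(timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection(PRIMARY, timeout=timeout):
            return True
    except OSError:
        return False

def promote_standby() -> None:
    print("spinning up standby VM...")  # placeholder for a cloud provider API call

def monitor() -> None:
    failures = 0
    while failures < FAILURES_BEFORE_FAILOVER:
        failures = 0 if primary_is_up() else failures + 1
        time.sleep(10)  # probe interval
    promote_standby()
```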

The Cloud provides significant advantages to many organizations - it enables a full data recovery mechanism through backups, fail-over servers and a remotely located storage site kept safe from local or regional disruptions. Meanwhile, organizations avoid the cost and effort of maintaining all that backup infrastructure themselves.

Large corporations - those that can afford redundant and remote compute capacity, and that typically already have sophisticated recovery mechanisms in place - can benefit by leveraging the Cloud where appropriate, and hence achieve even better results than before. Of course, for a large organization to realize the full benefits of the Cloud in this area, it needs to consider the architecture of its systems and applications and the kind of technology deployed.

Or Is It?
The biggest concern for people and enterprises when it comes to the Cloud is the security of their data and the issue of privacy. Data from IDC show that 93% of US companies are backing up at least some data to the Cloud; that number falls to about 63% in Western Europe and further still (57%) in the Asia-Pacific region. The biggest reason European and Asia-Pacific organizations give for not leveraging the Cloud for backup? Security.

There can also be latency and bandwidth issues in streaming large amounts of data to the Cloud - versus, for example, using a data storage appliance with built-in deduplication and data compression.
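Deduplication plus compression is why such appliances move so much less data: identical chunks are stored (and shipped) only once, and what remains is compressed. A toy illustration of the idea in Python, using fixed-size chunks for simplicity where real appliances often use variable-size, content-defined chunking:

```python
# Illustrative dedup-plus-compression sketch: split a file into chunks,
# keep each unique chunk once (keyed by SHA-256) and compress it, then
# report the savings versus the raw size.
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # fixed-size chunks, chosen arbitrarily for the sketch

def dedupe_and_compress(path: str) -> None:
    store: dict[str, bytes] = {}  # digest -> compressed chunk
    raw = stored = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            raw += len(chunk)
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:  # only previously unseen chunks are kept/transferred
                store[digest] = zlib.compress(chunk)
                stored += len(store[digest])
    print(f"raw {raw} bytes -> stored {stored} bytes "
          f"({raw / max(stored, 1):.1f}x reduction)")
```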

Cloud or Local?  The Verdict
The answer is clearly "it depends". Backup should never be treated as a one-size-fits-all affair. Your backup and recovery mechanisms need to be matched to your particular technological and business needs. There's simply no substitute for knowing your own requirements, knowing the capabilities of the various technologies, and carrying out a thorough evaluation. Don't be surprised if you end up with both Cloud and local - some systems simply require local backup, whether for business, regulatory or technological reasons.

With the average size of an organization's data growing at 40% a year - enough to roughly double every two years - one thing is certain: there is a lot of backing up that needs to get done, both locally and in the Cloud.

More Stories By Hollis Tibbetts

Hollis Tibbetts, or @SoftwareHollis as his 50,000+ followers know him on Twitter, is listed on various “top 100 expert lists” for a variety of topics, ranging from Cloud to Technology Marketing. By day, Hollis is Evangelist & Software Technology Director at Dell Software. By night and on weekends he is a commentator, speaker and all-round communicator about Software, Data and Cloud in their myriad aspects. You can also reach Hollis on LinkedIn at linkedin.com/in/SoftwareHollis. His latest online venture is OnlineBackupNews, a free reference site to help organizations protect their data, applications and systems from threats. Every year, IT downtime costs $26.5 billion in lost revenue. Even with such high costs, 56% of enterprises in North America and 30% in Europe don’t have a good disaster recovery plan. Online Backup News aims to make sure you have the news and tips needed to keep your IT costs down and your information safe, by providing best practices, technology insights, strategies, real-world examples and various tips and techniques from a variety of industry experts.

Hollis is a regularly featured blogger at ebizQ, a venue focused on enterprise technologies, with over 100,000 subscribers. He is also an author on Social Media Today "The World's Best Thinkers on Social Media", and maintains a blog focused on protecting data: Online Backup News.
He tweets actively as @SoftwareHollis

Additional information is available at HollisTibbetts.com

All opinions expressed in the author's articles are his own personal opinions and not those of his employer.
