
Should Cloud Be Part of Your Backup and Disaster Recovery Plan?

How Cloud enables a fast, agile and cost-effective recovery process

Recent times have witnessed a major paradigm shift in data storage for backup and recovery. As the legendary Steve Jobs said, "The truth lies in the Cloud" - the introduction of the Cloud has enabled a fast, agile data recovery process that can be more efficient, flexible and cost-effective than restoring data or systems from physical drives or tapes, as has been the standard practice.

Cloud backup is a new approach to data storage and backup that allows users to store a copy of their data on an offsite server, accessible via a network. The network that hosts the server may be private or public, and is often managed by a third-party service provider. Providing cloud-based data recovery services has therefore become a flourishing business, with the service provider charging users for server access, storage space, bandwidth and so on.
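
To make this concrete, here is a minimal Python sketch of what the client side of such a service can look like: a local archive pushed to an S3-compatible object store using the boto3 library. The bucket name, region and file paths are hypothetical placeholders, and a real managed backup agent would add encryption, retries and integrity verification.

    # Illustrative sketch only: copy a local backup archive to an
    # S3-compatible object store. Bucket, region and paths are placeholders.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    def upload_backup(archive_path: str, bucket: str, key: str) -> None:
        """Send one backup archive to offsite object storage."""
        s3.upload_file(archive_path, bucket, key)

    upload_backup("/var/backups/db-20150601.tar.gz",
                  "example-backup-bucket",
                  "daily/db-20150601.tar.gz")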

Online backup systems are typically schedule-based, though continuous backup is also possible. Depending on the requirements of the system and application, the backup is updated at preset intervals, with the aim of using time and bandwidth efficiently. The popularity of the Cloud backup (or managed backup service) business lies in the convenience it offers: costs drop because physical resources such as hard disks are eliminated from the scenario, with the added benefit that backups execute automatically.
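
The scheduling idea can be sketched in a few lines. The loop below runs a backup cycle at a preset interval; it assumes the hypothetical upload_backup() helper from the previous sketch, while shutil.make_archive() is standard-library Python. A production system would use cron or a proper scheduler rather than a sleep loop.

    # Minimal interval-based backup loop (a sketch, not a production scheduler).
    import shutil
    import time
    from datetime import datetime

    BACKUP_INTERVAL_SECONDS = 6 * 60 * 60  # preset interval: every six hours

    def run_backup_cycle() -> None:
        stamp = datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        # Archive the data directory; shutil appends the .tar.gz suffix.
        archive = shutil.make_archive(f"/var/backups/site-{stamp}",
                                      "gztar", "/srv/data")
        # upload_backup() is the hypothetical helper sketched earlier.
        upload_backup(archive, "example-backup-bucket",
                      f"scheduled/{stamp}.tar.gz")

    while True:
        run_backup_cycle()
        time.sleep(BACKUP_INTERVAL_SECONDS)  # wait for the next scheduled run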

Cloud-based disaster recovery is a highly viable and useful approach for ensuring business continuity. Using a completely virtualized environment and techniques such as data replication, LAN Doctors, Inc., a New Jersey-based managed backup service, was able to provide 100% uptime when one of its largest clients - a major processor of insurance claims - was hit by a hurricane, lost internet connectivity and was unable to process claims.

This kind of near-real-time "off-site" disaster recovery capability is now available to organizations of all sizes - not just those large enough to afford redundant data centers with high-speed network connections.

The use of the Cloud for backup and disaster recovery will grow; the rising demand for cloud storage is driven mainly by the exponential increase in the volume of critical data that organizations accumulate over time. Increasingly, organizations are replicating not only data but entire virtual systems to the Cloud. Adding to the Cloud's advantages are its lower price, the flexibility of repeated testing, and its elastic structure, which gives you full opportunity to scale up or down as your requirements change. The ability to restore from a physical machine to a Cloud-based virtual machine adds to the attraction.

Why Cloud Is Better
The most common traditional backup mechanism is to store the data backup offsite. For small business owners, that sometimes means putting a tape or disk drive in the computer bag and bringing it home. For others, tapes or disks are sent overnight to a secure location. The most common problems with this approach are that either the data is not actually being stored offsite (due to human or procedural error), or the data and systems are not being backed up frequently enough. Furthermore, when a recovery is necessary, the media typically need to be transported back on-site. And if the data backup is stored locally, there is the chance of a regional problem impacting the ability to recover. By contrast, the Cloud offers a regionally immune mechanism for online data recovery by creating a backup at a remote site and enabling prompt data recovery when required. Backups can be done as often as required.
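
The recovery side is equally simple in principle. Assuming the same hypothetical bucket from the earlier sketches, this fragment finds the most recent archive under a given prefix and pulls it back down; prompt recovery is then just a matter of unpacking it.

    # Sketch of the recovery path: fetch the newest offsite archive.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    def restore_latest(bucket: str, prefix: str, dest_dir: str) -> str:
        """Download the most recently modified backup archive under a prefix."""
        objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)["Contents"]
        latest = max(objects, key=lambda obj: obj["LastModified"])
        local_path = f"{dest_dir}/{latest['Key'].rsplit('/', 1)[-1]}"
        s3.download_file(bucket, latest["Key"], local_path)
        return local_path

    print(restore_latest("example-backup-bucket", "daily/", "/tmp"))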

Other Cloud-based recovery services include fail-over servers. In this scenario, in the event of server failure, a virtualized server and all the data can be spun up - while the failed server is recovered.
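
As a rough illustration of the fail-over idea, the sketch below launches a replacement virtual server from a pre-built machine image when a health check on the primary fails. The AMI ID, instance type and health-check URL are placeholders, and a real fail-over setup would also handle DNS or load-balancer cutover.

    # Hypothetical fail-over sketch: boot a standby VM if the primary is down.
    import boto3
    import urllib.request

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def primary_is_healthy(url: str = "http://primary.example.com/health") -> bool:
        try:
            return urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:
            return False  # unreachable or HTTP error: treat as unhealthy

    def spin_up_standby(image_id: str = "ami-0123456789abcdef0") -> str:
        """Launch one replacement instance from a pre-built image."""
        result = ec2.run_instances(ImageId=image_id, InstanceType="t2.medium",
                                   MinCount=1, MaxCount=1)
        return result["Instances"][0]["InstanceId"]

    if not primary_is_healthy():
        print("Primary down; standby instance:", spin_up_standby())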

The Cloud provides significant advantages to many organizations - it enables a full data recovery mechanism by using backups, fail-over servers and a storage site remotely placed so as to keep it safe from the local or regional factors.  Meanwhile, the organizations avoid the cost and effort associated with maintaining all that backup infrastructure.

Large corporations - those that can afford redundant and remote compute capacity, and that typically already have sophisticated recovery mechanisms running - can also benefit by leveraging the Cloud where appropriate, and hence achieve even better results than before. Of course, for a large organization to realize the Cloud's benefits in this area to the full, it needs to consider the architecture of its systems and applications and the kind of technology it has deployed.

Or Is It?
The biggest concern for people and enterprises when it comes to the Cloud is the security and privacy of their data. Data from IDC show that 93 percent of US companies are backing up at least some data to the Cloud, whereas that number falls to about 63 percent in Western Europe and even further (57 percent) in the Asia-Pacific region. The biggest reason European and Asia-Pacific organizations give for not leveraging the Cloud for backup? Security.

There can also be latency issues in effectively streaming large amounts of data to the Cloud - versus (for example) using a data storage appliance with built-in deduplication and data compression.
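
The same techniques can be applied client-side before data ever leaves the building. The sketch below - illustrative only, with an arbitrary chunk size and an in-memory hash index - compresses data in blocks and skips blocks already uploaded, which is the essence of deduplication.

    # Sketch of client-side compression plus block-level deduplication.
    import gzip
    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks (arbitrary choice)
    seen_hashes = set()            # a real appliance persists this index

    def unique_compressed_chunks(path: str):
        """Yield (digest, gzipped bytes) for blocks not previously uploaded."""
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                if digest in seen_hashes:
                    continue  # duplicate block: nothing to transfer
                seen_hashes.add(digest)
                yield digest, gzip.compress(chunk)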

Cloud or Local?  The Verdict
The answer is clearly "it depends." Backup should never be treated as a one-size-fits-all proposition. Your backup and recovery mechanisms need to be matched to your particular technological and business needs. There's simply no substitute for knowing your own requirements, understanding the capabilities of the various technologies, and carrying out a thorough evaluation. Don't be surprised if you end up with both Cloud and local - some systems simply require local backup (whether for business, regulatory or technological reasons).

With the average organization's data volume growing at 40% a year, one thing is certain - there is a lot of backing up that needs to get done, both locally and in the Cloud.

More Stories By Hollis Tibbetts

Hollis Tibbetts, or @SoftwareHollis as his 50,000+ followers know him on Twitter, is listed on various “top 100 expert lists” for a variety of topics, ranging from Cloud to Technology Marketing. By day, Hollis is Evangelist & Software Technology Director at Dell Software; by night and on weekends he is a commentator, speaker and all-round communicator about Software, Data and Cloud in their myriad aspects. You can also reach Hollis on LinkedIn – linkedin.com/in/SoftwareHollis. His latest online venture is OnlineBackupNews, a free reference site to help organizations protect their data, applications and systems from threats. Every year, IT downtime costs $26.5 billion in lost revenue. Even with such high costs, 56% of enterprises in North America and 30% in Europe don’t have a good disaster recovery plan. Online Backup News aims to make sure you have the news and tips needed to keep your IT costs down and your information safe by providing best practices, technology insights, strategies, real-world examples and various tips and techniques from a variety of industry experts.

Hollis is a regularly featured blogger at ebizQ, a venue focused on enterprise technologies with over 100,000 subscribers. He is also an author on Social Media Today ("The World's Best Thinkers on Social Media"), and maintains a blog focused on protecting data, Online Backup News.
He tweets actively as @SoftwareHollis.

Additional information is available at HollisTibbetts.com

All opinions expressed in the author's articles are his own personal opinions and not those of his employer.
