Should Cloud Be Part of Your Backup and Disaster Recovery Plan?

How Cloud enables a fast, agile and cost-effective recovery process

Recent times have witnessed a major paradigm shift in data storage for backup and recovery. As Steve Jobs reportedly said, "The truth lies in the Cloud" - the introduction of the Cloud has enabled a fast, agile data recovery process that can be more efficient, flexible and cost-effective than restoring data or systems from physical drives or tapes, as has long been standard practice.

Cloud backup is a newer approach to data storage and backup that allows users to store a copy of their data on an offsite server, accessible via a network. The network that hosts the server may be private or public, and is often managed by a third-party service provider. Providing cloud-based data recovery services has accordingly become a flourishing market, with providers charging users for server access, storage space, bandwidth and so on.

Online backup systems are typically schedule-based, although continual backup is possible. Depending on the requirements of the system and application, the backup is updated at preset intervals, with the aim of using time and bandwidth efficiently. The popularity of the Cloud backup (or managed backup service) business lies in the convenience it offers: costs fall because physical resources such as hard disks are eliminated, with the added benefit that backups execute automatically.
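The scheduling logic behind such a system can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `backup_due` decides whether the preset interval has elapsed, and `run_backup` is a hypothetical placeholder for the actual transfer to the provider's offsite server.

```python
from datetime import datetime, timedelta

BACKUP_INTERVAL = timedelta(hours=6)  # preset interval between backup runs

def backup_due(last_run: datetime, now: datetime,
               interval: timedelta = BACKUP_INTERVAL) -> bool:
    """True once the preset interval has elapsed since the last backup."""
    return now - last_run >= interval

def run_backup(source_dir: str) -> str:
    # Placeholder: a real service would transfer changed data to the
    # offsite server here, then record the completion time.
    return f"backed up {source_dir} at {datetime.now().isoformat()}"
```

A longer interval conserves bandwidth at the cost of a wider recovery-point window; a continual-backup system effectively shrinks the interval toward zero.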

Cloud-based disaster recovery is a highly viable approach to ensuring business continuity. Using a completely virtualized environment and techniques such as data replication, providers such as LAN Doctors, Inc., a New Jersey-based managed backup service, delivered 100% uptime when one of their largest clients - a major processor of insurance claims - was hit by a hurricane, lost Internet connectivity, and was unable to process claims.

This kind of near-real-time "off-site" disaster recovery capability is now available to organizations of all sizes - not just those large enough to afford redundant data centers with high-speed network connections.

The use of Cloud for backup and disaster recovery will grow; the rising demand for cloud storage is driven mainly by the exponential growth in organizations' critical data over time. Increasingly, organizations are replicating not only data but entire virtual systems to the Cloud. Adding to the Cloud's advantages are its lower price, the flexibility of repeated testing, and an elastic structure that lets you scale up or down as your requirements change. The ability to restore from a physical machine to a Cloud-based virtual machine adds to the attraction.

Why Cloud Is Better
The most common traditional backup mechanism is to store the data backup offsite. For small business owners, sometimes that means putting a tape or disk drive in the computer bag and bringing it home; for others, tapes and disks are sent overnight to a secure location. The most common problems with this approach are that the data is not actually stored offsite (due to human or procedural error), or that data and systems are not backed up frequently enough. Furthermore, when a recovery is necessary, the media typically need to be transported back on-site. And if the data backup is stored locally, a regional problem can impair the ability to recover. In contrast, the Cloud offers a regionally immune mechanism for online data recovery: it creates a backup online at a remote site and enables prompt recovery when required. Backups can be done as often as required.
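The copy-offsite, verify, restore-on-demand cycle can be illustrated with a toy sketch using only the standard library. This is purely illustrative: a real service would ship the copy over the network to a remote region, whereas here a local target directory merely stands in for the offsite store.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Fingerprint a file so the offsite copy can be verified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_offsite(source: Path, offsite_dir: Path) -> Path:
    """Copy a file to the 'offsite' store and verify its integrity."""
    offsite_dir.mkdir(parents=True, exist_ok=True)
    copy = offsite_dir / source.name
    shutil.copy2(source, copy)
    if sha256(copy) != sha256(source):
        raise IOError(f"offsite copy of {source} failed verification")
    return copy

def restore(offsite_copy: Path, restore_dir: Path) -> Path:
    """Prompt recovery: pull the verified copy back when needed."""
    restore_dir.mkdir(parents=True, exist_ok=True)
    restored = restore_dir / offsite_copy.name
    shutil.copy2(offsite_copy, restored)
    return restored
```

The verification step matters: it catches the "human or procedural error" failure mode at backup time rather than at recovery time, when it is too late.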

Other Cloud-based recovery services include fail-over servers. In this scenario, in the event of server failure, a virtualized server and all the data can be spun up - while the failed server is recovered.

The Cloud provides significant advantages to many organizations: it enables a full data recovery mechanism using backups, fail-over servers and a remotely located storage site kept safe from local or regional disruptions. Meanwhile, organizations avoid the cost and effort of maintaining all that backup infrastructure.

Large corporations - those that can afford redundant and remote compute capacity, and that typically already run sophisticated recovery mechanisms - can also benefit by leveraging the Cloud where appropriate, and hence achieve even better results than before. Of course, for a large organization to realize the Cloud's full benefits in this area, it must consider the architecture of its systems, its applications, and the kind of technology deployed.

Or Is It?
The biggest concern for people and enterprises when it comes to the Cloud is the security and privacy of their data. Data from IDC show that 93% of US companies back up at least some data to the Cloud, whereas that number falls to about 63% in Western Europe and further still (57%) in the Asia-Pacific region. The biggest reason European and Asia-Pacific organizations give for not leveraging the Cloud for backup? Security.

There can also be latency issues in streaming large amounts of data to the Cloud effectively - versus, for example, using a data storage appliance with built-in deduplication and data compression.
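The two appliance techniques mentioned above can be approximated in a few lines: content-hash deduplication skips chunks the store has already seen, and compression shrinks what remains before it crosses the wire. This is a deliberately simplified sketch with fixed-size chunks and an in-memory dictionary standing in for the remote store; real appliances use variable-size chunking and far more sophisticated pipelines.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity

def dedup_compress(data: bytes, store: dict) -> list:
    """Split data into chunks; compress and store only unseen chunks.

    `store` maps chunk hash -> compressed bytes, standing in for the
    remote object store. Returns the ordered list of chunk hashes
    (the recipe needed to reassemble the original data).
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                   # deduplication: skip known chunks
            store[digest] = zlib.compress(chunk)  # compression before "upload"
        recipe.append(digest)
    return recipe

def reassemble(recipe: list, store: dict) -> bytes:
    """Restore the original bytes from the chunk recipe."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)
```

On repetitive data (databases, VM images), duplicate chunks are stored and transmitted only once, which is exactly how such appliances reduce the bandwidth - and hence latency - burden of streaming to the Cloud.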

Cloud or Local?  The Verdict
The answer is clearly "it depends". Backup should never be treated as a one-size-fits-all proposition. Your backup and recovery mechanisms need to match your particular technological and business needs. There is simply no substitute for knowing your own requirements, understanding the capabilities of the various technologies, and carrying out a thorough evaluation. Don't be surprised if you end up with both Cloud and local - some systems simply require local backup, whether for business, regulatory or technological reasons.

With the average size of an organization's data growing at 40% a year, one thing is certain - there is a lot of backing up that needs to get done, both locally and on the Cloud.
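A quick back-of-the-envelope check shows how stark that growth rate is: compounding 40% a year roughly doubles the data set every two years.

```python
import math

annual_growth = 0.40

# Years for the data set to double at 40% compound annual growth
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_years, 1))  # ≈ 2.1 years

# Growth factor after five years at the same rate
print(round((1 + annual_growth) ** 5, 2))  # 1.4^5 ≈ 5.38x the original size
```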

More Stories By Hollis Tibbetts

Hollis Tibbetts, or @SoftwareHollis as his 50,000+ followers know him on Twitter, is listed on various “top 100 expert lists” for topics ranging from Cloud to Technology Marketing. By day he is Evangelist & Software Technology Director at Dell Software; by night and on weekends he is a commentator, speaker and all-round communicator about Software, Data and Cloud in their myriad aspects. You can also reach Hollis on LinkedIn – linkedin.com/in/SoftwareHollis. His latest online venture is OnlineBackupNews, a free reference site to help organizations protect their data, applications and systems from threats. Every year IT downtime costs $26.5 billion in lost revenue. Even with such high costs, 56% of enterprises in North America and 30% in Europe don’t have a good disaster recovery plan. Online Backup News aims to keep your IT costs down and your information safe by providing best practices, technology insights, strategies, real-world examples and tips from a variety of industry experts.

Hollis is a regularly featured blogger at ebizQ, a venue focused on enterprise technologies, with over 100,000 subscribers. He is also an author on Social Media Today "The World's Best Thinkers on Social Media", and maintains a blog focused on protecting data: Online Backup News.
He tweets actively as @SoftwareHollis

Additional information is available at HollisTibbetts.com

All opinions expressed in the author's articles are his own personal opinions vs. those of his employer.
