Bursting the Cloudbursting Bubble

Cloudbursting is mostly marketing vaporware, and even as the Cloud marketplace matures, it may be of only limited applicability

You're the widget product manager for Widgetco, which sells about 500 widgets per day on its Web site, some days a dozen more, some days a dozen less. Everything is fine until you pick up a copy of USA Today. Right there on the front page, in brilliant color-on-newsprint, is Justin Bieber. And what is the Biebster holding in his hand? One of your widgets.

Dream scenario? No, you think, more like nightmare scenario. Widgetco hosts its Web site in its own data center, as it has done since 1997. It can handle maybe a thousand or two transactions per day at most. But sure enough, Bieber Fever crashes your site, on the one day you could make your entire quarterly sales quota, if only you could fulfill the demand.

You should have listened to your CIO, who recommended Cloudbursting as a way of dealing with unexpected spikes in demand. Cloudbursting means maintaining on-premise or Private Cloud capacity for normal requirements while a Public Cloud automatically handles excess demand. Cloudbursting is supposed to be an economical way of leveraging the Public Cloud, because you only pay the Cloud provider when you require excess capacity. On normal days, however, your existing, already-paid-for infrastructure handles the load quite well.

A straightforward value proposition, right? Any on-premise or Private Cloud-based app that is subject to spikes in demand that existing infrastructure can't handle should be able to benefit, so the argument goes. Unfortunately, however, Cloudbursting has a number of problems that make it challenging even in the most suitable scenarios; furthermore, such scenarios are rarer than you might think. Bottom line: Cloudbursting is mostly marketing vaporware, and even as the Cloud marketplace matures, it may be of only limited applicability.

A Closer Look at Cloudbursting
Cloudbursting depends upon workload migration: when your on-premise system bogs down, you must move your entire application (data, business logic, and user interface) to the Cloud, over an Internet connection. Even the most basic workloads might take hours to migrate, and in the meantime, your customers are left out in the cold.
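
To get a feel for that migration window, here is a back-of-the-envelope calculation in Python; the 500 GB workload size and 100 Mbps uplink are purely illustrative assumptions, not figures from this scenario.

```python
# Back-of-the-envelope transfer time for a workload migration.
# The workload size and uplink speed below are illustrative assumptions.
workload_gb = 500          # data + VM images to copy, in gigabytes (assumed)
uplink_mbps = 100          # sustained Internet uplink to the Cloud (assumed)

bits_to_move = workload_gb * 8 * 1000 ** 3          # decimal GB -> bits
seconds = bits_to_move / (uplink_mbps * 1000 ** 2)  # Mbps -> bits per second
print(f"~{seconds / 3600:.1f} hours before the Cloud copy can serve traffic")
# ~11.1 hours, and that ignores database export, VM provisioning, and testing.
```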

The obvious way to mitigate the workload migration problem is to set up a copy of your application environment in the Cloud ahead of time. That way when the Bieber effect kicks in, all you need to do is fire up the Cloud copy and reconfigure your DNS to direct traffic to it, right?
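
If the standby environment is already provisioned, the cutover itself can be reduced to a DNS change. Here is a minimal sketch using AWS Route 53 via boto3 as one example of such a switch; the hosted zone ID, record name, and Cloud hostname are placeholders, and a real cutover would also have to contend with DNS TTLs and client-side caching.

```python
import boto3

# Point www.widgetco.example at the pre-staged Cloud copy of the site.
# The hosted zone ID and hostnames are placeholders for illustration.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000PLACEHOLDER",
    ChangeBatch={
        "Comment": "Cloudburst: send traffic to the Cloud copy",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.widgetco.example.",
                "Type": "CNAME",
                "TTL": 60,  # keep the TTL low so the switch propagates quickly
                "ResourceRecords": [
                    {"Value": "widgetco-burst.cloudprovider.example."}
                ],
            },
        }],
    },
)
```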

Not so fast. First, you'll need to synchronize your data. There are tools for that, true, but it still takes time, and you now have the challenge of maintaining the true version of the data. For example, let's say you have 5,000 widgets in inventory (as reported by your ERP application) when your site goes down. You can't migrate the whole ERP to the Cloud, so you copy over your master inventory table. Now you're fulfilling orders in the Cloud as well as on-premise, since your on-premise site has recovered now that you've lightened its load. The result? Each site sells 3,000 widgets before the next data synchronization cycle, and once again you're in trouble: between them, the two sites have sold 6,000 widgets against 5,000 in stock.
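
The inventory problem is easy to reproduce. The toy sketch below (hypothetical quantities, no real ERP involved) shows how two sites, each selling against its own copy of the last synchronized inventory table, can jointly oversell the stock.

```python
# Two sites each work from their own copy of the inventory table,
# taken at the last synchronization. Neither sees the other's sales
# until the next sync, so together they can oversell the stock.
inventory_at_last_sync = 5000

on_premise_copy = inventory_at_last_sync
cloud_copy = inventory_at_last_sync

orders_on_premise = 3000   # illustrative demand at each site
orders_in_cloud = 3000

on_premise_copy -= orders_on_premise   # 2000 left, looks fine locally
cloud_copy -= orders_in_cloud          # 2000 left, looks fine locally

total_sold = orders_on_premise + orders_in_cloud
print(f"Sold {total_sold} widgets against {inventory_at_last_sync} in stock")
# Sold 6000 widgets against 5000 in stock: 1000 orders you can't fulfill.
```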

OK, so that won't work either. Instead, you integrate the Cloud app with your ERP system, so that you can handle orders in real time, instead of waiting to synchronize your data. In other words, you set up a Hybrid Cloud. Yes, you can do that (after all, many organizations are moving to Hybrid Cloud models), but then you ask yourself: does it really make sense to put in all the time and effort to set up a Hybrid Cloud solely for handling Cloudbursting? If you're going to all that trouble, why not keep the Cloud-based app live all the time?
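
In that Hybrid configuration, the Cloud storefront reserves inventory against the on-premise ERP at order time instead of working from a periodically synced copy. A rough sketch of the idea follows; the reservation endpoint, payload, and response shape are hypothetical, not any particular ERP vendor's API.

```python
import requests

# Hypothetical on-premise ERP endpoint exposed (securely) to the Cloud
# storefront; the URL and payload shape are illustrative assumptions.
ERP_RESERVE_URL = "https://erp.widgetco.internal/api/inventory/reserve"

def place_order(sku: str, quantity: int) -> bool:
    """Reserve stock in the ERP before accepting the order in the Cloud app."""
    try:
        response = requests.post(
            ERP_RESERVE_URL,
            json={"sku": sku, "quantity": quantity},
            timeout=5,  # the storefront must degrade gracefully if the ERP is slow
        )
    except requests.RequestException:
        return False  # ERP unreachable: reject or queue the order
    return response.status_code == 200 and response.json().get("reserved", False)

# Usage: accept the order only if the reservation succeeded.
if place_order("WIDGET-STD", 2):
    print("Order accepted")
else:
    print("Sorry, we're out of widgets")
```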

There's the rub with Cloudbursting: you might think you're saving money by only using a Public Cloud for handling peak demand, but in reality, you get better Total Cost of Ownership by using the Cloud all the time, either via a Hybrid model, or by migrating your entire app to the Cloud. The Hybrid model provides additional benefits as well, namely a measure of failover, increasing your overall availability. It's always better to have two (or more) geographically distributed instances of an app serving your customers, in case something happens to one of them. And if you want to offer seamless availability, you should have all the instances running at once, with a load balancer distributing the traffic. Chances are, you can get load balancing from the Cloud provider as well.
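
Conceptually, "all instances running at once behind a load balancer" boils down to health-checking each instance and spreading traffic across whichever ones respond. The sketch below illustrates only that idea; in practice you would use the Cloud provider's load balancer rather than roll your own, and the hostnames and /healthz endpoint are placeholders.

```python
import itertools
import requests

# Placeholder hostnames for the on-premise and Cloud instances of the app.
INSTANCES = [
    "https://onprem.widgetco.example",
    "https://cloud.widgetco.example",
]

def healthy(base_url: str) -> bool:
    """Treat an instance as healthy if its health endpoint answers quickly."""
    try:
        return requests.get(f"{base_url}/healthz", timeout=2).status_code == 200
    except requests.RequestException:
        return False

_round_robin = itertools.cycle(INSTANCES)

def pick_backend() -> str:
    """Round-robin over instances, skipping any that fail their health check."""
    for _ in range(len(INSTANCES)):
        candidate = next(_round_robin)
        if healthy(candidate):
            return candidate
    raise RuntimeError("No healthy backend available")
```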

So you've convinced your CIO that Cloudbursting might not be the best alternative. Instead, you're discussing moving your entire site to the Cloud when your CEO walks into the room. Her concern is for compliance and security. You're taking customer credit card numbers, so you must be PCI compliant. And everybody knows Public Clouds are less secure than Private ones, right?

The problem here is that if these concerns are valid then they rule out Cloudbursting as well. Being PCI compliant except during peak demand is just another way of saying you're not PCI compliant. On the other hand, if your Public Cloud provider offers PCI compliance, then it would apply equally well to Cloudbursting as to a Hybrid approach or migrating to the Public Cloud. The same argument applies to security concerns.

There are a few more pitfalls to Cloudbursting worth mentioning. If you're thinking of putting your app in a Private Cloud and using a Public Cloud for Cloudbursting, then what you're really saying is that you didn't plan your Private Cloud properly in the first place. After all, what's the point in setting up a Private Cloud unless it can provide sufficient elasticity to meet your needs? You might as well just stick to a traditional on-premise hosted environment.

You also need to work through the details of the Cloudbursting event itself. Does your on-premise app need to fail for Cloudbursting to take place, or do you have a way of bursting as your existing app nears a critical threshold, but before it actually goes down? The latter requires careful management, and even with all the appropriate management tools in place, you may still end up with a failure-based scenario. The question then is whether the on-premise failure will impede your ability to successfully Cloudburst. For example, if the Bieber effect causes your database server to crash, requiring a reboot, you may not be able to synchronize your data in order to begin Cloudbursting. In other words, you've designed your Cloudbursting to fail just when you need it most.
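
Bursting before the on-premise app actually fails implies a monitoring loop that watches a leading indicator (request latency, queue depth, CPU) and triggers the burst at a threshold. A simplified sketch follows; get_p95_latency_ms() and trigger_cloudburst() are stand-ins for whatever monitoring and provisioning tooling you actually use, and the threshold is illustrative.

```python
import time

LATENCY_THRESHOLD_MS = 800    # illustrative: burst before users see failures
CHECK_INTERVAL_SECONDS = 30

def get_p95_latency_ms() -> float:
    """Placeholder: in practice this would query your monitoring system."""
    raise NotImplementedError

def trigger_cloudburst() -> None:
    """Placeholder: start the Cloud copy, sync data, then shift traffic via DNS/LB."""
    raise NotImplementedError

def watch() -> None:
    """Poll a leading indicator and burst once it crosses the threshold."""
    while True:
        if get_p95_latency_ms() > LATENCY_THRESHOLD_MS:
            trigger_cloudburst()
            break   # hand off to the Cloud copy; further scaling is its problem
        time.sleep(CHECK_INTERVAL_SECONDS)
```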

The ZapThink Take
Let's say you've made it to this point in this ZapFlash and you're still not convinced. You remain confident that Cloudbursting is practical in your situation. OK, then: what kinds of situations might be appropriate for Cloudbursting?

Our Widgetco example required some legacy integration, which obviously complicates Cloudbursting enormously. Cloudbursting would clearly be more suitable for standalone applications that don't require such integration. On the other hand, you would only need Cloudbursting for an app that is susceptible to spikes in demand, and virtually all such apps have public-facing Web interfaces. And third, Cloudbursting would clearly not be appropriate for any app that should obviously be entirely Cloud-based from the get-go, namely a SaaS or PaaS app.

We've essentially crossed off every kind of application from the list. Any sort of app that processes customer transactions is out of consideration, because such apps either require legacy integration or should run as SaaS apps in order to process transactions in the Cloud. All that remain are free, public-facing Web applications that have unpredictable traffic patterns and yet have an on-premise component that you don't want to move to the Cloud. You have one minute to think of one. Ready? Go!

Image credit: oskaree

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
