Cloud Computing: What NOT to Do

It's important to weigh a Cloud solution versus the traditional approach through the lens of business value

A shift is finally occurring in the Cloud Computing discourse. Topics like "what is it" have been supplanted by more useful questions such as "should I do it?", "how should I do it?", and "when should I do it?" In appreciation of this new phase of Cloud Computing adoption, we'd like to offer thoughts on some of the common pitfalls of both public and private Clouds so that you might avoid them as you evaluate and implement cloud solutions.

1. Not understanding the business value
It's important to weigh a Cloud solution versus the traditional approach through the lens of business value. If this is an existing service, can the Cloud truly provide a better solution than what you're doing today? If it's new, which business requirements might drive you to select a Cloud deployment model? And in the event of a service outage or a security breach, what's going to happen: is your plane going to drop out of the sky, or just experience some minor turbulence?
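
To make that trade-off concrete, here is a minimal sketch of a weighted scorecard comparing a cloud option against the status quo. The criteria, weights, and scores are hypothetical placeholders, not a recommended model; substitute your own business requirements and risk tolerances.

    # Hypothetical weighted scorecard for comparing a Cloud deployment
    # against the traditional approach. All criteria, weights, and scores
    # are illustrative only.

    CRITERIA = {               # weight: relative business importance (sums to 1.0)
        "time_to_market": 0.30,
        "run_cost":       0.25,
        "outage_impact":  0.25,   # how well the option limits outage impact
        "breach_impact":  0.20,   # how well the option limits breach impact
    }

    def score(option):
        """Weighted score for one option; inputs are 0-10 per criterion."""
        return sum(CRITERIA[c] * option[c] for c in CRITERIA)

    traditional  = {"time_to_market": 4, "run_cost": 5, "outage_impact": 8, "breach_impact": 8}
    public_cloud = {"time_to_market": 9, "run_cost": 7, "outage_impact": 6, "breach_impact": 5}

    print("traditional :", score(traditional))
    print("public cloud:", score(public_cloud))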

2. Assuming server virtualization is enough
Running applications on virtualized servers improves utilization and application portability, but server virtualization is just one component of a truly dynamic IT environment, one that can flex with changing application loads and business requirements. Holistic virtualization requires virtualization at all layers: network, server, storage, memory, data, middleware, and application. A narrow deployment of virtualization capabilities will reduce both the economic and performance advantages of a Cloud deployment, as well as limit your flexibility when dealing with external providers (think vendor lock-in).
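
As a rough illustration, even a simple inventory like the one below (the layer statuses are hypothetical) can expose how narrow a virtualization deployment really is and where provider lock-in remains.

    # Hypothetical inventory of virtualization coverage by layer.
    # "portable" marks layers abstracted enough to move between providers.

    layers = {
        "server":      {"virtualized": True,  "portable": True},
        "network":     {"virtualized": False, "portable": False},
        "storage":     {"virtualized": True,  "portable": False},  # tied to one vendor
        "memory":      {"virtualized": False, "portable": False},
        "data":        {"virtualized": False, "portable": False},
        "middleware":  {"virtualized": True,  "portable": True},
        "application": {"virtualized": False, "portable": False},
    }

    gaps    = [name for name, l in layers.items() if not l["virtualized"]]
    lock_in = [name for name, l in layers.items() if l["virtualized"] and not l["portable"]]

    print("Layers still static:      ", ", ".join(gaps))
    print("Virtualized but locked in:", ", ".join(lock_in))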

3. Not understanding service dependencies
Given the challenges most enterprises have with executing server consolidation and/or data center optimization activities, there's no reason to believe that dependency mapping will be done properly as companies migrate applications to private or public clouds. Put simply, most enterprises don't understand the multitude of interactions between services, data sources, and other applications that support their business functions over a daily/weekly/monthly cycle. Any attempt to port applications to the Cloud without this knowledge will likely result in broken applications, or worse, suboptimal performance that is difficult to diagnose.
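
Before porting anything, it helps to make those interactions explicit. Here is a minimal sketch, assuming the dependencies have already been discovered from flow logs, CMDB data, or similar tooling (the service names and edges are hypothetical), of walking the graph to find everything a candidate application touches.

    # Hypothetical service dependency graph: app -> services/data sources it calls.

    deps = {
        "order_entry":      ["pricing", "customer_db", "batch_settlement"],
        "pricing":          ["market_data_feed"],
        "batch_settlement": ["customer_db", "general_ledger"],
    }

    def reachable(app, graph):
        """Return every downstream dependency of `app`, direct or transitive."""
        seen, stack = set(), [app]
        while stack:
            for dep in graph.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    print(sorted(reachable("order_entry", deps)))
    # Anything in this set that stays on-premise becomes a latency and
    # failure point once order_entry moves to the Cloud.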

4. Leveraging traditional monitoring
Even today, many enterprises rely on their end users to notify them of service outages or slowdowns (sometimes referred to as brownouts). When they finally become aware of a critical issue, the individuals responsible for each technology silo circle up on a conference bridge until the problem is isolated and resolved. The Cloud makes this remediation process difficult to execute: good luck trying to get your Cloud provider on that conference bridge! Cloud makes the underlying infrastructure supporting an application/service fluid, rendering traditional resource-based monitoring tools useless in the diagnostic process.
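
One alternative is to monitor the service from the outside in, measuring the business transaction rather than the servers beneath it. A minimal sketch, assuming a health endpoint and a response-time target exist (the URL and threshold below are hypothetical):

    # Hypothetical synthetic check: time an end-to-end request against an SLO
    # instead of watching CPU or memory on infrastructure you no longer control.
    import time
    import urllib.request

    ENDPOINT = "https://example.com/checkout/health"  # hypothetical URL
    SLO_SECONDS = 2.0                                 # hypothetical response-time target

    def probe(url, timeout=10):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        return ok, time.monotonic() - start

    ok, elapsed = probe(ENDPOINT)
    if not ok:
        print("outage: request failed")
    elif elapsed > SLO_SECONDS:
        print(f"brownout: {elapsed:.2f}s exceeds the {SLO_SECONDS}s SLO")
    else:
        print(f"healthy: {elapsed:.2f}s")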

5. Not understanding internal/external costs
It's a common misconception that moving an application to the Cloud relieves the enterprise of the burden of the people, process, and technology previously supporting that application. However, unless you can answer questions like "which operations person is going to be let go now that we've moved this application to the cloud?" or "what server/rack/row/datacenter can we shut down now that this application is in the Cloud?" you haven't reduced your costs - you've actually increased them. The harsh reality is unless you have the luxury of zero legacy IT, adopting a public Cloud service just increases the internal allocation of costs to the applications that remain in-house. Moving to a private Cloud has a different economic model, but still necessitates substantially different methods for cost tracking and allocation.
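
The allocation effect is easy to see with simple arithmetic. In the sketch below (all figures hypothetical), fixed data-center costs that don't shrink get spread over fewer remaining applications, so their unit cost rises even while the migrated application adds a cloud bill on top.

    # Hypothetical before/after allocation of fixed data-center costs.
    fixed_dc_cost = 1_200_000   # annual cost that does not shrink when one app leaves
    apps_before   = 10
    apps_after    = 9           # one app moved to a public Cloud
    cloud_bill    = 60_000      # annual spend for the migrated app

    per_app_before = fixed_dc_cost / apps_before
    per_app_after  = fixed_dc_cost / apps_after   # same cost, fewer apps to carry it

    print(f"allocation per in-house app before: ${per_app_before:,.0f}")
    print(f"allocation per in-house app after:  ${per_app_after:,.0f}")
    print(f"total spend after migration:        ${fixed_dc_cost + cloud_bill:,.0f}")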

Our intent in sharing this was not to dissuade anyone from moving forward in the Cloud - quite the contrary. There are extraordinary business and IT benefits that can be reaped from effective leverage of a Cloud paradigm. The key word is effective - hopefully this post will help you more thoroughly consider the less-obvious elements necessary for successful Cloud adoption.

In the coming weeks we'll explore each one of these topics in more detail, providing specific insights on the challenges and offering potential solutions.

More Stories By James Houghton

James Houghton is Co-Founder & Chief Technology Officer of Adaptivity. In his CTO capacity Jim interacts with key technology providers to evolve capabilities and partnerships that enable Adaptivity to offer its complete SOIT, RTI, and Utility Computing solutions. In addition, he engages with key clients to ensure successful leverage of the ADIOS methodology.

Most recently, Houghton was the SVP Architecture & Strategy Executive for the infrastructure organization at Bank of America, where he drove legacy infrastructure transformation initiatives across 40+ data centers. Prior to that he was the Head of Wachovia’s Utility Product Management, where he drove the design, services, and offering for SOA and Utility Computing for the technology division of Wachovia’s Corporate & Investment Bank. He has also led leading-edge consulting practices at IBM Global Technology Services and Deloitte Consulting.
