
Microservices Expo: Blog Post

Cloud Computing Isn't a Substitute For Due Diligence

Setting Expectations Straight

Well, everyone knew it wouldn't be long before cloud computing got thrown under the proverbial bus after the latest Sidekick failure. Observers point to this specific failure, as they have with past Gmail, Amazon, and other cloud provider outages, as evidence of a broader problem. Some use these service outages as an opportunity to mount a full-fledged attack on the very idea of cloud computing. But can we really just blame cloud computing and move on?

If cloud computing plays a part in the blame game for these outages, it's because of the hype around the industry. If the cloud is being portrayed as a magical bag of beans that can solve all IT ills, that is a problem. Companies need to take a hard look at the ever-increasing array of cloud offerings to understand if and when they can leverage particular cloud technologies. Cloud computing should be viewed as something that can enhance, not necessarily replace, a company's current IT solutions. Potential users should understand that cloud isn't simply about turning your operations over to a service provider for hosting (the infamous "your mess for less" model). It's about driving efficiencies into the way we procure, utilize, and manage services within IT. Those efficiencies can mean cost savings, increased agility, and an intensified focus on the services and activities from which a company derives its competitive advantage.

These service outages, which some use as a platform from which to attack cloud computing, only reinforce the fact that cloud does not magically solve the more basic IT issues. The concerns that come with any solution built around distributed services hold true for most solutions that include, or are wholly comprised of, a cloud service. For example, many of the incidents reported as "cloud failures" essentially involve cloud-based storage services. Companies looking to incorporate cloud-based storage would naturally want to ask questions such as:

  1. What is the data replication story? How many replicas are there and where are the replicas hosted with respect to the primary copies?
  2. Is failover automatically handled?
  3. How are spikes in volume handled?
  4. If the cloud service is off-premise, what are the options for copying data between the off-premise provider and an on-premise data center?
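The questions above amount to a due-diligence checklist, and a checklist is easy to make concrete. The sketch below, a hypothetical illustration (the profile fields, thresholds, and gap messages are all assumptions, not any vendor's actual API), shows how a team might record a storage provider's answers and flag the concerns that remain open:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StorageServiceProfile:
    """Hypothetical record of a cloud storage provider's answers."""
    name: str
    replica_count: int             # how many replicas of the data are kept
    replica_regions: List[str]     # where replicas live relative to the primary
    automatic_failover: bool       # does failover happen without manual steps?
    burst_capacity: bool           # can the service absorb spikes in volume?
    on_premise_sync: bool          # can data be copied back to an on-premise data center?

def due_diligence_gaps(profile: StorageServiceProfile) -> List[str]:
    """Return the unanswered concerns for a provider profile."""
    gaps = []
    if profile.replica_count < 2:
        gaps.append("insufficient replication (fewer than 2 replicas)")
    if len(set(profile.replica_regions)) < 2:
        gaps.append("replicas not geographically separated from the primary")
    if not profile.automatic_failover:
        gaps.append("failover requires manual intervention")
    if not profile.burst_capacity:
        gaps.append("no stated handling for spikes in volume")
    if not profile.on_premise_sync:
        gaps.append("no path for copying data back on-premise")
    return gaps

# Evaluate a made-up provider: strong on replication and failover,
# but with no off-premise-to-on-premise copy option.
provider = StorageServiceProfile(
    name="ExampleCloud",            # fictitious provider
    replica_count=3,
    replica_regions=["us-east", "eu-west"],
    automatic_failover=True,
    burst_capacity=True,
    on_premise_sync=False,
)
print(due_diligence_gaps(provider))
```

The point is not the code itself but the discipline it encodes: each question gets an explicit answer, and anything unanswered surfaces as a named gap rather than disappearing into a sales deck.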

These are very basic but important questions, and they're just the tip of the iceberg. I mean only to illustrate that when a company utilizes a cloud service, these concerns don't simply disappear.

The outages that get the most press come from providers that have no doubt done more than their due diligence in designing and implementing robust, fault-tolerant, stress-hardened cloud services. So what does that say about cloud computing? Simply that some outages cannot be avoided, and this is nothing new in either the cloud or the non-cloud world. Perhaps cloud criticism comes so quickly because unrealistic expectations have been set. If that's the case, it's time to remind ourselves that cloud computing can certainly help, but it's not a magic cure-all.

Potential cloud consumers should ignore vendor hype and set expectations based on a realistic understanding of the advantages and disadvantages of the particular cloud services they look to adopt. Maybe then we can get past the cloud blame game every time a vendor has an outage. After all, let's be realistic: these occasional outages, whether in a cloud or not, are never going to disappear entirely.

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
