The Five Pillars of Cloud Computing

Cloud computing requires a dynamic computing infrastructure - there are four other pillars, too

Cloud computing is getting tons of press these days. Everyone has a different perspective and understanding of the technology, and there are myriad variations on the definition of the cloud. William Fellows and John Barr at the 451 Group define cloud computing as the intersection of grid, virtualization, SaaS, and utility computing models. James Staten of Forrester Research describes it as a pool of abstracted, highly scalable, and managed compute infrastructure capable of hosting end-customer applications and billed by consumption. Let's take it a step further and examine the core principles, or pillars, that uniquely define cloud computing.

Pillar 1: Dynamic Computing Infrastructure
Cloud computing requires a dynamic computing infrastructure. The foundation for the dynamic infrastructure is a standardized, scalable, and secure physical infrastructure. There should be levels of redundancy to ensure high availability, but above all it must be easy to extend as usage demands grow, without requiring architecture rework. Next, it must be virtualized. Today, virtualized environments leverage server virtualization (typically from VMware, Microsoft, or Xen) as the basis for running services. These services need to be easily provisioned and de-provisioned via software automation, and their workloads need to move from one physical server to another as capacity demands increase or decrease. Finally, this infrastructure should be highly utilized, whether provided by an external cloud provider or an internal IT department, and it must deliver business value over and above the investment.

A dynamic computing infrastructure is critical to effectively supporting the elastic nature of service provisioning and de-provisioning as requested by users, while maintaining high levels of reliability and security. The consolidation provided by virtualization, coupled with provisioning automation, creates a high level of utilization and reuse, ultimately yielding very effective use of capital equipment.
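
To make the idea concrete, here is a minimal sketch, in Python, of the kind of placement logic a provisioning engine automates: workloads land on whichever virtualized host has the most free capacity, and that capacity is reclaimed on de-provisioning. The Host and InfrastructurePool classes and all of their fields are hypothetical illustrations, not the API of any particular product.

```python
# Illustrative sketch only: a toy scheduler that places virtual workloads on the
# least-utilized host and releases capacity on de-provisioning. Class and field
# names (Host, InfrastructurePool) are hypothetical, not from any real product.
from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    capacity_cpus: int
    used_cpus: int = 0

    @property
    def free_cpus(self) -> int:
        return self.capacity_cpus - self.used_cpus


@dataclass
class InfrastructurePool:
    hosts: list = field(default_factory=list)
    placements: dict = field(default_factory=dict)  # workload -> (host, cpus)

    def provision(self, workload: str, cpus: int) -> Host:
        """Place a workload on the host with the most free capacity."""
        host = max(self.hosts, key=lambda h: h.free_cpus)
        if host.free_cpus < cpus:
            raise RuntimeError("pool exhausted; extend the physical tier")
        host.used_cpus += cpus
        self.placements[workload] = (host, cpus)
        return host

    def deprovision(self, workload: str) -> None:
        """Tear the workload down and return its capacity for reuse."""
        host, cpus = self.placements.pop(workload)
        host.used_cpus -= cpus


pool = InfrastructurePool(hosts=[Host("esx-01", 32), Host("esx-02", 32)])
pool.provision("demo-env-1", cpus=8)      # automated placement
pool.deprovision("demo-env-1")            # capacity immediately reusable
```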

Pillar 2: IT Service-Centric Approach
Cloud computing is IT (or business) service-centric. This is in stark contrast to more traditional system- or server-centric models. In most cases, users of the cloud want to run a business service or application for a specific, timely purpose; they don't want to get bogged down in the system and network administration of the environment. They would prefer to quickly and easily access a dedicated instance of an application or service. By abstracting away the server-centric view of the infrastructure, users can easily access powerful, pre-defined computing environments designed specifically around their service.

An IT service-centric approach enables user adoption and business agility: the easier and faster a user can perform an administrative task, the faster the business moves, reducing costs or driving revenue.
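
As a rough illustration, a service-centric cloud might expose a catalog of pre-defined environments rather than raw servers. The hypothetical catalog below shows the idea; the service name and field names are invented for the example.

```python
# Illustrative sketch: a hypothetical service catalog that maps a named,
# pre-defined service to the underlying infrastructure, so users request
# "the service" rather than individual servers.
SERVICE_CATALOG = {
    "three-tier-web-app": [
        {"role": "load-balancer", "cpus": 2, "ram_gb": 4},
        {"role": "app-server",    "cpus": 4, "ram_gb": 8},
        {"role": "database",      "cpus": 8, "ram_gb": 16},
    ],
}


def request_service(name: str) -> list:
    """Return the machine specs behind a service; the user never writes them."""
    return SERVICE_CATALOG[name]


print(request_service("three-tier-web-app"))
```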

Pillar 3: Self-Service Based Usage Model
Interacting with the cloud requires some level of user self-service. Best-of-breed self-service gives users the ability to upload, build, deploy, schedule, manage, and report on their business services on demand. Self-service cloud offerings must provide easy-to-use, intuitive user interfaces that equip users to productively manage the service delivery lifecycle.

The benefit of self-service from the users' perspective is a level of empowerment and independence that yields significant business agility. A benefit often overlooked from the service provider's or IT team's perspective is that the more self-service that can be delegated to users, the less administrative involvement is required. This saves time and money and allows administrative staff to focus on more strategic, high-value responsibilities.
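
A minimal sketch of that lifecycle, assuming a hypothetical portal API, might look like the following; the class and method names are invented to illustrate deploy, schedule, and report operations performed entirely by the user, with no administrator in the loop.

```python
# Illustrative sketch: the lifecycle a self-service portal might expose to an
# end user -- deploy, schedule, and report. SelfServicePortal and its methods
# are hypothetical, not a real product's interface.
from datetime import datetime, timedelta


class SelfServicePortal:
    def __init__(self):
        self.deployments = {}

    def deploy(self, service: str, owner: str) -> str:
        """User-initiated deployment; no ticket, no admin hand-off."""
        self.deployments[service] = {"owner": owner, "expires": None}
        return f"{service} deployed for {owner}"

    def schedule_teardown(self, service: str, hours: int) -> None:
        """User reserves capacity for a fixed window, after which it is reclaimed."""
        self.deployments[service]["expires"] = datetime.now() + timedelta(hours=hours)

    def report(self, owner: str) -> list:
        """User-visible reporting on everything they currently run."""
        return [s for s, d in self.deployments.items() if d["owner"] == owner]


portal = SelfServicePortal()
portal.deploy("qa-environment", owner="alice")
portal.schedule_teardown("qa-environment", hours=8)
print(portal.report("alice"))
```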

Pillar 4: Minimally or Self-Managed Platform
In order for an IT team or a service provider to efficiently provide a cloud for its constituents, it must leverage a technology platform that is self-managed. Best-of-breed clouds enable self-management via software automation, leveraging the following capabilities:

  • A provisioning engine for deploying services and tearing them down, recovering resources for high levels of reuse
  • Mechanisms for scheduling and reserving resource capacity
  • Capabilities for configuring, managing, and reporting to ensure resources can be allocated and reallocated to multiple groups of users
  • Tools for controlling access to resources and policies for how resources can be used or operations can be performed

All of these capabilities enable business agility while simultaneously enforcing critical and necessary administrative control. This balance of control and delegation maintains security and uptime, minimizes IT administrative effort, and keeps operating expenses low, freeing up resources to focus on higher-value projects.
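
As a rough sketch of how such automation might enforce that balance, the example below checks a hypothetical role policy and group quota before honoring a reservation request; the policy structure, groups, and numbers are all invented for illustration.

```python
# Illustrative sketch: how a self-managed platform might enforce group quotas
# and role-based policy before honoring a reservation. The policy structure and
# function names are hypothetical.
GROUP_QUOTAS = {"qa": 64, "dev": 128}          # max CPUs per group
GROUP_USAGE = {"qa": 60, "dev": 40}            # CPUs currently reserved
ROLE_POLICY = {"member": {"reserve"}, "admin": {"reserve", "reassign"}}


def can_reserve(group: str, role: str, cpus: int) -> bool:
    """Automated admission control: policy check plus capacity check."""
    if "reserve" not in ROLE_POLICY.get(role, set()):
        return False
    return GROUP_USAGE[group] + cpus <= GROUP_QUOTAS[group]


print(can_reserve("qa", "member", 8))   # False: group quota would be exceeded
print(can_reserve("dev", "member", 8))  # True: within quota and policy
```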

Pillar 5: Consumption-Based Billing
Finally, cloud computing is usage-driven. Consumers pay only for the resources they use and are therefore charged or billed on a consumption-based model. Cloud computing platforms must provide mechanisms to capture usage information that enables chargeback reporting and/or integration with billing systems.

The value from a user's perspective is the ability to pay only for the resources they use, ultimately helping keep their costs down. From a provider's perspective, it allows usage to be tracked for chargeback and billing purposes.
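
To illustrate the mechanics, the sketch below turns hypothetical usage records into a simple chargeback report; the rates, record fields, and group names are assumptions made up for the example.

```python
# Illustrative sketch: turning raw usage records into a consumption-based
# chargeback report. Rates, record fields, and group names are hypothetical.
RATES = {"cpu_hour": 0.05, "gb_ram_hour": 0.01}

usage_records = [
    {"group": "qa",  "cpu_hours": 120, "gb_ram_hours": 480},
    {"group": "dev", "cpu_hours": 300, "gb_ram_hours": 1200},
]


def chargeback(records: list) -> dict:
    """Bill each group only for what it actually consumed."""
    bill = {}
    for r in records:
        cost = r["cpu_hours"] * RATES["cpu_hour"] + r["gb_ram_hours"] * RATES["gb_ram_hour"]
        bill[r["group"]] = round(bill.get(r["group"], 0.0) + cost, 2)
    return bill


print(chargeback(usage_records))  # e.g. {'qa': 10.8, 'dev': 27.0}
```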

In summary, all five of these pillars are necessary to produce an enterprise private cloud capable of delivering compelling business value, including savings on capital equipment and operating costs, reduced support costs, and significantly increased business agility. Together they enable corporations to improve their profit margins and competitiveness in the markets they serve.

More Stories By Dave Malcolm

Dave Malcolm is Vice President & Chief Technologist for Virtualization and Cloud at Quest. With more than 20 years of experience in high tech and enterprise software development, he drives product and technology strategy. Most recently, Malcolm served as the CTO of Surgient, where he led the development team responsible for the creation, delivery, and implementation of the enterprise-class Cloud Automation Platform. With a keen focus on both innovation and practical application, Malcolm and his team have developed a robust infrastructure-as-a-service cloud automation platform and multiple granted cloud computing patents.
