The Five Pillars of Cloud Computing

Cloud computing requires a dynamic computing infrastructure - there are four other pillars, too

Cloud computing is getting tons of press these days. Everyone has a different perspective on and understanding of the technology, and there are myriad variations on the definition of the cloud. William Fellows and John Barr at the 451 Group define cloud computing as the intersection of grid, virtualization, SaaS, and utility computing models. James Staten of Forrester Research describes it as a pool of abstracted, highly scalable, managed compute infrastructure capable of hosting end-customer applications and billed by consumption. Let's take it a step further and examine the core principles, or pillars, that uniquely define cloud computing.

Pillar 1: Dynamic Computing Infrastructure
Cloud computing requires a dynamic computing infrastructure. The foundation for the dynamic infrastructure is a standardized, scalable, and secure physical infrastructure. There should be levels of redundancy to ensure high availability, but above all it must be easy to extend as usage grows, without requiring rework of the architecture. Next, it must be virtualized. Today's virtualized environments leverage server virtualization (typically from VMware, Microsoft, or Xen) as the basis for running services. These services must be easy to provision and de-provision via software automation, and service workloads must be able to move from one physical server to another as capacity demands increase or decrease. Finally, this infrastructure should be highly utilized, whether provided by an external cloud provider or an internal IT department. The infrastructure must deliver business value over and above the investment.

A dynamic computing infrastructure is critical to effectively supporting the elastic nature of service provisioning and de-provisioning as requested by users while maintaining high levels of reliability and security. The consolidation provided by virtualization, coupled with provisioning automation, creates a high level of utilization and reuse, ultimately yielding a very effective use of capital equipment.
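The provision/de-provision cycle described above can be sketched as a toy capacity pool. This is an illustrative model only, not any vendor's API; the `VMPool` class and its method names are invented for the example.

```python
# Hypothetical sketch of software-driven provisioning and de-provisioning
# against a fixed pool of virtualized capacity. Names are illustrative.

class VMPool:
    """Tracks a pool of VM slots across physical hosts and hands them out."""

    def __init__(self, capacity):
        self.capacity = capacity   # total VM slots available
        self.active = {}           # service name -> slots allocated

    def provision(self, service, slots):
        """Allocate slots for a service; fail fast if the pool is exhausted."""
        used = sum(self.active.values())
        if used + slots > self.capacity:
            raise RuntimeError(f"insufficient capacity for {service}")
        self.active[service] = self.active.get(service, 0) + slots
        return slots

    def deprovision(self, service):
        """Tear a service down, returning its slots to the pool for reuse."""
        return self.active.pop(service, 0)

    def utilization(self):
        """Fraction of the pool currently in use."""
        return sum(self.active.values()) / self.capacity


pool = VMPool(capacity=10)
pool.provision("billing-app", 4)
pool.provision("test-env", 2)
print(pool.utilization())   # 0.6
pool.deprovision("test-env")
print(pool.utilization())   # 0.4
```

The point of the sketch is the last two lines: because de-provisioning returns capacity to a shared pool, utilization stays high and new requests reuse the same equipment.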

Pillar 2: IT Service-Centric Approach
Cloud computing is IT (or business) service-centric. This is in stark contrast to more traditional system- or server-centric models. In most cases, users of the cloud want to run a business service or application for a specific, timely purpose; they don't want to get bogged down in the system and network administration of the environment. They would prefer to quickly and easily access a dedicated instance of an application or service. By abstracting away the server-centric view of the infrastructure, system users can easily access powerful pre-defined computing environments designed specifically around their service.

An IT service-centric approach enables user adoption and business agility: the easier and faster a user can perform an administrative task, the faster the business moves, reducing costs or driving revenue.

Pillar 3: Self-Service Based Usage Model
Interacting with the cloud requires some level of user self-service. Best-of-breed self-service gives users the ability to upload, build, deploy, schedule, manage, and report on their business services on demand. Self-service cloud offerings must provide easy-to-use, intuitive user interfaces that equip users to productively manage the service delivery lifecycle.
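The service delivery lifecycle above (upload, build, deploy, schedule, retire) can be modeled as a small state machine that a self-service portal might enforce. The stage names and the `ServiceLifecycle` class are assumptions made for illustration, not a real platform's API.

```python
# Minimal sketch of a self-service delivery lifecycle as a state machine.
# Stage names mirror the lifecycle in the text; everything here is illustrative.

ALLOWED = {
    None: {"uploaded"},
    "uploaded": {"built"},
    "built": {"deployed"},
    "deployed": {"scheduled", "retired"},
    "scheduled": {"deployed", "retired"},
}

class ServiceLifecycle:
    def __init__(self, name):
        self.name = name
        self.state = None
        self.history = []   # audit trail, usable for self-service reporting

    def advance(self, new_state):
        """Move to the next stage, rejecting transitions the portal disallows."""
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)


svc = ServiceLifecycle("demo-app")
for stage in ("uploaded", "built", "deployed"):
    svc.advance(stage)
print(svc.state)   # deployed
```

Encoding the lifecycle this way is what lets a portal safely delegate these operations to end users: invalid transitions are rejected by software rather than caught by an administrator.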

The benefit of self-service from the user's perspective is a level of empowerment and independence that yields significant business agility. A benefit often overlooked from the service provider's or IT team's perspective is that the more self-service that can be delegated to users, the less administrative involvement is necessary. This saves time and money and allows administrative staff to focus on more strategic, high-value responsibilities.

Pillar 4: Minimally or Self-Managed Platform
In order for an IT team or a service provider to efficiently provide a cloud for its constituents, it must leverage a technology platform that is self-managed. Best-of-breed clouds enable self-management via software automation, leveraging the following capabilities:

  • A provisioning engine for deploying services and tearing them down, recovering resources for high levels of reuse
  • Mechanisms for scheduling and reserving resource capacity
  • Capabilities for configuring, managing, and reporting to ensure resources can be allocated and reallocated to multiple groups of users
  • Tools for controlling access to resources and policies for how resources can be used or operations can be performed
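The second capability above, scheduling and reserving resource capacity, can be sketched as a reservation check over time windows. The `Scheduler` class, its interval logic, and the hour-based windows are invented for this example; a real platform would track far richer resource and policy data.

```python
# Hedged sketch of capacity reservation: accept a reservation only if the
# slots already reserved in overlapping windows leave room for it.

class Scheduler:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reservations = []   # list of (start_hour, end_hour, slots)

    def _used(self, start, end):
        """Total slots in reservations overlapping [start, end).
        Summing all overlaps is conservative (an upper bound on peak use)."""
        return sum(s for (r0, r1, s) in self.reservations
                   if r0 < end and start < r1)

    def reserve(self, start, end, slots):
        """Grant the reservation, or reject it rather than oversubscribe."""
        if self._used(start, end) + slots > self.capacity:
            return False
        self.reservations.append((start, end, slots))
        return True


sched = Scheduler(capacity=8)
print(sched.reserve(9, 12, 5))    # True: pool is empty
print(sched.reserve(10, 11, 4))   # False: 5 + 4 would exceed 8 at 10:00
print(sched.reserve(12, 14, 6))   # True: no overlap with 9-12
```

Rejecting the oversubscribed request is the administrative control the text describes: users self-schedule freely, but the platform enforces the capacity policy automatically.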

All of these capabilities enable business agility while simultaneously enacting critical and necessary administrative control. This balance of control and delegation maintains security and uptime, minimizes the level of IT administrative effort, and keeps operating expenses low, freeing up resources to focus on higher value projects.

Pillar 5: Consumption-Based Billing
Finally, cloud computing is usage-driven. Consumers pay for only what resources they use and therefore are charged or billed on a consumption-based model. Cloud computing platforms must provide mechanisms to capture usage information that enables chargeback reporting and/or integration with billing systems.
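Capturing usage for chargeback can be sketched as an event ledger rolled up into a per-consumer bill. The VM-hour unit, the rate, and the function names here are assumptions for illustration; real billing integrations capture usage automatically and price many resource types.

```python
# Illustrative usage-metering sketch for consumption-based billing.
# The rate and record shape are invented for the example.

RATE_PER_VM_HOUR = 0.12   # hypothetical price per VM-hour

def record_usage(ledger, consumer, vm_hours):
    """Append a usage event; a real platform would emit these automatically."""
    ledger.append({"consumer": consumer, "vm_hours": vm_hours})

def chargeback_report(ledger):
    """Roll usage events up into a per-consumer bill."""
    totals = {}
    for event in ledger:
        totals[event["consumer"]] = totals.get(event["consumer"], 0) + event["vm_hours"]
    return {c: round(h * RATE_PER_VM_HOUR, 2) for c, h in totals.items()}


ledger = []
record_usage(ledger, "marketing", 100)
record_usage(ledger, "engineering", 250)
record_usage(ledger, "marketing", 50)
print(chargeback_report(ledger))   # {'marketing': 18.0, 'engineering': 30.0}
```

The same ledger serves both audiences in the text: consumers see a bill driven only by what they used, and providers get the per-group data they need for chargeback or billing-system integration.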

The value from a user's perspective is the ability to pay only for the resources they use, ultimately helping keep their costs down. From a provider's perspective, it allows them to track usage for chargeback and billing purposes.

In summary, all five pillars are necessary to produce an enterprise private cloud capable of delivering compelling business value: savings on capital equipment and operating costs, reduced support costs, and significantly increased business agility. Together they enable corporations to improve their profit margins and competitiveness in the markets they serve.

More Stories By Dave Malcolm

Dave Malcolm is Vice President & Chief Technologist for Virtualization and Cloud at Quest. With more than 20 years of experience in high tech and enterprise software development, he drives product and technology strategy. Most recently, Malcolm served as the CTO of Surgient, where he led the development team responsible for the creation, delivery, and implementation of the enterprise-class Cloud Automation Platform. With a keen focus on both innovation and practical application, Malcolm and his team have developed a robust infrastructure-as-a-service cloud automation platform and multiple granted cloud computing patents.


