Steps for Improving Capacity Management in the Age of Cloud Computing

Realize the full promise of cloud computing

When you wake up in the morning and flip on a light switch, you don't think about whether the local power company has enough electricity available to power the light. Likewise, when you switch on the coffee pot or turn on your stove to make breakfast, you don't wonder about the available capacity of your local power grid.

Similarly, cloud computing is rapidly making the delivery of business services second nature to business users. They are able to access the services they need, when they need them, because the dynamic nature of the cloud offers unprecedented, highly elastic computing capacity.

Is your business able to fully exploit this flexibility? Cloud computing certainly brings with it new challenges for IT infrastructure management, particularly for capacity management. To get the most value from the cloud, you must continually balance capacity utilization, cost, and service quality. How can you make this happen? By transforming capacity management from a siloed, technology-oriented approach to one that is holistic and business aware.

This article examines six essential steps that will guide you through this transformation. These steps will help your organization to achieve the full benefits of cloud computing, including greater IT agility, higher service quality, and lower costs.

Traditional Approaches Are No Longer Sufficient
Traditionally, IT organizations have viewed capacity management as a dedicated, full-time job performed by specialized capacity planners and analysts. The work of these highly skilled staffers is largely event-driven: they respond to occurrences such as new application deployments and hardware changes.

Because of the sheer size and complexity of today's data centers and the small number of capacity planners in the typical IT organization, capacity planners often limit their focus to mission-critical systems. These planners typically manage capacity using a siloed, technology-based approach in which some planners focus on servers, others on storage, and still others on network devices. The problem is that not much communication takes place among groups, resulting in a capacity planning process that is a patchwork of disjointed activities, many of which are manual.

Traditional capacity management may have served well in the past, but if you are considering or already have begun the move to virtualization and cloud computing, traditional approaches are no longer adequate. Virtualization and cloud computing radically alter the character of the IT infrastructure. With cloud computing, the infrastructure is viewed as a pool of resources that are dynamically combined to deliver business services on demand and then returned to the pool when the services are no longer needed. In essence, the cloud provides a source of highly elastic computing capacity that can be applied when and where it's needed.

Consequently, capacity management is crucial to a successful cloud implementation. The aggregate capacity of the cloud must be sufficient to accommodate the dynamic assignment of workloads while still maintaining agreed-upon performance levels. Moreover, you must provide this capacity without over-buying equipment.

What's Needed: A Holistic, Business-Aware Approach
Traditional capacity management does not enable you to fully exploit the unprecedented capacity elasticity offered by cloud computing. Instead, you need a holistic approach that encompasses all data center resources - server, storage, and network - and links capacity utilization to business key performance indicators (KPIs). To meet this requirement, you need to accomplish the following six transformational steps.

1. Take a Broad, Continuous View
Transforming your approach to capacity planning requires making a shift in both scope and timing. With respect to scope, it's important to broaden your capacity planning focus from mission-critical systems to include the entire IT infrastructure. Cloud computing puts the aggregate capacity of the entire infrastructure at your disposal, enabling you to apply it when and where it is needed. To take full advantage of this flexibility, you need to know how much total capacity is out there and how it's being used.

With respect to timing, consider the highly dynamic nature of the infrastructure. Residual capacity is continually changing as workloads shift and changes occur in the physical infrastructure. Consequently, you need to manage capacity on a continual basis rather than only in reaction to certain events.
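
To make this concrete, here is a minimal sketch (in Python) of what a continuous, infrastructure-wide capacity snapshot might look like. The `Device` fields and `fetch_inventory()` call are hypothetical stand-ins for whatever your monitoring and discovery tooling actually exposes.

```python
"""Minimal sketch of a continuous, infrastructure-wide capacity snapshot."""
import time
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    domain: str      # "server", "storage", or "network"
    used: float      # consumed capacity (e.g., GHz, TB, Gbps)
    total: float     # installed capacity in the same unit

def fetch_inventory() -> list[Device]:
    # Placeholder: in practice this would query your monitoring/discovery tools.
    return [
        Device("esx-01", "server", used=48.0, total=64.0),
        Device("san-02", "storage", used=310.0, total=500.0),
        Device("core-sw-1", "network", used=12.0, total=40.0),
    ]

def residual_by_domain(devices: list[Device]) -> dict[str, float]:
    """Aggregate residual (unused) capacity per technology domain."""
    residual: dict[str, float] = {}
    for d in devices:
        residual[d.domain] = residual.get(d.domain, 0.0) + (d.total - d.used)
    return residual

if __name__ == "__main__":
    # Continuous rather than event-driven: re-sample on a fixed interval.
    for _ in range(3):                     # loop forever in a real collector
        print(residual_by_domain(fetch_inventory()))
        time.sleep(1)                      # e.g., every few minutes in practice
```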

2. Shift to a Business Service Orientation
Most business users view the IT infrastructure simply as a source of business services. They want to request business services and have them delivered quickly with performance as defined in the associated service level agreements (SLAs). These users are not concerned with the underlying devices that make up a service.

Traditionally, IT has managed capacity from a technology perspective and has communicated in the language of IT managers. In transitioning to cloud computing, you need to manage capacity from a business service perspective and communicate in the language of business managers and users. For example, a capacity planner should be able to answer such questions as, "How many additional customer orders can my infrastructure support before running into capacity and response-time problems?"

To answer that question, the capacity planner must understand the relationship of capacity to business requirements. This is especially challenging in a cloud environment in which the delivery of business services involves the orchestrated combination of multiple resources that may include applications, servers, storage devices, and network equipment. To meet the challenge, you need to take a holistic approach that encompasses all the devices and the application layer that make up each service.
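
As a rough illustration of that relationship, the sketch below expresses headroom in business terms by dividing the residual capacity of each resource by the measured per-order demand; the bottleneck resource determines how many additional orders the service can absorb. All figures are invented for the example.

```python
# Illustrative sketch: expressing capacity headroom in business terms
# ("how many more customer orders can the infrastructure absorb?").
# The per-order resource costs and residual capacity figures below are
# assumptions; in practice they come from measurement.

per_order_demand = {         # resource demand of one order per second
    "web_cpu_pct": 0.02,     # % of web-tier CPU per order/sec
    "db_iops": 8.0,          # storage IOPS per order/sec
    "net_mbps": 0.5,         # network bandwidth per order/sec
}

residual_capacity = {        # residual capacity, same units as above
    "web_cpu_pct": 30.0,     # 30% CPU headroom on the web tier
    "db_iops": 6000.0,
    "net_mbps": 400.0,
}

def orders_headroom(demand, residual):
    """The bottleneck resource limits how many extra orders/sec fit."""
    return min(residual[r] / demand[r] for r in demand)

extra = orders_headroom(per_order_demand, residual_capacity)
print(f"Additional orders/sec supportable: {extra:.0f}")
```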

Transitioning from technology-oriented to business-service-oriented capacity management requires a corresponding shift in the metrics you gather and communicate. That shift necessitates an expansion of the scope and reach of analytics and reporting from technology metrics to business metrics, so that business demand becomes the driving factor for capacity management.

Reporting in the cloud environment, therefore, should link IT resources (physical and virtual servers, databases, applications, storage, networks, and facilities) to measurable business data, such as the costs and KPIs of the business. This linkage enables IT to communicate capacity issues to business leaders in a meaningful way. As a result, leaders can make informed, cost-effective choices in requesting capacity. For example, communicating the relationship of service workload capacity to cost discourages business users from requesting more capacity than they actually need.
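
For instance, a cost-per-order figure can be derived by apportioning resource pool costs to the service and dividing by business volume. The sketch below uses purely illustrative numbers to show the shape of that calculation.

```python
# Hypothetical example of linking resource consumption to a business KPI:
# reporting the infrastructure cost per order so that business users can
# weigh capacity requests against what they actually cost. All figures are
# illustrative assumptions, not measured data.

monthly_resource_cost = {        # fully loaded monthly cost per resource pool
    "server_pool": 42000.0,
    "storage_pool": 18000.0,
    "network": 6000.0,
}
share_used_by_order_service = {  # fraction of each pool consumed by the order service
    "server_pool": 0.25,
    "storage_pool": 0.40,
    "network": 0.15,
}
orders_per_month = 1_200_000

service_cost = sum(
    monthly_resource_cost[p] * share_used_by_order_service[p]
    for p in monthly_resource_cost
)
print(f"Order service infrastructure cost: ${service_cost:,.0f}/month")
print(f"Cost per order: ${service_cost / orders_per_month:.4f}")
```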

The transition to a business service orientation requires a parallel transition in the makeup and skill set of capacity planners. Instead of highly specialized, device-oriented capacity planning gurus, you need IT generalists, such as cloud service architects. These generalists should work closely with business users on one side and with infrastructure experts on the other to establish capacity requirements based on business workloads.

3. Automate, Automate, Automate
The unfortunate reality in many organizations is that capacity planning involves numerous manual processes that are both inefficient and time intensive. Capacity planners may collect technology-oriented usage and performance data from a monitoring infrastructure and manually import the data into spreadsheets - a laborious and time-consuming process. The data collection task consumes much of the planners' time, leaving little time for analysis and capacity planning.

The transition to business-service-oriented capacity management makes the traditional manual approach impractical. That's because capacity planners must not only elevate their analysis, reports, and recommendations to a business-service level, but they must also extend their reporting from mission-critical servers to include the entire infrastructure. This requires an automated approach that encompasses data gathering, translation, and reporting in a form that is meaningful to business managers. This increase in automation helps boost quality and operational efficiency. Ultimately, automation boosts staff productivity and increases the relevance and importance of capacity management to the business.
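
A simple, hypothetical collect-translate-report pipeline might look like the following; `collect_metrics()` and the service-to-host mapping stand in for a real monitoring API and CMDB, and the script would run on a schedule rather than by hand.

```python
"""Sketch of automating the collect-translate-report loop instead of
pasting monitoring data into spreadsheets by hand."""
import csv
from datetime import datetime, timezone

SERVICE_MAP = {                      # which hosts make up each business service
    "order-entry": ["web-01", "web-02", "db-01"],
    "billing": ["app-03", "db-02"],
}

def collect_metrics() -> dict[str, float]:
    # Placeholder for a call to a real monitoring API (CPU utilization, percent).
    return {"web-01": 62.0, "web-02": 58.0, "db-01": 71.0,
            "app-03": 35.0, "db-02": 44.0}

def service_report(metrics: dict[str, float]) -> list[dict]:
    """Translate host-level metrics into business-service-level rows."""
    rows = []
    for service, hosts in SERVICE_MAP.items():
        avg = sum(metrics[h] for h in hosts) / len(hosts)
        rows.append({"timestamp": datetime.now(timezone.utc).isoformat(),
                     "service": service,
                     "avg_cpu_pct": round(avg, 1)})
    return rows

def write_report(rows: list[dict], path: str = "capacity_report.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_report(service_report(collect_metrics()))   # run on a schedule (cron, etc.)
```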

4. Adopt Integrated, Shared Processes Across the Enterprise
A major shortcoming of traditional capacity management is that capacity planning and analysis are performed by a small cadre of planners who are siloed based on technology, such as server specialists, storage experts, and network specialists. This fragmentation makes it difficult to manage capacity based on overall business service impact. For example, how can you ensure that sufficient capacity will be available to support an anticipated spike in workload in an order-entry service due to an upcoming promotional event?

The move to the cloud environment requires a transition from siloed processes that are performed solely by capacity planners to integrated processes that are extended to and shared by other IT groups, such as application developers and database administrators. This transition permits you to leverage the expertise of capacity planners while making capacity planning a universal, shared responsibility that transcends functional IT groups.

5. Employ Predictive Analytics
Most IT organizations are moving toward the cloud environment incrementally. This typically comprises two major phases. The first phase involves migrating physical systems to virtual machines. Here, IT simply virtualizes selected physical systems with the primary goal of cutting data center costs by reducing the number of physical devices. For example, you may already have a virtual server farm in place and want to virtualize a current physical workload to that farm. You approach this by determining which physical host servers can best accommodate the additional workload(s).
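
One way to picture that phase-one decision is a simple best-fit placement check, sketched below with assumed utilization figures and an arbitrary 80 percent headroom limit; production placement engines also weigh memory, I/O, affinity rules, and failover, which this example ignores.

```python
# Simple illustration of the phase-one placement question: given a physical
# workload to virtualize, which host in the existing virtual server farm can
# best absorb it? A best-fit heuristic under assumed headroom thresholds.

hosts = {                      # current CPU and memory utilization, percent
    "esx-01": {"cpu": 55, "mem": 70},
    "esx-02": {"cpu": 40, "mem": 45},
    "esx-03": {"cpu": 75, "mem": 60},
}
new_workload = {"cpu": 20, "mem": 15}      # measured demand of the workload
HEADROOM_LIMIT = 80                        # don't push any host beyond 80%

def fits(host_load, workload):
    return all(host_load[r] + workload[r] <= HEADROOM_LIMIT for r in workload)

candidates = {h: load for h, load in hosts.items() if fits(load, new_workload)}

# Best fit: the candidate that ends up most utilized (packs tightly) --
# or choose the least utilized instead if you prefer to spread load.
best = max(candidates, key=lambda h: candidates[h]["cpu"] + new_workload["cpu"])
print(f"Place workload on {best}")
```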

The second phase involves optimizing the virtual workloads by determining the most effective placement of virtual workloads in the cloud. You could test various combinations in a laboratory environment, but it would be difficult and expensive to duplicate the real-world environment in your data center. You need the ability to gauge the impact of deploying various virtual/physical combinations in your production environment without actually implementing them.

Analysis and "what-if" modeling tools can help you in both phases. These tools enable you to preview various virtual/physical configurations and combinations before deploying them in production. Modeling workloads also permits you to assess the impact of infrastructure changes without actually making them - for example, replacing a server's CPU with a more powerful one. What's more, predictive analysis of workload trends helps you ensure that needed capacity will be available when and where it's needed to meet anticipated growth.
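
As a minimal illustration of predictive trend analysis, the sketch below fits a straight line to twelve months of invented utilization history and estimates when a planning threshold will be crossed; real tools would also model seasonality and alternative growth scenarios.

```python
# Minimal example of predictive trend analysis: fit a linear trend to
# historical utilization and estimate when a resource will cross its
# capacity threshold. The monthly utilization history is illustrative.
import numpy as np

months = np.arange(12)                               # last 12 months
cpu_util = np.array([41, 43, 44, 47, 49, 50, 53, 55, 58, 60, 63, 65])  # percent

slope, intercept = np.polyfit(months, cpu_util, 1)   # simple linear trend
THRESHOLD = 85.0                                     # planning limit, percent

months_to_threshold = (THRESHOLD - cpu_util[-1]) / slope
print(f"Growth rate: {slope:.1f} pts/month")
print(f"Capacity threshold reached in ~{months_to_threshold:.0f} months")
```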

6. Integrate Capacity Management with Other Solutions, Tools, and Processes
Effective management of the cloud environment implies a broad perspective that encompasses the entire IT infrastructure as well as multiple IT disciplines. That requires the integration of capacity management tools and processes with other Business Service Management (BSM) tools and processes. BSM is a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk, and drive business profit.

The integration of capacity management solutions with discovery and dependency mapping tools gives you broad visibility into all the physical and virtual resources currently deployed in your data center. Not only will you see what's out there, but you will also understand how it is being used.

By integrating capacity management solutions with a configuration management database (CMDB), you can leverage the business service relationships stored in the CMDB for precise capacity analysis, reporting, and planning. Shared use of the configuration items (CIs) and service relationships defined in the CMDB ensures consistency across multiple IT disciplines and eliminates the need to maintain duplicate information in multiple tools.
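
Conceptually, that means walking the CI relationships from a business service down to the infrastructure it runs on and rolling up utilization along the way. The sketch below fakes the CMDB with nested dictionaries rather than reproducing any particular product's API.

```python
# Sketch of leveraging CMDB service relationships for capacity reporting:
# walk the configuration items (CIs) that support a business service and
# roll up their utilization. The dictionaries stand in for a real CMDB query.

cmdb = {
    "order-entry": {"depends_on": ["app-cluster-1", "oracle-db-1"]},  # business service CI
    "app-cluster-1": {"depends_on": ["vm-101", "vm-102"]},
    "oracle-db-1":   {"depends_on": ["db-host-7"]},
    "vm-101": {}, "vm-102": {}, "db-host-7": {},
}
utilization = {"vm-101": 64.0, "vm-102": 59.0, "db-host-7": 78.0}  # percent CPU

def leaf_cis(ci: str) -> list[str]:
    """Recursively resolve a service CI down to the infrastructure CIs."""
    children = cmdb.get(ci, {}).get("depends_on", [])
    if not children:
        return [ci]
    leaves = []
    for child in children:
        leaves.extend(leaf_cis(child))
    return leaves

infra = leaf_cis("order-entry")
avg = sum(utilization[c] for c in infra) / len(infra)
print(f"order-entry runs on {infra}; average CPU utilization {avg:.1f}%")
```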

Integration of capacity management with performance management solutions gives capacity planners real-time and historical data on business-service performance. The planners can leverage this data to maintain an ongoing balance between performance and resource utilization.
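
A simplified example of that balancing act: join per-service response-time data with utilization data, flag SLA breaches as candidates for more capacity, and flag services that sit comfortably under both targets as candidates for reclaiming capacity. The thresholds and figures below are assumptions for illustration.

```python
# Illustration of joining performance data with utilization data to keep
# the two in balance. All numbers and thresholds are assumptions.

services = {
    #               p95 response (s)  SLA (s)     avg utilization (%)
    "order-entry": {"p95": 2.7,       "sla": 2.0, "util": 82},
    "billing":     {"p95": 0.9,       "sla": 3.0, "util": 25},
    "reporting":   {"p95": 1.8,       "sla": 2.0, "util": 60},
}

for name, s in services.items():
    if s["p95"] > s["sla"]:
        print(f"{name}: SLA breach (p95 {s['p95']}s > {s['sla']}s) -- add capacity")
    elif s["util"] < 35 and s["p95"] < 0.5 * s["sla"]:
        print(f"{name}: over-provisioned (util {s['util']}%) -- reclaim capacity")
    else:
        print(f"{name}: balanced")
```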

By integrating capacity management processes with change and configuration management processes, IT can ensure that all capacity-related changes made either automatically or manually to the cloud infrastructure are in compliance with internal policies and external regulations.

With an overall BSM approach, you can integrate capacity management with other IT disciplines and processes. In this way, you can effectively and efficiently manage business services throughout their entire lifecycle - across physical, virtual, and cloud-based resources.

Optimal Use of the Cloud Is Within Reach
Cloud computing is transforming the IT infrastructure into a highly elastic resource that quickly and continually adapts to changing business needs. This transformation fundamentally changes the way organizations deliver IT services and enables IT to be far more responsive to the demands of the business.

Working your way through the six transformational steps described here will help you ensure optimal use of the capacity of your cloud infrastructure - a key success factor for your cloud initiatives. By making the transition from a siloed, technology-oriented approach to a holistic, business-aware approach to capacity management, you can position your organization to realize the full promise of cloud computing.

More Stories By Fabio Violante

Fabio Violante, senior director of product development and member of the CTO Office at BMC Software, began his career with a PhD in Computer Engineering, specializing in IT performance evaluation. He then went on to gain extensive consulting experience in IT architectures while working with Accenture, Sun, and Hewlett-Packard.

In 2000, Violante co-founded Neptuny, a leading provider of IT Performance Optimization and Capacity Management solutions and the first company to be incubated by Politecnico di Milano. Neptuny’s flagship product, Caplan, which is now part of BMC Capacity Management, revolutionized the capacity management landscape by introducing a business-oriented approach to capacity management. In October 2010, Neptuny’s software business was acquired by BMC Software, extending BMC’s leadership in capacity management and enhancing the company’s dynamic Business Service Management portfolio and cloud management offerings.
