Steps for Improving Capacity Management in the Age of Cloud Computing

Realize the full promise of cloud computing

When you wake up in the morning and flip on a light switch, you don't think about whether the local power company has enough electricity available to power the light. Likewise, when you switch on the coffee pot or turn on your stove to make breakfast, you don't wonder about the available capacity of your local power grid.

Similarly, cloud computing is rapidly making the delivery of business services second nature to business users. They are able to access the services they need, when they need them, because the dynamic nature of the cloud offers unprecedented, highly elastic computing capacity.

Is your business able to fully exploit this flexibility? Cloud computing certainly brings with it new challenges for IT infrastructure management, particularly for capacity management. To get the most value from the cloud, you must continually balance capacity utilization, cost, and service quality. How can you make this happen? By transforming capacity management from a siloed, technology-oriented approach to one that is holistic and business aware.

This article examines six essential steps that will guide you through this transformation. These steps will help your organization to achieve the full benefits of cloud computing, including greater IT agility, higher service quality, and lower costs.

Traditional Approaches Are No Longer Sufficient
Traditionally, IT organizations have viewed capacity management as a dedicated, full-time job performed by specialized capacity planners and analysts. These highly skilled specialists work largely in an event-driven mode, responding to occurrences such as new application deployments and hardware changes.

Because of the sheer size and complexity of today's data centers and the small number of capacity planners in the typical IT organization, capacity planners often limit their focus to mission-critical systems. These planners typically manage capacity using a siloed, technology-based approach in which some planners focus on servers, others on storage, and still others on network devices. The problem is that little communication takes place among these groups, resulting in a capacity planning process that is a patchwork of disjointed activities, many of which are manual.

Traditional capacity management may have served well in the past, but if you are considering or already have begun the move to virtualization and cloud computing, traditional approaches are no longer adequate. Virtualization and cloud computing radically alter the character of the IT infrastructure. With cloud computing, the infrastructure is viewed as a pool of resources that are dynamically combined to deliver business services on demand and then returned to the pool when the services are no longer needed. In essence, the cloud provides a source of highly elastic computing capacity that can be applied when and where it's needed.

Consequently, capacity management is crucial to a successful cloud implementation. The aggregate capacity of the cloud must be sufficient to accommodate the dynamic assignment of workloads while still maintaining agreed-upon performance levels. Moreover, you must provide this capacity without over-buying equipment.

What's Needed: A Holistic, Business-Aware Approach
Traditional capacity management does not enable you to fully exploit the unprecedented capacity elasticity offered by cloud computing. Instead, you need a holistic approach that encompasses all data center resources - server, storage, and network - and links capacity utilization to business key performance indicators (KPIs). To meet this requirement, you need to accomplish the following six transformational steps.

1. Take a Broad, Continuous View
Transforming your approach to capacity planning requires making a shift in both scope and timing. With respect to scope, it's important to broaden your capacity planning focus from mission-critical systems to include the entire IT infrastructure. Cloud computing puts the aggregate capacity of the entire infrastructure at your disposal, enabling you to apply it when and where it is needed. To take full advantage of this flexibility, you need to know how much total capacity is out there and how it's being used.

With respect to timing, consider the highly dynamic nature of the infrastructure. Residual capacity is continually changing as workloads shift and changes occur in the physical infrastructure. Consequently, you need to manage capacity on a continual basis rather than only in reaction to certain events.

2. Shift to a Business Service Orientation
Most business users view the IT infrastructure simply as a source of business services. They want to request business services and have them delivered quickly with performance as defined in the associated service level agreements (SLAs). These users are not concerned with the underlying devices that make up a service.

Traditionally, IT has managed capacity from a technology perspective and has communicated in the language of IT managers. In transitioning to cloud computing, you need to manage capacity from a business service perspective and communicate in the language of business managers and users. For example, a capacity planner should be able to answer such questions as, "How many additional customer orders can my infrastructure support before running into capacity and response-time problems?"

To answer that question, the capacity planner must understand the relationship of capacity to business requirements. This is especially challenging in a cloud environment in which the delivery of business services involves the orchestrated combination of multiple resources that may include applications, servers, storage devices, and network equipment. To meet the challenge, you need to take a holistic approach that encompasses all the devices and the application layer that make up each service.
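
As a simple illustration of translating that business question into a capacity answer, the back-of-the-envelope sketch below assumes a single bottleneck resource and roughly linear scaling of utilization with order volume; the figures and the 75 percent ceiling are hypothetical, and a real answer would come from your capacity management tooling and workload models.

```python
# Back-of-the-envelope headroom estimate for a business question such as
# "How many additional customer orders can my infrastructure support?"
# All figures are hypothetical; real numbers would come from monitoring
# and capacity management tooling.

def additional_orders_supported(orders_per_hour: float,
                                current_utilization: float,
                                utilization_ceiling: float = 0.75) -> float:
    """Estimate extra orders/hour before the busiest resource hits its ceiling.

    Assumes resource utilization grows roughly linearly with order volume,
    a simplification that holds only well below saturation.
    """
    if current_utilization <= 0:
        raise ValueError("current utilization must be positive")
    orders_at_ceiling = orders_per_hour * (utilization_ceiling / current_utilization)
    return max(0.0, orders_at_ceiling - orders_per_hour)

# Example: the order-entry database tier is the bottleneck at 48% CPU
# while the business processes 12,000 orders per hour.
headroom = additional_orders_supported(orders_per_hour=12_000,
                                       current_utilization=0.48,
                                       utilization_ceiling=0.75)
print(f"Roughly {headroom:,.0f} additional orders/hour before the 75% ceiling")
```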

Transitioning from technology-oriented to business-service-oriented capacity management requires a corresponding shift in the metrics you gather and communicate. That shift necessitates an expansion of the scope and reach of analytics and reporting from technology metrics to business metrics, so that business demand becomes the driving factor for capacity management.

Reporting in the cloud environment, therefore, should link IT resources (physical and virtual servers, databases, applications, storage, networks, and facilities) to measurable business data, such as the costs and KPIs of the business. This linkage enables IT to communicate capacity issues to business leaders in a meaningful way. As a result, leaders can make informed, cost-effective choices in requesting capacity. For example, communicating the relationship of service workload capacity to cost discourages business users from requesting more capacity than they actually need.
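
As a simple illustration of that linkage, the sketch below rolls up hypothetical monthly resource costs for one business service into a cost-per-order figure; the resource list, costs, and transaction volume are invented for the example and would in practice come from your CMDB and financial systems.

```python
# Illustrative roll-up of (hypothetical) resource costs to a business metric.
# The service-to-resource mapping and monthly costs are invented; in practice
# they would come from the CMDB and financial systems.

monthly_resource_cost = {          # cost of each resource, in dollars/month
    "web-vm-01": 220, "web-vm-02": 220,
    "db-server-01": 1800,
    "san-volume-07": 450,
}

service_resources = {              # which resources each business service uses
    "order-entry": ["web-vm-01", "web-vm-02", "db-server-01", "san-volume-07"],
}

monthly_transactions = {"order-entry": 8_600_000}

for service, resources in service_resources.items():
    cost = sum(monthly_resource_cost[r] for r in resources)
    per_txn = cost / monthly_transactions[service]
    print(f"{service}: ${cost:,}/month, ${per_txn:.5f} per order")
```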

The transition to a business service orientation requires a parallel transition in the makeup and skill set of capacity planners. Instead of relying solely on highly specialized, device-oriented capacity planning gurus, you need IT generalists, such as cloud service architects. These generalists should work closely with business users on one side and with infrastructure experts on the other to establish capacity requirements based on business workloads.

3. Automate, Automate, Automate
The unfortunate reality in many organizations is that capacity planning involves numerous manual processes that are both inefficient and time intensive. Capacity planners may collect technology-oriented usage and performance data from a monitoring infrastructure and manually import the data into spreadsheets - a laborious and time-consuming process. The data collection task consumes much of the planners' time, leaving little time for analysis and capacity planning.

The transition to business-service-oriented capacity management makes the traditional manual approach impractical. That's because capacity planners must not only elevate their analysis, reports, and recommendations to a business-service level, but they must also extend their reporting from mission-critical servers to include the entire infrastructure. This requires an automated approach that encompasses data gathering, translation, and reporting in a form that is meaningful to business managers. This increase in automation helps boost quality and operational efficiency. Ultimately, automation boosts staff productivity and increases the relevance and importance of capacity management to the business.
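
The sketch below suggests the flavor of that automation: instead of pasting device-level numbers into spreadsheets, it aggregates raw utilization samples by business service. The sample feed and the host-to-service mapping are hypothetical stand-ins for what monitoring tools and a CMDB would supply.

```python
# A minimal sketch of automated capacity reporting: aggregate raw utilization
# samples by business service instead of handling them in spreadsheets.
# The sample data and service mapping below are hypothetical.

from collections import defaultdict
from statistics import mean

# (hostname, cpu_utilization) samples as a monitoring feed might deliver them
samples = [
    ("web-vm-01", 0.41), ("web-vm-02", 0.47),
    ("db-server-01", 0.68), ("db-server-01", 0.72),
]

host_to_service = {               # mapping maintained in the CMDB
    "web-vm-01": "order-entry", "web-vm-02": "order-entry",
    "db-server-01": "order-entry",
}

by_service = defaultdict(list)
for host, util in samples:
    by_service[host_to_service.get(host, "unmapped")].append(util)

for service, utils in sorted(by_service.items()):
    print(f"{service}: avg CPU {mean(utils):.0%} across {len(utils)} samples")
```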

4. Adopt Integrated, Shared Processes Across the Enterprise
A major shortcoming of traditional capacity management is that capacity planning and analysis are performed by a small cadre of planners who are siloed based on technology, such as server specialists, storage experts, and network specialists. This fragmentation makes it difficult to manage capacity based on overall business service impact. For example, how can you ensure that sufficient capacity will be available to support an anticipated spike in workload in an order-entry service due to an upcoming promotional event?

The move to the cloud environment requires a transition from siloed processes that are performed solely by capacity planners to integrated processes that are extended to and shared by other IT groups, such as application developers and database administrators. This transition permits you to leverage the expertise of capacity planners while making capacity planning a universal, shared responsibility that transcends functional IT groups.

5. Employ Predictive Analytics
Most IT organizations are moving toward the cloud environment incrementally. This typically comprises two major phases. The first phase involves migrating physical systems to virtual machines. Here, IT simply virtualizes selected physical systems with the primary goal of cutting data center costs by reducing the number of physical devices. For example, you may already have a virtual server farm in place and want to virtualize a current physical workload to that farm. You approach this by determining which physical host servers can best accommodate the additional workload(s).
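
A simplified version of that placement decision is sketched below: it picks the farm host with enough residual CPU headroom to absorb a candidate workload. The host figures, the 80 percent utilization ceiling, and the CPU-only view are illustrative assumptions; production tools also weigh memory, storage, network, and affinity constraints.

```python
# A simplified placement check for phase one (P2V migration): pick the host
# in the virtual server farm with enough residual CPU headroom to absorb a
# candidate workload. All figures are hypothetical.

hosts = {  # host -> (cpu_capacity_ghz, cpu_committed_ghz)
    "esx-host-a": (64.0, 51.0),
    "esx-host-b": (64.0, 38.5),
    "esx-host-c": (96.0, 83.0),
}

workload_demand_ghz = 9.0
ceiling = 0.80  # keep hosts below 80% of capacity after placement

candidates = []
for name, (capacity, committed) in hosts.items():
    headroom = capacity * ceiling - committed
    if headroom >= workload_demand_ghz:
        candidates.append((headroom - workload_demand_ghz, name))

if candidates:
    # best fit: the host left with the least spare headroom after placement
    _, best = min(candidates)
    print(f"Place the workload on {best}")
else:
    print("No host can absorb the workload without breaching the 80% ceiling")
```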

The second phase involves optimizing the virtual workloads by determining the most effective placement of virtual workloads in the cloud. You could test various combinations in a laboratory environment, but it would be difficult and expensive to duplicate the real-world environment in your data center. You need the ability to gauge the impact of deploying various virtual/physical combinations in your production environment without actually implementing them.

Analysis and "what-if" modeling tools can help you in both phases. These tools enable you to preview various virtual/physical configurations and combinations before deploying them in production. In addition, modeling workloads also permits you to assess the impact of infrastructure changes without actually making the changes. For example, you can assess the impact of upgrading a server's CPU with another, more powerful one. What's more, predictive analysis of workload trends helps you ensure that needed capacity will be available in the future when and where it's needed to meet anticipated growth.

6. Integrate Capacity Management with Other Solutions, Tools, and Processes
Effective management of the cloud environment implies a broad perspective that encompasses the entire IT infrastructure as well as multiple IT disciplines. That requires the integration of capacity management tools and processes with other Business Service Management (BSM) tools and processes. BSM is a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk, and drive business profit.

The integration of capacity management solutions with discovery and dependency mapping tools gives you broad visibility into all the physical and virtual resources currently deployed in your data center. Not only will you see what's out there, but you will also understand how it is being used.

By integrating capacity management solutions with a configuration management database (CMDB), you can leverage the business service relationships stored in the CMDB for precise capacity analysis, reporting, and planning. Shared use of the configuration items (CIs) and service relationships defined in the CMDB ensures consistency across multiple IT disciplines and eliminates the need to maintain duplicate information in multiple tools.
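
Conceptually, that shared use of CIs amounts to traversing the service relationships recorded in the CMDB. The sketch below walks a small, invented dependency graph to find every resource supporting a business service; a real CMDB would, of course, be queried through its own interfaces.

```python
# An illustrative traversal of CMDB-style relationships: starting from a
# business service CI, walk "depends on" links to find every supporting
# resource so capacity analysis covers the whole service. The CI graph
# below is invented for the example.

from collections import deque

depends_on = {                    # CI -> CIs it depends on
    "svc:order-entry": ["app:order-api", "db:orders"],
    "app:order-api":   ["vm:web-vm-01", "vm:web-vm-02"],
    "db:orders":       ["host:db-server-01", "vol:san-volume-07"],
}

def supporting_cis(service: str) -> set[str]:
    """Return every CI reachable from the service via depends-on links."""
    seen, queue = set(), deque([service])
    while queue:
        ci = queue.popleft()
        for child in depends_on.get(ci, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(supporting_cis("svc:order-entry")))
```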

Integration of capacity management with performance management solutions gives capacity planners real-time and historical data on business-service performance. The planners can leverage this data to maintain an ongoing balance between performance and resource utilization.

By integrating capacity management processes with change and configuration management processes, IT can ensure that all capacity-related changes made either automatically or manually to the cloud infrastructure are in compliance with internal policies and external regulations.

With an overall Business Service Management (BSM) approach, you can integrate capacity management with other IT disciplines and processes. In this way, you can effectively and efficiently manage business services throughout their entire lifecycle - across physical, virtual, and cloud-based resources.

Optimal Use of the Cloud Is Within Reach
Cloud computing is transforming the IT infrastructure into a highly elastic resource that quickly and continually adapts to changing business needs. This transformation fundamentally changes the way organizations deliver IT services and enables IT to be far more responsive to the demands of the business.

Working your way through the six transformational steps described here will help you ensure optimal use of the capacity of your cloud infrastructure - a key success factor for your cloud initiatives. By making the transition from a siloed, technology-oriented approach to a holistic, business-aware approach to capacity management, you can position your organization to realize the full promise of cloud computing.

More Stories By Fabio Violante

Fabio Violante, senior director of product development and member of the CTO Office at BMC Software, began his career with a PhD in Computer Engineering, specializing in IT performance evaluation. He then went on to gain extensive consulting experience in IT architectures while working with Accenture, Sun, and Hewlett-Packard.

In 2000, Violante co-founded Neptuny, a leading provider of IT Performance Optimization and Capacity Management solutions and the first company to be incubated by Politecnico di Milano. Neptuny’s flagship product, Caplan, which is now part of BMC Capacity Management, revolutionized the capacity management landscape by introducing a business-oriented approach to capacity management. In October 2010, Neptuny’s software business was acquired by BMC Software, extending BMC’s leadership in capacity management and enhancing the company’s dynamic Business Service Management portfolio and cloud management offerings.
