Steps for Improving Capacity Management in the Age of Cloud Computing

Realize the full promise of cloud computing

When you wake up in the morning and flip on a light switch, you don't think about whether the local power company has enough electricity available to power the light. Likewise, when you switch on the coffee pot or turn on your stove to make breakfast, you don't wonder about the available capacity of your local power grid.

Similarly, cloud computing is rapidly making the delivery of business services second nature to business users. They are able to access the services they need, when they need them, because the dynamic nature of the cloud offers unprecedented, highly elastic computing capacity.

Is your business able to fully exploit this flexibility? Cloud computing certainly brings with it new challenges for IT infrastructure management, particularly for capacity management. To get the most value from the cloud, you must continually balance capacity utilization, cost, and service quality. How can you make this happen? By transforming capacity management from a siloed, technology-oriented approach to one that is holistic and business aware.

This article examines six essential steps that will guide you through this transformation. These steps will help your organization to achieve the full benefits of cloud computing, including greater IT agility, higher service quality, and lower costs.

Traditional Approaches Are No Longer Sufficient
Traditionally, IT organizations have viewed capacity management as a dedicated, full-time job performed by specialized capacity planners and analysts. The work of these highly skilled staffers is largely event-driven: they respond to occurrences such as new application deployments and hardware changes.

Because of the sheer size and complexity of today's data centers and the small number of capacity planners in the typical IT organization, capacity planners often limit their focus to mission-critical systems. These planners typically manage capacity using a siloed, technology-based approach in which some planners focus on servers, others on storage, and still others on network devices. The problem is that not much communication takes place among groups, resulting in a capacity planning process that is a patchwork of disjointed activities, many of which are manual.

Traditional capacity management may have served well in the past, but if you are considering or already have begun the move to virtualization and cloud computing, traditional approaches are no longer adequate. Virtualization and cloud computing radically alter the character of the IT infrastructure. With cloud computing, the infrastructure is viewed as a pool of resources that are dynamically combined to deliver business services on demand and then returned to the pool when the services are no longer needed. In essence, the cloud provides a source of highly elastic computing capacity that can be applied when and where it's needed.

Consequently, capacity management is crucial to a successful cloud implementation. The aggregate capacity of the cloud must be sufficient to accommodate the dynamic assignment of workloads while still maintaining agreed-upon performance levels. Moreover, you must provide this capacity without over-buying equipment.

What's Needed: A Holistic, Business-Aware Approach
Traditional capacity management does not enable you to fully exploit the unprecedented capacity elasticity offered by cloud computing. Instead, you need a holistic approach that encompasses all data center resources - server, storage, and network - and links capacity utilization to business key performance indicators (KPIs). To meet this requirement, you need to accomplish the following six transformational steps.

1. Take a Broad, Continuous View
Transforming your approach to capacity planning requires making a shift in both scope and timing. With respect to scope, it's important to broaden your capacity planning focus from mission-critical systems to include the entire IT infrastructure. Cloud computing puts the aggregate capacity of the entire infrastructure at your disposal, enabling you to apply it when and where it is needed. To take full advantage of this flexibility, you need to know how much total capacity is out there and how it's being used.

With respect to timing, consider the highly dynamic nature of the infrastructure. Residual capacity is continually changing as workloads shift and changes occur in the physical infrastructure. Consequently, you need to manage capacity on a continual basis rather than only in reaction to certain events.

2. Shift to a Business Service Orientation
Most business users view the IT infrastructure simply as a source of business services. They want to request business services and have them delivered quickly with performance as defined in the associated service level agreements (SLAs). These users are not concerned with the underlying devices that make up a service.

Traditionally, IT has managed capacity from a technology perspective and has communicated in the language of IT managers. In transitioning to cloud computing, you need to manage capacity from a business service perspective and communicate in the language of business managers and users. For example, a capacity planner should be able to answer such questions as, "How many additional customer orders can my infrastructure support before running into capacity and response-time problems?"

To answer that question, the capacity planner must understand the relationship of capacity to business requirements. This is especially challenging in a cloud environment in which the delivery of business services involves the orchestrated combination of multiple resources that may include applications, servers, storage devices, and network equipment. To meet the challenge, you need to take a holistic approach that encompasses all the devices and the application layer that make up each service.
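A question such as "how many additional customer orders can my infrastructure support?" can be answered only once business demand is translated into resource terms. The sketch below illustrates one minimal way to do that translation; the figures (CPU cost per order, the 80% safety ceiling) and the function name are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: translate a business KPI (orders/hour) into an
# infrastructure headroom estimate. All figures are illustrative.

def orders_headroom(cpu_used_pct, cpu_pct_per_order, ceiling_pct=80.0):
    """Estimate how many additional orders/hour the service can absorb
    before CPU utilization crosses the response-time safety ceiling."""
    spare = ceiling_pct - cpu_used_pct
    if spare <= 0:
        return 0                       # already at or above the ceiling
    return round(spare / cpu_pct_per_order)

# A service at 50% CPU, where each order/hour costs 0.02% CPU:
extra = orders_headroom(cpu_used_pct=50, cpu_pct_per_order=0.02)  # 1500
```

In practice, the per-order resource cost would itself be derived from monitoring data that correlates workload volume with utilization, not assumed as a constant.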

Transitioning from technology-oriented to business-service-oriented capacity management requires a corresponding shift in the metrics you gather and communicate. That shift necessitates an expansion of the scope and reach of analytics and reporting from technology metrics to business metrics, so that business demand becomes the driving factor for capacity management.

Reporting in the cloud environment, therefore, should link IT resources (physical and virtual servers, databases, applications, storage, networks, and facilities) to measurable business data, such as the costs and KPIs of the business. This linkage enables IT to communicate capacity issues to business leaders in a meaningful way. As a result, leaders can make informed, cost-effective choices in requesting capacity. For example, communicating the relationship of service workload capacity to cost discourages business users from requesting more capacity than they actually need.
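The cost linkage described above can be sketched as a simple attribution: total infrastructure cost spread over the business volume it supports. The rates and figures below are illustrative assumptions, not real chargeback data.

```python
# Minimal sketch: express capacity in business terms (cost per order)
# rather than technology terms (CPU percent). Figures are illustrative.

def cost_per_order(monthly_infra_cost, orders_per_month):
    """Infrastructure cost attributed to each order processed."""
    return monthly_infra_cost / orders_per_month

def capacity_request_cost(extra_units, cost_per_unit):
    """What an additional capacity request would cost the business,
    so users can weigh the request against its price."""
    return extra_units * cost_per_unit

per_order = cost_per_order(12_000, 480_000)   # $0.025 per order
```

Even this trivial linkage changes the conversation: a request for "ten more servers" becomes a request for "$X per month to support Y additional orders."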

The transition to a business service orientation requires a parallel transition in the makeup and skill set of capacity planners. Instead of a highly specialized, device-oriented capacity planning guru, you need generalist IT roles, such as cloud service architects. These generalists should work closely with business users on one side and with the infrastructure experts on the other to establish capacity requirements based on business workloads.

3. Automate, Automate, Automate
The unfortunate reality in many organizations is that capacity planning involves numerous manual processes that are both inefficient and time intensive. Capacity planners may collect technology-oriented usage and performance data from a monitoring infrastructure and manually import the data into spreadsheets - a laborious and time-consuming process. The data collection task consumes much of the planners' time, leaving little time for analysis and capacity planning.

The transition to business-service-oriented capacity management makes the traditional manual approach impractical. That's because capacity planners must not only elevate their analysis, reports, and recommendations to a business-service level, but they must also extend their reporting from mission-critical servers to include the entire infrastructure. This requires an automated approach that encompasses data gathering, translation, and reporting in a form that is meaningful to business managers. This increase in automation helps boost quality and operational efficiency. Ultimately, automation boosts staff productivity and increases the relevance and importance of capacity management to the business.
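The rollup that automation replaces - collecting device metrics and manually pivoting them into service-level spreadsheets - can be sketched as follows. The service-to-device mapping and the sample data are illustrative assumptions; a real implementation would pull both from monitoring and CMDB tools.

```python
# Minimal sketch: automatically roll device-level utilization samples
# up to business-service level, replacing manual spreadsheet imports.
from statistics import mean

SERVICE_MAP = {                 # which devices underpin which service
    "order-entry": ["web01", "web02", "db01"],
    "billing":     ["app03", "db02"],
}

def service_report(samples):
    """samples: {device: [cpu% samples]} -> {service: avg cpu%}."""
    return {
        svc: round(mean(m for dev in devs for m in samples[dev]), 1)
        for svc, devs in SERVICE_MAP.items()
    }

samples = {"web01": [40, 60], "web02": [50, 50], "db01": [70, 80],
           "app03": [20, 30], "db02": [55, 65]}
report = service_report(samples)   # e.g. {"order-entry": 58.3, ...}
```

The point is not the arithmetic but the pipeline: once collection, translation, and reporting run unattended, planners spend their time on analysis instead of data wrangling.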

4. Adopt Integrated, Shared Processes Across the Enterprise
A major shortcoming of traditional capacity management is that capacity planning and analysis are performed by a small cadre of planners who are siloed based on technology, such as server specialists, storage experts, and network specialists. This fragmentation makes it difficult to manage capacity based on overall business service impact. For example, how can you ensure that sufficient capacity will be available to support an anticipated spike in workload in an order-entry service due to an upcoming promotional event?

The move to the cloud environment requires a transition from siloed processes that are performed solely by capacity planners to integrated processes that are extended to and shared by other IT groups, such as application developers and database administrators. This transition permits you to leverage the expertise of capacity planners while making capacity planning a universal, shared responsibility that transcends functional IT groups.

5. Employ Predictive Analytics
Most IT organizations are moving toward the cloud environment incrementally. This typically comprises two major phases. The first phase involves migrating physical systems to virtual machines. Here, IT simply virtualizes selected physical systems with the primary goal of cutting data center costs by reducing the number of physical devices. For example, you may already have a virtual server farm in place and want to virtualize a current physical workload to that farm. You approach this by determining which physical host servers can best accommodate the additional workload(s).

The second phase involves optimizing the virtual workloads by determining the most effective placement of virtual workloads in the cloud. You could test various combinations in a laboratory environment, but it would be difficult and expensive to duplicate the real-world environment in your data center. You need the ability to gauge the impact of deploying various virtual/physical combinations in your production environment without actually implementing them.

Analysis and "what-if" modeling tools can help you in both phases. These tools enable you to preview various virtual/physical configurations and combinations before deploying them in production. In addition, modeling workloads permits you to assess the impact of infrastructure changes without actually making the changes. For example, you can assess the impact of replacing a server's CPU with a more powerful one. What's more, predictive analysis of workload trends helps you ensure that needed capacity will be available in the future when and where it's needed to meet anticipated growth.
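At its simplest, predictive trend analysis fits a line to historical utilization and projects when the trend crosses a capacity threshold. The sketch below uses a least-squares slope over weekly samples; the sample data and threshold are illustrative assumptions, and production tools would use far richer models (seasonality, percentiles, queuing effects).

```python
# Minimal sketch: project when a rising utilization trend will cross
# a capacity threshold. Sample data is illustrative.

def weeks_until_threshold(samples, threshold=80.0):
    """samples: weekly cpu% readings. Fit a least-squares line and
    return weeks until it crosses the threshold (None if not rising)."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None                       # flat or falling trend
    current = y_mean + slope * (n - 1 - x_mean)   # trend value now
    return (threshold - current) / slope

eta = weeks_until_threshold([50, 52, 54, 56, 58])   # 11.0 weeks
```

Even a crude projection like this turns capacity planning from reactive ("we ran out") into anticipatory ("we will run out in roughly eleven weeks unless we act").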

6. Integrate Capacity Management with Other Solutions, Tools, and Processes
Effective management of the cloud environment implies a broad perspective that encompasses the entire IT infrastructure as well as multiple IT disciplines. That requires the integration of capacity management tools and processes with other Business Service Management (BSM) tools and processes. BSM is a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk, and drive business profit.

The integration of capacity management solutions with discovery and dependency mapping tools gives you broad visibility into all the physical and virtual resources currently deployed in your data center. Not only will you see what's out there, but you will also understand how it is being used.

By integrating capacity management solutions with a configuration management database (CMDB), you can leverage the business service relationships stored in the CMDB for precise capacity analysis, reporting, and planning. Shared use of the configuration items (CIs) and service relationships defined in the CMDB ensures consistency across multiple IT disciplines and eliminates the need to maintain duplicate information in multiple tools.

Integration of capacity management with performance management solutions gives capacity planners real-time and historical data on business-service performance. The planners can leverage this data to maintain an ongoing balance between performance and resource utilization.

By integrating capacity management processes with change and configuration management processes, IT can ensure that all capacity-related changes made either automatically or manually to the cloud infrastructure are in compliance with internal policies and external regulations.

With an overall Business Service Management (BSM) approach, you can integrate capacity management with other IT disciplines and processes. In this way, you can effectively and efficiently manage business services throughout their entire lifecycle - across physical, virtual, and cloud-based resources.

Optimal Use of the Cloud Is Within Reach
Cloud computing is transforming the IT infrastructure into a highly elastic resource that quickly and continually adapts to changing business needs. This transformation fundamentally changes the way organizations deliver IT services and enables IT to be far more responsive to the demands of the business.

Working your way through the six transformational steps described here will help you ensure optimal use of the capacity of your cloud infrastructure - a key success factor for your cloud initiatives. By making the transition from a siloed, technology-oriented approach to a holistic, business-aware approach to capacity management, you can position your organization to realize the full promise of cloud computing.

More Stories By Fabio Violante

Fabio Violante, senior director of product development and member of the CTO Office at BMC Software, began his career with a PhD in Computer Engineering, specializing in IT performance evaluation. He then gained extensive consulting experience in IT architectures while working with Accenture, Sun, and Hewlett-Packard.

In 2000, Violante co-founded Neptuny, a leading provider of IT performance optimization and capacity management solutions and the first company to be incubated by Politecnico di Milano. Neptuny’s flagship product, Caplan, now part of BMC Capacity Management, revolutionized the capacity management landscape by introducing a business-oriented approach to capacity management. In October 2010, Neptuny’s software business was acquired by BMC Software, extending BMC’s leadership in capacity management and enhancing the company’s Business Service Management portfolio and cloud management offerings.
