By Fabio Violante
September 6, 2011 04:00 PM EDT
When you wake up in the morning and flip on a light switch, you don't think about whether the local power company has enough electricity available to power the light. Likewise, when you switch on the coffee pot or turn on your stove to make breakfast, you don't wonder about the available capacity of your local power grid.
Similarly, cloud computing is rapidly making the delivery of business services second nature to business users. They are able to access the services they need, when they need them, because the dynamic nature of the cloud offers unprecedented, highly elastic computing capacity.
Is your business able to fully exploit this flexibility? Cloud computing certainly brings with it new challenges for IT infrastructure management, particularly for capacity management. To get the most value from the cloud, you must continually balance capacity utilization, cost, and service quality. How can you make this happen? By transforming capacity management from a siloed, technology-oriented approach to one that is holistic and business aware.
This article examines six essential steps that will guide you through this transformation. These steps will help your organization to achieve the full benefits of cloud computing, including greater IT agility, higher service quality, and lower costs.
Traditional Approaches Are No Longer Sufficient
Traditionally, IT organizations have viewed capacity management as a dedicated, full-time job performed by specialized capacity planners and analysts. The work of these highly skilled staffers is largely event-driven: they respond to occurrences such as new application deployments and hardware changes.
Because of the sheer size and complexity of today's data centers and the small number of capacity planners in the typical IT organization, capacity planners often limit their focus to mission-critical systems. These planners typically manage capacity using a siloed, technology-based approach in which some planners focus on servers, others on storage, and still others on network devices. The problem is that not much communication takes place among groups, resulting in a capacity planning process that is a patchwork of disjointed activities, many of which are manual.
Traditional capacity management may have served well in the past, but if you are considering or already have begun the move to virtualization and cloud computing, traditional approaches are no longer adequate. Virtualization and cloud computing radically alter the character of the IT infrastructure. With cloud computing, the infrastructure is viewed as a pool of resources that are dynamically combined to deliver business services on demand and then returned to the pool when the services are no longer needed. In essence, the cloud provides a source of highly elastic computing capacity that can be applied when and where it's needed.
Consequently, capacity management is crucial to a successful cloud implementation. The aggregate capacity of the cloud must be sufficient to accommodate the dynamic assignment of workloads while still maintaining agreed-upon performance levels. Moreover, you must provide this capacity without over-buying equipment.
What's Needed: A Holistic, Business-Aware Approach
Traditional capacity management does not enable you to fully exploit the unprecedented capacity elasticity offered by cloud computing. Instead, you need a holistic approach that encompasses all data center resources - server, storage, and network - and links capacity utilization to business key performance indicators (KPIs). To meet this requirement, you need to accomplish the following six transformational steps.
1. Take a Broad, Continuous View
Transforming your approach to capacity planning requires making a shift in both scope and timing. With respect to scope, it's important to broaden your capacity planning focus from mission-critical systems to include the entire IT infrastructure. Cloud computing puts the aggregate capacity of the entire infrastructure at your disposal, enabling you to apply it when and where it is needed. To take full advantage of this flexibility, you need to know how much total capacity is out there and how it's being used.
With respect to timing, consider the highly dynamic nature of the infrastructure. Residual capacity is continually changing as workloads shift and changes occur in the physical infrastructure. Consequently, you need to manage capacity on a continual basis rather than only in reaction to certain events.
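To make this broad, continuous view concrete, the sketch below aggregates residual capacity across server, storage, and network pools in a form that can run on every monitoring interval rather than per event. The pool names and figures are illustrative assumptions, not output from any specific monitoring product:

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Aggregate capacity figures for one resource silo (server, storage, network)."""
    name: str
    total: float   # total provisioned capacity, in the pool's native unit
    used: float    # capacity currently consumed by running workloads

    @property
    def residual(self) -> float:
        return self.total - self.used

    @property
    def utilization(self) -> float:
        return self.used / self.total

def residual_report(pools: list[ResourcePool]) -> dict[str, float]:
    """Snapshot of residual capacity across the whole infrastructure,
    suitable for running continuously rather than only after change events."""
    return {p.name: p.residual for p in pools}

# Hypothetical pools spanning all three silos, not just mission-critical servers.
pools = [
    ResourcePool("server-cpu-cores", total=4096, used=2870),
    ResourcePool("storage-tb", total=900, used=612),
    ResourcePool("network-gbps", total=400, used=118),
]
print(residual_report(pools))
```

Because the report is a cheap function of the latest samples, it can be recomputed as often as the monitoring data refreshes, which is exactly the shift from event-driven to continuous capacity management.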
2. Shift to a Business Service Orientation
Most business users view the IT infrastructure simply as a source of business services. They want to request business services and have them delivered quickly with performance as defined in the associated service level agreements (SLAs). These users are not concerned with the underlying devices that make up a service.
Traditionally, IT has managed capacity from a technology perspective and has communicated in the language of IT managers. In transitioning to cloud computing, you need to manage capacity from a business service perspective and communicate in the language of business managers and users. For example, a capacity planner should be able to answer such questions as, "How many additional customer orders can my infrastructure support before running into capacity and response-time problems?"
To answer that question, the capacity planner must understand the relationship of capacity to business requirements. This is especially challenging in a cloud environment in which the delivery of business services involves the orchestrated combination of multiple resources that may include applications, servers, storage devices, and network equipment. To meet the challenge, you need to take a holistic approach that encompasses all the devices and the application layer that make up each service.
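One way to make the "how many more orders?" question answerable is to express residual capacity in business units. The sketch below assumes you have already measured the resource footprint of a single order across each silo; all figures are hypothetical, and the binding constraint is simply whichever resource runs out first:

```python
def order_headroom(residual: dict[str, float],
                   per_order_cost: dict[str, float]) -> int:
    """Estimate how many additional orders the infrastructure can absorb.

    The answer is bounded by the resource that exhausts first, so we take
    the minimum over all resources with a non-zero per-order footprint.
    """
    return int(min(residual[r] / cost
                   for r, cost in per_order_cost.items() if cost > 0))

# Hypothetical residual capacity per silo, and the measured footprint
# that one customer order adds to each resource.
residual = {"cpu_seconds": 50_000.0, "storage_mb": 120_000.0, "net_mb": 80_000.0}
per_order = {"cpu_seconds": 2.5, "storage_mb": 4.0, "net_mb": 1.2}
print(order_headroom(residual, per_order))
```

A report built this way tells a business manager "about 20,000 more orders before CPU becomes the bottleneck" instead of "cluster utilization is at 70 percent," which is the language shift this step calls for.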
Transitioning from technology-oriented to business-service-oriented capacity management requires a corresponding shift in the metrics you gather and communicate. That shift necessitates an expansion of the scope and reach of analytics and reporting from technology metrics to business metrics, so that business demand becomes the driving factor for capacity management.
Reporting in the cloud environment, therefore, should link IT resources (physical and virtual servers, databases, applications, storage, networks, and facilities) to measurable business data, such as the costs and KPIs of the business. This linkage enables IT to communicate capacity issues to business leaders in a meaningful way. As a result, leaders can make informed, cost-effective choices in requesting capacity. For example, communicating the relationship of service workload capacity to cost discourages business users from requesting more capacity than they actually need.
The transition to a business service orientation requires a parallel transition in the makeup and skill set of capacity planners. Instead of highly specialized, device-oriented capacity planning gurus, you need IT generalists, such as cloud service architects. These generalists should work closely with business users on one side and with infrastructure experts on the other to establish capacity requirements based on business workloads.
3. Automate, Automate, Automate
The unfortunate reality in many organizations is that capacity planning involves numerous manual processes that are both inefficient and time intensive. Capacity planners may collect technology-oriented usage and performance data from a monitoring infrastructure and manually import the data into spreadsheets - a laborious and time-consuming process. The data collection task consumes much of the planners' time, leaving little time for analysis and capacity planning.
The transition to business-service-oriented capacity management makes the traditional manual approach impractical. That's because capacity planners must not only elevate their analysis, reports, and recommendations to a business-service level, but they must also extend their reporting from mission-critical servers to include the entire infrastructure. This requires an automated approach that encompasses data gathering, translation, and reporting in a form that is meaningful to business managers. This increase in automation helps boost quality and operational efficiency. Ultimately, automation boosts staff productivity and increases the relevance and importance of capacity management to the business.
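The core of that automation is the translation step planners otherwise perform by hand in spreadsheets: rolling raw, device-level samples up to the business services they support. A minimal sketch, assuming the device-to-service mapping comes from a service model or CMDB (the sample names and figures here are invented):

```python
from collections import defaultdict

def rollup_by_service(samples, device_to_service):
    """Translate device-level utilization samples into per-service utilization.

    `samples` is an iterable of (device, used, capacity) tuples;
    `device_to_service` maps device names to the business service they serve.
    Devices with no mapping are flagged rather than silently dropped.
    """
    totals = defaultdict(lambda: {"used": 0.0, "capacity": 0.0})
    for device, used, capacity in samples:
        service = device_to_service.get(device, "unmapped")
        totals[service]["used"] += used
        totals[service]["capacity"] += capacity
    return {s: round(v["used"] / v["capacity"], 3) for s, v in totals.items()}

samples = [("web01", 12.0, 16.0), ("web02", 10.0, 16.0), ("db01", 28.0, 32.0)]
mapping = {"web01": "order-entry", "web02": "order-entry", "db01": "order-entry"}
print(rollup_by_service(samples, mapping))
```

Run on a schedule against the full monitoring feed, a pipeline like this frees planners from data wrangling and keeps the business-level report current across the entire infrastructure, not just mission-critical servers.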
4. Adopt Integrated, Shared Processes Across the Enterprise
A major shortcoming of traditional capacity management is that capacity planning and analysis are performed by a small cadre of planners who are siloed based on technology, such as server specialists, storage experts, and network specialists. This fragmentation makes it difficult to manage capacity based on overall business service impact. For example, how can you ensure that sufficient capacity will be available to support an anticipated spike in workload in an order-entry service due to an upcoming promotional event?
The move to the cloud environment requires a transition from siloed processes that are performed solely by capacity planners to integrated processes that are extended to and shared by other IT groups, such as application developers and database administrators. This transition permits you to leverage the expertise of capacity planners while making capacity planning a universal, shared responsibility that transcends functional IT groups.
5. Employ Predictive Analytics
Most IT organizations are moving toward the cloud environment incrementally. This typically comprises two major phases. The first phase involves migrating physical systems to virtual machines. Here, IT simply virtualizes selected physical systems with the primary goal of cutting data center costs by reducing the number of physical devices. For example, you may already have a virtual server farm in place and want to virtualize a current physical workload to that farm. You approach this by determining which physical host servers can best accommodate the additional workload(s).
The second phase involves optimizing the virtual workloads by determining the most effective placement of virtual workloads in the cloud. You could test various combinations in a laboratory environment, but it would be difficult and expensive to duplicate the real-world environment in your data center. You need the ability to gauge the impact of deploying various virtual/physical combinations in your production environment without actually implementing them.
Analysis and "what-if" modeling tools can help you in both phases. These tools enable you to preview various virtual/physical configurations and combinations before deploying them in production. In addition, modeling workloads also permits you to assess the impact of infrastructure changes without actually making the changes. For example, you can assess the impact of upgrading a server's CPU with another, more powerful one. What's more, predictive analysis of workload trends helps you ensure that needed capacity will be available in the future when and where it's needed to meet anticipated growth.
6. Integrate Capacity Management with Other Solutions, Tools, and Processes
Effective management of the cloud environment implies a broad perspective that encompasses the entire IT infrastructure as well as multiple IT disciplines. That requires the integration of capacity management tools and processes with other Business Service Management (BSM) tools and processes. BSM is a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk, and drive business profit.
The integration of capacity management solutions with discovery and dependency mapping tools gives you broad visibility into all the physical and virtual resources currently deployed in your data center. Not only will you see what's out there, but you will also understand how it is being used.
By integrating capacity management solutions with a configuration management database (CMDB), you can leverage the business service relationships stored in the CMDB for precise capacity analysis, reporting, and planning. Shared use of the configuration items (CIs) and service relationships defined in the CMDB ensures consistency across multiple IT disciplines and eliminates the need to maintain duplicate information in multiple tools.
Integration of capacity management with performance management solutions gives capacity planners real-time and historical data on business-service performance. The planners can leverage this data to maintain an ongoing balance between performance and resource utilization.
By integrating capacity management processes with change and configuration management processes, IT can ensure that all capacity-related changes made either automatically or manually to the cloud infrastructure are in compliance with internal policies and external regulations.
With an overall Business Service Management (BSM) approach, you can integrate capacity management with other IT disciplines and processes. In this way, you can effectively and efficiently manage business services throughout their entire lifecycle - across physical, virtual, and cloud-based resources.
Optimal Use of the Cloud Is Within Reach
Cloud computing is transforming the IT infrastructure into a highly elastic resource that quickly and continually adapts to changing business needs. This transformation fundamentally changes the way organizations deliver IT services and enables IT to be far more responsive to the demands of the business.
Working your way through the six transitional steps described here will help you ensure optimal use of the capacity of your cloud infrastructure - a key success factor for your cloud initiatives. By making the transition from a siloed, technology-oriented approach to a holistic, business-aware approach to capacity management, you can position your organization to realize the full promise of cloud computing.