By Fabio Violante
September 6, 2011 04:00 PM EDT
When you wake up in the morning and flip on a light switch, you don't think about whether the local power company has enough electricity available to power the light. Likewise, when you switch on the coffee pot or turn on your stove to make breakfast, you don't wonder about the available capacity of your local power grid.
Similarly, cloud computing is rapidly making the delivery of business services second nature to business users. They are able to access the services they need, when they need them, because the dynamic nature of the cloud offers unprecedented, highly elastic computing capacity.
Is your business able to fully exploit this flexibility? Cloud computing certainly brings with it new challenges for IT infrastructure management, particularly for capacity management. To get the most value from the cloud, you must continually balance capacity utilization, cost, and service quality. How can you make this happen? By transforming capacity management from a siloed, technology-oriented approach to one that is holistic and business aware.
This article examines six essential steps that will guide you through this transformation. These steps will help your organization to achieve the full benefits of cloud computing, including greater IT agility, higher service quality, and lower costs.
Traditional Approaches Are No Longer Sufficient
Traditionally, IT organizations have viewed capacity management as a dedicated, full-time job performed by specialized capacity planners and analysts. These highly skilled staffers are event-driven, responding to such occurrences as new application deployments and hardware changes.
Because of the sheer size and complexity of today's data centers and the small number of capacity planners in the typical IT organization, capacity planners often limit their focus to mission-critical systems. These planners typically manage capacity using a siloed, technology-based approach in which some planners focus on servers, others on storage, and still others on network devices. The problem is that not much communication takes place among groups, resulting in a capacity planning process that is a patchwork of disjointed activities, many of which are manual.
Traditional capacity management may have served well in the past, but if you are considering or already have begun the move to virtualization and cloud computing, traditional approaches are no longer adequate. Virtualization and cloud computing radically alter the character of the IT infrastructure. With cloud computing, the infrastructure is viewed as a pool of resources that are dynamically combined to deliver business services on demand and then returned to the pool when the services are no longer needed. In essence, the cloud provides a source of highly elastic computing capacity that can be applied when and where it's needed.
Consequently, capacity management is crucial to a successful cloud implementation. The aggregate capacity of the cloud must be sufficient to accommodate the dynamic assignment of workloads while still maintaining agreed-upon performance levels. Moreover, you must provide this capacity without over-buying equipment.
What's Needed: A Holistic, Business-Aware Approach
Traditional capacity management does not enable you to fully exploit the unprecedented capacity elasticity offered by cloud computing. Instead, you need a holistic approach that encompasses all data center resources - server, storage, and network - and links capacity utilization to business key performance indicators (KPIs). To meet this requirement, you need to accomplish the following six transformational steps.
1. Take a Broad, Continuous View
Transforming your approach to capacity planning requires making a shift in both scope and timing. With respect to scope, it's important to broaden your capacity planning focus from mission-critical systems to include the entire IT infrastructure. Cloud computing puts the aggregate capacity of the entire infrastructure at your disposal, enabling you to apply it when and where it is needed. To take full advantage of this flexibility, you need to know how much total capacity is out there and how it's being used.
With respect to timing, consider the highly dynamic nature of the infrastructure. Residual capacity is continually changing as workloads shift and changes occur in the physical infrastructure. Consequently, you need to manage capacity on a continual basis rather than only in reaction to certain events.
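To make the idea of continually tracked residual capacity concrete, here is a minimal sketch that sums unallocated capacity across the whole pool rather than a handful of mission-critical hosts. The host names and core counts are illustrative assumptions, not figures from any particular tool.

```python
# Sketch: residual capacity across the whole pool, recomputed continually
# rather than only after events. All host names and numbers are illustrative.
hosts = {
    "web-01": {"cpu_total": 32, "cpu_allocated": 24},
    "web-02": {"cpu_total": 32, "cpu_allocated": 10},
    "db-01":  {"cpu_total": 64, "cpu_allocated": 60},
}

def residual_capacity(hosts):
    """Total unallocated CPU cores across the infrastructure."""
    return sum(h["cpu_total"] - h["cpu_allocated"] for h in hosts.values())

print(residual_capacity(hosts))  # 34 cores free pool-wide
```

In practice this calculation would run on a schedule against live inventory and monitoring data, so that the residual figure is always current as workloads shift.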
2. Shift to a Business Service Orientation
Most business users view the IT infrastructure simply as a source of business services. They want to request business services and have them delivered quickly with performance as defined in the associated service level agreements (SLAs). These users are not concerned with the underlying devices that make up a service.
Traditionally, IT has managed capacity from a technology perspective and has communicated in the language of IT managers. In transitioning to cloud computing, you need to manage capacity from a business service perspective and communicate in the language of business managers and users. For example, a capacity planner should be able to answer such questions as, "How many additional customer orders can my infrastructure support before running into capacity and response-time problems?"
To answer that question, the capacity planner must understand the relationship of capacity to business requirements. This is especially challenging in a cloud environment in which the delivery of business services involves the orchestrated combination of multiple resources that may include applications, servers, storage devices, and network equipment. To meet the challenge, you need to take a holistic approach that encompasses all the devices and the application layer that make up each service.
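The arithmetic behind a question like "how many more orders can we absorb?" can be sketched in a few lines, assuming you have measured the resource cost of one business transaction. The per-order CPU cost, utilization figures, and SLA ceiling below are hypothetical.

```python
# Sketch: translate business demand into resource terms.
# All figures are hypothetical assumptions for illustration.
cpu_seconds_per_order = 0.5    # measured CPU cost of one customer order
cluster_cpu_capacity = 400.0   # CPU-seconds available per second (400 cores)
current_utilization = 0.65     # cluster is 65% busy today
sla_ceiling = 0.80             # stay below 80% to protect response times

# Spare capacity before the SLA ceiling, expressed as extra orders/second
headroom = (sla_ceiling - current_utilization) * cluster_cpu_capacity
extra_orders_per_second = headroom / cpu_seconds_per_order
print(round(extra_orders_per_second))  # 120 additional orders/second
```

The value of this framing is that the answer comes out in the business's own units (orders per second), not in CPU percentages that mean little to a business manager.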
Transitioning from technology-oriented to business-service-oriented capacity management requires a corresponding shift in the metrics you gather and communicate. That shift necessitates an expansion of the scope and reach of analytics and reporting from technology metrics to business metrics, so that business demand becomes the driving factor for capacity management.
Reporting in the cloud environment, therefore, should link IT resources (physical and virtual servers, databases, applications, storage, networks, and facilities) to measurable business data, such as the costs and KPIs of the business. This linkage enables IT to communicate capacity issues to business leaders in a meaningful way. As a result, leaders can make informed, cost-effective choices in requesting capacity. For example, communicating the relationship of service workload capacity to cost discourages business users from requesting more capacity than they actually need.
The transition to a business service orientation requires a parallel transition in the makeup and skill set of capacity planners. Instead of highly specialized, device-oriented capacity planning gurus, you need generalist IT roles, such as cloud service architects. These generalists should work closely with business users on one side and with the infrastructure experts on the other to establish capacity requirements based on business workloads.
3. Automate, Automate, Automate
The unfortunate reality in many organizations is that capacity planning involves numerous manual processes that are both inefficient and time intensive. Capacity planners may collect technology-oriented usage and performance data from a monitoring infrastructure and manually import the data into spreadsheets - a laborious and time-consuming process. The data collection task consumes much of the planners' time, leaving little time for analysis and capacity planning.
The transition to business-service-oriented capacity management makes the traditional manual approach impractical. That's because capacity planners must not only elevate their analysis, reports, and recommendations to a business-service level, but they must also extend their reporting from mission-critical servers to include the entire infrastructure. This requires an automated approach that encompasses data gathering, translation, and reporting in a form that is meaningful to business managers. This increase in automation helps boost quality and operational efficiency. Ultimately, automation boosts staff productivity and increases the relevance and importance of capacity management to the business.
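As a minimal sketch of what replaces the spreadsheet step, the snippet below rolls device-level utilization up into a per-service report, using a mapping from business services to the servers that deliver them. The service map and readings are illustrative assumptions.

```python
# Sketch: automated roll-up from device metrics to a business-service
# report, replacing manual spreadsheet imports. Mapping and readings
# are illustrative.
service_map = {                # which servers deliver which service
    "order-entry": ["web-01", "web-02", "db-01"],
    "reporting":   ["db-01"],
}
cpu_busy = {"web-01": 0.72, "web-02": 0.35, "db-01": 0.90}  # from monitoring

def service_report(service_map, cpu_busy):
    """Average CPU utilization per business service."""
    report = {}
    for service, servers in service_map.items():
        report[service] = sum(cpu_busy[s] for s in servers) / len(servers)
    return report

for svc, util in service_report(service_map, cpu_busy).items():
    print(f"{svc}: {util:.0%} average CPU utilization")
```

In a real deployment the service map would come from a CMDB and the readings from the monitoring infrastructure, so the report regenerates itself with no manual data handling.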
4. Adopt Integrated, Shared Processes Across the Enterprise
A major shortcoming of traditional capacity management is that capacity planning and analysis are performed by a small cadre of planners who are siloed based on technology, such as server specialists, storage experts, and network specialists. This fragmentation makes it difficult to manage capacity based on overall business service impact. For example, how can you ensure that sufficient capacity will be available to support an anticipated spike in workload in an order-entry service due to an upcoming promotional event?
The move to the cloud environment requires a transition from siloed processes that are performed solely by capacity planners to integrated processes that are extended to and shared by other IT groups, such as application developers and database administrators. This transition permits you to leverage the expertise of capacity planners while making capacity planning a universal, shared responsibility that transcends functional IT groups.
5. Employ Predictive Analytics
Most IT organizations are moving toward the cloud environment incrementally. This typically comprises two major phases. The first phase involves migrating physical systems to virtual machines. Here, IT simply virtualizes selected physical systems with the primary goal of cutting data center costs by reducing the number of physical devices. For example, you may already have a virtual server farm in place and want to virtualize a current physical workload to that farm. You approach this by determining which physical host servers can best accommodate the additional workloads.
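The "which host can take this workload?" decision is, at its simplest, a bin-packing problem. The sketch below uses a first-fit-decreasing heuristic on CPU alone; real placement tools also weigh memory, I/O, and affinity rules. Host names and figures are illustrative.

```python
# Sketch: first-fit-decreasing placement of workloads onto hosts with
# free capacity. CPU-only; real tools also consider memory, I/O, and
# affinity rules. All figures are illustrative.
def place_workloads(workloads, hosts):
    """Assign each (name, cpu_demand) workload to the first host that fits."""
    placement = {}
    free = dict(hosts)  # host -> free CPU cores (copy, so input is untouched)
    for name, demand in sorted(workloads, key=lambda w: -w[1]):
        for host, cores in free.items():
            if cores >= demand:
                placement[name] = host
                free[host] -= demand
                break
        else:
            placement[name] = None  # no host can accommodate this workload
    return placement

hosts = {"esx-01": 16, "esx-02": 8}
workloads = [("erp-app", 10), ("mail", 6), ("batch", 7)]
print(place_workloads(workloads, hosts))
```

Placing the largest workloads first tends to leave fewer awkward gaps, which is why the heuristic sorts by descending demand before assigning.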
The second phase involves optimizing the virtual workloads by determining the most effective placement of virtual workloads in the cloud. You could test various combinations in a laboratory environment, but it would be difficult and expensive to duplicate the real-world environment in your data center. You need the ability to gauge the impact of deploying various virtual/physical combinations in your production environment without actually implementing them.
Analysis and "what-if" modeling tools can help you in both phases. These tools enable you to preview various virtual/physical configurations and combinations before deploying them in production. In addition, modeling workloads also permits you to assess the impact of infrastructure changes without actually making the changes. For example, you can assess the impact of upgrading a server's CPU with another, more powerful one. What's more, predictive analysis of workload trends helps you ensure that needed capacity will be available in the future when and where it's needed to meet anticipated growth.
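A simple form of the predictive trend analysis mentioned above is a least-squares line fitted to utilization history, extrapolated to the point where a threshold is crossed. The monthly history below is an illustrative assumption.

```python
# Sketch: predictive trend analysis. Fit a straight line to monthly
# utilization history and estimate months until a threshold is crossed.
# The history values are illustrative.
def months_until_threshold(history, threshold):
    """Least-squares linear trend; months from now until threshold is hit."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # utilization is flat or falling; no exhaustion forecast
    return (threshold - intercept) / slope - (n - 1)

history = [0.50, 0.54, 0.58, 0.62, 0.66]      # monthly CPU utilization
print(months_until_threshold(history, 0.80))  # roughly 3.5 months of headroom
```

A forecast like this turns capacity planning from a reaction to shortages into advance warning: procurement or rebalancing can start months before the threshold is reached.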
6. Integrate Capacity Management with Other Solutions, Tools, and Processes
Effective management of the cloud environment implies a broad perspective that encompasses the entire IT infrastructure as well as multiple IT disciplines. That requires the integration of capacity management tools and processes with other Business Service Management (BSM) tools and processes. BSM is a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk, and drive business profit.
The integration of capacity management solutions with discovery and dependency mapping tools gives you broad visibility into all the physical and virtual resources currently deployed in your data center. Not only will you see what's out there, but you will also understand how it is being used.
By integrating capacity management solutions with a configuration management database (CMDB), you can leverage the business service relationships stored in the CMDB for precise capacity analysis, reporting, and planning. Shared use of the configuration items (CIs) and service relationships defined in the CMDB ensures consistency across multiple IT disciplines and eliminates the need to maintain duplicate information in multiple tools.
Integration of capacity management with performance management solutions gives capacity planners real-time and historical data on business-service performance. The planners can leverage this data to maintain an ongoing balance between performance and resource utilization.
By integrating capacity management processes with change and configuration management processes, IT can ensure that all capacity-related changes made either automatically or manually to the cloud infrastructure are in compliance with internal policies and external regulations.
With an overall BSM approach, you can integrate capacity management with other IT disciplines and processes. In this way, you can effectively and efficiently manage business services throughout their entire lifecycle - across physical, virtual, and cloud-based resources.
Optimal Use of the Cloud Is Within Reach
Cloud computing is transforming the IT infrastructure into a highly elastic resource that quickly and continually adapts to changing business needs. This transformation fundamentally changes the way organizations deliver IT services and enables IT to be far more responsive to the demands of the business.
Working your way through the six transformational steps described here will help you ensure optimal use of the capacity of your cloud infrastructure - a key success factor for your cloud initiatives. By making the transition from a siloed, technology-oriented approach to a holistic, business-aware approach to capacity management, you can position your organization to realize the full promise of cloud computing.