
Data Center Transformation Advice

Fast-changing demands on data centers drive need for uber data center infrastructure management

Once the province of IT facilities planners, the management and automation of data centers has rapidly grown in scope and importance.

As software-driven data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

BriefingsDirect recently posed such questions to two experts from HP Technology Services to explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

To learn more on how disparate data center resources can be integrated into broader enterprise management capabilities and processes, now join Aaron Carman, HP Worldwide Critical Facilities Strategy Leader, and Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about DCIM.]

Here are some excerpts:

Gardner: What’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: In the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure. Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple with, and deal with, not only the traditional silos that are in place between facilities, IT, and the business, but also deal with how they are going to host and manage hybrid service delivery and what impact that’s going to have on their environment.


It’s not only about what the impact is going to be on rolling out new infrastructure solutions like converged infrastructures from multiple vendors, but how to increasingly provide more flexibility and services to their end users as digital services.

It's become much more complex and a little harder to manage, because the many separate tools used to operate these environments have continued to multiply.

Gardner: Steve, I suppose too that with ITIL v3 and more focus on a service-delivery model, even the very goal of IT has changed.

Wibrew: That's very true. We’re seeing a trend in the change and role of IT to the business. Previously IT was a cost center, an overhead to the business, to deliver the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations simply cease to function. So IT, its availability and performance, is a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure? We’re getting reams and reams of data that needs to be used and managed. Is that part of what you’re dealing with as well?


Wibrew: That’s certainly a very important part of the converged-management solution. There’s been a tremendous explosion in the amount of data, the amount of management information, that's available. If you narrow that down to the management information associated with operating management and supporting data centers from the facility to the applications, to the platforms right up to the services to the business, clearly that's a huge amount of information that’s collected or maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, we would be saying that people still remain in isolated silo teams without good interaction between the different teams. It's a challenge trying to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word "silo" now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations, when faced with trying to manage these different areas (facilities, IT, and service delivery), have come up with their own set of run books, processes, tools, and methodologies for operating their data center.

When you put that onto an organization, it's just an added burden for them to try to get vendors to work with one another and integrate software tools and solutions. What the folks that provide these solutions have started to realize is that there needs to be an interoperability between these tools. There has never really been a single tool that could do that, except for what has just emerged in the past few years, which is DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?


Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you're going to actually use within your environment to ensure that the type of functionality you're trying to achieve is interoperable with existing management tools. [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they’re even going out to external cloud providers, and there are some interesting dynamics there that really do increase the complexity of where they get services from. This needs to be baked into the converged solution, around interoperability and interfacing between multiple systems, so that IT is truly a business supporting the organization and providing end-to-end services.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point it’s viewed more as a facilities-type tool. The approach from different DCIM providers varies greatly in the functions and features they provide. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically, not only from which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take from hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT; it includes facilities, the hybrid service-delivery model, and management capabilities. Maybe you could help us put the proper box around DCIM. How far should it go, and why? Or should we narrow it so that it doesn’t become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.


DCIM has really been the domain of the facilities team, and there’s traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to lot of IT specialists, the awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions do vary, and the extent of what DCIM delivers will vary from one product to another. It’s very good certainly around the facilities space in terms of power, cooling, and knowing what’s out on the data center floor. It’s very good at knowing what’s in the rack and how much power and space has been used within the rack.

It’s very good at cable management, the networks, and for storage and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well. So it’s becoming very aware of things like server infrastructure and even down to the virtual infrastructure, as well, getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realized the importance of integrating it with full ITSM solutions and platform-management solutions. A major focus over the past few months has been making sure that DCIM solutions integrate very well with the wider IT service-management solutions, to provide an integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be very careful about calling folks DCIM specialists. We feel that we have a very great knowledge of the solutions out there. They vary so greatly.

It takes a collaborative team of folks within HP, as well as with the client, to truly understand what they’re trying to achieve. You could even pull it down to what types of use cases they’re trying to achieve for the organization, which tool works best and in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy: starting with business goals, understanding what the true key performance indicators are, and determining what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding organizationally who is responsible for what types of services we provide as an ultimate service to our end user. Most of the time, we’re focusing on the facilities in IT organization. [Learn more about DCIM.]

Also, those need to be aligned to the process and workflows for provisioning services to the end users, supported directly by a system’s reference architecture, which is primarily made up of operational management tools and software. All those need to be supported by one another and purposefully designed, so that you can meet and achieve the goals of the business.


When you don’t do that, the time it takes to deliver services to your end user lengthens and costs money. When you have separate tools that are not referencing single points of data, you spend a lot of time rationalizing and checking whether the data in front of you is accurate. All this boils down not only to cost but to having resilient operations: knowing that when you look at a particular device or set of devices, you truly understand what it provides, end to end, to your users.

Wibrew: If you think about the possibilities, the management of facilities and the IT infrastructure, right up to the services of a business, end to end, is very large and very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look across the organization, working with clients to identify what their most important priorities are in terms of their converged-management solution and their journey.

It’s heavily structured around ITSM and ITIL processes, and we’ve identified some great candidates within ITIL for integration between facilities in IT. It’s really a case of working out the prioritized journey for that particular client. Probably one of the most important integrations would be to have a single view of the truth of operational data. So it would be unified asset information.

The CMDB, within a configuration management system, might be the first and most important integration between the two, because that’s the foundation for other follow-on services. Until you know what you’ve got, it’s very difficult to plan what you will need in the future in terms of infrastructure.
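The "single view of the truth" described here amounts to a reconciliation step: merge asset records held by the facilities and IT tools, and surface any fields where the two sources disagree. A minimal sketch, assuming hypothetical record shapes rather than any particular CMDB's API:

```python
# Sketch: merge one asset's record from two sources and flag
# fields where they disagree. Data and field names are illustrative.

facilities_view = {"asset": "srv-042", "rack": "B7", "power_kw": 0.45}
it_view = {"asset": "srv-042", "rack": "B9", "os": "RHEL 8"}

def reconcile(a: dict, b: dict) -> tuple[dict, dict]:
    """Merge two records; return (merged, conflicts).

    On conflict, the first source's value is kept in the merged record
    and both values are preserved for manual review.
    """
    merged, conflicts = dict(a), {}
    for key, value in b.items():
        if key in a and a[key] != value:
            conflicts[key] = (a[key], value)
        else:
            merged[key] = value
    return merged, conflicts

merged, conflicts = reconcile(facilities_view, it_view)
print(conflicts)  # {'rack': ('B7', 'B9')} -- the two teams disagree on location
```

In practice the conflict list itself is valuable: each disagreement is a data-quality task that, once resolved, moves both teams toward the unified asset information Wibrew describes.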

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.


If you think about managing power consumption and data-center efficiency measures like PUE (power usage effectiveness), generally speaking, in the past that would be the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn’t really care about how much power was used. But these integrated solutions can be far more granular and dynamic around energy consumption, with much more information being collected, not just at the facility level but within the racks, in the power-distribution units (PDUs), in the blade chassis, right down to individual servers.

We can now know what the energy consumption is, and we can incentivize the IT teams to take responsibility for energy management and consumption. This is a great way of reducing a client’s carbon footprint and energy consumption within the data center through these integrated solutions.
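The granular readings described above feed directly into efficiency metrics. As a sketch with made-up numbers (the readings and rack names are hypothetical, not output from any DCIM product), PUE is simply total facility power divided by the power consumed by IT equipment:

```python
# Sketch: computing PUE and per-rack share of IT load from
# hypothetical PDU readings. All figures are illustrative.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

rack_kw = {"rack-a1": 4.2, "rack-a2": 3.8, "rack-b1": 5.0}  # per-PDU readings
it_load = sum(rack_kw.values())  # 13.0 kW of IT load
facility_load = 19.5             # kW, including cooling and distribution losses

print(f"PUE: {pue(facility_load, it_load):.2f}")  # 19.5 / 13.0 = 1.50
for rack, kw in rack_kw.items():
    print(f"{rack}: {kw / it_load:.0%} of IT load")
```

A PUE of 1.0 would mean every watt entering the facility reaches IT equipment; the gap above 1.0 is cooling and distribution overhead, which is exactly the number the facilities and IT teams can now jointly be measured on.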

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not just designed for HP products. This is an ecumenical approach to whatever is installed in terms of product facility management capability. I wonder if you could explain a bit more HP’s philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: HP’s professional services we’re offering in this space are really agnostic to the final solution. We understand that a customer has been running their environment for years and has made investments into a lot of different operational tools over the years.

That’s a part of our analysis and methodology, to come in and understand the environment and what the client is trying to achieve. Then we put together a strategy, a roadmap of different products, that will help them achieve their goals that are interoperable.

Next level

We continue to transform them to the next level of abilities or capabilities that they are looking to achieve, especially around how they provision services and help them become, at the end, most likely a cloud-service provider to their end users, where heavy levels of automation are built in, so that they can get digital services to their end users in a much shorter period of time.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced some new services that include converged-management consulting, and that management framework was updated with new technical requirements. You have four new services organized with the management workshop, roadmap, design implementations, and so forth. [Learn more about DCIM.]

So this is fairly new, but Steve Wibrew, is there any instance where you’ve worked with some organization and that some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes you can recall of an organization that’s done this and maybe some interesting ways that it’s benefited them, maybe unintended consequences?

Data-center transformation

Wibrew: The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I found that when you pull this data together from multiple sources and get a consistent feel of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there may be infrastructure that’s sitting there, powered on, and not being utilized by anybody.

In effect, it’s redundant. I’ve had many situations where, in pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing space for major initiatives and expansion projects. So there are some examples of the benefits of consolidated inventory and information.
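The redundancy hunt Wibrew describes can be sketched as a simple cross-check once the inventories are consolidated: anything the facilities records show as powered on, but that no service in the IT records depends on, is a candidate for reclamation. The records below are hypothetical, not a real CMDB extract:

```python
# Sketch: flag powered-on assets with no service mapping as
# redundancy candidates. Asset tags and services are illustrative.

facilities_assets = {   # asset tag -> powered on, per facilities records
    "srv-001": True,
    "srv-002": True,
    "srv-003": False,   # already powered off, not interesting
}
cmdb_services = {       # asset tag -> service it supports, per IT records
    "srv-001": "payroll",
}

candidates = sorted(
    tag for tag, powered in facilities_assets.items()
    if powered and tag not in cmdb_services
)
print(candidates)  # ['srv-002'] -- drawing power, serving nothing
```

Each candidate still needs human verification before decommissioning, but the consolidated view is what makes the question answerable at all.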


Gardner: As we look a few years out at big-data requirements, hybrid-cloud requirements, infrastructure KPIs for service delivery, and energy and carbon pressures, what’s the outlook? Should we expect ongoing demand, and also an ongoing and improving return on the investments made in these consulting services and DCIM?

Carman: Based upon a lot of the challenges that we outlined earlier in the program, we feel that, in order to operate efficiently, this type of future-state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

So more and more, with challenges like my compute footprint shrinking and having different requirements than in the past, we’re now dealing with a storage or data explosion, where my data center is all filled up with storage.

As these new demands from the business come down and force organizations onto types of technology infrastructure platforms they haven’t dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That’s the strength of DCIM and what it can provide in just that one instance.

But more and more, the business expects digital services to be almost instant. They want to capitalize on the market at that moment. They don’t want to wait weeks or months for enterprise IT to provide a service that lets them take advantage of a new offering. So it’s forcing folks to operate differently, and that’s where converged management is poised to help these customers.

Looking to the future

Gardner: Steve, when you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will be most impactful?

Wibrew: I think the trend we’re going to see is far greater adoption of DCIM. It’s only deployed in a small number of data centers at the moment. That’s going to increase quite dramatically, and there could be a much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. They could be far more integrated than they are today.

The roles of IT are going to change, and a lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we're going to see people, almost like a conductor in an orchestra, overseeing the operations within the data center through leading highly automated and optimized processes, which are actually delivered by automated solutions.

Gardner: I benefited greatly from learning more about DCIM on the HP website. There were videos, white papers, and blog posts, so there’s quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]


