Data Centers: The Next Wave of IT Innovation

Industry poised for electrical, mechanical and architectural innovations

Energy has long been one of the largest variable costs for large technology companies, and today it is becoming increasingly material for large enterprises of all kinds.  Last week at the FIRE Conference I heard Ford's CTO, Paul Mascarenas, talk about how Ford wants to be seen as a technology company.  As I listened to him discuss smart cars, I couldn't help but wonder how many of the world's largest companies have at least discreetly considered evolving toward more technology-centric products.

The data center is the factory of the new cloud economy and a major inflection point for enterprise profitability.  Those who deliver the most apps, services, etc. per kilowatt-hour have a competitive advantage.  And with data centers accounting for close to 1.5% of U.S. electricity consumption, increasing energy efficiency in the data center is becoming a strategic business and community imperative.

Since before the dotcom era, enterprises have built their own data centers with a keen focus on availability, or uptime.  Many of those data centers have now outlived their usefulness and are substantial burdens on their IT teams.  As new data centers are built, uptime considerations need to be combined with efficiency considerations.  They must be addressed together.

Increasing demands for IT resources, rising rack densities, and growing power and cooling requirements are exposing tired designs. Simply adding more space is a shortsighted approach to what promises to be a longstanding issue: the efficient use of company resources, especially those strategic to the bottom line.

Today's modern data centers are, on average, 30%+ more efficient than data centers built even five years ago, thanks to rising densities and the electrical and mechanical innovation they have driven.  Well-capitalized tech companies (including Google and Facebook) have invested billions in data center innovation, from sophisticated water cooling to internal rack architectures optimized for efficient airflow.

Many enterprises, however, are suspended between the cost and risk of building innovative data centers and leasing wholesale data centers.  The traditional wholesale data center industry (including Digital Realty Trust [DLR], DuPont Fabros Technology [DFT], and regional player CoreSite [COR]) has been very successful in building standardized designs that address a subset of the enterprise data center market.  Innovation, in a nutshell, has been limited to those with the deep pockets and courage to build their own.

Today wholesale data centers can be classified as innovative (engineering-optimized for specific enterprise goals and for local resource abundance or scarcity) or traditional (from pods to containers, one type of space serves all).

With Vantage Data Centers entering the market (see highlights from our Smart Data Center Revolution event on Earth Day 2011), expect to see some changes in an otherwise transaction-centric industry.

Increasing Reliability and Efficiency

As wholesale data center providers evolve you can expect more campus-scale projects with:

  • dedicated substations and higher voltage distribution from the substation to the data center floor;
  • elimination of PDUs;
  • redundant backup generator power with 2N electrical configurations to the floor;
  • high efficiency UPS units; and
  • pre-provisioning of data centers for additional load (vertical scalability), including skid-mounted generators and UPS units and pre-provisioned switchgear.

Enterprises that continue to operate or lease traditional data center space (where only about half the electricity entering the building actually reaches the IT equipment, with the rest consumed by power distribution and cooling) put themselves at a competitive disadvantage. They pay significantly more for the operation of every server.  Increasingly, what is good for business is good for the environment, and vice versa.
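A facility where only half the incoming power reaches the IT gear corresponds to a Power Usage Effectiveness (PUE) of roughly 2.0. The cost gap this creates can be sketched with hypothetical figures; the IT load, utility rate and PUE values below are illustrative assumptions, not figures from the article:

```python
# Sketch: annual electricity cost at two PUE levels.
# All inputs (IT load, rate, PUE values) are illustrative assumptions.
IT_LOAD_KW = 500        # hypothetical IT equipment draw
RATE_PER_KWH = 0.10     # assumed utility rate, USD per kWh
HOURS_PER_YEAR = 8760

def annual_power_cost(pue: float) -> float:
    """Total facility electricity cost for a year at the given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * RATE_PER_KWH

traditional = annual_power_cost(2.0)  # roughly half the power reaches IT gear
modern = annual_power_cost(1.2)       # efficiency-optimized facility
print(f"traditional: ${traditional:,.0f}/yr")  # $876,000/yr
print(f"modern:      ${modern:,.0f}/yr")       # $525,600/yr
print(f"savings:     ${traditional - modern:,.0f}/yr")
```

With these assumed inputs, the traditional facility spends roughly $350,000 more per year to run the exact same IT load.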

The problem was starting to appear as early as five years ago (from Computerworld):

Data centers "are becoming more and more swollen," IDC analyst Vernon Turner said today at the IDC Virtualization Forum here. Most of the servers purchased today cost less than $3,000. And while that may sound inexpensive, the annual power and cooling bill for 100 servers is about $40,000. In total, for every $1 spent on a server, $7 is spent on support, he said.

- Patrick Thibodeau, "Servers Swamp Data Centers as Chip Vendors Move Ahead," Computerworld, February 6, 2006

After the energy consumed directly by the servers, routers and switches within a data center, power distribution and cooling offer the most significant opportunities for energy conservation.  New, high-efficiency data centers, from the innovators, are bringing utility distribution voltages of 12 kV to 34.5 kV closer to the data center floor.  Stepping power down to 480 V as close as possible to the conditioned loads results in less distribution loss.
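The benefit of higher-voltage distribution follows from basic physics: resistive loss is P_loss = I²R, and carrying the same power at 34.5 kV instead of 480 V cuts the current, and therefore the loss, by orders of magnitude. A rough sketch, where the feeder resistance and load are illustrative assumptions:

```python
# Resistive feeder loss is P_loss = I^2 * R, with I = P / V
# (unity power factor assumed). Loss falls with the square of voltage.
def line_loss_kw(power_kw: float, voltage_v: float, resistance_ohm: float) -> float:
    """Feeder loss in kW for a given delivered power and distribution voltage."""
    current_a = (power_kw * 1000.0) / voltage_v
    return current_a ** 2 * resistance_ohm / 1000.0

POWER_KW = 1000   # hypothetical 1 MW load
R_OHM = 0.005     # illustrative feeder resistance

print(line_loss_kw(POWER_KW, 480, R_OHM))     # ~21.7 kW lost at 480 V
print(line_loss_kw(POWER_KW, 34500, R_OHM))   # ~0.004 kW lost at 34.5 kV
```

Over the same conductor, raising the voltage from 480 V to 34.5 kV reduces resistive loss by a factor of (34500/480)², more than 5,000x, which is why the step-down transformation is pushed as close to the load as possible.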

Cooling is the other major area where energy savings are being achieved. Where geography and climate permit, data center owners and operators are taking advantage of free cooling via air-side and water-side economization. Supplementing free cooling with chillers only in the hottest months, and operating the data center at higher overall temperatures, is also reducing energy consumption.

You can therefore expect to see more data center customization based on location, including climate, humidity and air and water quality.  Efficient data centers will be designed for the optimum use of both scarce and plentiful local resources, instead of the "one design fits all" approach common today.   There will always be a robust demand for traditional data centers, but expect more of the tech-centric enterprises to shift to highly-customized solutions engineered for specific needs and locations.

Recent advancements in specialized mechanical architectures will also optimize the flow of air and enable granular visibility and control of cooling with real-time data and power metering.

With these electrical, mechanical and architectural innovations campus-scale wholesale data centers are matching or closely approaching the best Power Usage Effectiveness (PUE) numbers for enterprise-owned data centers by the likes of Facebook and Google.  With the closing of the innovation gap, the decision then becomes one of whether to build or lease.

Here is a recent (April 2011) article in InformationWeek on How to Build a Modern Data Center.

Upgrading, Consolidating or... Leasing

By understanding the critical elements of a high-efficiency data center and the available options, and by looking at metrics such as PUE plus Carbon Usage Effectiveness (CUE), which measures the carbon emissions associated with operating a data center (not constructing it), and Water Usage Effectiveness (WUE), which measures how efficiently a data center uses water, enterprises can make better decisions about whether their existing data center(s) can or should be upgraded.
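The three metrics share the same shape: each divides a facility-level quantity by the energy delivered to the IT equipment. A minimal sketch, with input figures that are made up purely for illustration:

```python
# Sketch of the three efficiency metrics; each normalizes a facility-level
# quantity by the energy delivered to IT equipment. All inputs are made up.
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh      # dimensionless, ideal = 1.0

def cue(operational_co2_kg: float, it_kwh: float) -> float:
    return operational_co2_kg / it_kwh      # kg CO2 per IT kWh

def wue(water_liters: float, it_kwh: float) -> float:
    return water_liters / it_kwh            # liters per IT kWh

IT_KWH = 4_000_000  # hypothetical annual IT energy
print(pue(6_000_000, IT_KWH))   # 1.5
print(cue(800_000, IT_KWH))     # 0.2
print(wue(7_200_000, IT_KWH))   # 1.8
```

Tracking all three together matters because they can move in opposite directions: aggressive evaporative cooling can lower PUE while raising WUE, for example.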

Per IDC (2010), the average data center in the U.S. is 12 years old, which often means it cannot be upgraded economically because of inadequate electrical systems and other physical and site limitations. A site's power distribution features, for example, are not something a company can readily go back and replace to save energy.

Every business will need to assess for itself the difference that leasing a more efficient building could make compared with owning an older building that is wasting increasing amounts of power and cooling every year as power demands increase.

If it is not possible to upgrade a data center, the build/lease question should be addressed.

What is the capital expense and risk involved in building or expanding data center capacity, and what is the opportunity cost, in time and potential unrealized return, of a decision to build? As innovation accelerates, how reasonable is it to expect internal teams to keep up?  What will be the ongoing operating expense to run the new data center, and what is the TCO over the 10-15 year lifespan of a modern, optimized data center that offers more IT capacity (more services, applications, etc.) per kW?

What are the costs, advantages and other considerations of leasing data center space?

The ability to quickly access secure space and scale economies with operational service levels as needs evolve has strategic competitive implications, as does being able to reduce OPEX while preserving ownership and control of critical IT assets.
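A back-of-the-envelope comparison can frame the build-versus-lease questions above. All dollar figures and the lifespan below are hypothetical placeholders, not market data:

```python
# Back-of-the-envelope build-vs-lease TCO over a facility lifespan.
# All dollar figures are hypothetical placeholders.
def build_tco(capex: float, annual_opex: float, years: int) -> float:
    """Owned facility: up-front construction plus yearly operating cost."""
    return capex + annual_opex * years

def lease_tco(annual_lease: float, annual_opex: float, years: int) -> float:
    """Leased wholesale space: yearly lease plus yearly operating cost."""
    return (annual_lease + annual_opex) * years

YEARS = 12  # within the 10-15 year lifespan discussed above
print(build_tco(capex=40_000_000, annual_opex=2_500_000, years=YEARS))   # 70,000,000
print(lease_tco(annual_lease=4_000_000, annual_opex=1_500_000, years=YEARS))  # 66,000,000
```

A real analysis would also discount future cash flows and price in the opportunity cost of capital and the risk that a self-built design falls behind the innovation curve; this sketch only shows the basic structure of the comparison.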

Smart data centers, whether they are owned or leased, offer significant environmental benefits and measurable cost savings.  For example, a 20,000-square-foot space in a smart data center can reduce power and cooling costs by more than $1 million per year.  Data center innovation will become a critical inflection point, especially for technology-centric organizations, in the next 5-10 years.  And the location of those data centers will drive the location of strategic jobs, economic growth and the efficient stewardship of environmental resources.  The innovations being designed into these new facilities will similarly drive additional IT efficiencies and improvements in other commercial and even residential construction.

More Stories By Greg Ness

Greg Ness is a Silicon Valley marketing veteran with a background in networking, security, virtualization and cloud computing. He is VP of Marketing at CloudVelocity. He was formerly at Vantage Data Centers, Infoblox, Blue Lane Technologies, Juniper Networks, Redline Networks, McAfee, IntruVert Networks and ShoreTel. He is one of the world's top cloud bloggers.
