Data Centers: The Next Wave of IT Innovation

Industry poised for electrical, mechanical and architectural innovations

Energy has been one of the largest variable costs for large technology companies and today is becoming increasingly material for large enterprises of all kinds. Last week at the FIRE Conference I heard Ford's CTO talk about how Ford wanted to be seen as a technology company. As I listened to Paul Mascarenas talk about smart cars, I couldn't help but wonder how many of the world's largest companies have at least discreetly considered evolving toward more technology-centric products.

The data center is the factory of the new cloud economy and is a major inflection point for enterprise profitability. Those who deliver the most apps, services, etc. per kilowatt-hour have a competitive advantage. And with data centers accounting for close to 1.5% of electricity consumption in the U.S., increasing energy efficiency in the data center is becoming a strategic business and community imperative.

Since before the dotcom era, enterprises have built their own data centers with a keen focus on availability, or uptime.  Many of those data centers have now outlived their usefulness and are substantial burdens on their IT teams.  As new data centers are built, uptime considerations need to be combined with efficiency considerations.  They must be addressed together.

Increasing demands for IT resources, rising rack densities, and greater power and cooling requirements are exposing tired designs. Simply adding more space is a shortsighted approach to what promises to be a longstanding issue: the efficient use of company resources, especially those strategic to the bottom line.

Today's modern data centers are, on average, more than 30% more efficient than data centers built even five years ago, thanks to electrical and mechanical innovations driven by rising densities. Well-capitalized tech companies (including Google and Facebook) have invested billions in data center innovation, from sophisticated water cooling to internal rack architectures optimized for efficient airflow.

Many enterprises, however, are suspended between the cost and risk of building innovative data centers and leasing wholesale data centers.  The traditional wholesale data center industry (including Digital Realty Trust [DLR], Dupont Fabros Technology [DFT], and regional player CoreSite [COR]) has been very successful in building standardized designs that address a subset of the enterprise data center market.  Innovation, in a nutshell, has been limited to those with the deep pockets and courage to build their own.

Today wholesale data centers can be classified as innovative (engineering-optimized for specific enterprise goals and local resource abundance/scarcity) or traditional (from pods to containers, one type of space serves all).

With Vantage Data Centers entering the market (see highlights from our Smart Data Center Revolution event on Earth Day 2011), expect to see some changes in an otherwise transaction-centric industry.

Increasing Reliability and Efficiency

As wholesale data center providers evolve you can expect more campus-scale projects with:

  • dedicated substations and higher voltage distribution from the substation to the data center floor;
  • elimination of PDUs;
  • redundant backup generator power with 2N electrical configurations to the floor;
  • high efficiency UPS units; and
  • pre-provisioning of data centers for additional load (vertical scalability), including skid-mounted generators and UPS units and pre-provisioned switchgear.

Enterprises that continue to operate or lease traditional data center space (where only about half the electricity entering the building is used to power and cool the data center facility), put themselves at a competitive disadvantage. They pay significantly more for the operation of every server.  Increasingly what is good for business is good for the environment, and vice versa.
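Where only about half the power entering the building reaches the IT load, the facility's Power Usage Effectiveness (PUE) is roughly 2.0. A minimal sketch of the metric (the facility figures below are illustrative, not taken from the article):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 2.0 means only half the power entering the building reaches
    the IT load; the rest goes to cooling, distribution losses, lighting, etc.
    """
    return total_facility_kw / it_load_kw

# Traditional facility: 2 MW in, 1 MW to the servers -> PUE 2.0
print(pue(2000, 1000))  # 2.0
# Optimized facility: 1.2 MW in, 1 MW to the servers -> PUE 1.2
print(pue(1200, 1000))  # 1.2
```

The same 1 MW of IT load costs the traditional facility two-thirds more electricity, which is the per-server cost penalty described above.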

The problem was starting to appear as early as five years ago (from Computerworld):

Data centers "are becoming more and more swollen," IDC analyst Vernon Turner said today at the IDC Virtualization Forum here. Most of the servers purchased today cost less than $3,000. And while that may sound inexpensive, the annual power and cooling bill for 100 servers is about $40,000. In total, for every $1 spent on a server, $7 is spent on support, he said.

- Patrick Thibodeau, "Servers Swamp Data Centers as Chip Vendors Move Ahead," Computerworld, Feb. 6, 2006

After the energy consumed directly by the servers, routers and switches within a data center, power distribution and cooling offer the most significant opportunities for energy conservation.  New, high-efficiency data centers (from the innovators) are bringing utility power closer to the data center floor at distribution voltages of 12 kV to 34.5 kV. Stepping the voltage down to 480 V close to the conditioned power loads results in less power lost in distribution.
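The physics behind higher-voltage distribution: for a fixed power draw, current falls inversely with voltage, and resistive line loss scales with the square of the current. A hedged sketch with made-up numbers (the feeder resistance and load below are illustrative, not from the article):

```python
def line_loss_kw(power_kw: float, volts: float, resistance_ohms: float) -> float:
    """I^2 * R loss for delivering a given power at a given voltage.

    Simplified single-phase DC-style model; real three-phase utility
    distribution differs, but the inverse-square scaling of loss with
    voltage is the same.
    """
    amps = power_kw * 1000 / volts          # I = P / V
    return amps ** 2 * resistance_ohms / 1000  # I^2 * R, back to kW

load_kw = 1000   # power delivered to the floor
r_ohms = 0.05    # feeder resistance (illustrative)
print(line_loss_kw(load_kw, 480, r_ohms))     # loss if distributed at 480 V
print(line_loss_kw(load_kw, 12_000, r_ohms))  # at 12 kV: (12000/480)^2 = 625x smaller
```

This is why bringing 12 kV to 34.5 kV distribution deep into the facility, and stepping down to 480 V only near the loads, wastes less energy as heat in the conductors.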

Cooling is the other major area where energy savings are being achieved. Where geography and climate permit, data center owners and operators are taking advantage of free cooling via air-side and water-side economization. Supplementing free cooling with chillers only in the hottest months, and operating the data center at higher overall temperatures, also reduces energy consumption.

You can therefore expect to see more data center customization based on location, including climate, humidity, and air and water quality. Efficient data centers will be designed for the optimum use of both scarce and plentiful local resources, instead of the "one design fits all" approach common today. There will always be robust demand for traditional data centers, but expect more tech-centric enterprises to shift to highly customized solutions engineered for specific needs and locations.

Recent advancements in specialized mechanical architectures will also optimize the flow of air and enable granular visibility and control of cooling with real-time data and power metering.

With these electrical, mechanical and architectural innovations, campus-scale wholesale data centers are matching or closely approaching the best Power Usage Effectiveness (PUE) numbers of enterprise-owned data centers built by the likes of Facebook and Google.  With the innovation gap closing, the decision becomes whether to build or lease.

Here is a recent (April 2011) article in InformationWeek on How to Build a Modern Data Center.

Upgrading, Consolidating or... Leasing

By understanding the critical elements of a high-efficiency data center and the available options, enterprises can make better decisions about whether their existing data center(s) can or should be upgraded. Useful metrics include PUE; Carbon Usage Effectiveness (CUE), which tracks the carbon emissions associated with operating a data center (not constructing it); and Water Usage Effectiveness (WUE), which measures how efficiently a data center uses water.
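CUE and WUE share PUE's basic shape: a resource consumed by the facility, normalized by the energy delivered to the IT equipment. A minimal sketch of the definitions (the sample inputs are hypothetical):

```python
def cue(co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg of CO2-equivalent emitted by data
    center operations per kWh of IT equipment energy."""
    return co2_kg / it_energy_kwh

def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per kWh of
    IT equipment energy."""
    return water_liters / it_energy_kwh

# Hypothetical year of operation: 1 GWh of IT load,
# 600 metric tons of CO2e, 1.8 million liters of water.
print(cue(600_000, 1_000_000))    # 0.6 kg CO2e per IT kWh
print(wue(1_800_000, 1_000_000))  # 1.8 L per IT kWh
```

Lower is better for all three metrics, and because the denominator is the same, a facility can be compared across energy, carbon, and water efficiency on a common basis.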

Per IDC (2010), the average data center in the U.S. is 12 years old, which typically means it cannot be upgraded economically because of inadequate electrical systems and other physical and site limitations. A site's power distribution features, for example, are not something a company can readily go back and replace to save energy.

Every business will need to assess for itself the difference that leasing a more efficient building could make compared with owning an older building that is wasting increasing amounts of power and cooling every year as power demands increase.

If it is not possible to upgrade a data center, the build/lease question should be addressed.

What is the capital expense and risk involved in building or expanding data center capacity, and what is the lost opportunity cost in time and potential unrealized return of a decision to build? As innovation accelerates, how reasonable is it to expect internal teams to keep up? What will be the ongoing operating expense of the new data center, and what is the TCO over the 10-15 year lifespan of a modern, optimized data center that delivers more IT capacity (more services, applications, etc.) per kW?
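One way to frame the build/lease question is a side-by-side TCO comparison over the facility's expected lifespan. A deliberately simple toy model (all dollar figures are hypothetical, and it ignores discounting, escalation clauses, and opportunity cost of capital, each of which matters in a real analysis):

```python
def tco_build(capex: int, annual_opex: int, years: int) -> int:
    """Total cost of ownership for building: up-front capital
    plus operating expense over the facility's lifespan."""
    return capex + annual_opex * years

def tco_lease(annual_lease: int, annual_opex: int, years: int) -> int:
    """Total cost of ownership for leasing: lease payments plus
    (typically lower) operating expense, no up-front construction."""
    return (annual_lease + annual_opex) * years

years = 15  # lifespan of a modern, optimized data center
build = tco_build(capex=50_000_000, annual_opex=3_000_000, years=years)
lease = tco_lease(annual_lease=4_000_000, annual_opex=2_000_000, years=years)
print(build, lease)  # 95000000 90000000 with these hypothetical inputs
```

The point is not the particular numbers but the structure: build concentrates cost and risk up front, while lease spreads it out and shifts efficiency risk to the provider.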

What are the costs, advantages and other considerations of leasing data center space?

The ability to quickly access secure space and scale economies with operational service levels as needs evolve has strategic competitive implications, as does being able to reduce OPEX while preserving ownership and control of critical IT assets.

Smart data centers, whether owned or leased, offer significant environmental benefits and measurable cost savings.  For example, a 20,000-square-foot space in a smart data center can cut power and cooling costs by more than $1 million per year.  Data center innovation will become a critical inflection point, especially for technology-centric organizations, over the next 5-10 years.  The location of those data centers will drive the location of strategic jobs, economic growth and the efficient stewardship of environmental resources.  And the innovations being designed into these new facilities will drive additional IT efficiencies and innovations in other commercial and even residential construction.

More Stories By Greg Ness

Greg Ness is a Silicon Valley marketing veteran with a background in networking, security, virtualization and cloud computing. He is VP Marketing at CloudVelocity. He was formerly at Vantage Data Centers, Infoblox, Blue Lane Technologies, Juniper Networks, Redline Networks, McAfee, IntruVert Networks and ShoreTel. He is one of the world's top cloud bloggers.
