The Outside-In Battle for the Soul of the Cloud

The clouds that can best adapt to the demands of the workloads they are supporting will be best positioned for success

Whether they admit it or not, the emergence of public cloud providers has dramatically altered the playing field for hardware vendors of every type. Amazon Web Services (AWS) and its competitors opened Pandora's box by introducing the world to a completely programmatic, scalable, evolving, and pay-as-you-go way to procure and use network, compute and storage resources on a global scale. They have disrupted many layers of the technology industry, from the applications being written to the way companies interact with the infrastructure that supports those applications.

Nowhere is this disruption easier to see than in the virtualization ecosystem. For the better part of the last decade, hypervisor companies like VMware, Citrix, Microsoft and Red Hat worked hand-in-hand with hardware manufacturers like Cisco, NetApp, EMC, HP and Dell to define both the infrastructure foundation and the virtualized abstraction layer that sat underneath the entirety of the client/server era. These companies provided a direct link between the enterprise applications, the hypervisor and the hardware. They owned the traditional data center construct.

It's that construct, since rebranded as "private cloud," that is directly under attack by public cloud providers. I predict that this will be the battlefield for the heart and soul of enterprise IT for the next decade.

The response to the public cloud threat has been varied, and often reflects the ability of traditional companies to pivot and meet the challenge. Interestingly, erstwhile competitors Microsoft and VMware reacted in much the same way, because both were uniquely positioned to build a software-defined answer to the problem.

For both companies, the response started with existing enterprise workloads. One of the biggest obstacles to adopting the AWS public cloud is that moving workloads, and especially data, into and out of an enterprise environment can be both technically difficult and expensive. Most workloads running on an enterprise virtualization platform today can't be easily ported to AWS, and that raises the cost and risk of any migration. As companies with extensive and hard-won experience running mission-critical enterprise workloads, Microsoft and VMware came to much the same conclusion: build a public cloud on their existing platforms and let customers and developers leverage the investment they've made in their own data centers as they selectively move workloads outside of them. Thus, Microsoft Azure and VMware vCHS were born. Both are clouds that customers can move workloads to without rewriting or re-architecting them, that can be licensed under existing agreements, and that can be managed by existing staff and tools.

Unfortunately, the traditional data center infrastructure is now the weak link in this new software-defined world. In each of the public clouds referenced, the focus has been on the abstraction layer and how it interacts with the end users. What's missing is how the abstraction layer and the applications and tools that sit on top of it interact with the infrastructure directly.

There have been attempts at hardware-based offloading, especially with regard to storage. VAAI (vStorage APIs for Array Integration) is a good example of VMware trying to let enterprise storage arrays handle the tasks they are good at without requiring the direct involvement of the hypervisor. But even there, it's a rudimentary exchange at best: the hypervisor asks "can you do this task instead of me?" and the array responds. If the answer is yes, the hypervisor waits for the task to complete; if the answer is no, the hypervisor does the task itself. The relationship isn't dynamic, and the array never learns the reason for, or the context behind, the task in the first place.
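
To make that handshake concrete, here is a minimal sketch of the offload pattern described above. All of the names are hypothetical; this is not the actual VAAI primitive set or any vendor's API, just the shape of the yes/no exchange:

```python
# Hypothetical sketch of a hardware-offload handshake (not real VAAI calls).

class StorageArray:
    """Stand-in for an enterprise array that may or may not support offload."""

    def __init__(self, supports_clone_offload: bool):
        self.supports_clone_offload = supports_clone_offload

    def try_offload_clone(self, src: str, dst: str) -> bool:
        """The hypervisor asks: 'can you do this task instead of me?'"""
        if not self.supports_clone_offload:
            return False  # "No" -- the hypervisor must do the work itself
        print(f"array: cloning {src} -> {dst} without hypervisor I/O")
        return True       # "Yes" -- the hypervisor just waits for completion


def clone_virtual_disk(array: StorageArray, src: str, dst: str) -> None:
    # Note what's missing: the array never learns *why* the clone is
    # happening (provisioning? snapshot? migration?). The exchange is
    # static and context-free.
    if array.try_offload_clone(src, dst):
        print("hypervisor: offload accepted, waiting on the array")
    else:
        print("hypervisor: offload refused, copying blocks myself")


clone_virtual_disk(StorageArray(supports_clone_offload=True),
                   "vm1.vmdk", "vm2.vmdk")
```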

In summary, we have an outside force, AWS and the public cloud, acting as the primary catalyst driving change into the enterprise, yet very little of that change is happening below the cloud management or hypervisor layer. Why is that? Why is it important that the infrastructure layer become more of an asset to the rest of the stack? What would that look like? Let's dig in.

The question of why is actually pretty simple: it's really, really hard to retrofit a legacy hardware architecture into something agile and programmatic. In some cases, a hardware refresh built around a new concept is enough (like Cisco UCS and its take on XML-defined BIOS policies), but in many cases, especially around storage, it requires a complete reimagining of the platform. It's no coincidence that most of the innovation in this agile infrastructure space is being done by startups that have no legacy customers, technical debt or margins to deal with.

Why is it important? While the best hardware is boring hardware, it's still a critical part of providing the flexible, reliable and high-performance foundation that enterprise applications depend on. There are times when the best way to handle the demands of an application, or, more importantly, of multiple applications at once, is in hardware. This is true at the network layer, where the manipulation of packets benefits from proximity to processing resources; at the compute layer, where applications can benefit from specialized GPU resources for unique requirements; and, most especially, at the storage layer.

Storage services can have the most dramatic impact on workload performance, yet they are often implemented with no direct relationship to the workloads they serve. Services like compression, deduplication and quality-of-service are usually "on or off" features when it comes to storage arrays. In the best case, a storage administrator will create a volume or LUN and choose the features to enable, and a virtualization administrator will then map that volume to a datastore. Perhaps the virtualization team will create manual storage profiles that describe the features offered by that datastore, but placing and migrating VMs remains a manual process, and there is no way to map application policy equally across the hypervisor and hardware layers. (Of course, it's not impossible to create programmatic, hypervisor-aware infrastructure, but it is pretty hard.)
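
For contrast, here is a sketch of what granular, per-workload storage policy might look like instead of array-wide on/off toggles. The API and names are invented for illustration, not drawn from any shipping product:

```python
# Illustrative only: per-workload policy applied at volume creation time,
# so the hypervisor and the array share one definition of what a VM needs.

from dataclasses import dataclass, asdict

@dataclass
class StoragePolicy:
    min_iops: int        # guaranteed floor
    max_iops: int        # burst ceiling
    compression: bool
    deduplication: bool

# One policy per application tier, not one setting per array.
POLICIES = {
    "prod-database": StoragePolicy(5000, 15000, compression=False,
                                   deduplication=False),
    "dev-web":       StoragePolicy(100, 1000, compression=True,
                                   deduplication=True),
}

def provision_volume(name: str, tier: str) -> dict:
    """Create a volume whose service levels come from the workload's policy,
    rather than from whatever features happened to be enabled on the LUN."""
    return {"volume": name, **asdict(POLICIES[tier])}

print(provision_volume("sql01-data", "prod-database"))
```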

Enterprises have come to expect some fundamental features from the public cloud space: simple architecture, linear scaling, API availability and granular application of services. These features allow an infrastructure to respond to the increased requirements of a workload natively, without the overhead of a bolt-on orchestration engine. They provide the ability for the hypervisor to be both a northbound and southbound policy enforcer. They enable the Next-Generation Data Center, one in which the hardware, the hypervisor and the application all play an integrated, coordinated role in providing the performance and availability demanded by the enterprise.
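
To illustrate the northbound/southbound idea, here is a schematic sketch, with entirely hypothetical classes and calls, of a hypervisor accepting an application's policy from above and enforcing it natively against each infrastructure layer below, with no separate orchestration engine in the middle:

```python
# Hypothetical sketch of the hypervisor as a two-way policy enforcer.

class StorageAPI:
    def set_qos(self, vm: str, min_iops: int, max_iops: int) -> None:
        print(f"storage: {vm} guaranteed {min_iops}-{max_iops} IOPS")

class NetworkAPI:
    def set_bandwidth(self, vm: str, mbps: int) -> None:
        print(f"network: {vm} allocated {mbps} Mbps")

class Hypervisor:
    def __init__(self, storage: StorageAPI, network: NetworkAPI):
        self.storage = storage
        self.network = network

    def enforce(self, vm: str, policy: dict) -> None:
        # Northbound: the policy arrives with the workload definition.
        # Southbound: the same policy is applied natively by each layer.
        self.storage.set_qos(vm, policy["min_iops"], policy["max_iops"])
        self.network.set_bandwidth(vm, policy["mbps"])

Hypervisor(StorageAPI(), NetworkAPI()).enforce(
    "sql01", {"min_iops": 5000, "max_iops": 15000, "mbps": 2000})
```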

No matter where your workloads run, the rise of the public cloud has ushered in an era of computing defined by a seamless, programmatic experience. The old, monolithic infrastructure of yesterday's client/server wave is giving way to a more agile, more responsive, more service-rich and more scalable cloud-based model. The battle for the soul of the enterprise is beginning and, inside or outside the firewall, the clouds that can best adapt to the demands of the workloads they support will be best positioned for success.

More Stories By Jeramiah Dooley

Jeramiah Dooley joined the SolidFire team as a Cloud Architect on the Technology Solutions team. Prior to SolidFire, he was most recently at VCE, and before that at Peak 10. You can check out his Virtualization for Service Providers blog or follow him on Twitter at @jdooley_clt.
