The Coming Network Evolution: Cisco Gets It, Do You?

As Microsoft, Google, Amazon build up steam in the cloud they're creating demands for even more powerful & intelligent networks

Greg Ness's Blog

I think it is only a matter of time before ALL of the leading networking players start talking about the (strategic importance of the) network as a way to succeed in an uncertain economic climate. Last week, in "Cloud Computing, Virtualization and IT Diseconomies" I talked about the increasingly intense pressures already building on static network infrastructure, and the underlying need for more intelligence and automation.

I think the new survival mantra for the coming economic weakness will be "He (or she) who automates wins." As the industrial age emerged from the agricultural age, and as it blends with the computer age, innovation has been driven by the ability of visionaries to boost productivity through automation and connectivity.

I just watched the newly released interview with Cisco's John Chambers, "Can IT Strengthen the Economy?", from the recent Gartner conference. Chambers clearly sees innovation as the way out. The network is strategic to business productivity. Flexibility, speed and scale are becoming even more important. That means dynamic connectivity and intelligence will become even more strategic to the network.

I think Chambers gets it and is reminding his customers that strategic innovation will trump mere cost-cutting in a period of economic uncertainty. Those who emerge will emerge even more powerful because they will have avoided the temptation to treat the network as tactical, with the long-term vision of shifting it to the cloud à la Nicholas Carr's vision of utility computing.

These intense pressures are setting the stage for the next technology boom by creating gaps between what networks can do today and what they'll need to do tomorrow. I was amazed at how quickly the concept of Infrastructure 2.0 spread, including an interesting discussion at F5 Networks' pace-setting DevCentral blog.

These pressures are coming from increasing rates of change, especially in larger networks supporting more devices and branches and processes, as well as with the introduction of consolidation, virtualization and cloud computing initiatives. These new initiatives are introducing even higher rates of change and making it clear that a static network will no longer be a strategic network.

As Nicholas Carr debates with Tim O'Reilly about the form cloud will take, a few nuggets emerge:

"But the cloud platform, like the software platform before it, has new rules for competitive advantage. And chief among those advantages are those that we've identified as "Web 2.0", the design of systems that harness network effects to get better the more people use them."

- Tim O'Reilly, "Web 2.0 and Cloud Computing," October 2008

While Nicholas correctly challenges the role of "network effects," he then engages a fallacy that I think is at the core of his misperception of the role of network infrastructure within IT. That is, his electric-utility-as-IT metaphor leads him down a path that is well-trodden from a hype perspective, but not yet enterprise-grade. He talks about economies of scale in IT that can help determine which cloud players win or lose:

1. Capital intensity. Building a large utility computing system requires lots of capital, which itself presents a big barrier to entry.

2. Scale advantages. As O'Reilly himself notes, big players reap important scale economies in equipment, labor, real estate, electricity, and other inputs.

3. Diversity factor. One of the big advantages that accrue to utilities is their ability to make demand flatter and more predictable (by serving a diverse group of customers with varying demand patterns), which in turn allows them to use their capital more efficiently. As your customer base expands, so does your diversity factor and hence your efficiency advantage and your ability to undercut your less-efficient competitors' prices.

- Nicholas Carr, "What Tim O'Reilly gets wrong about the cloud," October 2008

In Cloud Computing, Virtualization and IT Diseconomies I talked about the prevalence of manual labor in critical IT processes, from IP address management to server provisioning, which leads to substantial scale and complexity challenges. Exactly where are the advantages if the cost of simple tasks goes up on a per-IP-address basis as networks get larger? Here's what I wrote:

"As much as cloud computing has rallied behind the prospect of electricity and real estate savings, the business case still feels like a dotcom hangover in some cases. Virtualization is still a bit hamstrung in the enterprise by the disconnect between static infrastructure and moving, state-changing VMs; and labor is the largest cost component of server TCO (IDC findings) and a significant component of network TCO (as suggested by the Computerworld findings). So just how much will real estate and electricity savings offset other diseconomies and barriers in the cloud game? I think cloud computing will also have to innovate in areas like automation and connectivity intelligence."

I think that rising complexity and scale challenges driven by various initiatives (including cloud computing) will force static networks to evolve into dynamic networks. That is the only way that scale and complexity can be addressed, and I think that is the core of Carr's challenge to enterprise IT. Dynamic networks would create a new level of automation potential and reduce the sheer amount of resources dedicated to connectivity and change, which will only go up as endpoints and systems become more mobile and more dynamic.

[Thanks to Rick Kagan and Stu Bailey at Infoblox for the above image]

Across several recent articles at Archimedius I've talked about the increasingly costly demands of manual labor on IT, including IP address management, DNS, DHCP and a host of other core network services. I've talked about the importance of reachability and connectivity intelligence within the network so that solutions can learn and adapt to these new fluid systems and more powerful endpoints.

Recent Computerworld and IDC research was also cited in Cloud Computing, Virtualization and IT Diseconomies, my lengthy tome predicting the shrinking role of manual labor in IT. I noted larger enterprises paying more for mundane, boring tasks like managing IP addresses by spreadsheet, even on a cost-per-IP-address basis.
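To make the contrast concrete, here is a deliberately minimal sketch of programmatic address-pool management. The `SimpleIpam` class and all of its names are my own illustration, not any vendor's product, but even this toy replaces the row-by-row bookkeeping of a spreadsheet:

```python
import ipaddress

class SimpleIpam:
    """Toy IP address pool (illustrative only): hands out and reclaims
    addresses from a subnet instead of tracking them by hand."""

    def __init__(self, cidr):
        net = ipaddress.ip_network(cidr)
        self._free = list(net.hosts())   # usable host addresses, in order
        self._allocated = {}             # address -> owner label

    def allocate(self, owner):
        """Hand the next free address to `owner`."""
        addr = self._free.pop(0)
        self._allocated[addr] = owner
        return addr

    def release(self, addr):
        """Return an address to the pool when its host goes away."""
        del self._allocated[addr]
        self._free.append(addr)

pool = SimpleIpam("192.0.2.0/29")   # 6 usable host addresses
vm1 = pool.allocate("web-vm-1")     # -> 192.0.2.1
vm2 = pool.allocate("web-vm-2")     # -> 192.0.2.2
pool.release(vm1)                   # VM migrated away; address reclaimed
```

The point is not the thirty lines of Python; it is that allocation and reclamation become events a system can react to, rather than cells someone remembers to update.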

I'll also go so far as to suggest who the leaders are in each required category, from endpoint intelligence (Microsoft), to network intelligence (Cisco) to application intelligence (F5 Networks). I inserted Infoblox as the leader in connectivity intelligence, which I see as this emerging dynamic feedback loop between systems, endpoints and networks now overly dependent upon manual labor to address rising flexibility and scale demands. (Disclaimer: I work for Infoblox).

That's one of the reasons I was so encouraged by the recent discussion at F5's DevCentral community. Here is the post if you're interested in more.


Managing a heterogeneous infrastructure is difficult enough, but managing a dynamic, ever changing heterogeneous infrastructure that must be stable enough to deliver dynamic applications makes the former look like a walk in the park. Part of the problem is certainly the inability to manage heterogeneous network infrastructure devices from a single management system.

- Lori MacVittie, F5 DevCentral

Who knows if standards could ever emerge among the likes of Cisco, Juniper, Brocade, Riverbed and F5 Networks. Lori is quick to point out that they have worked in the past, as with WS-I (which included Microsoft and Oracle, among others). A very interesting standard I mentioned previously is IF-MAP from the Trusted Computing Group, which includes ArcSight, Aruba, Infoblox and Juniper, among others.
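For readers unfamiliar with IF-MAP, its central idea is a shared metadata store that many devices can publish to, search, and subscribe to for changes. The following is only an illustrative sketch of that idea in Python; it is my own simplification, not the actual IF-MAP wire protocol, which is XML-based and far richer:

```python
class MetadataMap:
    """Illustrative publish/search/subscribe store in the spirit of IF-MAP.
    Devices publish facts about an identifier (here just a string), and
    subscribers are notified whenever those facts change."""

    def __init__(self):
        self._data = {}   # identifier -> {key: value}
        self._subs = {}   # identifier -> [callback, ...]

    def publish(self, ident, key, value):
        self._data.setdefault(ident, {})[key] = value
        for cb in self._subs.get(ident, []):
            cb(ident, key, value)          # push the change to watchers

    def search(self, ident):
        return dict(self._data.get(ident, {}))

    def subscribe(self, ident, callback):
        self._subs.setdefault(ident, []).append(callback)

events = []
m = MetadataMap()
m.subscribe("10.0.0.5", lambda *e: events.append(e))
m.publish("10.0.0.5", "device-type", "printer")    # e.g. a DHCP server publishing
m.publish("10.0.0.5", "posture", "compliant")      # e.g. a NAC appliance publishing
```

The appeal for heterogeneous networks is exactly this decoupling: a Juniper firewall and an Infoblox appliance need agree only on the metadata schema, not on each other's management interfaces.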


As the Mind Requires a Nervous System, Network Intelligence Requires Connectivity Intelligence

Yet I think standards will only be part of the solution, even if they are adopted. I think the critical requirement for Infrastructure 2.0 will be connectivity intelligence. TCP/IP has now outgrown its static shell and is about to be tasked with connecting even more powerful and dynamic systems. Whether it's the rise of RFID in the supply chain, mobility à la Google's Android, or even the adoption of parking meters with their own IP addresses, it is clear that TCP/IP is spreading with or without a strong economy, and the most productive enterprises will be the most likely to survive.

The manual labor that has driven IP address management costs higher as networks grow larger is similarly impacting other core network services (like DNS and DHCP) that were not created to support such complex arrays of devices, branches and systems. This is the broader opportunity for Juniper, Brocade and others as well, not only to reduce network infrastructure TCO but to address the new level of flexibility enabled by virtualization and other initiatives driving new scale and flexibility requirements.

Enterprises are now on the battlefield between two competing forces: the rapid proliferation of TCP/IP, and the increasingly dynamic and powerful systems and endpoints attaching to the network in order to boost productivity. Those who succeed will have invested in automation based on dynamic feedback between devices and systems, and in rising network intelligence.

Gone will be manual spreadsheets tracking IP addresses across large and ever-changing extended enterprise networks. Gone will be endless hours of overtime tied up in mundane and resource-consuming tasks. Gone will be manual pings to determine whether a network is available or secure or not.

This is the next technology boom, the era of Infrastructure 2.0. Cisco is already on message. F5 is getting there, and I think it is only a matter of time before the marketers at the world's leading technology companies realize that the war is on, and all of the old alliances that enabled exclusivity, lock-in and layers of manual labor are off the table.

Out of this coming weakness will emerge new strength, possibilities and profits. As Microsoft, Google and Amazon build up steam in the cloud, they are creating demands for even more powerful and intelligent networks. Enterprises who see the network as tactical will take the brunt of the pain from a weak economy; those who embrace automation will be the fastest to return to normal and ultimately establish and/or maintain operational leadership.

More Stories By Greg Ness

Gregory Ness is the VP of Marketing of Vidder and has over 30 years of experience in marketing technology, B2B and consumer products and services. Prior to Vidder, he was VP of Marketing at cloud migration pioneer CloudVelox. Before CloudVelox he held marketing leadership positions at Vantage Data Centers, Infoblox (BLOX), BlueLane Technologies (VMW), Redline Networks (JNPR), IntruVert (INTC) and ShoreTel (SHOR). He has a BA from Reed College and an MA from The University of Texas at Austin. He has spoken on virtualization, networking, security and cloud computing topics at numerous conferences including CiscoLive, Interop and Future in Review.


