
The Seven Properties of Network Virtualization

A great starting point for requirements for your enterprise architecture

A review of the key properties of network virtualization can inform your planning and help in requirements generation as you architect new systems. The best source of information I’ve found on network virtualization is at Nicira, a firm anyone with an infrastructure should be paying attention to now.

The following is drawn from their paper, "The Seven Properties of Network Virtualization":

1. Independence from network hardware
In the emerging multi-tenant cloud, the old rules of vendor lock-in are rapidly changing. A network virtualization platform must be able to operate on top of any network hardware, much like x86 server hypervisors work on top of any server. This independence means the physical network can be supplied by any combination of hardware vendors. Over time, newer architectures that better support virtualization, as well as commodity options, are becoming available, further improving the capital efficiency of the cloud.

2. Faithful reproduction of the physical network service model
The vast bulk of enterprise applications have not been written as web applications, and rewriting tens of billions of dollars' worth of application development is neither economically realistic nor, in many cases, even possible. Therefore, a network virtualization platform must be able to support any workload that runs within a physical environment today. To do so, it must recreate Layer 2 and Layer 3 semantics fully, including support for broadcast and multicast. In addition, it must be able to offer the higher-level in-network services used in networks today, such as ACLs, load balancing, and WAN optimization.

It is also important that the virtual network solution fully virtualize the network address space. Commonly, virtual networks are migrated from or integrated with physical environments where it is not possible to change the current addresses of the VMs. Therefore, it is important that a virtual network environment not dictate or limit the addresses that can be used within the virtual networks, and that it allows overlapping IP and MAC addresses between virtual networks.
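
To make the overlapping-address requirement concrete, here is a minimal Python sketch, assuming a forwarding table keyed by both a virtual network identifier (VNI) and a MAC address so that different tenants can reuse identical addresses without collision. This is an illustration of the idea, not Nicira's implementation; all names are assumptions.

    # Minimal sketch (illustrative, not any vendor's implementation):
    # a forwarding table keyed by (virtual network ID, MAC address),
    # so different virtual networks can reuse identical addresses.

    forwarding_table = {}  # (vni, mac) -> physical host

    def learn(vni, mac, host):
        """Record which physical host a tenant's MAC currently lives on."""
        forwarding_table[(vni, mac)] = host

    def lookup(vni, mac):
        """Resolve a destination within a single virtual network only."""
        return forwarding_table.get((vni, mac))

    # Two tenants use the very same MAC address without conflict:
    learn(vni=10, mac="00:11:22:33:44:55", host="hypervisor-a")
    learn(vni=20, mac="00:11:22:33:44:55", host="hypervisor-b")

    assert lookup(10, "00:11:22:33:44:55") == "hypervisor-a"
    assert lookup(20, "00:11:22:33:44:55") == "hypervisor-b"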

3. Follow the operational model of compute virtualization
A key property of compute virtualization is the ability to treat a VM as soft state, meaning it can be moved, paused, resumed, snapshotted, and rewound to a previous configuration. In order to integrate seamlessly in a virtualized environment, a network virtualization solution must support the same control and flexibility for virtual networks.
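
As a hedged illustration of what treating a network as soft state implies, the short sketch below models a virtual network whose entire definition is plain data, so it can be snapshotted and rewound just like a VM. The class and fields are assumptions for illustration, not any product's control plane.

    import copy

    # Sketch: if a virtual network's entire configuration is data,
    # it can be snapshotted and rewound like a VM. Illustrative only.

    class VirtualNetwork:
        def __init__(self):
            self.config = {"acls": [], "ports": {}}
            self._snapshots = []

        def snapshot(self):
            """Save the current configuration; return a snapshot ID."""
            self._snapshots.append(copy.deepcopy(self.config))
            return len(self._snapshots) - 1

        def rewind(self, snapshot_id):
            """Restore a previously saved configuration."""
            self.config = copy.deepcopy(self._snapshots[snapshot_id])

    net = VirtualNetwork()
    net.config["ports"]["vm1"] = {"vni": 10}
    snap = net.snapshot()
    net.config["acls"].append("deny all")  # a change we later regret
    net.rewind(snap)                       # back to the known-good state
    assert net.config["acls"] == []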

4. Compatible with any hypervisor platform
Network virtualization platforms must also be able to work with the full range of server hypervisors, including Xen, XenServer, KVM, ESX, and Hyper-V, providing the ability to control virtualized network connectivity across any network substrate as well as between hypervisor environments. This "any-to-any" paradigm shift provides for:

  • More effective utilization of existing network investments,
  • Cost and management reduction of new, Layer 3 fabric innovations,
  • Workload portability from enterprise to cloud service provider environments.

5. Secure isolation between virtual networks, the physical network, and the control plane
The promise of multi-tenancy requires maximum utilization of compute, storage, and network assets through sharing of the physical infrastructure. It is important that a network virtualization platform maintain this consolidation while still providing the isolation needed by regulatory compliance regimes such as PCI or FINRA, as well as providing the same security guarantees as compute virtualization.

Like compute virtualization, a network virtualization platform should provide strict address isolation between virtual networks (meaning one virtual network cannot inadvertently address another) as well as address isolation between the virtual networks and the physical network. This last property removes the physical network as an attack target unless the virtualization platform itself is undermined.
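
Building on the forwarding-table sketch above, strict address isolation can be pictured as a data path that only ever resolves destinations within the sender's own virtual network; anything else is dropped. This is a rough sketch under assumed names, not a description of any specific product's data path.

    # Sketch of strict address isolation: delivery is attempted only
    # within the sender's own virtual network, so one tenant cannot
    # even address another. Assumed structure, for illustration only.

    def forward(packet, forwarding_table):
        """Deliver a packet only within its own virtual network (VNI)."""
        host = forwarding_table.get((packet["vni"], packet["dst_mac"]))
        if host is None:
            return "drop"  # unknown in this tenant's namespace: never leaks out
        return f"deliver to {host}"

    table = {(10, "aa:bb:cc:dd:ee:ff"): "hypervisor-a"}

    # Tenant 20 reuses tenant 10's MAC, but cannot reach tenant 10's host:
    assert forward({"vni": 20, "dst_mac": "aa:bb:cc:dd:ee:ff"}, table) == "drop"
    assert forward({"vni": 10, "dst_mac": "aa:bb:cc:dd:ee:ff"}, table) == "deliver to hypervisor-a"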

6. Cloud performance and scale
Cloud drives a significant increase in the scale of tenants, servers, and applications supported in a single data center. However, current networks are still bound by physical limitations, especially VLANs (which are limited to 4,096). VLANs were designed in an earlier era, before server virtualization dramatically increased the demand for virtually isolated environments. Network virtualization must support considerably larger-scale deployments with tens of thousands, or even hundreds of thousands, of virtual networks. This not only enables a larger number of tenants, but also supports critical services like disaster recovery and data center utilization that outstrip current limitations.
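
The 4,096 figure is simply the arithmetic of the 12-bit VLAN ID field in the 802.1Q header; overlay encapsulations such as VXLAN carry a 24-bit network identifier instead, which is where the headroom for hundreds of thousands of virtual networks comes from:

    # The VLAN ceiling falls out of the 12-bit VLAN ID field in 802.1Q;
    # overlay encapsulations such as VXLAN use a 24-bit identifier (VNI).

    vlan_ids = 2 ** 12    # 4,096 (a couple of values are reserved in practice)
    vxlan_vnis = 2 ** 24  # 16,777,216 possible virtual networks

    print(f"802.1Q VLAN IDs: {vlan_ids:,}")
    print(f"VXLAN VNIs:      {vxlan_vnis:,}")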

A virtual network solution should also not introduce any chokepoints or single points of failure into the network. Roughly, this requires that all components of the solution be fully distributed, and that all network paths support multi-pathing and failover. Finally, a network virtualization solution should not significantly impact data path performance. The number of lookups on the data path required to implement network virtualization is similar to what data paths perform today. It is possible to implement full network virtualization in software at the edge of the network and still perform at full 10G line rates.
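
As a rough sketch of what multi-pathing and failover mean on the data path (ECMP-style flow hashing, not any particular vendor's mechanism), a flow hash can deterministically spread traffic across live paths and simply re-hash around a failed one:

    import hashlib

    # ECMP-style sketch: hash a flow onto one of several live paths, so
    # no single link is a chokepoint, and a dead path just leaves the
    # candidate pool. Illustrative only.

    def pick_path(flow, paths):
        """Deterministically map a flow 5-tuple onto one live path."""
        live = [p for p in paths if p is not None]
        digest = hashlib.sha256(repr(flow).encode()).digest()
        return live[int.from_bytes(digest[:4], "big") % len(live)]

    paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
    flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)  # src, dst, proto, sport, dport

    print(pick_path(flow, paths))  # always the same path for this flow
    paths[1] = None                # simulate a failed spine
    print(pick_path(flow, paths))  # flows re-hash onto the survivors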

7. Programmatic network provisioning and control
Traditionally, networks are configured one device at a time, although this can be accelerated through scripts (which merely emulate individual configuration). Current approaches make network configuration slow, error-prone, and open to security holes through a single mistaken keystroke. In a large-scale cloud environment, this introduces a level of fragility and manual configuration cost that hurts service velocity and/or profitability.

A network virtualization solution should provide full control over all virtual network resources and allow those resources to be managed programmatically. This allows provisioning to happen at the service level rather than the element level, significantly simplifying provisioning logic and reducing any disruption that might occur due to physical network node failure. The programmatic API should provide full access to the management and configuration of a virtual network, not only to support dynamic provisioning at cloud time scales, but also to allow services to be introduced and configured on the fly.
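
To show the shape of service-level provisioning, here is a sketch of a single declarative API call that describes the whole virtual network at once. The endpoint, payload, and field names are hypothetical assumptions for illustration, not an actual controller API:

    import json
    import urllib.request

    # Hypothetical service-level provisioning: one declarative request
    # describes the entire virtual network, replacing box-by-box CLI
    # edits. The URL and payload shape are illustrative assumptions.

    network_spec = {
        "name": "tenant-42-app-tier",
        "subnets": ["10.1.0.0/24"],
        "acls": [{"action": "allow", "protocol": "tcp", "port": 443}],
        "services": ["load-balancer"],
    }

    request = urllib.request.Request(
        "https://controller.example.com/api/v1/virtual-networks",  # hypothetical
        data=json.dumps(network_spec).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(request)  # one call provisions the whole service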

Concluding Thoughts
The seven properties above are a great starting point for requirements for your enterprise architecture. The good news is that you can enjoy all of these benefits of network virtualization without significant change; all it really requires is an understanding of this new approach and access to the relevant technical thought leadership.

For more on this topic, a great place to start your research is with Nicira.



More Stories By Bob Gourley

Bob Gourley, former CTO of the Defense Intelligence Agency (DIA), is Founder and CTO of Crucial Point LLC, a technology research and advisory firm providing fact-based technology reviews in support of venture capital, private equity, and emerging technology firms. He has extensive industry experience in intelligence and security, was awarded an intelligence community meritorious achievement award by AFCEA in 2008, and has been recognized as an InfoWorld Top 25 CTO and as one of the most fascinating communicators in Government IT by GovFresh.
