Overlay Entropy

A Plexxi solution provides an optimized L1, L2 and L3 network

There have been many articles describing overlay networks in the past few quarters. It is a relatively straightforward concept, not far removed from some of the older VPN technologies that were popular a while ago. The actual transport of packets is probably the simplest part; it is the control plane that is much harder to construct and therefore to explain. That is also why the control plane in overlay networks has seen the most innovation and change, and why it is likely to change further, in both standard and proprietary ways, over the next little while. A perfect example is the use of IP Multicast for unknown, multicast and broadcast traffic as defined in the latest IETF draft for VXLAN: controller implementations try to avoid IP Multicast as part of the necessary data path, which will continue to drive changes in the control plane for learning, distribution of destinations, and so on.

A Plexxi solution provides an optimized L1, L2 and L3 network. With the advent of overlay networks, the relationship and interaction between the physical, L2 and L3 network and the overlay infrastructure are important to understand. We strongly believe the control and data planes should be interconnected and coordinated/orchestrated. In this and next week's blog, I will describe some key touch points between the two at the data plane: entropy as a mechanism to discern flow-like information, and the role and capabilities of a hardware gateway.

I looked at VXLAN, NVGRE and STT as the major overlay encapsulations. VXLAN and STT are very much driven by VMware, with STT used as the tunnel encapsulation between vSwitch-based VXLAN Tunnel End Points (VTEPs), and VXLAN used as the tunnel encapsulation to external entities like gateways. NVGRE, of course, is the tunnel protocol of choice for Microsoft's overlay solution and is very similar to previous GRE-based encapsulations. All encapsulations are IP based, allowing the tunnels to be transported across a basic IP infrastructure (with the above-mentioned note about IP Multicast). VXLAN and NVGRE are packet-based mechanisms: each original packet ends up being encapsulated into a new packet.

VXLAN is built on top of UDP. As shown below, an encapsulated Ethernet packet has 54 bytes of new header information added (assuming it is being transported again over Ethernet). The first 18 bytes contain the Ethernet header with the MAC addresses of the source VTEP and of its next IP hop, most likely the next IP router/switch. This header changes at each IP hop. The next 20 bytes contain the IP header. The protocol is set to 17 for UDP. The source IP address is that of the originating VTEP, the destination IP address that of the destination VTEP. The IP header is followed by 8 bytes of UDP header containing the source UDP port, the destination UDP port (4789) and the usual UDP length and checksum fields. While formatted in a normal way, the UDP source port is used in a special way to create "entropy", explained in more detail below.

[Figure: VXLAN packet format, a VXLAN encapsulated Ethernet packet]

Following the UDP header is the actual 8-byte VXLAN header. Just about all fields except the 24-bit VXLAN Network Identifier (VNI) are reserved and set to zero. The VNI is key: it determines which VXLAN the original packet belongs to. When the destination VTEP receives this packet and decapsulates it, it will use the VNI to find the right table for the MAC address lookups that get the original packet to its destination. Only the original packet (shown with its Ethernet header above) follows the VXLAN header. For every packet sent out by a VM, VXLAN adds 54 bytes of new tunnel headers between the source and destination VTEP. Intermediate systems do all their forwarding based on this new header: Ethernet switches will use the Outer Ethernet header, IP routers will use the Outer IPv4 header to route this packet towards its destination. Each IP router will replace the Outer Ethernet header with a new one representing itself as the source and the next IP router as the destination.
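To make the header layout concrete, here is a minimal Python sketch of the 8-byte VXLAN header described above, using only the standard library. The flag value and field positions follow the VXLAN draft; the function names are illustrative and not taken from any particular library.

import struct

VXLAN_UDP_PORT = 4789        # destination UDP port used for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: the VNI field is valid

def build_vxlan_header(vni):
    # 8-byte VXLAN header: 8 flag bits, 24 reserved bits,
    # the 24-bit VNI, and a final reserved byte.
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(vxlan_header):
    # A receiving VTEP reads the VNI to pick the right MAC table.
    _, vni_and_reserved = struct.unpack("!II", vxlan_header[:8])
    return vni_and_reserved >> 8

header = build_vxlan_header(5001)
assert len(header) == 8 and parse_vni(header) == 5001

The outer Ethernet, IPv4 and UDP headers are prepended in front of this, which is where the 54 bytes of added overhead come from.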

NVGRE packets look very similar to VXLAN packets. The initial Outer Ethernet header is the same as in VXLAN, representing the source tunnel endpoint and the first IP router as the source and destination. The next 20 bytes of IP header are also similar to VXLAN, except that the protocol is 47 for GRE. NVGRE encodes the virtualized LAN (Virtual Subnet ID, or VSID, in NVGRE terms) inside the GRE header, using 24 bits of the original GRE Key field to represent the VSID and leaving 8 bits for a FlowID field, which serves an entropy function similar to the UDP source port in VXLAN, explained further below. The VSID in NVGRE and the VNI in VXLAN represent the overlay virtual network ID for each of the technologies. The original (Ethernet) packet follows the GRE header. NVGRE adds 46 bytes of new header information to each existing packet.

[Figure: NVGRE packet format, an NVGRE encapsulated Ethernet packet]
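The way NVGRE reuses the 32-bit GRE Key field can be sketched the same way. This is a hypothetical illustration of the bit layout described above (24-bit VSID in the upper bits, 8-bit FlowID in the lower bits), not code taken from any NVGRE implementation.

import struct

def build_gre_key(vsid, flow_id):
    # The upper 24 bits carry the VSID, the lower 8 bits the FlowID.
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    if not 0 <= flow_id < 2**8:
        raise ValueError("FlowID must fit in 8 bits")
    return struct.pack("!I", (vsid << 8) | flow_id)

def parse_gre_key(key_field):
    value, = struct.unpack("!I", key_field)
    return value >> 8, value & 0xFF   # (VSID, FlowID)

key = build_gre_key(vsid=6001, flow_id=0x2A)
assert parse_gre_key(key) == (6001, 0x2A)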

As I mentioned in last week’s blog, a tunnel endpoint is an aggregation point, and as a result all of the individual flows that are put into a specific VTEP-to-VTEP tunnel travel across the transport network based on the new headers that have been added. Many networks rely on some form of L2 or L3 ECMP to use all available bandwidth between any two points on the network, spine-and-leaf networks being the prime example of an absolute dependency on well-functioning ECMP to perform at their best. Without discussing the virtues of ECMP again, tunneled packets need something in the new header that allows a hash calculation to make use of multiple ECMP paths. With pretty much all of the L2 and L3 headers identical (except for the VNI or VSID) for all traffic between two tunnel endpoints, the creators of these encapsulations have been creative in encoding entropy in these new headers so that hash calculations on these headers can be used to place traffic onto multiple equal-cost paths.

For VXLAN, this entropy is encoded in the UDP source port field. With only a single UDP VXLAN connection between any two endpoints allowed (and necessary), the source port is essentially irrelevant and can be used to mark a packet with a hash calculation result that in effect acts as a flow identifier for the inner packet, except that it is not unique. The VXLAN spec does not specify exactly how to calculate this hash value, but it is generally assumed that specific portions of the inner packet's L2, L3 and/or L4 headers are used to calculate it. The originating VTEP calculates this value, puts it in the new UDP header as the source port, and it remains there unmodified until it arrives at the receiving VTEP. Intermediate systems that calculate hashes for L2 or L3 ECMP balancing typically use UDP ports as part of their calculation, and as a result different inner packet flows will result in different placement onto ECMP links. As mentioned, intermediate routers or switches that transport the VXLAN packet do not modify the UDP source port; they only use its value in their ECMP calculation.
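As a rough illustration of what an originating VTEP might do, the sketch below hashes a few inner header fields into a UDP source port. Since the spec does not mandate a particular hash, the choice of fields, the CRC32 hash and the port range are assumptions for illustration only.

import zlib

def vxlan_source_port(inner_src_mac, inner_dst_mac,
                      inner_src_ip, inner_dst_ip, inner_l4_ports):
    # Hash selected inner L2/L3/L4 fields into a stable per-flow value.
    flow_key = "|".join([inner_src_mac, inner_dst_mac,
                         inner_src_ip, inner_dst_ip, str(inner_l4_ports)])
    digest = zlib.crc32(flow_key.encode())
    # Keep the result in the ephemeral range so it still looks like a
    # normal UDP source port to intermediate routers and switches.
    return 49152 + (digest % 16384)

port = vxlan_source_port("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb",
                         "10.0.0.1", "10.0.0.2", (55123, 443))

The same inner flow always maps to the same source port, so all of its packets follow the same path, while different flows spread across the available ECMP links.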

NVGRE is fairly similar. GRE packets have no TCP or UDP header, and as a result network hardware typically has the ability to recognize these packets as GRE and to use the 32-bit GRE Key field as an information source in its ECMP calculations. GRE tunnel endpoints encode inner packet flows with individual (but not necessarily unique) key values, and as a result intermediate network systems will calculate different hash results to place these inner packet flows onto multiple ECMP links. NVGRE has taken 24 of those bits to encode the VSID, but has left 8 bits to create this entropy at the tunnel endpoint; that field has been renamed FlowID. The VSID and FlowID combined will be used to calculate hashes for ECMP link placement. A possible challenge is that for networks with a great many flows inside a VSID between two specific NVGRE endpoints, 8 bits' worth of differentiation may not create a “normal” ECMP distribution.
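The effect of that smaller entropy field can be shown with a simplified model of the decision an intermediate switch makes on the outer headers. The hash function and the fields used below are illustrative assumptions; real ASICs use their own hash algorithms over the outer header fields.

import zlib

def ecmp_link(outer_src_ip, outer_dst_ip, entropy_field, num_links):
    # An intermediate switch only sees the outer headers; the entropy
    # field (VXLAN UDP source port, or NVGRE VSID plus FlowID) is the
    # only part that varies between inner flows on the same tunnel.
    key = "{}|{}|{}".format(outer_src_ip, outer_dst_ip, entropy_field)
    return zlib.crc32(key.encode()) % num_links

# With only 8 bits of FlowID there are at most 256 distinct hash inputs
# between one pair of NVGRE endpoints, which can skew the distribution.
links = [ecmp_link("192.0.2.1", "192.0.2.2", flow_id, num_links=8)
         for flow_id in range(256)]
print({link: links.count(link) for link in sorted(set(links))})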

While the packet formats have been constructed to ensure that the “normal” tools of entropy can be used for ECMP and LAG by existing switching hardware, the latest hardware platforms have the ability to look well beyond the outer headers. Many bits and pieces of the new headers can be examined, and decisions can be made on them. While specific switching ASICs will have slightly different tools, the latest generations of them have the ability to look at the VNI and VSID even when not acting as a gateway, and packet modification or forwarding decisions can be made on their value. Inner MAC and IP headers can also be examined and acted on, with a bit more complexity. Switching ASICs are built to have quick access to the most important fields to make decisions on; access to less common fields is there, but requires some manual construction by those who program the ASIC (the networking vendors).
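As a rough idea of what looking beyond the outer headers involves, the sketch below finds the VNI inside a received VXLAN frame by walking the outer headers at fixed offsets. The offsets assume an untagged outer Ethernet header and an IPv4 header without options; a real ASIC parser is far more flexible and does this in hardware.

import struct

def vni_of_vxlan_frame(frame):
    eth_len, ip_len, udp_len = 14, 20, 8       # simplifying assumptions
    ethertype, = struct.unpack("!H", frame[12:14])
    if ethertype != 0x0800:
        return None                            # not IPv4
    if frame[eth_len + 9] != 17:               # IPv4 protocol field: 17 = UDP
        return None
    dst_port, = struct.unpack("!H", frame[eth_len + ip_len + 2:eth_len + ip_len + 4])
    if dst_port != 4789:
        return None                            # not VXLAN
    vxlan_start = eth_len + ip_len + udp_len
    vni_word, = struct.unpack("!I", frame[vxlan_start + 4:vxlan_start + 8])
    return vni_word >> 8                       # the 24-bit VNI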

When the switching platform is configured to be a gateway that provides bridging functions between regular VLANs and the tunneled VXLAN or NVGRE infrastructure, the ASIC has access to the entire original packet, since it actively encapsulates or decapsulates it. That gives the switch decision choices very similar to those of a vSwitch, but at a smaller scale. More detail on the gateway function and STT next week.

The post Overlay Entropy appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
