
The Benefits of Virtualization

What does the latest Sandy Bridge mean for virtualization in the central office?

Those familiar with deploying virtual machines (VMs) know that, to ensure performance, VMs must be tied to physical platforms. As demand for data-intensive virtualized and cloud solutions continues to grow, more powerful server platforms will be required to deliver this performance without multiplying hardware infrastructure for every VM.

The new Intel Sandy Bridge series (Intel's Xeon E5-2600 processor family) is well suited to powering more capable and efficient virtualized solutions for high-throughput, processing-intensive communications applications. This latest dual-processor architecture increases core count, I/O, and memory performance, allowing more virtual machines to run on a single physical platform. Virtualization can be extremely memory-intensive, since more VMs require more total system memory, and to keep performance predictable and VMs easy to manage, each VM usually needs at least one physical processor core. The E5-2600 architecture lets individual physical servers support greater numbers of virtualized appliances, consolidating hardware for lower operational costs, preventing VM sprawl, and simplifying the transition to the cloud by providing room to scale up over multiple cores.
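The consolidation math above can be sketched in a few lines. The per-VM core and memory requirements and the host reservations here are illustrative assumptions, not vendor figures:

```python
def max_vms(total_cores, total_mem_gb, cores_per_vm=1, mem_per_vm_gb=8,
            host_reserved_cores=2, host_reserved_mem_gb=16):
    """VM count bounded by whichever resource (cores or memory) runs out first."""
    by_cpu = (total_cores - host_reserved_cores) // cores_per_vm
    by_mem = (total_mem_gb - host_reserved_mem_gb) // mem_per_vm_gb
    return min(by_cpu, by_mem)

# A dual-socket, 8-core-per-socket E5-2600 blade with 256 GB of memory:
print(max_vms(total_cores=16, total_mem_gb=256))  # -> 14 (core-bound)
```

With these assumptions the blade is core-bound, not memory-bound, which is why the extra cores in the E5-2600 translate directly into more VMs per platform.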

The Benefits of Virtualization
Modern carrier-grade platforms offer unprecedented amounts of processing, memory, and network I/O resources. For developers, though, these resources come with a mandate: make the most effective use of the platform through scaling and other techniques. Through intelligent use of carrier-class virtualization, developers can build highly scalable platforms and often eliminate unnecessary over-provisioning of resources for peak usage.

Current advances in multicore processors, cryptography accelerators, and high-throughput Ethernet silicon make it possible to consolidate what previously required multiple specialized server platforms into a single private cloud. 4G wireless deployments, HD-quality video to all devices, the continuing transition to VoIP technologies, increased security concerns, and power efficiency requirements are all driving the need for more flexible solutions.

By deploying a private cloud with virtual machine infrastructure, one's hardware becomes a pool of resources available to be provisioned as needed. The control plane, data plane, and networking can all share the same pool of common hardware.

Deployments can be upgraded simply by adding physical resources to the managed pool. Additionally, migrating VM instances from one compute node to another, as Figure 1 shows, can be non-disruptive.

Many telecom solutions require multiple different hardware solutions simply because they are made up of applications that run on different operating systems. In a private cloud deployment, multiple operating systems can be run on the same physical hardware, eliminating this requirement.

A private cloud enables running instances (VMs assigned to a specific function) to be tailored to different workload environments. For example, a dedicated service level can be assigned to each instance, and as demand rises or falls, additional instances can be spawned or decommissioned as necessary. Because each process workload can be sized for moment-in-time demand (see Figure 2), the practice of over-provisioning all resources for a "peak workload" can go by the wayside. As resources are no longer needed, they are simply returned to the pool for use by other instances that may need to be spawned.
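The spawn/decommission loop described above can be sketched as follows. The Instance class, the per-instance capacity figure, and the spare-capacity fraction are hypothetical placeholders, not part of any real orchestration API:

```python
import math

class Instance:
    """A VM assigned to a specific function (here, a generic worker)."""
    def __init__(self, function):
        self.function = function

def rebalance(instances, demand, capacity_per_instance=100, spare_fraction=0.2):
    """Spawn or decommission instances so capacity tracks current demand."""
    target = max(1, math.ceil(demand * (1 + spare_fraction) / capacity_per_instance))
    while len(instances) < target:   # demand up: spawn from the resource pool
        instances.append(Instance("worker"))
    while len(instances) > target:   # demand down: return resources to the pool
        instances.pop()
    return instances

pool = rebalance([], demand=450)   # peak traffic: scales out to 6 instances
pool = rebalance(pool, demand=80)  # off-peak: scales back to 1 instance
```

The spare fraction stands in for the service-level headroom each instance is assigned; anything above it goes back to the shared pool.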

Virtual machines allow for the more efficient use of hardware resources by allowing multiple instances to share the same physical hardware, maximizing the use of those resources and increasing the work per watt of power consumed when compared to traditional infrastructure.

VMs also allow for 1+1 and N+1 redundancy through multiple virtual instances running on fewer independent hardware nodes, such as AdvancedTCA SBCs. Because VMs require fewer physical nodes to achieve the same level of redundancy, reducing the physical node count while meeting the same uptime goals lowers overall power consumption (see Figure 3).
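A back-of-envelope comparison makes the power argument concrete. The VMs-per-blade and watts-per-blade figures below are assumptions chosen for illustration:

```python
import math

def nodes_needed(workloads, vms_per_node=1, spare_nodes=1):
    """N+1 sizing: enough nodes to host every workload, plus spare node(s)."""
    return math.ceil(workloads / vms_per_node) + spare_nodes

WATTS_PER_NODE = 200  # assumed draw of one AdvancedTCA SBC

traditional = nodes_needed(8, vms_per_node=1)  # one app per node -> 9 nodes
virtualized = nodes_needed(8, vms_per_node=4)  # 4 VMs per blade  -> 3 nodes
print((traditional - virtualized) * WATTS_PER_NODE)  # -> 1200 W saved
```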

AdvancedTCA and Virtualization
For private clouds running VM infrastructure, choosing AdvancedTCA chassis with SBCs for the compute node (the most common core element in any private cloud) makes sense because of their commonality, variety, manageability, and ease of deployment.

Network switches with Layer 3 functionality are the glue that holds the private cloud together. The choice of AdvancedTCA switches depends largely on the internal and external bandwidth each compute node requires: video streaming or deep packet inspection, for example, typically demands far more bandwidth (and thus higher-bandwidth switches) than SMSC messaging.

The last necessity is also one of the most critical: shared storage. For an instance to be launched on or migrated to any physical node, all nodes must have access to the same storage. In private cloud infrastructure, a high-performance SAN and a cluster file system often supply this access. Connectivity options typically include Fibre Channel, SAS, and iSCSI. iSCSI, with link speeds of up to 10 Gbps, is the least intrusive to implement at each node, since the SAN can be connected to the AdvancedTCA fabric switches to provide storage connectivity to every node.

To avoid consuming excessive fabric bandwidth for storage in high-throughput environments, directly attached SAS or Fibre Channel, connected externally to each node via RTMs, is a viable option. With multiple manufacturers now making AdvancedTCA blade-based SANs as well as NEBS-certified external SANs, many options are available to meet the SAN requirements of a carrier-grade private cloud.

How Sandy Bridge Processors Optimize AdvancedTCA Platforms
The new Intel Xeon processor E5 family, based on the Sandy Bridge microarchitecture, markedly improves how software applications run on AdvancedTCA platforms. It supports innovative networking through 40 Gigabit Ethernet, and its features enable advanced virtualization and cloud computing techniques.

The Intel Xeon E5-2600 series CPUs feature up to eight cores, each running up to 55 percent faster than its Xeon 5600 predecessor, and can therefore deliver much higher server performance to the enterprise market. Furthermore, new enterprise servers can support 32 GB dual in-line memory modules (DIMMs), so memory capacity can increase from 288 GB to 768 GB using 24 slots. E5-based AdvancedTCA compute blades, with more limited board real estate, are expected to support up to 256 GB in 16 VLP RDIMM slots at launch, a 40 percent increase over prior blades.
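These capacity figures reduce to simple DIMM arithmetic. The 16 GB blade DIMM size is inferred from the 256 GB / 16-slot figures above, not stated directly in the text:

```python
server_slots, server_dimm_gb = 24, 32  # enterprise servers, 32 GB DIMMs
blade_slots, blade_dimm_gb = 16, 16    # ATCA blades: 16 VLP RDIMM slots

print(server_slots * server_dimm_gb)  # -> 768 (GB, full-size servers)
print(blade_slots * blade_dimm_gb)    # -> 256 (GB, E5-based ATCA blades)
```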

Greater power efficiency is another key benefit. The E5 family provides up to a 70 percent performance gain per watt over previous generation CPUs. Communications OEMs can develop power-efficient dual processor blades for service providers that fully meet or beat AdvancedTCA power specifications.

But the real game-changer lies in the E5-2600's integrated I/O, which allows designers to reduce latency significantly and increase bandwidth. AdvancedTCA's 40G fabric has been backplane-ready since 2010 in anticipation of an updated PICMG specification release, and since then solution providers have sought ways to eliminate bottlenecks and utilize as much of the fabric as possible. Now that Intel has integrated PCI Express 3.0, with 40 lanes aboard each Xeon processor and QuickPath Interconnect (QPI) links between the CPUs, I/O bottlenecks are reduced, throughput is increased, and I/O latency is cut by up to 30 percent. A standard dual Xeon E5-2600 configuration offers up to 80 lanes of PCIe Gen3, providing 200 percent more throughput than previous-generation architectures.
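The per-lane arithmetic behind those gains follows from the published PCIe signaling rates and line encodings; the 80-lane count matches the text, while treating the whole budget as usable aggregate is a simplification:

```python
def lane_gb_per_s(gt_per_s, payload_bits, encoded_bits):
    """Usable GB/s per lane, per direction, after line-encoding overhead."""
    return gt_per_s * payload_bits / encoded_bits / 8

gen2 = lane_gb_per_s(5.0, 8, 10)     # PCIe 2.0, 8b/10b:    0.5 GB/s per lane
gen3 = lane_gb_per_s(8.0, 128, 130)  # PCIe 3.0, 128b/130b: ~0.985 GB/s per lane

print(round(gen3 * 80, 1))  # dual E5-2600, 80 Gen3 lanes: ~78.8 GB/s aggregate
```

Note that PCIe 3.0 nearly doubles per-lane throughput despite the raw rate rising only from 5 to 8 GT/s, because 128b/130b encoding wastes far less of the link than 8b/10b.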

The overall result is much higher I/O throughput. New AdvancedTCA blades will now be able to deliver more than 10 Gb/s per node. This is a critical milestone for wireless video applications that service providers are so hungry to launch. Greater overall performance and higher performance per watt are significant by themselves, but having enough I/O capacity to match the processor capabilities makes for even greater advances in application throughput.

More Stories By Austin Hipes

Austin Hipes currently serves as the director of field engineering for NEI. In this role, he manages field applications engineers (FAEs) supporting sales design activities and educating customers on hardware and the latest technology trends. Over the last eight years, Austin has been focused on designing systems for network equipment providers requiring carrier grade solutions. He was previously director of technology at Alliance Systems and a field applications engineer for Arrow Electronics. He received his Bachelor’s degree from the University of Texas at Dallas.

