
The Benefits of Virtualization

What does the latest Sandy Bridge mean for virtualization in the central office?

Those familiar with deploying virtual machines (VMs) know that in order to ensure performance, VMs must be tied to physical platforms. As the demand for data-intensive virtualized and cloud solutions continues to increase, more powerful server platforms will be required to deliver this performance without significantly multiplying hardware infrastructure for every VM.

The new Intel Sandy Bridge series (Intel's Xeon E5-2600 processor family) is ideally suited to enabling more powerful and efficient virtualized solutions for high-throughput, processing-intensive communications applications. This latest dual-processor architecture features increased core count, I/O, and memory performance, allowing more virtual machines to run on a single physical platform. Virtualization can be extremely memory-intensive, as more VMs typically require more total system memory, and to optimize performance and simplify management, each VM usually requires at least one physical processor core. The new Sandy Bridge E5-2600 architecture enables individual physical servers to support greater numbers of virtualized appliances, thereby consolidating hardware for lower operational costs, preventing VM sprawl, and simplifying the transition to the cloud with opportunities for scaling up over multiple cores.

The Benefits of Virtualization
Modern carrier-grade platforms comprise unprecedented amounts of processing, memory, and network I/O resources. For developers, though, these resources also come with a mandate: make the most effective use of modern platforms through scaling and other techniques. Through the intelligent use of carrier-class virtualization, developers can create highly scalable platforms and often eliminate unnecessary over-provisioning of resources for peak usage.

Current advances in multicore processors, cryptography accelerators, and high-throughput Ethernet silicon make it possible to consolidate what previously required multiple specialized server platforms into a single private cloud. 4G wireless deployments, HD-quality video to all devices, the continuing transition to VoIP technologies, increased security concerns, and power efficiency requirements are all driving the need for more flexible solutions.

By deploying a private cloud with virtual machine infrastructure, one's hardware becomes a pool of resources available to be provisioned as needed. The control plane, data plane, and networking can all share the same pool of common hardware.

Deployments can be upgraded simply by adding physical resources to the managed pool. Additionally, migrating VM instances from one compute node to another, as Figure 1 shows, can be non-disruptive.

Many telecom solutions require multiple distinct hardware platforms simply because they comprise applications that run on different operating systems. In a private cloud deployment, multiple operating systems can run on the same physical hardware, eliminating this requirement.

A private cloud enables running instances (VMs assigned to a specific function) to be tailored to different workload environments. For example, a dedicated service level can be assigned to each instance, and as demand increases or decreases, additional instances can be spawned or decommissioned as necessary. Because each process workload can be tailored to moment-in-time demand (see Figure 2), the practice of over-provisioning all resources for a "peak workload" can go by the wayside. As resources are no longer needed, they are simply returned to the pool for use by other instances that may need to be spawned.
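The spawn-and-decommission cycle described above amounts to a simple control loop: size the instance count to current demand and return the rest to the pool. A minimal Python sketch, assuming a hypothetical pool of identical instances with a fixed per-instance capacity (all names and numbers here are illustrative, not from the article):

```python
# Minimal elastic-scaling sketch: instances are spawned or decommissioned
# so that pooled capacity tracks moment-in-time demand.
INSTANCE_CAPACITY = 100  # requests/sec one instance can serve (illustrative)

def instances_needed(demand: int) -> int:
    """Smallest instance count whose combined capacity covers demand."""
    return max(1, -(-demand // INSTANCE_CAPACITY))  # ceiling division

def rebalance(running: int, demand: int) -> int:
    """Return the new instance count; freed instances go back to the pool."""
    return instances_needed(demand)

# As demand rises instances are spawned; as it falls they are reclaimed.
counts = [rebalance(1, d) for d in (50, 250, 900, 120)]
print(counts)  # [1, 3, 9, 2]
```

In a real deployment the rebalance step would call the cloud manager's provisioning API rather than just return a number, but the sizing logic is the same.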

Virtual machines allow for the more efficient use of hardware resources by allowing multiple instances to share the same physical hardware, maximizing the use of those resources and increasing the work per watt of power consumed when compared to traditional infrastructure.

VMs also allow for 1+1 and N+1 redundancy through multiple virtual instances running on fewer independent hardware nodes, such as AdvancedTCA single-board computers (SBCs). Because fewer physical nodes are needed to achieve the same uptime goals, less power is consumed overall (see Figure 3).
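To see why consolidation cuts the node count at the same redundancy level, a quick tally helps. A sketch comparing dedicated 1+1 hardware per application against N+1 redundancy on shared virtualized nodes (the application and VM-density figures are illustrative assumptions, not from the article):

```python
# Node-count comparison: dedicated 1+1 hardware per application versus
# N+1 redundancy with virtual instances consolidated on shared nodes.
def dedicated_nodes(apps: int) -> int:
    """1+1: each application gets its own active + standby hardware pair."""
    return apps * 2

def virtualized_nodes(apps: int, vms_per_node: int) -> int:
    """N+1: enough shared nodes to host all VMs, plus one spare node."""
    active = -(-apps // vms_per_node)  # ceiling division
    return active + 1

apps = 8
print(dedicated_nodes(apps))       # 16 physical nodes
print(virtualized_nodes(apps, 4))  # 3 physical nodes
```

Fewer powered nodes for the same failover coverage is where the work-per-watt gain in Figure 3 comes from.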

AdvancedTCA and Virtualization
For private clouds running VM infrastructure, choosing AdvancedTCA chassis with SBCs for the compute node (the most common core element in any private cloud) makes sense because of their commonality, variety, manageability, and ease of deployment.

Network switches with Layer 3 functionality are the glue that holds the private cloud together. The selection of AdvancedTCA switches will depend largely on the internal and external bandwidth required for each compute node. Video streaming or deep packet inspection, for example, typically requires much more bandwidth (and thus higher-bandwidth switches) than SMS center (SMSC) messaging.

The last necessity is also one of the most critical: shared storage. For an instance to be launched on or migrated to any physical node, all nodes must have access to the same storage. In private cloud infrastructure, a high-performance SAN and a cluster file system often supply this access. Connectivity options typically include Fibre Channel, SAS, and iSCSI. iSCSI, with link speeds of up to 10 Gbps, is the least intrusive to implement at each node, as the SAN can be connected to AdvancedTCA fabric switches to provide storage connectivity to every node.

To avoid excessive use of fabric bandwidth for storage connectivity in high-throughput environments, directly attaching SAS or Fibre Channel links to each node externally via rear transition modules (RTMs) is a viable option. With multiple manufacturers now making AdvancedTCA blade-based SANs as well as NEBS-certified external SANs, many options are available to meet the SAN requirements of a carrier-grade private cloud.

How Sandy Bridge Processors Optimize AdvancedTCA Platforms
The new Intel Xeon processor E5 family, based on the Sandy Bridge microarchitecture, changes how well software applications run on AdvancedTCA platforms. It supports 40-Gigabit Ethernet networking, and its features enable advanced virtualization and cloud computing techniques.

The Intel Xeon E5-2600 series CPUs offer up to eight cores, each running up to 55 percent faster than its Xeon 5600 predecessor, and can therefore deliver much higher server performance to the enterprise market. Furthermore, new enterprise servers can support 32 GB dual in-line memory modules (DIMMs), so memory capacity can increase from 288 GB to 768 GB using 24 slots. E5-based AdvancedTCA compute blades, with more limited board real estate, are expected to support up to 256 GB in 16 VLP RDIMM slots at launch, a 40 percent increase over prior blades.
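The capacity figures above follow directly from slot count times DIMM density. A quick sanity check, assuming 16 GB DIMMs for the blade case (the article quotes the blade total and slot count; the per-DIMM size is inferred from them):

```python
# Memory capacity = DIMM slots x DIMM density.
server_slots, server_dimm_gb = 24, 32  # enterprise server with 32 GB DIMMs
blade_slots, blade_dimm_gb = 16, 16    # ATCA blade, 16 GB VLP RDIMMs (inferred)

server_gb = server_slots * server_dimm_gb
blade_gb = blade_slots * blade_dimm_gb
print(server_gb)  # 768 GB
print(blade_gb)   # 256 GB
```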

Greater power efficiency is another key benefit. The E5 family provides up to a 70 percent performance gain per watt over previous generation CPUs. Communications OEMs can develop power-efficient dual processor blades for service providers that fully meet or beat AdvancedTCA power specifications.

But the real game-changer lies in the E5-2600's integrated I/O, which allows designers to reduce latency significantly and increase bandwidth. AdvancedTCA's 40G fabric has been backplane-ready since 2010 in anticipation of an updated PICMG specification release. Since then, solution providers have sought ways to eliminate bottlenecks and utilize as much of the fabric as possible. Now that Intel has integrated PCI Express 3.0, with 40 lanes on each Xeon processor and QuickPath Interconnects (QPIs) linking the CPUs together, I/O bottlenecks are reduced, throughput is increased, and I/O latency is cut by up to 30 percent. A standard dual Xeon E5-2600 configuration offers up to 80 lanes of PCIe Gen3, providing 200 percent more throughput than previous-generation architectures.
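The lane arithmetic behind the 80-lane figure can be checked directly. A sketch using standard PCIe Gen3 parameters (8 GT/s signaling with 128b/130b encoding; these rates come from the PCIe specification, not from the article):

```python
# Aggregate PCIe bandwidth for a dual-socket E5-2600 configuration.
LANES_PER_CPU = 40      # PCIe 3.0 lanes integrated on each Xeon E5-2600
GEN3_GTPS = 8.0         # giga-transfers/sec per lane (PCIe Gen3 signaling)
ENCODING = 128 / 130    # 128b/130b line-coding efficiency

lanes = 2 * LANES_PER_CPU                 # 80 lanes in a dual-CPU system
per_lane_gbps = GEN3_GTPS * ENCODING      # ~7.88 Gb/s usable per lane
total_gbs = lanes * per_lane_gbps / 8     # ~78.8 GB/s per direction
print(lanes, round(total_gbs, 1))         # 80 78.8
```

With roughly 78 GB/s of raw PCIe bandwidth per direction available to a dual-socket blade, the 40G fabric interface is no longer the point where the processor runs out of I/O headroom.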

The overall result is much higher I/O throughput. New AdvancedTCA blades will now be able to deliver more than 10 Gb/s per node. This is a critical milestone for wireless video applications that service providers are so hungry to launch. Greater overall performance and higher performance per watt are significant by themselves, but having enough I/O capacity to match the processor capabilities makes for even greater advances in application throughput.

More Stories By Austin Hipes

Austin Hipes currently serves as the director of field engineering for NEI. In this role, he manages field applications engineers (FAEs) supporting sales design activities and educating customers on hardware and the latest technology trends. Over the last eight years, Austin has been focused on designing systems for network equipment providers requiring carrier grade solutions. He was previously director of technology at Alliance Systems and a field applications engineer for Arrow Electronics. He received his Bachelor’s degree from the University of Texas at Dallas.
