
Nutanix Fields Next-Gen Software-Defined Data Center Widgetry

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM-level

Nutanix, a cloud hardware start-up offering a hybrid scale-out compute-cum-storage appliance backed by $72 million in VC funding (only half of which is reportedly spent), has released its next-generation software-defined data center products.

It's updating its server hardware and software to handle divergent workloads, moving to a quad-node box made by Quanta that should support 400 VMs per chassis, up from 300.

It's got VM-centric disaster recovery, adaptive compression and a new highly configurable hardware platform. The widgetry includes Nutanix OS 3.0 and NX-3000 series hardware. It's supposed to help enterprises build next-generation software-defined data centers.

Besides VM-level disaster recovery and adaptive post-process compression, Nutanix OS 3.0 delivers dynamic cluster expansion, rolling software upgrades and support for KVM, its second hypervisor after VMware's.

Its software enhancements, coupled with the configurable NX-3000 series platform, enable flexibility, performance and scalability in enterprise data centers.

With NX-3000, Nutanix delivers a configurable platform in which compute- and storage-heavy nodes co-exist in a single heterogeneous cluster. It includes hardware models whose server nodes vary in capacity and in the number of PCIe SSDs, SATA SSDs and SATA HDDs.

The nodes can have different CPU cores per socket and variable memory capacities. This allows for independent scaling of compute and storage in a single system that's optimized for every use case and can scale to address evolving business requirements.

Scale-Out Converged Storage (SOCS) virtual disk controllers turn the Nutanix server cluster into a SAN, so compute and storage sit on the same cluster and compute jobs run close to the storage. Nutanix uses flash alongside spinning disk in its tiered storage design.

The NX-3000 uses Intel's Sandy Bridge chips - the eight-core E5-2660 processors running at 2.2GHz - and delivers VM density in a 2U form factor.

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM-level.

It says it's implemented a highly differentiated VM-centric disaster recovery engine.

The new Nutanix OS 3.0 includes native storage-optimized disaster recovery that enables multi-way, master-master replication of a kind reportedly not found in traditional storage arrays.

Administrators can configure disaster recovery policies that specify protection domains and consistency groups in primary sites, which can then be replicated to any combination of secondary sites to ensure maximum business resiliency and application performance. And any Nutanix cluster can serve as both a primary and secondary site simultaneously for different protection domains, providing even more flexibility and choice.
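To make the topology concrete, here is a minimal sketch of how such a policy might be modeled and expanded into replication jobs. The field names, site names and VM names are all hypothetical illustrations, not the actual Nutanix configuration schema.

```python
# Hypothetical DR policy: protection domains group VMs into consistency
# groups and name the secondary sites they replicate to. All names here
# are illustrative, not the real Nutanix API.
DR_POLICY = {
    "protection_domains": {
        "pd-finance": {
            "consistency_groups": [["vm-db-1", "vm-app-1"]],
            "replicate_to": ["site-b", "site-c"],  # multi-way replication
        },
        "pd-web": {
            "consistency_groups": [["vm-web-1"], ["vm-web-2"]],
            "replicate_to": ["site-b"],
        },
    }
}

def replication_jobs(policy, primary="site-a"):
    """Expand the policy into (primary, secondary, protection_domain) jobs."""
    jobs = []
    for pd, spec in policy["protection_domains"].items():
        for target in spec["replicate_to"]:
            jobs.append((primary, target, pd))
    return jobs

jobs = replication_jobs(DR_POLICY)
for job in jobs:
    print(job)
```

Note that nothing ties a cluster to a single role: a cluster acting as "site-a" for one protection domain could appear as a replication target for another, which is the primary-and-secondary-simultaneously flexibility described above.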

Nutanix OS 3.0 is supposed to deliver best-in-class runbook (failover and failback) automation that's hypervisor-agnostic, which means native disaster recovery capabilities are available and consistent regardless of the underlying virtualization platform or management tools.

One of the pillars of the Nutanix solution is a highly efficient MapReduce-based framework that implements information lifecycle management in the cluster to achieve tiering, disk rebuilding and cluster rebalancing.

It's supposedly the first of its kind in the storage industry.
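The general shape of such a framework can be sketched in a few lines: a map phase emits per-extent access counts, a reduce phase sums them into a "heat" score, and the score drives tier placement. This is a toy illustration of the map/reduce pattern under assumed thresholds, not Nutanix's actual implementation.

```python
from collections import defaultdict

# Hypothetical access log: (extent_id, accesses_in_window) records.
ACCESS_LOG = [
    ("ext-01", 120), ("ext-02", 3), ("ext-01", 80),
    ("ext-03", 0), ("ext-02", 1), ("ext-04", 45),
]

def map_phase(records):
    """Map: emit (extent_id, access_count) pairs from the raw log."""
    for extent_id, count in records:
        yield extent_id, count

def reduce_phase(pairs):
    """Reduce: sum access counts per extent to get its 'heat'."""
    heat = defaultdict(int)
    for extent_id, count in pairs:
        heat[extent_id] += count
    return dict(heat)

def assign_tier(heat, hot_threshold=50):
    """Place hot extents on the SSD tier, cold ones on the HDD tier."""
    return {ext: ("ssd" if h >= hot_threshold else "hdd")
            for ext, h in heat.items()}

heat = reduce_phase(map_phase(ACCESS_LOG))
print(assign_tier(heat))
```

The same heat scores could equally drive disk rebuilding and rebalancing decisions, which is why a single framework can serve all three jobs.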

The same framework is being leveraged to deliver adaptive post-process compression of cold data as it migrates to the lower data tiers, so as not to impact the normal IO path.

By leveraging the information lifecycle management capabilities inherent in Nutanix' software, the system dynamically determines which data blocks to compress based on how frequently they're being accessed by the VMs.

Post-process compression is ideal for random or batch workloads and delivers the highest possible overall performance. In addition, Nutanix' OS 3.0 supports basic in-line compression that works as the data is being written, which is better suited for archival and sequential workloads.
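The post-process idea can be sketched as a background pass that compresses only blocks that have gone cold, leaving hot blocks untouched so the foreground IO path never pays the compression cost. The threshold and data layout here are assumptions for illustration.

```python
import time
import zlib

COLD_AFTER_SECS = 3600  # assumed: blocks untouched this long are candidates

def post_process_compress(blocks, now=None):
    """Background pass: compress cold blocks, leave hot blocks raw
    so the normal IO path is unaffected."""
    now = time.time() if now is None else now
    out = {}
    for block_id, (data, last_access) in blocks.items():
        if now - last_access >= COLD_AFTER_SECS:
            out[block_id] = zlib.compress(data)  # cold: compress
        else:
            out[block_id] = data  # hot: leave as-is
    return out

# blk-1 was last touched long ago (cold); blk-2 recently (hot).
blocks = {
    "blk-1": (b"x" * 4096, 0),
    "blk-2": (b"y" * 4096, 10_000),
}
result = post_process_compress(blocks, now=10_500)
print(len(result["blk-1"]) < 4096)   # cold block shrank
print(result["blk-2"] == b"y" * 4096)  # hot block untouched
```

In-line compression would instead run inside the write path itself, which is why it suits sequential and archival workloads where write latency matters less.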

The company says, "While our existing storage solutions support compression in general, the granularity of Nutanix compression allows us to set policies at the VM level, ensuring maximum business value and storage utilization."

With Nutanix OS 3.0, the company is supposed to deliver on its commitment to bring all of its enterprise features to the broadest range of platforms in the industry.

The software, which was designed to be hypervisor-agnostic, will now support KVM and VMware vSphere 5.1.

Regardless of the underlying virtualization platform or management framework, enterprises benefit from all of the capabilities of the Nutanix software.

The KVM hypervisor provides financial flexibility for enterprises and works well in workloads such as Hadoop.

Nutanix OS 3.0 also uses a discovery-based protocol to auto-detect new nodes added to the same network as a cluster, enabling administrators to quickly and easily expand a cluster without incurring any downtime.

In the background, the system will then rebalance the data across the entire storage pool, including the newly added nodes, to provide maximum I/O performance.
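A simple way to picture that rebalancing: when the node list grows, data spreads back out so every node, including the newcomer, carries roughly an equal share. The round-robin placement below is a deliberately naive sketch, not the actual Nutanix distribution algorithm.

```python
def rebalance(extents, nodes):
    """Spread extents across all nodes, including newly added ones,
    so each node ends up with roughly the same share."""
    placement = {node: [] for node in nodes}
    for i, extent in enumerate(sorted(extents)):
        placement[nodes[i % len(nodes)]].append(extent)
    return placement

# Cluster grows from 3 to 4 nodes; data spreads onto the new node.
extents = [f"ext-{i:02d}" for i in range(8)]
before = rebalance(extents, ["node-a", "node-b", "node-c"])
after = rebalance(extents, ["node-a", "node-b", "node-c", "node-d"])
print(len(after["node-d"]))  # 2
```

A production system would use consistent hashing or similar to limit how much data moves on each expansion; the point here is only that the new node immediately shares the IO load.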

The new software also uses software-defined networking tricks to achieve rolling software upgrades in the always-on cluster. Upgrades are delivered in a peer-to-peer framework to enable rapid software upgrades while retaining maximum cluster availability.
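The rolling pattern itself is straightforward to sketch: take at most a fixed number of nodes offline at a time, upgrade them, and let them rejoin before touching the rest, so the cluster stays available throughout. This is a generic illustration of the rolling-upgrade pattern, not Nutanix's mechanism.

```python
def rolling_upgrade(nodes, upgrade_one, max_unavailable=1):
    """Upgrade the cluster node by node: at most `max_unavailable`
    nodes are offline at any moment, keeping the cluster up."""
    upgraded = []
    for i in range(0, len(nodes), max_unavailable):
        batch = nodes[i:i + max_unavailable]
        for node in batch:
            upgrade_one(node)  # drain, install, rejoin before moving on
        upgraded.extend(batch)
    return upgraded

done = rolling_upgrade(["node-a", "node-b", "node-c"], lambda n: None)
print(done)  # ['node-a', 'node-b', 'node-c']
```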

The features and capabilities delivered in Nutanix OS 3.0 and NX-3000 are supposed to usher in a new era of business resiliency and data center optimization.

The start-up thinks it's displaced $25 million in server and SAN storage sales and is close to doubling sales every quarter. Its co-founder and CEO Dheeraj Pandey built the first Exadata clusters at Oracle. Co-founder Mohit Aron was chief architect at Aster Data and lead designer of the Google File System that led to Hadoop.

More Stories By Maureen O'Gara

Maureen O'Gara the most read technology reporter for the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of famous "Billygrams" and the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at) or paperboy(at), and by phone at 516 759-7025. Twitter: @MaureenOGara

