Nutanix Fields Next-Gen Software-Defined Data Center Widgetry

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM-level

Nutanix, a cloud hardware start-up offering a hybrid scale-out compute-and-storage appliance (backed by $72 million in VC funding, reportedly only half of which has been spent), has put out its next-generation software-defined data center products.

It's updating its server hardware and its software to deal with divergent workloads. It's moving to a quad-node box made by Quanta that should be able to support 400 VMs per chassis, up from 300.

It's got VM-centric disaster recovery, adaptive compression and a new highly configurable hardware platform. The widgetry includes Nutanix OS 3.0 and NX-3000 series hardware. It's supposed to help enterprises build next-generation software-defined data centers.

Besides VM-level disaster recovery and adaptive post-process compression, Nutanix OS 3.0 delivers dynamic cluster expansion, rolling software upgrades and support for KVM, its second hypervisor after VMware's.

Its software enhancements, coupled with the configurable NX-3000 series platform, enable flexibility, performance and scalability in enterprise data centers.

With NX-3000, Nutanix delivers a configurable platform in which compute-heavy and storage-heavy nodes co-exist in a single heterogeneous cluster. It includes hardware models that vary in capacity and in the number of PCIe SSDs, SATA SSDs and SATA HDDs per server node.

The nodes can have different CPU cores per socket and variable memory capacities. This allows for independent scaling of compute and storage in a single system that's optimized for every use case and can scale to address evolving business requirements.

The Scale-Out Converged Storage (SOCS) virtual disk controllers turn the Nutanix server cluster into a SAN, so compute and storage live on the same cluster and compute jobs run close to the storage, with flash (the PCIe and SATA SSDs) serving as the hot tier.

The NX-3000 uses Intel's Sandy Bridge chips - the eight-core E5-2660 processors running at 2.2GHz - and delivers VM density in a 2U form factor.

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM-level.

It says it's implemented a highly differentiated VM-centric disaster recovery engine.

The new Nutanix OS 3.0 includes native storage-optimized disaster recovery that enables multi-way, master-master replication of a kind the company says is not found in traditional storage arrays.

Administrators can configure disaster recovery policies that specify protection domains and consistency groups in primary sites, which can then be replicated to any combination of secondary sites to ensure maximum business resiliency and application performance. And any Nutanix cluster can serve as both a primary and secondary site simultaneously for different protection domains, providing even more flexibility and choice.
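The policy model described above can be sketched in a few lines. This is a purely illustrative data model, not the actual Nutanix API: the class and field names (`ProtectionDomain`, `ConsistencyGroup`, `remote_sites`) are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class ConsistencyGroup:
    # VMs that must be snapshotted together for crash consistency
    name: str
    vms: list

@dataclass
class ProtectionDomain:
    # a named group of consistency groups replicated to remote sites
    name: str
    groups: list = field(default_factory=list)
    remote_sites: list = field(default_factory=list)

    def protected_vms(self):
        return [vm for g in self.groups for vm in g.vms]

# One domain can replicate to any combination of secondary sites;
# the same cluster can be primary for this domain and secondary for another.
sales = ProtectionDomain(
    "sales-apps",
    groups=[ConsistencyGroup("db", ["vm-sql1", "vm-sql2"])],
    remote_sites=["dr-site-east", "dr-site-west"],
)
print(sales.protected_vms())  # ['vm-sql1', 'vm-sql2']
```

The point of the model is granularity: protection is declared per VM group rather than per LUN or per array.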

Nutanix OS 3.0 is supposed to deliver best-in-class runbook (failover and failback) automation that's hypervisor-agnostic, which means native disaster recovery capabilities are available and consistent regardless of the underlying virtualization platform or management tools.

One of the pillars of the Nutanix solution is a highly efficient MapReduce-based framework that implements information lifecycle management in the cluster to achieve tiering, disk rebuilding and cluster rebalancing.

It's supposedly the first of its kind in the storage industry.
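The tiering idea behind that framework can be illustrated with a toy map/reduce pass over per-block access counters. The threshold and block names are invented for the example; this is not Nutanix's actual algorithm.

```python
from collections import Counter

def map_phase(access_log):
    # access_log: iterable of block IDs touched by VM I/O
    return Counter(access_log)

def reduce_phase(counts, cold_threshold=2):
    # blocks touched fewer than cold_threshold times are candidates
    # for demotion to a lower (colder, cheaper) tier
    return sorted(b for b, n in counts.items() if n < cold_threshold)

log = ["blk1", "blk2", "blk1", "blk3", "blk1"]
print(reduce_phase(map_phase(log)))  # ['blk2', 'blk3']
```

The same counting pass can feed disk rebuilding and rebalancing decisions, which is presumably why a single framework serves all three jobs.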

The same framework is being leveraged to deliver adaptive post-process compression of cold data as it migrates to the lower data tiers, so as not to impact the normal IO path.

By leveraging the information lifecycle management capabilities inherent in Nutanix' software, the system dynamically determines which data blocks to compress based on how frequently they're being accessed by the VMs.

Post-process compression is ideal for random or batch workloads and delivers the highest possible overall performance. In addition, Nutanix' OS 3.0 supports basic in-line compression that works as the data is being written, which is better suited for archival and sequential workloads.
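The access-frequency heuristic described above might look like the following sketch. The cutoff value and function names are assumptions made for illustration: cold blocks are compressed as they migrate down a tier, while hot blocks stay uncompressed in the normal I/O path.

```python
import zlib

COLD_AFTER_SECONDS = 3600  # hypothetical coldness cutoff

def maybe_compress(block: bytes, last_access: float, now: float):
    # post-process: only compress blocks that have gone cold,
    # keeping the hot I/O path free of compression overhead
    if now - last_access > COLD_AFTER_SECONDS:
        return ("compressed", zlib.compress(block))
    return ("raw", block)

state, data = maybe_compress(b"A" * 4096, last_access=0, now=7200)
print(state)  # compressed
```

In-line compression, by contrast, would run `zlib.compress` on the write path itself, which is why the article recommends it only for sequential and archival workloads.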

The company says, "While our existing storage solutions support compression in general, the granularity of Nutanix compression allows us to set policies at the VM level, ensuring maximum business value and storage utilization."

With Nutanix OS 3.0, the company is supposed to deliver on its commitment to bring all of its enterprise features to the broadest range of platforms in the industry.

The software, which was designed to be hypervisor-agnostic, will now support KVM and VMware vSphere 5.1.

Regardless of the underlying virtualization platform or management framework, enterprises benefit from all of the capabilities of the Nutanix software.

The KVM hypervisor provides financial flexibility for enterprises and works well for workloads such as Hadoop.

Nutanix OS 3.0 also uses a discovery-based protocol to auto-detect new nodes added to the same network as a cluster, enabling administrators to quickly and easily expand a cluster without incurring any downtime.

In the background, the system will then rebalance the data across the entire storage pool, including the newly added nodes, to provide maximum I/O performance.
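The expand-then-rebalance flow can be illustrated with a toy round-robin placement function. This greedy spread is purely illustrative of the idea, not Nutanix's placement algorithm, and the node and extent names are invented.

```python
def rebalance(extents, nodes):
    # spread extents evenly across all nodes in the storage pool
    placement = {n: [] for n in nodes}
    for i, ext in enumerate(extents):
        placement[nodes[i % len(nodes)]].append(ext)
    return placement

extents = [f"ext{i}" for i in range(6)]
before = rebalance(extents, ["node1", "node2"])          # 3 extents each
after = rebalance(extents, ["node1", "node2", "node3"])  # node3 auto-discovered
print([len(v) for v in after.values()])  # [2, 2, 2]
```

The key property is that adding a node shrinks every node's share, so new capacity immediately contributes I/O parallelism without downtime.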

The new software also uses software-defined networking tricks to achieve rolling software upgrades in the always-on cluster. Upgrades are delivered in a peer-to-peer framework to enable rapid software upgrades while retaining maximum cluster availability.

The features and capabilities delivered in Nutanix OS 3.0 and NX-3000 are supposed to usher in a new era of business resiliency and data center optimization.

The start-up thinks it's displaced $25 million in server and SAN storage sales and is close to doubling sales every quarter. Its co-founder and CEO Dheeraj Pandey built the first Exadata clusters at Oracle. Co-founder Mohit Aron was chief architect at Aster Data and a lead designer of the Google File System, which inspired Hadoop.

More Stories By Maureen O'Gara

Maureen O'Gara, one of the most read technology reporters of the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara


