Nutanix Fields Next-Gen Software-Defined Data Center Widgetry

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM level

Nutanix, a cloud hardware start-up offering a hybrid scale-out compute-cum-storage appliance and backed by $72 million in VC funding, only half of which is reportedly spent, has put out next-generation software-defined data center products.

It's updating its server hardware and software to deal with divergent workloads, going to a quad-node box made by Quanta that should be able to support 400 VMs per chassis, up from 300.

It's got VM-centric disaster recovery, adaptive compression and a new highly configurable hardware platform. The widgetry, consisting of Nutanix OS 3.0 and the NX-3000 series hardware, is supposed to help enterprises build next-generation software-defined data centers.

Besides VM-level disaster recovery and adaptive post-process compression, Nutanix OS 3.0 delivers dynamic cluster expansion, rolling software upgrades and support for KVM, its second hypervisor after VMware vSphere.

Its software enhancements, coupled with the configurable NX-3000 series platform, enable flexibility, performance and scalability in enterprise data centers.

With the NX-3000, Nutanix delivers a configurable platform in which compute- and storage-heavy nodes co-exist in a single heterogeneous cluster. It includes hardware models that vary in capacity and in the number of PCIe SSDs, SATA SSDs and SATA HDDs per server node.

The nodes can have different CPU cores per socket and variable memory capacities. This allows for independent scaling of compute and storage in a single system that's optimized for every use case and can scale to address evolving business requirements.

Scale-Out Converged Storage (SOCS) virtual disk controllers make the Nutanix server cluster into a SAN, so compute and storage live on the same cluster and compute jobs run close to the storage. Nutanix uses flash for the hottest data.
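To make the data-locality idea concrete, here is a minimal sketch (in Python, with invented names; Nutanix's actual controller code is not public) of a read path that prefers a replica on the VM's own node:

```python
# Illustrative sketch only (invented names): with a virtual disk controller
# on every node, a read is served from a local replica when one exists and
# falls back to a remote peer otherwise.
from dataclasses import dataclass

@dataclass
class Replica:
    node_id: str
    data: bytes

def read_block(replicas: list[Replica], local_node: str) -> bytes:
    """Prefer the replica stored on the node that runs the VM."""
    for replica in replicas:
        if replica.node_id == local_node:
            return replica.data      # local read: no network hop
    return replicas[0].data          # remote read: any healthy peer

# A VM on node "A" reads a block replicated on nodes "A" and "C".
copies = [Replica("A", b"payload"), Replica("C", b"payload")]
assert read_block(copies, local_node="A") == b"payload"
```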

The NX-3000 uses Intel's Sandy Bridge chips - the eight-core E5-2660 processors running at 2.2GHz - and delivers high VM density in a 2U form factor.

Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM level.

It says it's implemented a highly differentiated VM-centric disaster recovery engine.

The new Nutanix OS 3.0 includes native storage-optimized disaster recovery that enables multi-way, master-master replication of a kind supposedly never seen in traditional storage arrays.

Administrators can configure disaster recovery policies that specify protection domains and consistency groups in primary sites, which can then be replicated to any combination of secondary sites to ensure maximum business resiliency and application performance. And any Nutanix cluster can serve as both a primary and secondary site simultaneously for different protection domains, providing even more flexibility and choice.
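As an illustration only, a protection-domain policy of the kind described might be modeled like this; every name and field below is hypothetical, not Nutanix's actual configuration schema:

```python
# Hypothetical policy layout (all names invented) showing how protection
# domains and consistency groups could map onto secondary sites.
dr_policy = {
    "protection_domains": [
        {
            "name": "pd-finance",
            "consistency_groups": [
                {"name": "cg-erp", "vms": ["erp-db-01", "erp-app-01"]},
            ],
            # Any combination of secondary sites; a cluster can be primary
            # for one domain and secondary for another at the same time.
            "replicate_to": ["cluster-nyc", "cluster-sfo"],
            "schedule": {"snapshot_every_minutes": 60, "retain": 24},
        },
        {
            "name": "pd-dev",
            "consistency_groups": [{"name": "cg-ci", "vms": ["jenkins-01"]}],
            "replicate_to": ["cluster-sfo"],
            "schedule": {"snapshot_every_minutes": 240, "retain": 6},
        },
    ]
}
```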

Nutanix OS 3.0 is supposed to deliver best-in-class runbook (failover and failback) automation that's hypervisor-agnostic, which means native disaster recovery capabilities are available and consistent regardless of the underlying virtualization platform or management tools.

One of the pillars of the Nutanix solution is a highly efficient MapReduce-based framework that implements information lifecycle management in the cluster to achieve tiering, disk rebuilding and cluster rebalancing.

It's supposedly the first of its kind in the storage industry.
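A toy sketch of what such a MapReduce-style lifecycle pass could look like, assuming a simple recency rule for hot versus cold data (the threshold and structure are invented for this example):

```python
# Toy MapReduce-style lifecycle pass: map emits (extent, last_access) pairs
# from each node's local metadata, reduce picks a tier by recency. The
# one-week threshold is an assumption made for illustration.
import time
from collections import defaultdict

COLD_AFTER = 7 * 24 * 3600  # seconds without access before an extent is cold

def map_phase(extents):
    for extent_id, last_access in extents:   # scan local metadata
        yield extent_id, last_access

def reduce_phase(mapped):
    latest = defaultdict(float)
    for extent_id, last_access in mapped:    # keep the newest access time
        latest[extent_id] = max(latest[extent_id], last_access)
    now = time.time()
    return {eid: ("ssd" if now - ts < COLD_AFTER else "sata")
            for eid, ts in latest.items()}

sample = [("e1", time.time()), ("e2", time.time() - 30 * 24 * 3600)]
print(reduce_phase(map_phase(sample)))  # {'e1': 'ssd', 'e2': 'sata'}
```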

The same framework is being leveraged to deliver adaptive post-process compression of cold data as it migrates to the lower data tiers, so as not to impact the normal IO path.

By leveraging the information lifecycle management capabilities inherent in Nutanix's software, the system dynamically determines which data blocks to compress based on how frequently they're being accessed by the VMs.

Post-process compression is ideal for random or batch workloads and delivers the highest possible overall performance. In addition, Nutanix OS 3.0 supports basic in-line compression that works as the data is being written, which is better suited for archival and sequential workloads.
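The decision the post-process engine implies can be sketched in a few lines; the access threshold here is an assumption, not a documented Nutanix default:

```python
# Sketch of the post-process decision: compress only extents that have gone
# cold, outside the normal I/O path. The threshold below is invented.
import zlib

ACCESS_THRESHOLD = 2  # accesses per day below which an extent counts as cold

def background_compress(extents):
    """extents: dict of extent_id -> (daily_access_count, raw_bytes)."""
    result = {}
    for extent_id, (accesses, raw) in extents.items():
        if accesses < ACCESS_THRESHOLD:
            result[extent_id] = ("compressed", zlib.compress(raw))
        else:
            result[extent_id] = ("raw", raw)  # hot data stays uncompressed
    return result

extents = {"hot": (50, b"abc" * 200), "cold": (0, b"abc" * 200)}
for eid, (state, payload) in background_compress(extents).items():
    print(eid, state, len(payload))
```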

A customer is quoted as saying, "While our existing storage solutions support compression in general, the granularity of Nutanix compression allows us to set policies at the VM level, ensuring maximum business value and storage utilization."

With Nutanix OS 3.0, the company is supposed to deliver on its commitment to bring all of its enterprise features to the broadest range of platforms in the industry.

The software, which was designed to be hypervisor-agnostic, will now support KVM and VMware vSphere 5.1.

Regardless of the underlying virtualization platform or management framework, enterprises benefit from all of the capabilities of the Nutanix software.

The KVM hypervisor provides financial flexibility for enterprises and works well with workloads such as Hadoop.

Nutanix OS 3.0 also uses a discovery-based protocol to auto-detect new nodes added to the same network as a cluster, enabling administrators to quickly and easily expand a cluster without incurring any downtime.

In the background, the system will then rebalance the data across the entire storage pool, including the newly added nodes, to provide maximum I/O performance.
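A simplified sketch of the expansion mechanics, using a plain hash-based placement rule invented for illustration (a production system would favor something like consistent hashing so that adding a node moves far less data):

```python
# Simplified expansion sketch with an invented hash-based placement rule.
# Plain modulo placement is used here for brevity only.
import hashlib

def place(extent_id: str, nodes: list[str]) -> str:
    digest = int(hashlib.sha256(extent_id.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node-1", "node-2", "node-3"]
extents = [f"extent-{i}" for i in range(12)]
before = {e: place(e, nodes) for e in extents}

nodes.append("node-4")  # a new node is auto-discovered and joins the pool
after = {e: place(e, nodes) for e in extents}

moved = [e for e in extents if before[e] != after[e]]
print(f"{len(moved)} of {len(extents)} extents migrate in the background")
```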

The new software also uses software-defined networking tricks to achieve rolling software upgrades in the always-on cluster. Upgrades are delivered in a peer-to-peer framework to enable rapid software upgrades while retaining maximum cluster availability.
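In outline, a rolling upgrade amounts to draining and updating one node at a time while the rest keep serving; this sketch uses invented structures and version numbers, not Nutanix's actual orchestration:

```python
# Outline of a rolling upgrade (invented structures): drain one node, update
# it, bring it back, repeat. At most one node is ever out of rotation, so
# the cluster keeps serving throughout.
def rolling_upgrade(nodes, new_version):
    for node in nodes:
        node["draining"] = True              # take this node out of rotation
        serving = sum(not n["draining"] for n in nodes)
        assert serving >= len(nodes) - 1     # the rest keep handling I/O
        node["version"] = new_version        # restart with the new software
        node["draining"] = False             # rejoin the cluster

cluster = [{"id": i, "version": "2.x", "draining": False} for i in range(4)]
rolling_upgrade(cluster, "3.0")
print({n["id"]: n["version"] for n in cluster})  # every node now on 3.0
```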

The features and capabilities delivered in Nutanix OS 3.0 and NX-3000 are supposed to usher in a new era of business resiliency and data center optimization.

The start-up thinks it's displaced $25 million in server and SAN storage sales and is close to doubling sales every quarter. Its co-founder and CEO Dheeraj Pandey built the first Exadata clusters at Oracle. Co-founder Mohit Aron was chief architect at Aster Data and a lead designer of the Google File System, which inspired Hadoop's HDFS.

More Stories By Maureen O'Gara

Maureen O'Gara, the most-read technology reporter of the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara
