By Maureen O'Gara
December 10, 2012 08:00 AM EST
Nutanix, a cloud hardware start-up offering a hybrid scale-out compute-cum-storage appliance and backed by $72 million in VC funding (only half of which is reportedly spent), has put out next-generation software-defined data center products.
It's updating its server hardware and its software to deal with divergent workloads. It's going to a quad-node box made by Quanta and should be able to support 400 VMs per chassis, up from 300.
It's got VM-centric disaster recovery, adaptive compression and a new highly configurable hardware platform. The widgetry includes Nutanix OS 3.0 and NX-3000 series hardware. It's supposed to help enterprises build next-generation software-defined data centers.
Besides VM-level disaster recovery and adaptive post-process compression, Nutanix OS 3.0 delivers dynamic cluster expansion, rolling software upgrades and support for KVM, its second hypervisor after VMware's.
Its software enhancements, coupled with the configurable NX-3000 series platform, enable flexibility, performance and scalability in enterprise data centers.
With the NX-3000, Nutanix delivers a configurable platform in which compute-heavy and storage-heavy nodes co-exist in a single heterogeneous cluster. It includes hardware models that vary in capacity and in the number of PCIe SSDs, SATA SSDs and SATA HDDs per server node.
The nodes can have different CPU cores per socket and variable memory capacities. This allows for independent scaling of compute and storage in a single system that's optimized for every use case and can scale to address evolving business requirements.
Scale-Out Converged Storage (SOCS) virtual disk controllers turn the Nutanix server cluster into a SAN, so compute and storage sit on the same cluster and compute jobs run close to the storage. Nutanix uses flash (PCIe and SATA SSDs) as the performance tier for the most frequently accessed data.
The NX-3000 uses Intel's Sandy Bridge chips - the eight-core E5-2660 processors running at 2.2GHz - and delivers VM density in a 2U form factor.
Nutanix claims to be the first to deliver RAID, high availability, snapshots and clones at the VM-level.
It says it's implemented a highly differentiated VM-centric disaster recovery engine.
The new Nutanix OS 3.0 includes native storage-optimized disaster recovery that enables multi-way, master-master replication of a kind supposedly not found in traditional storage arrays.
Administrators can configure disaster recovery policies that specify protection domains and consistency groups in primary sites, which can then be replicated to any combination of secondary sites to ensure maximum business resiliency and application performance. And any Nutanix cluster can serve as both a primary and secondary site simultaneously for different protection domains, providing even more flexibility and choice.
Nutanix OS 3.0 is supposed to deliver best-in-class runbook (failover and failback) automation that's hypervisor-agnostic, which means native disaster recovery capabilities are available and consistent regardless of the underlying virtualization platform or management tools.
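To make the policy model concrete, here is a minimal, hypothetical sketch in Python of how protection domains, consistency groups and multi-way replication between sites might be represented; the class names, fields and replicate() helper are illustrative assumptions, not Nutanix's actual API.

```python
# Hypothetical illustration of VM-centric DR policy objects; names and
# structure are assumptions, not Nutanix's actual configuration schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ConsistencyGroup:
    name: str
    vms: List[str]              # VMs snapshotted together for crash consistency


@dataclass
class ProtectionDomain:
    name: str
    groups: List[ConsistencyGroup]
    remote_sites: List[str]     # any combination of secondary clusters
    rpo_minutes: int = 60       # assumed replication interval


@dataclass
class Cluster:
    name: str
    # A cluster can be primary for some domains and secondary for others.
    primary_domains: List[ProtectionDomain] = field(default_factory=list)
    received_domains: Dict[str, ProtectionDomain] = field(default_factory=dict)

    def replicate(self, remotes: Dict[str, "Cluster"]) -> None:
        """Ship each protection domain's latest state to its remote sites."""
        for domain in self.primary_domains:
            for site in domain.remote_sites:
                remotes[site].received_domains[domain.name] = domain


# Example: two clusters protecting different workloads for each other.
ny = Cluster("ny")
sf = Cluster("sf")
ny.primary_domains.append(
    ProtectionDomain("web-tier", [ConsistencyGroup("web", ["web-01", "web-02"])], ["sf"]))
sf.primary_domains.append(
    ProtectionDomain("db-tier", [ConsistencyGroup("db", ["db-01"])], ["ny"]))
ny.replicate({"sf": sf})
sf.replicate({"ny": ny})
```

In this sketch each cluster is simultaneously a primary for its own domains and a secondary for what it receives, which is the master-master arrangement the article describes.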
One of the pillars of the Nutanix solution is a highly efficient MapReduce-based framework that implements information lifecycle management in the cluster to achieve tiering, disk rebuilding and cluster rebalancing.
It's supposedly the first of its kind in the storage industry.
The same framework is being leveraged to deliver adaptive post-process compression of cold data as it migrates to the lower data tiers, so as not to impact the normal IO path.
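As a rough sketch of the idea (the thresholds, names and data layout are assumptions, not the Nutanix implementation), a periodic map/reduce-style scan over per-extent access metadata can classify extents as hot or cold and queue the cold ones for demotion and post-process compression, leaving the normal I/O path alone.

```python
# Sketch of a MapReduce-style ILM scan; names, thresholds and data layout
# are illustrative assumptions, not the Nutanix on-disk format.
import time
from collections import defaultdict

COLD_AFTER_SECONDS = 24 * 3600   # assumed threshold for "cold" data


def map_phase(extent_metadata):
    """Emit (decision, extent_id) pairs from per-extent access times."""
    now = time.time()
    for extent_id, meta in extent_metadata.items():
        age = now - meta["last_access"]
        yield ("demote_and_compress" if age > COLD_AFTER_SECONDS else "keep_hot",
               extent_id)


def reduce_phase(pairs):
    """Group extents by decision so each node can act on its own batch."""
    plan = defaultdict(list)
    for decision, extent_id in pairs:
        plan[decision].append(extent_id)
    return dict(plan)


# Example: one extent touched a minute ago, one idle for two days.
metadata = {
    "extent-a": {"last_access": time.time() - 60},
    "extent-b": {"last_access": time.time() - 2 * 24 * 3600},
}
print(reduce_phase(map_phase(metadata)))
# {'keep_hot': ['extent-a'], 'demote_and_compress': ['extent-b']}
```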
By leveraging the information lifecycle management capabilities inherent in Nutanix's software, the system dynamically determines which data blocks to compress based on how frequently they're being accessed by the VMs.
Post-process compression is ideal for random or batch workloads and delivers the highest possible overall performance. In addition, Nutanix OS 3.0 supports basic in-line compression that works as the data is being written, which is better suited for archival and sequential workloads.
The company says, "While our existing storage solutions support compression in general, the granularity of Nutanix compression allows us to set policies at the VM level, ensuring maximum business value and storage utilization."
With Nutanix OS 3.0, the company is supposed to deliver on its commitment to bring all of its enterprise features to the broadest range of platforms in the industry.
The software, which was designed to be hypervisor-agnostic, will now support KVM and VMware vSphere 5.1.
Regardless of the underlying virtualization platform or management framework, enterprises benefit from all of the capabilities of the Nutanix software.
The KVM hypervisor provides financial flexibility for enterprises and works well for workloads such as Hadoop.
Nutanix OS 3.0 also uses a discovery-based protocol to auto-detect new nodes added to the same network as a cluster, enabling administrators to quickly and easily expand a cluster without incurring any downtime.
In the background, the system will then rebalance the data across the entire storage pool, including the newly added nodes, to provide maximum I/O performance.
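A simplified sketch of that rebalancing step, assuming a plain even-spread policy (the function name and the round-robin placement are illustrative assumptions, not Nutanix's actual placement algorithm): once a new node is discovered, extents are redistributed so every node holds roughly the same share of the storage pool.

```python
# Illustrative rebalance after a node joins; the round-robin policy and
# names are assumptions, not the actual Nutanix placement algorithm.
from typing import Dict, List


def rebalance(pool: Dict[str, List[str]], new_node: str) -> Dict[str, List[str]]:
    """Spread all extents evenly across the existing nodes plus the new one."""
    pool = {node: list(extents) for node, extents in pool.items()}
    pool.setdefault(new_node, [])
    all_extents = [e for extents in pool.values() for e in extents]
    nodes = sorted(pool)
    balanced = {node: [] for node in nodes}
    for i, extent in enumerate(sorted(all_extents)):
        balanced[nodes[i % len(nodes)]].append(extent)
    return balanced


cluster = {"node-1": ["e1", "e2", "e3"], "node-2": ["e4", "e5", "e6"]}
print(rebalance(cluster, "node-3"))
# Each of the three nodes now holds two extents.
```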
The new software also uses software-defined networking tricks to achieve rolling software upgrades in the always-on cluster. Upgrades are delivered in a peer-to-peer framework to enable rapid software upgrades while retaining maximum cluster availability.
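A minimal sketch of such a rolling-upgrade loop, with the drain, upgrade and rejoin steps left as hypothetical placeholders for whatever the cluster actually does at each stage: nodes are upgraded one at a time, and the cluster must report healthy before the next node is touched, so availability is preserved throughout.

```python
# Conceptual rolling-upgrade loop; the drain/upgrade/rejoin helpers are
# hypothetical placeholders, not Nutanix's actual upgrade tooling.
import time


def rolling_upgrade(nodes, target_version, drain, upgrade, rejoin, healthy):
    """Upgrade one node at a time, waiting for cluster health between steps."""
    for node in nodes:
        drain(node)                      # shift I/O and services to peer nodes
        upgrade(node, target_version)    # install new software on this node only
        rejoin(node)                     # bring the node back into the cluster
        while not healthy():             # pause before touching the next node
            time.sleep(5)


# Example run with stand-in callables for the real operations.
log = []
rolling_upgrade(
    nodes=["node-1", "node-2", "node-3"],
    target_version="3.0",
    drain=lambda n: log.append(f"drained {n}"),
    upgrade=lambda n, v: log.append(f"upgraded {n} to {v}"),
    rejoin=lambda n: log.append(f"{n} rejoined"),
    healthy=lambda: True,
)
print(log)
```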
The features and capabilities delivered in Nutanix OS 3.0 and NX-3000 are supposed to usher in a new era of business resiliency and data center optimization.
The start-up thinks it's displaced $25 million in server and SAN storage sales and is close to doubling sales every quarter. Its co-founder and CEO Dheeraj Pandey built the first Exadata clusters at Oracle. Co-founder Mohit Aron was chief architect at Aster Data and lead designer of the Google File System that led to Hadoop.