The Evolution of Solid State Arrays

Solid state storage continues to evolve

In the first wave of solid-state storage arrays, we saw commodity-style SSDs (solid state drives) being added to traditional storage arrays. This approach provided an incremental performance benefit over spinning hard drives; however, the back-end technology in these arrays was developed up to 20 years ago and was focused purely on extracting performance from the slowest part of the infrastructure: the hard drive.  Of course SSDs are an order of magnitude faster than HDDs, so you can pretty much guarantee that SSDs in traditional arrays result in underused resources at a premium price.
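To make the underutilisation point concrete, here is a back-of-envelope calculation in Python. The latency figures and the controller ceiling are illustrative assumptions, not measurements from any particular array:

```python
# Toy comparison (illustrative numbers, not vendor specs): why an SSD
# sits mostly idle behind a back end designed for millisecond-class disks.
hdd_latency_s = 0.005     # ~5 ms average seek + rotation, assumed
ssd_latency_s = 0.0001    # ~100 us for a typical SAS/SATA SSD read, assumed

hdd_iops = 1 / hdd_latency_s          # ~200 IOPS per spindle
ssd_iops = 1 / ssd_latency_s          # ~10,000 IOPS per drive

# A legacy back end sized to queue work for ~200-IOPS devices;
# the per-slot ceiling below is a hypothetical figure for illustration.
controller_ceiling_iops = 2000

utilisation = min(1.0, controller_ceiling_iops / ssd_iops)
print(f"HDD: ~{hdd_iops:.0f} IOPS, SSD: ~{ssd_iops:.0f} IOPS")
print(f"SSD utilisation behind the legacy controller: {utilisation:.0%}")
```

Under these assumed numbers the SSD runs at roughly 20% of its capability, which is the essence of the wave 1 problem: the premium device is throttled by a back end built for a much slower one.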

Wave 2 of SSD arrays saw the development of custom hardware, mostly still using commodity SSDs.  At this point we saw full exploitation of solid state's capabilities, with architectures designed to deliver the full performance of solid state drives.  These arrays removed unnecessary or bottlenecking features (like cache) and provided much more back-end scalability.  Within the wave 2 group, Nimbus Data have chosen a hybrid approach and developed their own solid state drives.  This gives them more control over the management functionality of the SSDs and consequently more control over performance and availability.

Notably, some startup vendors have taken a slightly different approach.  Violin Memory have chosen from day one to use custom NAND memory cards called VIMMs (Violin Intelligent Memory Modules). This technology removes the need for NAND to emulate a hard drive, and for the interface between the processor/memory and persistent memory (i.e. the NAND) to run across a hard drive interface such as SAS using the SCSI protocol.  Whilst it could be debated that the savings from removing the disk drive protocol are marginal, the use of NAND that doesn't emulate hard drives is about much more than that.  SSD controllers have many features to extend the life of the drive itself, including wear levelling and garbage collection, features that can have a direct impact on device performance.  Custom NAND components can, for instance, allow wear levelling to be achieved across the entire array, or allow individual cell failures to be managed more efficiently.
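As a rough illustration of why array-wide wear levelling matters, here is a minimal Python sketch that allocates each write to the least-worn block across every module, rather than balancing within a single drive. The data structures and the policy are hypothetical; real controller firmware (SSD or VIMM) is far more sophisticated:

```python
# Minimal sketch: array-wide wear levelling picks the least-worn block
# across all modules, so healthy modules absorb writes for worn ones.
from typing import List, Tuple

class FlashModule:
    def __init__(self, name: str, blocks: int):
        self.name = name
        self.erase_counts = [0] * blocks   # program/erase cycles per block

    def least_worn_block(self) -> int:
        # Index of the block with the fewest erase cycles on this module
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

def allocate_array_wide(modules: List[FlashModule]) -> Tuple[str, int]:
    """Choose the least-worn block across the whole array, not per drive."""
    module = min(modules, key=lambda m: min(m.erase_counts))
    block = module.least_worn_block()
    module.erase_counts[block] += 1        # record the wear from this write
    return module.name, block

modules = [FlashModule(f"vimm{i}", blocks=4) for i in range(3)]
for _ in range(6):
    print(allocate_array_wide(modules))
```

A per-drive controller can only balance wear within its own NAND; lifting the decision to the array level is what custom components make possible.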

Building bespoke NAND components isn't cheap.  Violin have chosen to invest in technology that they believe gives their hardware an advantage: no dependency on SSD manufacturers.  The ability to build advanced functionality into their persistent memory means availability can be increased; components don't need to be swapped out as frequently, and failing components can be partially used.

At this point we should call out Texas Memory Systems, recently acquired by IBM.  They have also used custom NAND components; their RamSan-820 uses 500GB flash modules built from eMLC memory.

I believe the third wave will see many more vendors moving away from the SSD form factor and building bespoke NAND components, as Violin have done.  Currently Violin and TMS have a head start.  They've done the hard work and built the foundation of their platforms.  Their future innovations will probably revolve around bigger and faster devices, and around replacing NAND with whatever the next generation of persistent memory turns out to be.

Last week, HDS announced their approach to all-flash devices: a new custom-built Flash Module Drive (FMD) that can be added to the VSP platform.  This provides 1.6TB or 3.2TB (the higher capacity due March 2013) of storage per module, which can be stacked 48 FMDs to an 8U shelf, up to a total of 600TB of flash in a single VSP.  Each FMD is similar to a traditional SSD in height and width, but much deeper, and it appears to the VSP as a traditional SSD.
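A quick sanity check of the quoted figures (the module capacity and modules-per-shelf count are from the announcement; the number of shelves needed to reach 600TB is my own inference):

```python
# Back-of-envelope arithmetic on the announced FMD numbers.
fmd_capacity_tb = 3.2          # larger FMD option (announced)
fmds_per_shelf = 48            # FMDs in one 8U shelf (announced)

shelf_tb = fmd_capacity_tb * fmds_per_shelf
print(f"One 8U shelf: {shelf_tb:.1f} TB")               # 153.6 TB

shelves_for_600tb = 600 / shelf_tb
print(f"Shelves for ~600TB: {shelves_for_600tb:.1f}")   # ~3.9, i.e. 4 shelves
```

So the 600TB headline figure appears to assume roughly four fully populated FMD shelves in one VSP.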

The FMD chassis is separate from the existing disk chassis deployed in the VSP, so FMDs can't be deployed in conjunction with hard drives.  Although this seems like a negative, the flash module chassis has higher-specification back-end directors (to fully utilise the flash performance), which, in addition to the modules' size, explains why they wouldn't be mixed together.

Creating a discrete flash module provides Hitachi with a number of benefits over individual MLC SSDs, including:

  • Higher performance on mixed workloads
  • Inbuilt compression using onboard custom chips (illustrated in the sketch below)
  • Improved ECC error correction using onboard code and hardware
  • Lower power consumption per TB, thanks to higher memory density
  • More than 1,000,000 IOPS in a single array
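To illustrate the compression point in the list above: compressing data before it hits the NAND means fewer physical bytes are programmed per host byte, which helps both effective capacity and endurance. The ratios below are assumed purely for illustration; no compression figures were published:

```python
# Hedged sketch: inline compression reduces the physical bytes written
# to NAND per host byte, stretching both capacity and endurance.
def physical_writes_tb(host_writes_tb: float, compression_ratio: float) -> float:
    """Physical data programmed to flash for a given host write volume."""
    return host_writes_tb / compression_ratio

host_writes = 100.0    # TB written by hosts (assumed workload)
for ratio in (1.0, 1.5, 2.0):
    written = physical_writes_tb(host_writes, ratio)
    print(f"{ratio}:1 compression -> {written:.1f} TB actually hits the NAND")
```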

The new FMDs can also be used with HDT (Hitachi Dynamic Tiering) to cater for mixed sub-LUN workloads, and of course Hitachi's upgraded microcode is already optimised to work with flash devices.
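For readers unfamiliar with sub-LUN tiering, here is a toy model of the general idea, not of HDT's actual algorithm: track access frequency per page and promote the busiest pages to the flash tier. The page names, trace, and slot count are invented for illustration:

```python
# Toy model of sub-LUN tiering: count accesses per page, then promote
# the hottest pages to the (scarce) flash tier.
from collections import Counter

io_trace = ["page3", "page1", "page3", "page7", "page3", "page1"]
accesses = Counter(io_trace)           # access frequency per sub-LUN page

flash_tier_slots = 2                   # assumed flash capacity in pages
hot_pages = [page for page, _ in accesses.most_common(flash_tier_slots)]
print("Promoted to flash tier:", hot_pages)   # the busiest pages win
```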

The Architect’s View
Solid state storage continues to evolve.  NAND flash is fast but has its foibles, and these can be overcome with dedicated NAND modules.  Today, only four vendors have moved to dedicated solid-state components while the others continue to use commodity SSDs.  At scale, performance and availability, viewed in terms of consistency, become much more important.  Many vendors today are producing high-performance devices, but how well will they scale going forward, and how resilient will they be?  As the market matures, these differences will be the dividing line between survival and failure.

Disclaimer: I recently attended the Hitachi Bloggers' and Influencers' Days 2012.  My flights and accommodation were covered by Hitachi during the trip; however, there is no requirement for me to blog about any of the content presented, and I am not compensated in any way for my time attending the event.  Some materials presented were discussed under NDA and don't form part of my blog posts, but could influence future discussions.

Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.
