
The Evolution of Solid State Arrays

Solid state storage continues to evolve

In the first wave of solid-state storage arrays, we saw commodity-style SSDs (solid state drives) being added to traditional storage arrays. This provided an incremental performance benefit over spinning hard drives; however, the back-end technology in these arrays was developed up to 20 years ago and was focused purely on driving performance out of the slowest part of the infrastructure: the hard drive. Of course SSDs are an order of magnitude faster than HDDs, so putting SSDs into traditional arrays all but guarantees underused resources at a premium price.

Wave 2 of SSD arrays saw the development of custom hardware, mostly still using commodity SSDs. At this point we saw solid state fully exploited, with architectures designed to deliver the complete performance capability of the drives. These arrays removed unnecessary or bottlenecking features (like cache) and provided much more back-end scalability. Within the wave 2 group, Nimbus Data have taken a hybrid approach and developed their own solid state drives. This gives them more control over the management functionality of the SSDs, and consequently more control over performance and availability.

Notably, some startup vendors have taken a slightly different approach.  Violin Memory have chosen from day one to use custom NAND memory cards called VIMMs (Violin Intelligent Memory Modules). This technology removes the need for NAND to emulate a hard drive, and for the interface between the processor/memory and the persistent memory (i.e. the NAND) to cross a hard drive interface like SAS using the SCSI protocol.  Whilst it could be argued that the savings from removing the disk drive protocol are marginal, using NAND that doesn't emulate hard drives is about much more than that.  SSD controllers have many features to extend the life of the drive itself, including wear levelling and garbage collection, features that can have a direct impact on device performance.  Custom NAND components can, for instance, allow wear levelling to be performed across the entire array, or individual cell failures to be managed more efficiently.
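
To make that point concrete, here is a minimal sketch of how wear-levelling scope changes with custom NAND. It is purely illustrative (Python, with made-up structures, not any vendor's firmware): the algorithm is the same in both cases; all that changes is whether the allocator sees one drive's blocks or every module in the array.

```python
# Illustrative wear-levelling sketch (hypothetical, not vendor firmware).
# A controller tracks erase counts per NAND block and always writes to the
# least-worn free block. The only difference between per-SSD and array-wide
# wear levelling is the scope of the pool the allocator can choose from.

class NandBlock:
    def __init__(self, module_id, block_id):
        self.module_id = module_id   # which physical NAND module/SSD
        self.block_id = block_id
        self.erase_count = 0         # wear indicator

def pick_block(pool):
    """Choose the least-worn block from whatever pool is visible."""
    victim = min(pool, key=lambda b: b.erase_count)
    victim.erase_count += 1          # programming implies a prior erase
    return victim

# Per-SSD scope: each drive can only level wear across its own blocks.
ssd0 = [NandBlock(0, i) for i in range(4)]
ssd1 = [NandBlock(1, i) for i in range(4)]

# Array-wide scope (the custom-NAND approach): one pool spanning all modules,
# so a hot module's wear is spread across the whole array.
array_pool = ssd0 + ssd1

for _ in range(6):
    b = pick_block(array_pool)
    print(f"write -> module {b.module_id}, block {b.block_id}")
```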

Building bespoke NAND components isn't cheap.  Violin have chosen to invest in technology that they believe gives their hardware an advantage: no dependency on SSD manufacturers.  The ability to build advanced functionality into their persistent memory means availability can be increased; components don't need to be swapped out as frequently, because failing components can still be partially used.

At this point we should give a call-out to Texas Memory Systems, recently acquired by IBM.  They have also used custom NAND components; their RamSan-820 uses 500GB flash modules built from eMLC memory.

I believe the third wave will see many more vendors move away from the SSD form factor and build bespoke NAND components as Violin have done.  Currently Violin and TMS have the head start: they've done the hard work and built the foundations of their platforms.  Their future innovations will probably revolve around bigger and faster devices, and around replacing NAND with whatever becomes the next generation of persistent memory.

Last week, HDS announced their approach to all-flash devices: a new custom-built Flash Module Drive (FMD) that can be added to the VSP platform.  Each module provides 1.6TB or 3.2TB of storage (the higher capacity is due March 2013), and modules are stacked 48 to an 8U shelf, scaling to a total of 600TB of flash in a single VSP.  Each FMD matches a traditional SSD in height and width but is much deeper, and it appears to the VSP as a traditional SSD.
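
A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the quoted HDS figures.
fmd_capacity_tb = 3.2             # largest announced FMD module
fmds_per_shelf = 48               # modules per 8U shelf

shelf_capacity_tb = fmd_capacity_tb * fmds_per_shelf
print(shelf_capacity_tb)          # 153.6 TB per shelf

vsp_max_tb = 600                  # quoted maximum flash per VSP
print(vsp_max_tb / shelf_capacity_tb)   # ~3.9, i.e. roughly four shelves
```

So the 600TB maximum implies roughly four shelves of FMDs in a fully populated VSP.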

The FMD chassis is separate from the existing disk chassis deployed in the VSP, so FMDs can't be deployed in the same chassis as hard drives.  Although this seems like a negative, the flash modules have higher-specification back-end directors (to fully utilise the flash performance), which, in addition to their size, explains why they wouldn't be mixed together.

Creating a discrete flash module provides Hitachi with a number of benefits compared to individual MLC SSDs including:

  • Higher performance on mixed workloads
  • Inbuilt compression using the onboard custom chips
  • Improved ECC error correction using onboard code and hardware (see the sketch after this list)
  • Lower power consumption per TB from higher memory density
  • > 1,000,000 IOPS in a single array
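
On the ECC point, a toy example helps show what correcting in hardware actually means. The Hamming(7,4) sketch below is purely illustrative and assumes nothing about Hitachi's implementation; real flash controllers use far stronger BCH or LDPC codes over whole pages, but the principle of locating a damaged bit and flipping it back is the same.

```python
# Toy single-bit ECC: Hamming(7,4). Illustrative only; real NAND controllers
# use BCH/LDPC codes over entire pages.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7, parity at 1, 2, 4)."""
    b = [0] * 8                       # index 0 unused, for 1-based positions
    b[3], b[5], b[6], b[7] = d
    b[1] = b[3] ^ b[5] ^ b[7]
    b[2] = b[3] ^ b[6] ^ b[7]
    b[4] = b[5] ^ b[6] ^ b[7]
    return b[1:]

def hamming74_correct(code):
    """Recompute parity; the syndrome is the 1-based position of a single flipped bit."""
    b = [0] + list(code)
    syndrome = ((b[1] ^ b[3] ^ b[5] ^ b[7])
                + 2 * (b[2] ^ b[3] ^ b[6] ^ b[7])
                + 4 * (b[4] ^ b[5] ^ b[6] ^ b[7]))
    if syndrome:
        b[syndrome] ^= 1              # flip the damaged bit back
    return [b[3], b[5], b[6], b[7]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate one NAND cell flipping (position 5)
print(hamming74_correct(word))        # -> [1, 0, 1, 1], error corrected
```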

The new FMDs can also be used with HDT (dynamic tiering) to cater for mixed sub-LUN workloads, and of course Hitachi's upgraded microcode is already optimised for flash devices.
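
Sub-LUN tiering is conceptually straightforward: the array counts IOs against fixed-size pages within each LUN and periodically migrates the hottest pages to flash. The sketch below is a hypothetical illustration of the mechanism, not HDT's actual policy (42MB is the commonly quoted HDT page granularity, but treat all specifics here as assumptions).

```python
from collections import Counter

# Hypothetical sub-LUN tiering sketch: count IOs per fixed-size page,
# then promote the hottest pages to the flash tier on a relocation pass.

PAGE_SIZE = 42 * 1024 * 1024      # assumed page granularity
FLASH_PAGES = 2                   # assumed flash tier capacity, in pages

io_counts = Counter()

def record_io(offset_bytes):
    """Attribute an IO to the page containing this LUN offset."""
    io_counts[offset_bytes // PAGE_SIZE] += 1

# Simulated workload: page 7 is hot, pages 3 and 9 are cooler.
for page in [7, 7, 7, 7, 3, 9, 7, 3]:
    record_io(page * PAGE_SIZE)

# Periodic relocation pass: hottest pages go to flash, the rest stay on HDD.
flash_tier = {page for page, _ in io_counts.most_common(FLASH_PAGES)}
print("pages promoted to flash:", sorted(flash_tier))
```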

The Architect’s View
Solid state storage continues to evolve.  NAND flash is fast but has its foibles, and these can be overcome with dedicated NAND modules.  Today, only four vendors have moved to dedicated solid-state components, while the others continue to use commodity SSDs.  At scale, performance and availability, viewed in terms of consistency, become much more important.  Many vendors today are producing high-performance devices, but how well will they scale going forward, and how resilient will they be?  As the market matures, these differences will be the dividing line between survival and failure.

Disclaimer: I recently attended the Hitachi Bloggers' and Influencers' Days 2012.  My flights and accommodation were covered by Hitachi during the trip; however, there is no requirement for me to blog about any of the content presented, and I am not compensated in any way for my time when attending the event.  Some materials presented were discussed under NDA and don't form part of my blog posts, but could influence future discussions.

Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.
