By Eric Burgener
November 26, 2012 07:45 AM EST
Legacy storage architectures do not perform very efficiently in virtual computing environments. The very random, very write-intensive I/O patterns generated by virtual hosts drive storage costs up as enterprises either add spindles or look to newer storage technologies like solid state disk (SSD) to address the IOPS shortfall.
SSD costs are coming down, but they are still significantly higher than spinning disk costs. When enterprises do consider SSD, how it is used and where it is placed in the virtual infrastructure can make a big difference in how much enterprises have to spend to meet their performance requirements. It can also impose certain operational limitations that may or may not be issues in specific environments.
Some of the key considerations that need to be taken into account are SSD placement (in the host or in the SAN), high availability/failover requirements, caching vs logging architectures, and the value of preserving existing storage investments versus rip-and-replace purchases of hardware designed specifically for virtual environments.
There are two basic locations to place SSD, each of which offers its own pros and cons. Host-based SSD will generally offer the lowest storage latencies, particularly if the SSD is located on PCIe cards. In non-clustered environments where it is clear that IOPS and storage latencies are the key performance problems, these types of devices can be very valuable. In most cases, they will remove storage as the performance problem.
But don't necessarily expect these devices to deliver their rated IOPS directly to your applications in your environment. Once storage is removed as the bottleneck, system performance is determined by whatever the next bottleneck is - CPU, memory, the operating system, or any number of other potential constraints. This is Amdahl's Law at work: the overall speedup is limited by how much of the total time the accelerated component actually accounts for.
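As a rough illustration (the fractions and speedups below are hypothetical, not measurements from any particular environment), a quick calculation shows why a 10x faster storage device rarely translates into 10x faster applications:

```python
# Hypothetical illustration of Amdahl's Law: if storage accounts for 60% of
# total application time and an SSD makes the storage portion 10x faster,
# the application as a whole only speeds up by about 2.2x.
def overall_speedup(accelerated_fraction, component_speedup):
    """System-wide speedup when one component, responsible for
    accelerated_fraction of total time, becomes component_speedup times faster."""
    return 1.0 / ((1.0 - accelerated_fraction) +
                  accelerated_fraction / component_speedup)

print(overall_speedup(0.60, 10))     # ~2.2x
print(overall_speedup(0.60, 1000))   # ~2.5x - past a point, faster storage barely helps
```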
What you probably care about are application IOPS. Test the devices you're considering in your environment before purchase, so you know exactly what performance gain they will deliver. Then you can make a more informed decision about whether you can cost-justify them for your workloads. Paying for performance you can't use is like buying a Ferrari for use on America's interstate system - you may never get out of second gear.
Raw SSD technology generally provides blazingly fast read performance. Write performance, however, varies depending on whether you are writing randomly or sequentially. The raw technical specs on many SSD devices indicate that sequential write performance may be half that of read performance, and random write performance may be half that again. Write latencies may also not be deterministic because of how SSD devices manage the space they write to (garbage collection and wear leveling, for example). Many SSD vendors are wrapping software and other infrastructure around their SSD devices to address some of these issues. If you're looking at SSD, look at the software it's packaged with to make sure the SSD capacity you're buying can be used efficiently.
Host-based SSD introduces failover limitations. If you have implemented a product like VMware HA to automatically recover failed nodes, any data sitting in a host-based SSD device that has not been written through to shared storage will not be available on recovery. This can lead to data loss on recovery - something that may or may not be an issue in your environment. Even though SSD is non-volatile storage, if the node it sits in is down, you can't get to it until that node is recovered; the real issue is whether you can fail over automatically and still have access to that data.
Because of this issue, most host-based SSD products implement what is called a "write-through" cache: they don't acknowledge writes at SSD latencies but instead write them through to shared disk and send the write acknowledgement back from there. Anything on shared disk can be recovered by any other node in the cluster, ensuring that no committed data is unavailable on failover. But it also means you won't get any write performance improvement from the SSD, just better read performance.
What does your workload look like in terms of read vs write percentages? Most virtual environments are very write intensive, much more so than they ever were in physical environments, and virtual desktop infrastructure (VDI) environments can be as much as 90% writes when operating in steady state mode. If write performance is your problem, host-based SSD with a write-through cache may not help very much in the big picture.
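A back-of-the-envelope calculation makes the point. The latencies below are assumed for illustration, and the read hit rate is an optimistic 100%; only the 90%-write VDI figure comes from the discussion above:

```python
# Hypothetical latencies for a 90%-write VDI workload served by a
# read-only (write-through) SSD cache, assuming every read is a cache hit.
read_fraction, write_fraction = 0.10, 0.90
disk_latency_ms = 5.0        # assumed spinning-disk latency
ssd_read_latency_ms = 0.2    # assumed SSD cache-hit latency

before = disk_latency_ms
after = read_fraction * ssd_read_latency_ms + write_fraction * disk_latency_ms
print(f"{before:.2f} ms -> {after:.2f} ms ({before / after:.2f}x)")  # only ~1.1x overall
```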
SAN-based SSD, on the other hand, can support failover without data loss, and if implemented with a write-back cache can provide write performance speedups as well. But many implementations available for use with SAN arrays are really only designed to speed up reads. Check carefully as you consider SSD to understand how it is implemented, and how well that maps to the actual performance requirements in your environment.
Caching vs Logging Architectures
Most SSD, wherever it is implemented, is used as a cache. Sizing guidelines for caches start with the cache as a percentage of the back-end storage it is front-ending. Generally the cache needs to be somewhere between 3% and 6% of the back-end storage, so larger data store capacities require larger caches. For example, 20TB of back-end data might require 1TB of SSD cache (5%).
Caches generally speed up only reads, but if you are working with a write-back cache, the cache has to be split between SSD capacity used to speed up reads and SSD capacity used to speed up writes. Everything else being equal in terms of performance requirements, write-back caches have to be larger than write-through caches, but they provide more balanced performance gains (across both reads and writes).
Logging architectures, by definition, speed up writes, making them a good fit for write-intensive workloads like those found in virtual computing environments. Logs provide write performance gains by taking a very random workload and essentially removing the randomness from it: writes go sequentially to a log, are acknowledged from there, and are then asynchronously de-staged to a shared storage pool. This means the same SSD device will be faster when used as a log than when used as a cache, assuming some randomness in the workload. The write performance the guest VMs see is the performance of the log device operating in sequential write mode almost all the time, which can yield write performance improvements of up to 10x relative to the same device operating in the random mode it would otherwise be in. And a log provides write performance improvements for all writes from all VMs all the time. (What's also interesting is that if you are getting 10x the IOPS from your current spinning disk, given Amdahl's Law, you may not even need to purchase SSD to remove storage as the performance bottleneck.)
Logs are very small (10GB or so) and are dedicated to a host, while the shared storage pool is accessible to all nodes in a cluster and primarily handles read requests. In a 20 node cluster with 20TB of shared data, you would need 200GB for the logs (10GB x 20 hosts) vs the 1TB you would need if SSD were used as a cache. Logs are much more efficient than caches at delivering write performance improvements, resulting in lower costs.
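The arithmetic behind that comparison is simple to sketch; the 5% cache ratio and 10GB-per-host log size are the figures used in the example above:

```python
# Cache vs log sizing for the 20-host, 20TB example above.
hosts = 20
shared_data_gb = 20 * 1024   # 20TB of shared back-end data
cache_ratio = 0.05           # cache sized at ~5% of back-end capacity
log_gb_per_host = 10         # one small dedicated log per host

cache_gb = shared_data_gb * cache_ratio   # ~1024 GB (about 1TB)
log_gb = hosts * log_gb_per_host          # 200 GB
print(f"cache: {cache_gb:.0f} GB vs logs: {log_gb} GB")
```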
If logs are located on SAN-based SSD, you not only get the write performance improvements, but this design fully supports node failover without data loss, a very nice differentiator from write-through cache implementations.
But what about read performance? This is where caches excel, and a write log doesn't seem to address it. That's true, and it's why it's important to combine a logging architecture with storage tiering. Any SSD capacity not used by the logs can be configured into a fast tier 0, which provides read performance improvements for any data residing in that tier. The bottom line is that you can get better overall storage performance improvements from a "log + tiering" design than from a cache design while using 50% - 90% less high-performance device (in this case, SSD) capacity. In the example above, if you buy a 256GB SAN-based SSD device and use it in a 20 node cluster, you'll get SSD sequential write performance for every write all the time and still have 56GB left over to put into a tier 0. Compare that to buying 1TB+ of cache capacity at SSD prices.
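The leftover tier 0 capacity in that example falls straight out of the same numbers:

```python
# The "log + tiering" example: a 256GB SAN-based SSD shared by a 20-node
# cluster, 10GB of log per host, with the remainder available as tier 0.
ssd_gb, hosts, log_gb_per_host = 256, 20, 10
tier0_gb = ssd_gb - hosts * log_gb_per_host   # 256 - 200 = 56 GB for tier 0
print(f"{tier0_gb} GB left over for a read-oriented tier 0")
```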
With single image management technology like linked clones or other similar implementations, you can lock your VM templates into this tier and very efficiently gain read performance improvements against the shared blocks in those templates for all child VMs all the time. Single image management can make SSD capacity more efficient in either a cache or a log architecture, so don't overlook it, as long as it is implemented in a way that does not impinge on your storage performance.
Purpose-Built Storage Hardware
There are some interesting new array designs that leverage SSD, sometimes in combination with some of the other technologies mentioned above (log architectures, storage tiering, single image management, spinning disk). Because they are designed specifically with the storage performance issues of virtual environments in mind, there is no doubt that these arrays can outperform legacy arrays. But for most enterprises, that may not be the operative question.
It's rare that an enterprise doesn't already have a sizable investment in storage. Many existing arrays support SSD, which can be deployed as a SAN-based cache or fast tier. It's much easier, and potentially much less disruptive and expensive, if existing storage investments can be leveraged to address the storage performance issues in virtual environments. It's also less risky, since most of the hot new "virtual computing-aware" arrays and appliances are built by startups, not proven vendors. If there are pure software-based options that support heterogeneous storage hardware and can address the storage issues common in virtual computing environments, letting you take advantage of SSD capacity that fits into your current arrays, they could be a simpler, more cost-effective, and less risky option than buying from a storage startup. But only, of course, if they adequately resolve your performance problem.
If there's one point you should take away from this article, it's that blindly throwing SSD at a storage performance problem in virtual computing environments is not an efficient or cost-effective way to address your particular issues. Consider how much more performance you need; whether you need it on reads, writes, or both; whether you need to fail over without data loss; and whether preserving existing storage hardware investments is important to you. SSD is a great technology, but you will get the best value from it when you deploy it most efficiently.