Right-Size IT Budgets with Windows Server 2012 "Storage Spaces"

SAN-like Storage Capabilities with Commodity Hardware in Windows Server 2012

What is the Largest Single Cost Category in Your IT Hardware Budget?
If you're like most of the enterprise customer organizations that were surveyed when we were designing Windows Server 2012, your answer is probably the same as theirs: STORAGE! For the organizations we surveyed, we found that as much as 60% of their annual hardware budgets were allocated to expensive hardware SAN solutions due to ever-increasing storage requirements.

Wouldn't it be nice to have some of that budget back for other IT projects? YES! We agree too ... that's why the server team included "Storage Spaces" in Windows Server 2012!

Get Ready for "Storage Spaces"!
"Storage Spaces" is a new storage virtualization technology that's included in Windows Server 2012 and our FREE Hyper-V Server 2012 to provide SAN-like storage capabilities and performance using commodity hardware components, such as industry standard servers and JBODs (ie., "Just a Bunch Of Disks").  When designing the Storage Spaces feature, we leveraged the storage expertise we've gained with our public cloud offerings (Windows Live, [email protected], Bing, Office 365 and Windows Azure) where we've been supporting scalable, world-wide storage solutions on commodity hardware for the past several years. The result is a software-driven solution that provides storage features like pooling, abstraction, fault tolerance and thin provisioning for a fraction of traditional storage hardware costs.

You mentioned "Thin Provisioning" ... How Does That Work?
Thin provisioning allows you to easily abstract the size of a virtualized disk away from the underlying physical disk storage capacity. It gives you great flexibility: you can set up your volumes with the capacity that you know you're going to need over the next several years without requiring an equal upfront hardware investment in disk capacity. So ... you know you're going to need 3TB of storage on a volume over the next three years, but only have 1TB of physical disk capacity right now? No problem! Thin provision your virtualized disk space today for 3TB, and incrementally add more physical disks to the underlying Storage Pool down the road as you need them!
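To make the idea concrete, here's a minimal sketch (in Python, purely illustrative and not how Storage Spaces is implemented) of the thin-provisioning bookkeeping described above: the volume advertises its full 3TB logical size from day one, while physical disks are added to the pool only as real consumption demands them.

```python
class ThinDisk:
    """Illustrative model of a thin-provisioned virtual disk over a storage pool."""

    def __init__(self, logical_size_tb, pool_capacity_tb):
        self.logical_size = logical_size_tb    # size the volume reports (e.g., 3 TB)
        self.pool_capacity = pool_capacity_tb  # physical disk capacity available today
        self.used = 0                          # space actually written so far

    def write(self, tb):
        """Consume space; writes fail only when *physical* capacity runs out."""
        if self.used + tb > self.pool_capacity:
            raise RuntimeError("pool exhausted - add physical disks")
        self.used += tb

    def add_physical_disks(self, tb):
        """Grow the pool incrementally as real demand appears."""
        self.pool_capacity += tb

# Provision 3 TB logically with only 1 TB of physical disk behind it:
disk = ThinDisk(logical_size_tb=3, pool_capacity_tb=1)
disk.write(1)               # fills the 1 TB of physical capacity
disk.add_physical_disks(2)  # buy disks only when you actually need them
disk.write(2)               # now the full 3 TB logical size is backed
print(disk.used, disk.pool_capacity)  # 3 3
```

The point of the sketch: the upfront hardware spend tracks actual usage, not the logical size you promised your applications.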

NOTE: Thin provisioning works great for production workloads and provides new flexibility for managing long-term storage needs, BUT if you are using Failover Clustering there are special requirements to ensure that clustered storage is always accessible when needed. When setting up Storage Spaces for clustering scenarios, you'll need to use Fixed Provisioning rather than Thin Provisioning. You can read about the complete requirements for clustering Storage Spaces here.

Disk IO Performance Must Be Pretty Sloowww ... Right?
Actually, the disk IO performance achievable with Storage Spaces can be quite comparable to dedicated storage hardware solutions! Storage Spaces gives you, as an IT Pro, the ability to engineer a storage solution with the right balance of performance and capacity for your applications. However, it may require you to rethink your storage architecture a bit: instead of relying on expensive intelligent RAID controllers, storage processors and SAN switches to achieve high performance, Storage Spaces asks you to think about disk IO differently. With Storage Spaces, the storage "intelligence" is handled by software components, so the best thing you can do to increase overall disk throughput is to increase the number of IO channels with multiple disk HBAs (i.e., "Host Bus Adapters") and spread IOs across a large number of fast disk spindles.

In fact, we recently demonstrated a solution that leveraged 5 SAS HBAs and 40 SSDs in a single industry-standard server to achieve over 1 million IOPS in disk performance ... for $50,000 USD, including the server hardware, software and all storage components!
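A quick back-of-the-envelope sketch shows why spindle count matters when the intelligence lives in software. The per-disk IOPS figure and the `efficiency` factor below are assumptions for illustration (the demo above implies roughly 25,000 IOPS per SSD), not published benchmark numbers:

```python
def aggregate_iops(num_disks, iops_per_disk, efficiency=1.0):
    """Ideal aggregate throughput when IOs are spread evenly across all disks.
    `efficiency` < 1.0 models software/striping overhead (an assumption)."""
    return int(num_disks * iops_per_disk * efficiency)

# 40 SSDs at ~25,000 IOPS each reaches the 1,000,000 IOPS demonstrated above:
total = aggregate_iops(num_disks=40, iops_per_disk=25_000)
print(total)  # 1000000

# For the $50,000 solution, that works out to about $0.05 per IOPS:
print(50_000 / total)  # 0.05
```

The same arithmetic is why adding HBAs helps: each extra IO channel lets more of those spindles run at full speed in parallel instead of queuing behind a single adapter.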

Of course, performance is not always the main driving factor in storage needs - many Tier-2 or Tier-3 storage requirements favor economical storage capacity over raw performance.  In these scenarios, even if you already have a dedicated Tier-1 hardware storage solution, you may still consider Storage Spaces for your other Tier-2 and Tier-3 storage needs.  I see a lot of IT Pros looking at Storage Spaces as a storage solution for Disaster Recovery, Archiving and Backup scenarios.

More Stories By Keith Mayer

Keith Mayer is a Technical Evangelist at Microsoft focused on Windows Infrastructure, Data Center Virtualization, Systems Management and Private Cloud. Keith has over 17 years of experience as a technical leader of complex IT projects, in diverse roles, such as Network Engineer, IT Manager, Technical Instructor and Consultant. He has consulted and trained thousands of IT professionals worldwide on the design and implementation of enterprise technology solutions.

Keith is currently certified on several Microsoft technologies, including System Center, Hyper-V, Windows, Windows Server, SharePoint and Exchange. He also holds other industry certifications from IBM, Cisco, Citrix, HP, CheckPoint, CompTIA and Interwoven.

Keith is the author of the IT Pros ROCK! Blog on Microsoft TechNet, voted as one of the Top 50 "Must Read" IT Blogs.

Keith also manages the Windows Server 2012 "Early Experts" Challenge - a FREE online study group for IT Pros interested in studying and preparing for certification on Windows Server 2012. Join us and become the next "Early Expert"!

