By Dana Gardner
January 13, 2014 08:30 AM EST
If, as the adage goes, you should fight fire with fire, then perhaps it's equally justified to fight Big Data optimization requirements with -- Big Data.
It turns out that high-performing, cost-effective Big Data processing helps make the best use of dynamic storage resources by taking in all the relevant storage activity data, analyzing it, and then making the best real-time choices for dynamic hybrid storage optimization.
In other words, Big Data can be exploited to better manage complex data and storage. The concept, while tricky at first, is powerful and, I believe, a harbinger of what we're going to see more of, which is to bring high intelligence to bear on many more services, products and machines.
To explore how such Big Data analysis makes good on data storage efficiency, BriefingsDirect recently sat down with optimized hybrid storage provider Nimble Storage to hear their story on the use of HP Vertica as their data analysis platform of choice. Yes, it's the same Nimble that last month had a highly successful IPO. The expert is Larry Lancaster, Chief Data Scientist at Nimble Storage Inc. in San Jose, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: How do you use big data to support your hybrid storage optimization value?
Lancaster: At a high level, Nimble Storage recognized early, near the inception of the product, that if we were able to collect enough operational data about how our products are performing in the field, get it back home and analyze it, we'd be able to dramatically reduce support costs. Also, we can create a feedback loop that allows engineering to improve the product very quickly, according to the demands that are being placed on the product in the field.
Looking at it from that perspective, to get it right, you need to do it from the inception of the product. If you take a look at how much data we get back for every array we sell in the field, we could be receiving anywhere from 10,000 to 100,000 data points per minute from each array. Then, we bring those back home, we put them into a database, and we run a lot of intensive analytics on those data.
Once you're doing that, you realize that as soon as you do something, you have this data you're starting to leverage. You're making support recommendations and so on, but then you realize you could do a lot more with it. We can do dynamic cache sizing. We can figure out how much cache a customer needs based on an analysis of their real workloads.
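To make the cache-sizing idea concrete, here is a minimal Python sketch of one way cache need could be estimated from a block-access trace. The trace format, field names, and 4 KB block size are assumptions for illustration only, not Nimble's actual method.

```python
from collections import Counter

BLOCK_SIZE = 4096  # bytes; an assumed block granularity

def cache_for_coverage(trace, coverage=0.95):
    """Estimate how much cache would absorb `coverage` of random reads.

    `trace` is an iterable of (block_id, is_random_read) tuples -- a
    hypothetical stand-in for the per-array telemetry described above.
    """
    counts = Counter(block for block, is_random in trace if is_random)
    total = sum(counts.values())
    covered = 0
    # Count the hottest blocks first until they absorb the target share of reads
    for blocks_needed, (_, hits) in enumerate(counts.most_common(), start=1):
        covered += hits
        if covered >= coverage * total:
            return blocks_needed * BLOCK_SIZE
    return len(counts) * BLOCK_SIZE

# Synthetic example: a small set of very hot blocks plus a long cold tail
trace = [(i % 100, True) for i in range(10_000)] + [(i, True) for i in range(1_000)]
print(f"~{cache_for_coverage(trace) / 1e6:.1f} MB of flash for 95% coverage")
```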
We found that big data is really paying off for us. We want to continue to increase how much it's paying off for us, but to do that we need to be able to do bigger queries faster. We have a team of data scientists and we don't want them sitting here twiddling their thumbs. That’s what brought us to Vertica at Nimble.
Using Big Data
Gardner: It's an interesting juxtaposition that you're using big data in order to better manage data and storage. What better use of it? And what sort of efficiencies are we talking about here, when you're able to get that data at massive scale, do these analytics, and then go back out into the field and adjust? What does that get for you?
Lancaster: We have a very tight feedback loop. In one release we put out, we may make some changes in the way certain things happen on the back end, for example, the way NVRAM is drained. There are some very particular details around that, and we can observe very quickly how that performs under different workloads. We can make tweaks and do a lot of tuning.
Without the kind of data we have, we might have to have multiple cases being opened on performance in the field and escalations, looking at cores, and then simulating things in the lab.
It's a very labor-intensive, slow process with very little data to base the decision on. When you bring home operational data from all your products in the field, you're now talking about being able to figure out in near real-time the distribution of workloads in the field and how people access their storage. I think we have a better understanding of the way storage works in the real world than any other storage vendor, simply because we have the data.
Gardner: So it's an interesting combination of a product lifecycle approach to getting data -- but also combining a service with a product in such a way that you're adjusting in real time.
Lancaster: That’s right. We do a lot of neat things. We do capacity forecasting. We do a lot of predictive analytics to try to figure out when the storage administrator is going to need to purchase something, rather than having them just stumble into the fact that they need to provision for equipment because they've run out of space.
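As a hedged illustration of that kind of capacity forecasting, the sketch below fits a simple linear trend to daily usage samples and projects when an array would fill. Real forecasting would account for seasonality and uncertainty, and the numbers here are made up.

```python
import numpy as np

def days_until_full(used_gb, capacity_gb):
    """Fit a linear trend to daily usage samples and project when the
    array runs out of space. Returns None if usage is flat or shrinking."""
    days = np.arange(len(used_gb))
    slope, _ = np.polyfit(days, used_gb, 1)   # slope is GB per day
    if slope <= 0:
        return None
    return (capacity_gb - used_gb[-1]) / slope

# Hypothetical 30 days of usage on a 10 TB array growing about 20 GB per day
usage = 6000 + 20 * np.arange(30) + np.random.normal(0, 5, 30)
print(f"Projected to fill in ~{days_until_full(usage, 10_000):.0f} days")
```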
A lot of things that should have been done in storage from the very beginning, and that sound straightforward, were simply never done. We're the first company to take a comprehensive approach to it. We open and close 80 percent of our cases automatically, and 90 percent of them are opened automatically.
We have a suite of tools that run on this operational data, so we don't have to call people up and say, "Please gather this data for us. Please send us these logs. Please send us these statistics." Now, we take a case that could have taken two or three days and turn it into something that can be done in an hour.
That’s the kind of efficiency we gain that you can see, and the InfoSight service delivers that to our customers.
Gardner: Larry, just to be clear, you're supporting both flash and traditional disk storage, but you're able to exploit the hybrid relationship between them because of this data and analysis. Tell us a little bit about how the hybrid storage works.
Challenge for hard drives
Lancaster: At a high level, you have hard drives, which are inexpensive, but they're slow for random I/O. For sequential I/O, they are all right, but for random I/O performance, they're slow. It takes time to move the platter and the head. You're looking at 5 to 10 milliseconds seek time for random read.
That's been the challenge for hard drives. Flash drives have come out and they can dramatically improve on that. Now, you're talking about microsecond-order latencies, rather than milliseconds.
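A rough back-of-the-envelope comparison of the random-read rates those latencies imply (illustrative figures only, assuming one outstanding request at a time):

```python
# Approximate service times for a single random read
hdd_seek_s   = 0.0075   # ~7.5 ms, midpoint of the 5-10 ms range cited above
flash_read_s = 0.0001   # ~100 microseconds, a typical flash read latency

hdd_iops   = 1 / hdd_seek_s     # roughly 130 random reads per second per disk
flash_iops = 1 / flash_read_s   # roughly 10,000 random reads per second

print(f"HDD:   ~{hdd_iops:,.0f} random read IOPS")
print(f"Flash: ~{flash_iops:,.0f} random read IOPS (~{flash_iops / hdd_iops:.0f}x)")
```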
But the challenge there is that they're expensive. You could go buy all flash or you could go buy all hard drives and you can live with those downsides of each. Or, you can take the best of both worlds.
Then, there's a challenge. How do I keep the data that I need to access randomly in flash, while keeping the rest of the data, where frequent random-read performance matters less, only on the hard drives, and in that way optimize my use of flash? That's the way you can save money, but it's difficult to do.
It comes down to having some understanding of the workloads that the customer is running and being able to anticipate the best algorithms and parameters for those algorithms to make sure that the right data is in flash.
We've built up an enormous dataset covering thousands of system-years of real-world usage to tell us exactly which approaches to caching are going to deliver the most benefit. It would be hard to be the best hybrid storage solution without the kind of analytics that we're doing.
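One way to see how recorded workloads can rank caching approaches is to replay a trace through a cache model and measure the hit rate at different sizes. The sketch below uses plain LRU on a synthetic trace; Nimble's actual algorithms and telemetry are not described in the interview.

```python
import random
from collections import OrderedDict

def lru_hit_rate(trace, cache_blocks):
    """Replay a trace of block IDs through an LRU cache and return the hit rate."""
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict the least recently used block
    return hits / len(trace)

# Synthetic skewed trace: 80% of reads go to 100 "hot" blocks
random.seed(0)
trace = [random.randrange(100) if random.random() < 0.8 else random.randrange(100, 10_000)
         for _ in range(50_000)]
for size in (100, 500, 2_000):
    print(f"cache of {size:>5} blocks -> {lru_hit_rate(trace, size):.1%} hit rate")
```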
Gardner: Then, to extrapolate a little bit higher, or maybe wider, for how this benefits an organization, the analysis that you're gathering also pertains to the data lifecycle, things like disaster recovery (DR), business continuity, backups, scheduling, and so forth. Tell us how the data gathering analytics has been applied to that larger data lifecycle equation.
Lancaster: You're absolutely right. One of the things that we do is make sure that we audit all of the storage that our customers have deployed to understand how much of it is protected with local snapshots, how much of it is replicated for disaster recovery, and how much incremental space is required to increase retention time and so on.
We have very efficient snapshots, but at the end of the day, if you're making changes, snapshots still take some amount of space. So we work to learn exactly what that overhead is and how we can help you achieve your disaster recovery goals.
We have a good understanding of that in the field. We go to customers with proactive service recommendations about what they could and should do. But we also take into account the fact that they may be doing DR when we forecast how much capacity they are going to need.
It is part of a larger lifecycle that we address, but at the end of the day, for my team it's still all about analytics. It's about looking to the data as the source of truth and as the source of recommendation.
We can tell you roughly how much space you're going to need to do disaster recovery on a given type of application, because we can look across our installed base and see the distribution of the extra space that takes and what kind of bandwidth you're going to need. We have all that information at our fingertips.
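As a simplified illustration of that kind of estimate, the sketch below derives snapshot space and replication bandwidth from a volume size, a daily change rate, and a retention window. It ignores compression and deduplication, and all of the inputs are hypothetical.

```python
def dr_estimate(volume_gb, daily_change_pct, retention_days, replication_window_hours=24):
    """Back-of-the-envelope snapshot space and DR bandwidth estimate."""
    changed_gb_per_day = volume_gb * daily_change_pct / 100
    # Extra capacity consumed by retained snapshots of changed data
    snapshot_space_gb = changed_gb_per_day * retention_days
    # Sustained link speed needed to replicate one day of changes in the window
    bandwidth_mbps = changed_gb_per_day * 8_000 / (replication_window_hours * 3600)
    return snapshot_space_gb, bandwidth_mbps

space, bw = dr_estimate(volume_gb=2_000, daily_change_pct=2, retention_days=30)
print(f"~{space:.0f} GB of snapshot overhead, ~{bw:.1f} Mbit/s sustained replication")
```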
When you start to work this way, you realize that you can do things you couldn't do before. And the things you could do before, you can do orders of magnitude better. So we're a great case of actually applying data science to the product lifecycle, but also to front-line revenue and cost enhancement.
Gardner: How can you actually get that analysis in the speed, at the scale, and at the cost that you require?
Lancaster: To give you a brief history of my awareness of HP Vertica and my involvement with the product, I don't remember the exact year, but it may have been roughly eight years ago. At some point, there was an announcement that Mike Stonebraker was involved in a group that was going to productize the C-Store database, which was an academic project at MIT, to understand the benefits and capabilities of a real column store.
I was immediately interested and contacted them. I was working at another storage company at the time. I had a 20 terabyte (TB) data warehouse, which at the time was one of the largest Oracle on Linux data warehouses in the world.
They didn't want to touch that opportunity just yet, because they were just starting out in alpha mode. I hooked up with them again a few years later, when I was CTO at a company called Glassbeam, where we developed what's substantially an extract, transform, and load (ETL) platform.
By then, they were well along the road. They had a great product and it was solid. So we tried it out, and I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.
When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you're going to end up with a couple of choices on a row store. Either you're going to have very narrow tables, and a lot of them, or else you're going to waste a lot of I/O retrieving entire rows when you just need a couple of fields.
That was what piqued my interest at first. But as I began to use it more and more at Glassbeam, I realized that the performance benefits you could gain by using HP Vertica properly were another order of magnitude beyond what you would expect just with the column-store efficiency.
That's because of certain features that Vertica allows, such as something called pre-join projections. We can drill into that sort of stuff more if you like but, at a high level, it lets you maintain the normalized logical integrity of your schema, while having, under the hood, an optimized, denormalized physical layout on disk for query performance.
Now you might ask how you can be efficient if you have a denormalized structure on disk. It's because Vertica allows you to do some very efficient types of encoding on your data. So all of the low-cardinality columns that would have been wasting space in a row store end up taking almost no space at all.
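To make the encoding point concrete, here is a small Python sketch, not Vertica itself, showing why a sorted, low-cardinality column collapses to almost nothing under run-length encoding. The column values and row counts are invented for illustration.

```python
from itertools import groupby

def run_length_encode(values):
    """Collapse a sorted column into (value, run_length) pairs."""
    return [(v, sum(1 for _ in run)) for v, run in groupby(values)]

# A low-cardinality column, e.g. a model name repeated across a million rows
column = ["model-A"] * 400_000 + ["model-B"] * 500_000 + ["model-C"] * 100_000
encoded = run_length_encode(sorted(column))

print(f"raw entries:     {len(column):,}")
print(f"encoded entries: {len(encoded)} -> {encoded}")
```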
What you find, at least it's been my impression, is that Vertica is the data warehouse that you would have wanted to have built 10 or 20 years ago, but nobody had done it yet.
Nowadays, when I'm evaluating other big data platforms, I always have to look at it from the perspective of it's great, we can get some parallelism here, and there are certain operations that we can do that might be difficult on other platforms, but I always have to compare it to Vertica. Frankly, I always find that Vertica comes out on top in terms of features, performance, and usability.
Gardner: When you arrived there at Nimble Storage, what were they using, and where are you now on your journey into a transition to Vertica?
Lancaster: I built the environment here from the ground up. When I got here, there were roughly 30 people. It's a very small company. We started with Postgres. We started with something free. We didn’t want to have a large budget dedicated to the backing infrastructure just yet. We weren’t ready to monetize it yet.
So, we started on Postgres and we've scaled up now to the point where we have about 100 TBs on Postgres. We get decent performance out of the database for the things that we absolutely need to do, which are micro-batch updates and transactional activity. We get that performance because the database lives on Nimble Storage.
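For readers curious what a micro-batch load like that might look like, here is a minimal sketch using the psycopg2 driver. The table, columns, and connection string are hypothetical, not Nimble's actual schema.

```python
import psycopg2
from psycopg2.extras import execute_values

def load_batch(conn, points):
    """Insert one micro-batch of (array_id, ts, metric, value) sensor points."""
    with conn, conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO sensor_points (array_id, ts, metric, value) VALUES %s",
            points,
            page_size=1000,
        )

# Placeholder connection details and a one-row example batch
conn = psycopg2.connect("dbname=telemetry")
batch = [("array-42", "2014-01-13 08:30:00", "read_latency_us", 512)]
load_batch(conn, batch)
```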
I don't know what the largest unsharded Postgres instance is in the world, but I feel like I have one of them. It's a challenge to manage and leverage. Now, we've gotten to the point where we're really enjoying doing larger queries. We really want to understand the entire installed base of how we want to do analyses that extend across the entire base.
We want to understand the lifecycle of a volume. We want to understand how it grows, how it lives, what its performance characteristics are, and then how it gradually falls into senescence when people stop using it. It turns out there is a lot of really rich information that we now have access to for understanding storage lifecycles in a way I don't think was possible before.
But to do that, we need to take our infrastructure to the next level. So we've been doing that. We've loaded a large amount of our sensor data -- the numerical data I talked about earlier -- into Vertica, started to compare the queries, and then started to use Vertica more and more for all the analysis we're doing.
Internally, we're using Vertica, just because of the performance benefits. I can give you an example. We had a particular query, a particularly large query. It was to look at certain aspects of latency over a month across the entire installed base to understand a little bit about the distribution, depending on different factors, and so on.
We ran that query in Postgres, and depending on how busy the server was, it took anywhere from 12 to 24 hours to run. On Vertica, to run the same query on the same data takes anywhere from three to seven seconds.
I anticipated that because we were aware upfront of the benefits we'd be getting. I've seen it before. We knew how to structure our projections to get that kind of performance. We knew what kind of infrastructure we'd need under it. I'm really excited. We're getting exactly what we wanted and better.
This is only a three node cluster. Look at the performance we're getting. On the smaller queries, we're getting sub-second latencies. On the big ones, we're getting sub-10 second latencies. It's absolutely amazing. It's game changing.
People can sit at their desktops now, manipulate data, come up with new ideas and iterate without having to run a batch and go home. It's a dramatic productivity increase. Data scientists tend to be fairly impatient. They're highly paid people, and you don’t want them sitting at their desk waiting to get an answer out of the database. It's not the best use of their time.
Gardner: Larry, is there another aspect to the HP Vertica value when it comes to the cloud model for deployment? It seems to me that if Nimble Storage continues to grow rapidly and scales that, bringing all that data back to a central single point might be problematic. Having it distributed or in different cloud deployment models might make sense. Is there something about the way Vertica works within a cloud services deployment that is of interest to you as well?
Lancaster: There's the ease of adding nodes without downtime, and the fact that you can create a K-safe cluster. If my cluster is 16 nodes wide now, and I want two nodes of redundancy, it's very similar to RAID. You can specify that, and the database will take care of it for you. You don't have to worry about the database going down and losing data as a result of a node failure or two.
I love the fact that you don’t have to pay extra for that. If I want to put more cores or nodes on it or I want to put more redundancy into my design, I can do that without paying more for it. Wow! That’s kind of revolutionary in itself.
It's great to see a database company incented to give you great performance. They're incented to help you work better with more nodes and more cores. They don't have to worry about people not being able to pay the additional license fees to deploy more resources. In that sense, it's great.
We have our own private cloud -- that’s how I like to think of it -- at an offsite colocation facility. We do DR through Nimble Storage. At the same time, we have a K-safe cluster. We had a hardware glitch on one of the nodes last week, and the other two nodes stayed up, served data, and everything was fine.
Those kinds of features are critical, and that ability to be flexible and expand is critical for someone who is trying to build a large cloud infrastructure, because you're never going to know in advance exactly how much you're going to need.
If you do your job right as a cloud provider, people just want more and more and more. You want to get them hooked and you want to get them enjoying the experience. Vertica lets you do that.
You may also be interested in:
- MZI Healthcare Identifies Big Data Patient Productivity Gems Using HP Vertica
- Thought Leader Interview: HP's Global CISO Brett Wahlin on the future of Security and Risk
- Panel explains how CSC creates a tough cybersecurity posture against global threats
- Risk and complexity: Businesses need to get a grip
- HP Vertica General Manager Colin Mahony on the next generation of analytics platforms
- Advanced IT monitoring Delivers Predictive Diagnostics Focus to United Airlines
- CSC and HP team up to define the new state needed for comprehensive enterprise cybersecurity
- BYOD brings new security challenges for IT: Allowing greater access while protecting networks
- HP Vertica Architecture Gives Massive Performance Boost to Toughest BI Queries for Infinity Insurance