By Tom Leyden
November 8, 2012 09:00 AM EST
It’s probably a good idea to state up front that I wrote this blog while employed by Amplidata, but on my own time. This article reflects my own opinion, not necessarily that of Amplidata or its partners.
As I write this, I am crossing the Atlantic for the seventh time in about two months. I’m on my way to CloudExpo West in Santa Clara, one of the few technology trade shows that are still growing. At the event I will be sitting on the last Object Storage for Big Data panel of the season. Robin Harris – aka StorageMojo – and I have been working hard this fall to educate the industry on the benefits, challenges and opportunities of Object Storage. We’ve been trying to explain how the current generation of Object Storage platforms differs from the first attempt at it (EMC’s Centera), how it helps companies cope with the massive amounts of unstructured data that we are all generating, and how companies can even monetize archived data by re-activating their archives.
Unlike StorageMojo and some other people I have been working with lately, I don’t have decades of experience in the storage industry. However, being located in Belgium, I’ve had the privilege of working with people who used to be part of the Filepool team (and spent years at EMC after the acquisition). Those were the earliest object storage days, and I had no idea what was coming. Later, at Sun, I learned a lot about Object Storage when we were working on the Sun Cloud project. The architecture (ZFS) was different from what we are seeing on the market today, but the concept was – as was often the case at Sun – promising. This article is not another attempt at describing Object Storage and the benefits it brings; it’s more an overview of what we have learned at the past four Object Storage for Big Data panels. The setup for each panel was mostly the same: Robin Harris would challenge between four and six Object Storage specialists (technology vendors or users) and try to get the audience to participate. We did expect the topics of the panels to differ, as we were hosted by trade shows with different audiences, but we never expected the discussions to vary as much as they did.
The common thread for each panel was the challenge companies face storing different types of Big Data, and more particularly Big Unstructured Data. The latter represents up to 90% of the digital data that we will be generating over the next decades and will put traditional storage technologies under heavy stress as they hit their scalability limits. Unstructured data is currently mostly stored in file-system-based storage infrastructures. File systems will not only be unable to scale as required (try setting up a file structure for 5 petabytes of data), but they will also become obsolete as applications can provide far more features to keep your unstructured data organized (structured?), to analyze that information, and potentially to monetize what is today stored in (dead) tape archives. Rich applications that talk directly to a large and (infinitely) scalable storage pool make a lot more sense than maintenance-intensive file systems. Also, properly designed Object Storage (with erasure coding technology instead of RAID to protect the data) requires a lot less overhead, consumes a lot less power, can easily be implemented across multiple sites, and does not require migration to new systems when a system cannot be scaled any further. So what else did we discuss at the panels?
The first panel after summer was at Intel’s IDF in San Francisco. Panel members came from Intel and Quanta, which, together with Amplidata, built an Object Storage reference architecture. We also had Michelle Munson of Aspera, who presented a couple of perfect use cases for Object Storage in the media and entertainment industry. Aspera developed a very smart way to transfer large amounts of data over the WAN far more efficiently than is currently done. Aspera’s bandwidth optimization software practically enables this new generation of Object Storage by taking away the latency issue, e.g. to stream high-res movies over a long distance. Once we had explained the drivers for Object Storage, the opportunities and best practices, most of the discussion (questions from the audience) was about why RAID is not the right technology with which to architect an Object Storage platform. We discussed the benefits of erasure coding in much detail and spent a lot of time on the differences with RAID. In short: in erasure-coding-based systems, all disks are equal (all store codes) and there is no need to rebuild a broken disk: when codes are lost due to bit errors or hardware failures, new codes can be generated and spread over the whole pool, not just one system. A recent and very good independent deep dive into the Amplidata erasure coding technology can be found here.
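The difference with RAID can be illustrated with a toy sketch. The snippet below splits an object into k data fragments plus one XOR parity fragment; any single lost fragment can be regenerated from the survivors, and the regenerated codes could land anywhere in the pool rather than on one designated spare disk. This is only a minimal illustration of the idea: production systems such as Amplidata’s use Reed-Solomon-style codes that tolerate many simultaneous losses, and the function names here are my own.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)                      # ceiling division
    padded = data.ljust(frag_len * k, b"\0")           # pad to equal fragments
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    frags.append(reduce(xor_bytes, frags))             # parity fragment
    return frags

def recover(frags: list) -> bytes:
    """Regenerate a single missing fragment (marked None) from the survivors."""
    lost = frags.index(None)
    frags[lost] = reduce(xor_bytes, (f for f in frags if f is not None))
    return frags[lost]

# Lose any one fragment and rebuild it from the rest of the pool:
frags = encode(b"big unstructured data", k=4)
original = frags[2]
frags[2] = None
assert recover(frags) == original
```

With one parity fragment this tolerates one loss; the generalization to m parity fragments (tolerating m simultaneous failures across the pool) is what makes per-disk RAID rebuilds unnecessary.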
There was a lot less RAID and erasure coding at the Createasphere DAM Show in New York a few weeks later. The show focuses on Digital Asset Management, and the attendees are more interested in the applications and content than the actual data. That did not make the discussion any less interesting. From Sarah Berndt of Johnson Space Center we learned a *lot* about the importance of metadata, an issue that would be discussed at SNW Europe as well (see below). An interesting newcomer on the panel was Dalet, a DAM vendor that integrates with many Object Storage platforms and sees a clear benefit in having its platform interface with a scale-out storage pool directly (REST) rather than through an additional file system. Dalet is the perfect valet in my car analogy, which is becoming more and more popular: a file system is like a public parking lot where you have to go find your car yourself (this once took me a few hours in Paris’ CDG airport). Object storage is much more like valet parking, where you get a ticket when you leave your car and use that ticket to get it back later. The application, Dalet, is the valet.
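The valet analogy maps directly onto code. In the minimal in-memory sketch below, the application hands an object to a flat storage pool and gets back an opaque ticket (the object ID); it never navigates a directory tree to find the object again. A real platform would expose the same contract over a REST API (PUT returning an ID, GET by ID); the class and method names here are illustrative, not any vendor’s actual API.

```python
import uuid

class ObjectStore:
    """Toy flat object pool: no directories, just tickets."""

    def __init__(self):
        self._pool = {}                      # ticket -> (data, metadata)

    def put(self, data: bytes, metadata: dict) -> str:
        ticket = str(uuid.uuid4())           # the "valet ticket"
        self._pool[ticket] = (data, metadata)
        return ticket

    def get(self, ticket: str):
        return self._pool[ticket]            # no path lookup, just the ticket

store = ObjectStore()
ticket = store.put(b"raw footage", {"show": "evening news", "codec": "ProRes"})
data, meta = store.get(ticket)
```

The application (the valet) is the only party that needs to remember which ticket belongs to which asset, which is exactly why rich metadata in the DAM layer matters so much.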
At SNWUSA in Santa Clara in October we had David Chapa of Quantum on board for the first time. David is an authority on the use cases where tape is the better alternative and on when it is better to use Object Storage, or Wide Area Storage (WAS) as Quantum calls it. WAS is Quantum’s attempt to take away the confusion caused by the name Object Storage, a term first used by EMC almost a decade ago. I think it’s a good idea of Quantum to try to introduce a new term; I’m not sure WAS is the best choice, though. Maybe something new will come up next month at Greg Duplessie’s Object Storage summit, although I doubt it. Once we more or less agreed that this generation of Object Storage, or whatever it will be called later, has very little or nothing to do with EMC’s product line that was most famous for locking in customers, the conversation took a sudden turn. In an attempt to spice up the discussion, Ranajit Nevatia of Panzura claimed Object Storage provides very bad performance. This was very much true for the first generation of Object Storage platforms we had just discussed, and might be true of the platforms they currently promote (including Atmos, EMC’s second attempt at Object Storage), but not at all for the technologies that are most successful on the market today. Scality have been promoting their high IOPS (smaller files, IO-intensive workloads). Amplidata focus more on large-file storage, which is IMO the more obvious use case for Object Storage, but I may be biased. In a recent independent test, Amplidata demonstrated throughput numbers that can only be called “extremely high-performant”. Howard Marks confirmed Amplidata provides 1 GB/s of throughput with a single controller. But it gets better: Amplidata scales throughput linearly by adding more controllers, so a system with 6 controllers provides 6 GB/s of throughput.
Last week’s panel at SNW Europe, which is traditionally well attended by press and analysts, was again very interactive. Robin Harris set the stage by explaining how this generation of Object Storage is different from earlier products. This led to a lengthy discussion about APIs, a call for one standard API (I say let’s just all standardize on Amazon) and complaints about lock-ins by … yes, EMC. Vendors be warned: that trick is getting old and is not getting any respect. The audience included some of the better analysts and bloggers, including the451′s Simon Robinson and Storagebod. The latter, known for being a critic of the Object Storage paradigm (with great arguments), helped us bring the discussion to the next level by raising interesting topics such as the importance of metadata for the applications: who or what will enter metadata? The application? People? The panel acknowledged that, while applications already generate a fair amount of metadata, companies will have to make business decisions on how much metadata they need. Adding more metadata comes at a cost, as it will require manual work. The day after the panel, it was interesting to see Chris Mellor be critical of Object Storage in his review of the show (how dare the Object Storage vendors doubt the many benefits of tape?). Chris, join us on the panel next time!