The Hottest Panels of this Fall!

It’s probably a good idea to state I wrote this blog while employed by Amplidata, but during my own time. This article reflects my own opinion, not necessarily that of Amplidata or its partners.

As I write this, I am crossing the Atlantic for the seventh time in about two months. I’m on my way to CloudExpo West in Santa Clara, one of the few technology trade shows that are still growing. At the event I will sit on the last Object Storage for Big Data panel of the season. Robin Harris – aka StorageMojo – and I have been working hard this fall to educate the industry on the benefits, challenges and opportunities of Object Storage. We have been trying to explain how the current generation of Object Storage platforms differs from the first attempt at it (EMC’s Centera), how it enables companies to cope with the massive amounts of unstructured data we are all generating, and how companies can even monetize archived data by re-activating their archives.

Unlike StorageMojo and some of the other people I have been working with lately, I don’t have decades of experience in the storage industry. However, being located in Belgium, I’ve had the privilege of working with people who were part of the Filepool team (and spent years at EMC after the acquisition). Those were the earliest object storage days; I had no idea of what was coming. Later, at Sun, I learned a lot about Object Storage while we were working on the Sun Cloud project. The architecture (ZFS) was different from what we see on the market today, but the concept was – as was often the case at Sun – promising. This article is not another attempt at describing Object Storage and the benefits it brings; it is rather an overview of what we have learned at the past four Object Storage for Big Data panels. The setup for each panel was mostly the same: Robin Harris would challenge four to six Object Storage specialists (technology vendors or users) and try to get the audience to participate. We did expect the topics of the panels to differ, as we were hosted by trade shows with different audiences, but we never expected the discussions to vary as much as they did.

The common thread across the panels was the challenge companies face in storing different types of Big Data, and more particularly Big Unstructured Data. The latter represents up to 90% of the digital data we will generate over the next decades and will put traditional storage technologies under heavy stress as they hit their scalability limits. Unstructured data is currently stored mostly in file-system-based storage infrastructures. File systems will not only be unable to scale as required – try setting up a file structure for 5 petabytes of data – but they will also become obsolete, as applications can provide many more features to keep your unstructured data organized (structured?), to analyze that information, and potentially to monetize what is today stored in (dead) tape archives. Rich applications that talk directly to a large and (infinitely) scalable storage pool make a lot more sense than maintenance-intensive file systems. Also, properly designed Object Storage (with erasure coding technology instead of RAID to protect the data) requires far less overhead, consumes far less power, can easily span multiple sites, and does not require migration to new systems when an existing system cannot be scaled further. So what else did we discuss at the panels?
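To make the overhead point concrete, here is a back-of-the-envelope comparison of raw-capacity overhead. The policies are illustrative assumptions of mine, not any vendor’s actual numbers: classic 3-way replication versus a hypothetical erasure-coding policy with 10 data blocks and 6 check blocks.

```python
# Illustrative raw-capacity overhead comparison (hypothetical policies,
# not any vendor's published numbers).

def overhead_pct(raw_per_usable: float) -> float:
    """Extra raw capacity needed per unit of usable data, in percent."""
    return (raw_per_usable - 1.0) * 100.0

replication = overhead_pct(3.0)      # three full copies of every object
erasure = overhead_pct(16 / 10)      # 10 data + 6 check blocks per object

print(f"3-way replication: {replication:.0f}% overhead, survives 2 losses")
print(f"16/10 erasure code: {erasure:.0f}% overhead, survives 6 losses")
```

Even with these rough numbers, the erasure-coded pool stores the same data with roughly a third of the extra raw capacity while tolerating more simultaneous failures.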

The first panel after the summer was at Intel’s IDF in San Francisco. Panel members came from Intel and Quanta, who together with Amplidata built an Object Storage reference architecture. We also had Michelle Munson of Aspera, who presented a couple of perfect use cases for Object Storage in the media and entertainment industry. Aspera has developed a very smart way to transfer large amounts of data over the WAN far more efficiently than is currently done. Aspera’s bandwidth optimization software practically enables this new generation of Object Storage by taking away the latency issue, e.g. to stream high-res movies over a long distance. Once we had explained the drivers for Object Storage, the opportunities and best practices, most of the discussion (questions from the audience) was about why RAID is not the right technology on which to architect an Object Storage platform. We discussed the benefits of erasure coding in much detail and spent a lot of time on the differences from RAID. In short: in erasure-coding-based systems all disks are equal (all parity), and there is no need to rebuild a disk when it breaks: when codes are lost due to bit errors or hardware failures, new codes can be generated and spread over the whole pool, not just one system. A recent and very good independent deep dive into the Amplidata erasure coding technology can be found here.
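The rebuild principle can be sketched in a few lines of code. This is a toy illustration of mine, not Amplidata’s actual codes: real platforms use Reed-Solomon-style codes that survive many simultaneous failures, whereas the single XOR parity chunk below survives exactly one lost chunk. The key idea is the same, though: the missing piece is regenerated from the surviving pool rather than by rebuilding a disk.

```python
# Toy erasure-style protection: k data chunks + 1 XOR parity chunk.
# Any single missing chunk can be regenerated from the survivors.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal chunks and append one XOR parity chunk."""
    padded = data + b"\x00" * ((-len(data)) % k)   # pad to a multiple of k
    size = len(padded) // k
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]                        # k data + 1 parity

def rebuild(chunks: list, lost: int) -> bytes:
    """Regenerate one missing chunk by XOR-ing all the survivors."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    out = survivors[0]
    for c in survivors[1:]:
        out = xor_bytes(out, c)
    return out

chunks = encode(b"massive unstructured data sets", 5)
assert rebuild(chunks, 2) == chunks[2]   # a "dead disk" holding chunk 2
```

A production erasure code generalizes this to m check blocks, so any m of the k+m chunks can be lost at once; the spreading of regenerated chunks across the whole pool is what removes the classic RAID rebuild bottleneck.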

There was a lot less RAID and erasure coding at the Createasphere DAM Show in New York a few weeks later. The show focuses on Digital Asset Management, and the attendees are more interested in the applications and content than in the actual data. That did not make the discussion any less interesting. From Sarah Berndt of Johnson Space Center we learned a *lot* about the importance of metadata, an issue that would be discussed at SNW Europe as well (see below). An interesting newcomer on the panel was Dalet, a DAM vendor that integrates with many Object Storage platforms and sees a clear benefit in having its platform interface with a scale-out storage pool directly (REST) rather than through an additional file system. Dalet is the perfect valet in my car analogy, which is becoming more and more popular: a file system is like a public parking lot where you have to go find your car yourself (this once took me a few hours at Paris CDG airport). Object Storage is much more like valet parking, where you get a ticket when you leave your car and use that ticket to get it back later. The application, Dalet, is the valet.
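The valet analogy maps neatly onto code. The sketch below is a hypothetical in-memory store of my own, not Dalet’s or any vendor’s API: the application hands over an object, receives an opaque ticket, and redeems the ticket later. There is no directory tree to maintain or traverse, just a flat, ticket-addressed pool.

```python
import uuid

class ValetStore:
    """Toy object store: put() returns an opaque ticket, get() redeems it."""

    def __init__(self):
        self._pool = {}   # flat pool: ticket -> (data, metadata)

    def put(self, data, metadata=None):
        ticket = uuid.uuid4().hex          # the valet ticket
        self._pool[ticket] = (data, metadata or {})
        return ticket

    def get(self, ticket):
        data, _ = self._pool[ticket]
        return data

store = ValetStore()
ticket = store.put(b"high-res master file", {"show": "Createasphere DAM"})
assert store.get(ticket) == b"high-res master file"
```

The application (the valet) keeps the tickets and whatever rich metadata it wants; the storage pool only needs to answer "here is a ticket, give me my object back," which is what lets it scale flat instead of growing an ever-deeper directory hierarchy.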

At SNW USA in Santa Clara in October we had David Chapa of Quantum on board for the first time. David is an authority on the use cases where tape is the better alternative and on when it is better to use Object Storage, or Wide Area Storage (WAS) as Quantum calls it. WAS is Quantum’s attempt to take away the confusion caused by the name Object Storage, a term first used by EMC almost a decade ago. I think it is a good idea of Quantum’s to try to introduce a new term; I’m just not sure WAS is the best choice. Maybe something new will come up next month at Greg Duplessie’s Object Storage summit, although I doubt it. Once we had more or less agreed that this generation of Object Storage, or whatever it ends up being called, has very little or nothing to do with EMC’s product line – most famous for locking in customers – the conversation took a very sudden turn. In an attempt to spice up the discussion, Ranajit Nevatia of Panzura claimed that Object Storage provides very bad performance. That was very much true for the first generation of Object Storage platforms we had just discussed, and it might be true of the platforms Panzura currently promotes (including Atmos, EMC’s second attempt at Object Storage), but it is not at all true for the technologies that are most successful on the market today. Scality has been promoting its high IOPS (smaller files, IO-intensive workloads). Amplidata focuses more on large-file storage, which is in my opinion the more obvious use case for Object Storage, though I may be biased. In a recent independent test, Amplidata demonstrated throughput numbers that can only be called extremely high: Howard Marks confirmed that Amplidata provides 1 GB/s of throughput with a single controller. Better still, Amplidata scales throughput linearly by adding more controllers, so a system with six controllers provides 6 GB/s of throughput.

Last week’s panel at SNW Europe, which is traditionally well attended by press and analysts, was again very interactive. Robin Harris set the stage by explaining how this generation of Object Storage differs from earlier products. This led to a lengthy discussion about APIs, a call for one standard API (I say let’s all just standardize on Amazon’s), and complaints about lock-in by … yes, EMC. Vendors be warned: that trick is getting old and earns no respect. The audience included some of the better analysts and bloggers, including the 451’s Simon Robinson and Storagebod. The latter, known as a critic of the Object Storage paradigm (with great arguments), helped us take the discussion to the next level by bringing up interesting topics such as the importance of metadata for the applications: who or what will enter the metadata? The application? People? The panel acknowledged that, while applications already generate a fair amount of metadata, companies will have to make business decisions about how much metadata they need; adding more metadata comes at a cost, as it requires manual work. The day after the panel, it was interesting to see Chris Mellor be critical of Object Storage in his review of the show (how dare the Object Storage vendors doubt the many benefits of tape?). Chris, join us on the panel next time!


More Stories By Tom Leyden

Tom Leyden is VP Product Marketing at Scality. Scality was founded in 2009 by a team of entrepreneurs and technologists. The idea wasn’t storage, per se. When the Scality team talked to the initial base of potential customers, the customers wanted a system that could “route” data to and from individual users in the most scalable, efficient way possible. And so began a non-traditional approach to building a storage system that no one had imagined before. No one thought an object store could have enough performance for all the files and attachments of millions of users. No one thought a system could remain up and running through software upgrades, hardware failures, capacity expansions, and even multiple hardware generations coexisting. And no one believed you could do all this and scale to petabytes of content and billions of objects in pure software.
