The Hottest Panels of this Fall!

It’s probably a good idea to state I wrote this blog while employed by Amplidata, but during my own time. This article reflects my own opinion, not necessarily that of Amplidata or its partners.

As I am writing this, I am crossing the Atlantic for the seventh time in about two months. I’m on my way to CloudExpo West in Santa Clara, one of the few technology trade shows that are still growing. At the event I will be sitting on the last Object Storage for Big Data panel of the season. Robin Harris – aka StorageMojo – and I have been working hard this fall educating the industry on the benefits, challenges and opportunities of Object Storage. We’ve been trying to explain how different the current generation of Object Storage platforms is from the first attempt at it (EMC’s Centera), how it enables companies to cope with the massive amounts of unstructured data that we are all generating, and how companies can even monetize archived data by re-activating their archives.

Unlike StorageMojo and some of the other people I have been working with lately, I don’t have decades of experience in the storage industry. However, being located in Belgium, I’ve had the privilege of working with people who used to be part of the FilePool team (and who spent years at EMC after the acquisition). Those were the earliest object storage days, and I had no idea of what was coming. Later, at Sun, I learned a lot about Object Storage when we were working on the Sun Cloud project. The architecture (ZFS-based) was different from what we are seeing on the market today, but the concept was – as was often the case at Sun – promising. This article is not another attempt at describing Object Storage and the benefits it brings; it’s more an overview of what we have learned at the past four Object Storage for Big Data panels. The setup for each of the panels was mostly the same: Robin Harris would challenge between four and six Object Storage specialists (technology vendors or users) and try to get the audience to participate. We did expect the topics of the panels to differ, as we were hosted by trade shows with different audiences, but we never expected the discussions to vary as much as they did.

The common thread for each panel was the challenge companies face in storing different types of Big Data, and more particularly Big Unstructured Data. The latter represents up to 90% of the digital data that we will be generating over the next decades and will put traditional storage technologies under heavy stress as they hit their scalability limits. Unstructured data is currently mostly stored in file-system-based storage infrastructures. File systems will not only be unable to scale as required – try setting up a file structure for 5 petabytes of data – but they will also become obsolete, as applications can provide far more features to keep your unstructured data organized (structured?), to analyze that information, and potentially to monetize what is today stored in (dead) tape archives. Rich applications that talk directly to a large and (infinitely) scalable storage pool make a lot more sense than maintenance-intensive file systems. Also, properly designed Object Storage (with erasure coding technology instead of RAID to protect the data) requires a lot less overhead, consumes a lot less power, can easily be implemented over multiple sites, and does not require migration to new systems when a system cannot be scaled further. So what else did we discuss at the panels?
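To make that “talk directly to the storage pool” idea concrete, here is a minimal sketch of an application storing and retrieving an object over a REST API instead of through a file system. The endpoint, credentials and bucket name are hypothetical, and I am assuming an S3-compatible interface, which most of the current generation of platforms expose in one form or another:

```python
import boto3

# Hypothetical S3-compatible endpoint and demo credentials -- most
# modern object stores speak some dialect of this REST API.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="DEMO_KEY",
    aws_secret_access_key="DEMO_SECRET",
)

# PUT: the application hands the object (plus a key) to the pool;
# no directory trees, no mount points, no file system to maintain.
s3.put_object(Bucket="media-archive",
              Key="news/2012/clip-001.mxf",
              Body=b"...video payload...")

# GET: retrieve it later by key, from wherever the pool put it.
clip = s3.get_object(Bucket="media-archive", Key="news/2012/clip-001.mxf")
data = clip["Body"].read()
```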

The first panel after summer was at Intel’s IDF in San Francisco. Panel members came from Intel and Quanta, who together with Amplidata built an Object Storage reference architecture. We also had Michelle Munson of Aspera, who presented a couple of perfect use cases for Object Storage in the media and entertainment industry. Aspera developed a very smart way to transfer large amounts of data over the WAN far more efficiently than is currently done. Aspera’s bandwidth optimization software practically enables this new generation of Object Storage by taking away the latency issue, e.g. to stream high-res movies over a long distance. Once we had explained the drivers for Object Storage, the opportunities and best practices, most of the discussion (questions from the audience) was about why RAID is not the right technology to architect an Object Storage platform with. We discussed the benefits of erasure coding in great detail and spent a lot of time on the differences from RAID. In short: in erasure-coding-based systems, all disks are equal (all parity) and there is no need to rebuild a disk when it breaks: when codes are lost due to bit errors or hardware failures, new codes can be generated and spread over the whole pool, not just one system. A recent and very good independent deep dive into the Amplidata erasure coding technology can be found here.
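For readers who want to see the principle rather than take it on faith, below is a toy sketch of erasure coding: k data values are encoded into n fragments such that any k surviving fragments recover the original. This is my own Reed-Solomon-style illustration using polynomial evaluation over the rationals, not Amplidata’s actual codes (production systems work over Galois fields on large byte streams), but it shows why there is no rebuild of one specific disk: any k fragments will do.

```python
from fractions import Fraction

def encode(data, n):
    """Spread k data values over n fragments (n >= k) by evaluating
    the polynomial whose coefficients are the data at x = 1..n."""
    return [(x, sum(Fraction(c) * x**i for i, c in enumerate(data)))
            for x in range(1, n + 1)]

def decode(fragments, k):
    """Recover the k data values from ANY k surviving fragments via
    Lagrange interpolation; no specific disk ever needs rebuilding."""
    pts = fragments[:k]
    coeffs = [Fraction(0)] * k
    for j, (xj, yj) in enumerate(pts):
        basis, denom = [Fraction(1)], Fraction(1)   # l_j(x) and its scale
        for m, (xm, _) in enumerate(pts):
            if m == j:
                continue
            new = [Fraction(0)] * (len(basis) + 1)  # basis *= (x - xm)
            for i, b in enumerate(basis):
                new[i] += b * (-xm)
                new[i + 1] += b
            basis, denom = new, denom * (xj - xm)
        for i in range(k):
            coeffs[i] += yj * basis[i] / denom
    return [int(c) for c in coeffs]

data = [42, 7, 13, 99]                # k = 4 pieces of data
frags = encode(data, 7)               # n = 7 fragments: survives 3 failures
survivors = [frags[0], frags[2], frags[5], frags[6]]   # any 4 will do
assert decode(survivors, 4) == data
```

Contrast this with RAID, where a failed disk must be rebuilt as a unit from its surviving peers; here the lost fragments are simply regenerated anywhere in the pool.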

A lot less RAID and erasure coding at the Createasphere DAM Show in New York a few weeks later. The show focuses on Digital Asset Management, and the attendees are more interested in the applications and content than in the actual data. That did not make the discussion any less interesting. From Sarah Berndt of Johnson Space Center we learned a *lot* about the importance of metadata, an issue that would be discussed at SNW Europe as well (see further). An interesting newcomer on the panel was Dalet, a DAM vendor that integrates with many Object Storage platforms and sees a clear benefit in having its platform interface with a scale-out storage pool directly (REST) rather than through an additional file system. Dalet is the perfect valet in my car analogy, which is becoming more and more popular: a file system is like a public parking lot where you have to go find your car yourself (this once took me a few hours at Paris’ CDG airport). Object storage is much more like valet parking, where you get a ticket when you leave your car and use that ticket to get it back later. The application, Dalet, is the valet.
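The analogy maps to code quite literally. Below is a hypothetical toy store (the class and its API are invented for illustration, not any vendor’s interface): put() hands over the object and returns an opaque ticket, get() redeems it, and the metadata (the panel’s recurring theme) is what lets the application find things without ever knowing where they are parked.

```python
import uuid

class ValetStore:
    """Toy illustration of the valet-parking analogy (hypothetical API)."""

    def __init__(self):
        self._lot = {}   # the store decides where objects actually live

    def put(self, obj, **metadata):
        ticket = str(uuid.uuid4())            # the valet ticket: an opaque ID
        self._lot[ticket] = (obj, metadata)
        return ticket                          # all the application keeps

    def get(self, ticket):
        return self._lot[ticket][0]

    def find(self, **query):
        """Search by metadata instead of by path -- the DAM's job."""
        return [t for t, (_, md) in self._lot.items()
                if all(md.get(k) == v for k, v in query.items())]

store = ValetStore()
ticket = store.put(b"...raw footage...", show="evening news", year=2012)
assert store.get(ticket) == b"...raw footage..."
assert store.find(show="evening news") == [ticket]
```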

At SNWUSA in Santa Clara in October we had David Chapa of Quantum on board for the first time. David is an authority on the use cases where tape is the better alternative and on when it is better to use Object Storage, or Wide Area Storage (WAS) as Quantum calls it. WAS is Quantum’s attempt to take away the confusion caused by the name Object Storage, a term first used by EMC almost a decade ago. I think it’s a good idea of Quantum to try to introduce a new term; I’m not sure WAS is the best choice, though. Maybe something new will come up next month at Greg Duplessie’s Object Storage Summit, although I doubt it. Once we had more or less agreed that this generation of Object Storage, or whatever it will be called later, has very little or nothing to do with EMC’s product line that was most famous for locking in customers, the conversation took a very sudden turn. In an attempt to spice up the discussion, Ranajit Nevatia of Panzura claimed that Object Storage provides very bad performance. This was very much true for the first generation of Object Storage platforms we had just discussed, and might be true of the platforms they currently promote (including Atmos, EMC’s second attempt at Object Storage), but not at all for the technologies that are most successful on the market today. Scality has been promoting its high IOPS (smaller files, IO-intensive workloads). Amplidata focuses more on large-file storage, which is in my opinion the more obvious use case for Object Storage, but I may be biased. In a recent independent test, Amplidata demonstrated throughput numbers that can only be called “extremely high-performant”. Howard Marks confirmed that Amplidata provides 1 GB/s of throughput with a single controller. But it gets better: Amplidata scales throughput linearly by adding more controllers, so a system with six controllers provides 6 GB/s of throughput.

Last week’s panel at SNW Europe, which is traditionally well attended by press and analysts, was again very interactive. Robin Harris set the stage by explaining how this generation of Object Storage differs from earlier products. This led to a lengthy discussion about APIs, a call for one standard API (I say let’s just all standardize on Amazon’s), and complaints about lock-in by … yes, EMC. Vendors be warned: that trick is getting old and is not getting any respect. The audience included some of the better analysts and bloggers, including the 451 Group’s Simon Robinson and Storagebod. The latter, known for being a critic of the Object Storage paradigm (with great arguments), helped us take the discussion to the next level by bringing up interesting topics such as the importance of metadata for the applications: who or what will enter the metadata? The application? People? The panel acknowledged that, while applications already generate quite some metadata, companies will have to make business decisions on how much metadata they need. Adding more metadata comes at a cost, as it will require manual work. The day after the panel, it was interesting to see Chris Mellor be critical of Object Storage in his review of the show (how dare the Object Storage vendors doubt the many benefits of tape?). Chris, join us on the panel next time!


More Stories By Tom Leyden

Tom Leyden is VP Product Marketing at Scality. Scality was founded in 2009 by a team of entrepreneurs and technologists. The idea wasn’t storage, per se. When the Scality team talked to the initial base of potential customers, the customers wanted a system that could “route” data to and from individual users in the most scalable, efficient way possible. And so began a non-traditional approach to building a storage system that no one had imagined before. No one thought an object store could have enough performance for all the files and attachments of millions of users. No one thought a system could remain up and running through software upgrades, hardware failures, capacity expansions, and even multiple hardware generations coexisting. And no one believed you could do all this and scale to petabytes of content and billions of objects in pure software.
