
Can the Cloud Do ‘In Perpetuity’?

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build

Cloud computing is great, right? As a way to get something up and running quickly, affordably, and with a minimum of fuss, it can rarely be beaten.

But some of the most compelling attributes of the public cloud are best suited to ephemeral or (relatively!) short-term use cases. You can spin up a cloud server in minutes. You can scale a cloud-based application to cope with the peaks and troughs of demand. You can control all of this through a web console, with no more than a credit card and a laptop. Silicon Valley, SoMa, Silicon Alley, Silicon Roundabout, Silicon Allee, Silicon Wadi, Silicon Forest, Silicon Welly, and the Silicon Bog (only one of those was made up, I think) are full to bursting with bright young things building exciting new products (and silly photo sharing sites) powered only by the cloud and expensive coffee.
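
Incidentally, that 'minutes, with no more than a credit card and a laptop' claim is easy to make concrete. Here's a minimal sketch using the AWS SDK for Python (boto3); the region, AMI ID, and instance type are illustrative placeholders, and credentials are assumed to be configured already:

```python
# A minimal sketch: launching a cloud server programmatically with boto3.
# Region, AMI ID, and instance type are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```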

And then you have government, private, and commercial Archives, with an overriding imperative to keep stuff for a very, very long time. These Archives clearly can (and do) use cloud computing in the same ways as everyone else. They use clouds to cost-effectively transform data from one format to another, they use clouds to stream large and popular media files to the public, and they use clouds in all sorts of other ways to make innumerable workflows and processes easier, cheaper, or more robust. For those use cases, even the biggest, grandest, and most important of archives is actually pretty much like any other user. Cloud’s as useful to them as it is to the rest of us, and that’s great.

Does it make sense, though, for Archives to entrust any of their long-term preservation role to the cloud? I’m not sure (yet), but The National Archives (TNA) here in the UK wants to find out. They’ve commissioned a study from a small consultancy, Charles Beagrie, and I’m subcontracted to provide a bit of cloud knowledge to the team.

Out of the box, you’d have to question the sense of an archive entrusting anything to the public cloud for purposes of long-term preservation. That’s not really what Amazon’s Simple Storage Service or Rackspace’s Cloud Files or any of the other cloud-based filestores are for. Their Service Level Agreements and their technical underpinnings are all about cost-effectively storing lots of stuff and losing as little as possible. If a file is lost or damaged, the service provider might pay out a few service credits, and/or the customer might restore from a backup, and everyone continues on their way. Archivists, we were reminded at one of the project’s focus groups, have this peculiar expectation that the systems they use to preserve their primary materials won’t lose anything at all. A couple of service credits don’t really help when you just lost, truncated, or changed a few words in the digital equivalent of the Magna Carta or the Domesday Book or the Book of Kells or the Declaration of Arbroath. And, just to be totally clear, losing a digital copy of the Declaration of Arbroath would be ok. The National Archives of Scotland still has the vellum (I presume their copy was written on vellum?) in a climate-controlled vault. They probably also have a CD or two of backups for the digital images. Things become a bit more serious when the content is ‘born digital,’ and the file you’re preserving is the thing itself and not just an image of some physical artefact.
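
To make that peculiar expectation concrete: the standard defence is an independent fixity check, comparing a checksum the archive holds against what the cloud store reports. A minimal sketch with boto3 follows (the bucket, key, and path are made-up examples); note that S3's ETag only equals an object's MD5 for single-part uploads, which is one reason real preservation systems record their own checksums at ingest:

```python
# A sketch of an independent fixity check against S3 (names are assumed examples).
# Caveat: the S3 ETag equals the object's MD5 only for single-part uploads.
import hashlib

import boto3

s3 = boto3.client("s3")

def fixity_ok(bucket: str, key: str, local_path: str) -> bool:
    """Compare the MD5 of a locally held master copy with the ETag S3 reports."""
    md5 = hashlib.md5()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    return md5.hexdigest() == etag

if not fixity_ok("archive-masters", "arbroath.tiff", "/vault/arbroath.tiff"):
    print("Fixity mismatch: time to escalate, not to collect service credits.")
```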

Even with archival-ish services like Glacier, which Amazon says

is designed to provide average annual durability of 99.999999999% for an archive. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon Glacier synchronously stores your data across multiple facilities before returning SUCCESS on uploading archives. Unlike traditional systems that can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing,

(my emphasis)

the big public cloud providers aren’t really in the business of supporting the extreme needs of an Archive. Archives demand a whole extra level of error checking, resilience, redundancy and integrity, and it would be cost-prohibitive for AWS and their competitors to do all that across their sprawling data centres when most customers are actually perfectly happy with “redundantly stores data in multiple facilities” and “automatically self-healing.”
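
It's worth doing the arithmetic on those eleven nines, too. Durability of 99.999999999% implies an expected loss rate of one in 10^11 archives per year, and a quick back-of-envelope calculation (the holding size here is an assumption) shows just how small that is:

```python
# Back-of-envelope arithmetic on Glacier's claimed "eleven nines" of durability.
annual_loss_probability = 1e-11   # i.e. 1 - 0.99999999999
archives_stored = 10_000_000      # an assumed, generously large institutional holding

expected_losses_per_year = archives_stored * annual_loss_probability
print(f"Expected losses per year: {expected_losses_per_year}")             # ~0.0001
print(f"Mean years between losses: {1 / expected_losses_per_year:,.0f}")   # ~10,000
```

Impressive, but it's a statistical average across Amazon's entire fleet rather than a promise about your particular file, and averages are exactly what archivists can't rely on when the one file lost is the digital Domesday Book.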

Interestingly, Seagate sees value in offering a Glacier competitor capable of keeping data “intact for decades” while providing access instantly, rather than in a matter of hours as Glacier does. As it’s based in Utah, I doubt that European government archives would touch it, but it will be interesting to see whether their North American cousins show any interest…

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build. Archivists, like others, have begun to layer rules, policies, procedures and processes on top of the bare-bones cloud infrastructure offerings, to build something a little more robust and dependable. Services like DuraCloud take AWS and Rackspace (currently only in their US data centres, but that could change), and add things like proactive error checking and even more backups to deliver something that an archivist might be prepared to trust.
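
The shape of that extra layer is simple enough to sketch. The following periodic fixity audit is illustrative of the general approach rather than DuraCloud's actual implementation; the storage interface and manifest format are assumptions:

```python
# An illustrative periodic fixity audit, the kind of proactive error checking
# that preservation layers add on top of raw cloud storage (assumed interfaces).
import hashlib
import json

def audit(storage, manifest_path: str) -> list[str]:
    """Recompute checksums for every stored object and report mismatches.

    `storage` is assumed to expose get(key) -> bytes; the manifest maps
    object keys to the SHA-256 digests recorded when each object was ingested.
    """
    with open(manifest_path) as f:
        manifest = json.load(f)

    failures = []
    for key, recorded_sha256 in manifest.items():
        current = hashlib.sha256(storage.get(key)).hexdigest()
        if current != recorded_sha256:
            failures.append(key)  # a candidate for repair from another replica
    return failures
```

Run that on a schedule and repair anything it flags from a second copy, and you have the 'even more backups' part.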

There’s a use case here, and there are plenty of (mostly university) archives in the States putting DuraCloud and similar cloud-powered tools to work as part of their preservation strategy.

But I can’t help wondering whether some great big enterprise data management solution, with multiply redundant disks, multiply redundant backups, and a whole heap of watertight, ironclad, fault-tolerant, and ridiculously over-specified policies, might be a better (albeit eye-wateringly expensive) way to preserve the truly irreplaceable. Either that, or archives and archivists need to explicitly embrace a more pragmatic approach to what they’re attempting with these systems.

‘Design for failure’ is a core tenet of cloud-powered systems. What’s the archival equivalent? ‘Lose nothing, ever’ just won’t cut it.

Disclaimer: Charles Beagrie is a client. TNA is a client of theirs. This post is not part of the project. Any opinions expressed here are my own, a work in progress… and subject to change!

Image of The National Archives by Flickr user ‘electropod’


More Stories By Paul Miller

Paul Miller works at the interface between the worlds of Cloud Computing and the Semantic Web, providing the insights that enable you to exploit the next wave as we approach the World Wide Database.

He blogs at www.cloudofdata.com.
