Cloud Expo: Blog Feed Post

Can the Cloud Do ‘In Perpetuity’?

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build

Cloud computing is great, right? As a way to get something up and running quickly, affordably, and with a minimum of fuss, it can rarely be beaten.

But some of the most compelling attributes of the public cloud are best suited to ephemeral or (relatively!) short-term use cases. You can spin up a cloud server in minutes. You can scale a cloud-based application to cope with the peaks and troughs of demand. You can control all of this through a web console, with no more than a credit card and a laptop. Silicon Valley, SoMa, Silicon Alley, Silicon Roundabout, Silicon Allee, Silicon Wadi, Silicon Forest, Silicon Welly, and the Silicon Bog (only one of those was made up, I think) are full to bursting with bright young things building exciting new products (and silly photo sharing sites) powered only by the cloud and expensive coffee.

And then you have government, private, and commercial Archives, with an overriding imperative to keep stuff for a very, very long time. These Archives clearly can (and do) use cloud computing in the same ways as everyone else. They use clouds to cost-effectively transform data from one format to another, they use clouds to stream large and popular media files to the public, and they use clouds in all sorts of other ways to make innumerable workflows and processes easier, cheaper, or more robust. For those use cases, even the biggest, grandest, and most important of archives is actually pretty much like any other user. Cloud’s as useful to them as it is to the rest of us, and that’s great.

Does it make sense, though, for Archives to entrust any of their long-term preservation role to the cloud? I’m not sure (yet), but The National Archives (TNA) here in the UK wants to find out. They’ve commissioned a study from a small consultancy, Charles Beagrie, and I’m subcontracted to provide a bit of cloud knowledge to the team.

Out of the box, you’d have to question the sense of an archive entrusting anything to the public cloud for purposes of long-term preservation. That’s not really what Amazon’s Simple Storage Service or Rackspace’s Cloud Files or any of the other cloud-based filestores are for. Their Service Level Agreements and their technical underpinnings are all about cost-effectively storing lots of stuff and losing as little as possible. If a file is lost or damaged, the service provider might pay out a few service credits, and/or the customer might restore from a backup, and everyone continues on their way. Archivists, we were reminded at one of the project’s focus groups, have this peculiar expectation that the systems they use to preserve their primary materials won’t lose anything at all. A couple of service credits don’t really help when you’ve just lost, truncated, or changed a few words in the digital equivalent of the Magna Carta or the Domesday Book or the Book of Kells or the Declaration of Arbroath. And, just to be totally clear, losing a digital copy of the Declaration of Arbroath would be OK. The National Archives of Scotland still has the vellum (I presume their copy was written on vellum?) in a climate-controlled vault. They probably also have a CD or two of backups for the digital images. Things become a bit more serious when the content is ‘born digital,’ and the file you’re preserving is the thing itself and not just an image of some physical artefact.

Even with archival-ish services like Glacier, which Amazon says

is designed to provide average annual durability of 99.999999999% for an archive. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon Glacier synchronously stores your data across multiple facilities before returning SUCCESS on uploading archives. Unlike traditional systems that can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing,

(my emphasis)

the big public cloud providers aren’t really in the business of supporting the extreme needs of an Archive. Archives demand a whole extra level of error checking, resilience, redundancy and integrity, and it would be cost-prohibitive for AWS and their competitors to do all that across their sprawling data centres when most customers are actually perfectly happy with “redundantly stores data in multiple facilities” and “automatically self-healing.”
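To put that durability figure in perspective, here’s a rough back-of-the-envelope calculation of what eleven nines of annual durability would imply for a sizeable archive. The object count and time horizon are hypothetical, chosen purely for illustration; this is the naive expected-value arithmetic, not a claim about how Amazon actually models durability.

```python
# Hypothetical illustration of "99.999999999% average annual durability"
# (eleven nines). The archive size and time span below are made up.
annual_durability = 0.99999999999
annual_loss_prob = 1 - annual_durability  # ~1e-11 per object per year

objects = 10_000_000  # a hypothetical archive of ten million files
years = 100           # a century of preservation

# Naive expected number of objects lost over the whole period
expected_losses = objects * annual_loss_prob * years
print(f"Expected losses over {years} years: {expected_losses:.4f} objects")
```

On those assumptions the expected loss is around a hundredth of a single object per century, which sounds comforting. But an expectation isn’t a guarantee, and for an archivist charged with preserving a unique born-digital record, “almost certainly nothing” and “nothing at all” are very different promises.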

Interestingly, Seagate sees value in offering a Glacier competitor capable of storing data “intact for decades” and offering access instantly rather than in a matter of hours as Glacier does. As it’s based in Utah I doubt that European government archives would touch it, but it will be interesting to see whether their North American cousins show any interest…

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build. Archivists, like others, have begun to layer rules, policies, procedures and processes on top of the bare-bones cloud infrastructure offerings, to build something a little more robust and dependable. Services like DuraCloud take AWS and Rackspace (currently only in their US data centres, but that could change), and add things like proactive error checking and even more backups to deliver something that an archivist might be prepared to trust.
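The core of that extra layer is usually fixity checking: record a checksum when content is ingested, then periodically re-read the stored copy and verify it still matches. The sketch below shows the idea in miniature; the class and method names are my own invention, not DuraCloud’s actual API, and a real service would also schedule audits, compare copies across providers, and repair from a good replica on mismatch.

```python
# A minimal sketch of proactive fixity checking of the kind layered on top
# of bare cloud storage. Names here are hypothetical, not any real API.
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 digest used as the fixity value."""
    return hashlib.sha256(data).hexdigest()

class FixityRegister:
    """Records a checksum at ingest and verifies it on later audits."""

    def __init__(self):
        self.expected = {}  # object id -> checksum recorded at ingest

    def ingest(self, object_id: str, data: bytes) -> None:
        self.expected[object_id] = checksum(data)

    def audit(self, object_id: str, data: bytes) -> bool:
        """True if the stored copy still matches its ingest checksum."""
        return checksum(data) == self.expected.get(object_id)

register = FixityRegister()
register.ingest("charter.tiff", b"born-digital content")
print(register.audit("charter.tiff", b"born-digital content"))  # True
print(register.audit("charter.tiff", b"silently corrupted"))    # False
```

The point isn’t the code; it’s that none of this logic lives in the underlying object store. The archive (or a service like DuraCloud acting on its behalf) has to supply the vigilance itself.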

There’s a use case here, and there are plenty of (mostly university) archives in the States putting DuraCloud and similar cloud-powered tools to work as part of their preservation strategy.

But I can’t help wondering whether some great big enterprise data management solution, with multiply redundant disks, multiply redundant backups, and a whole heap of watertight, ironclad, fault-tolerant, and ridiculously over-specified policies might be a better (albeit eye-wateringly expensive) way to preserve the truly irreplaceable? Either that, or archives and archivists need to explicitly embrace a more pragmatic approach to what they’re attempting with these systems.

‘Design for failure’ is a core tenet of cloud-powered systems. What’s the archival equivalent? ‘Lose nothing, ever’ just won’t cut it.

Disclaimer: Charles Beagrie is a client. TNA is a client of theirs. This post is not part of the project. Any opinions expressed here are my own, a work in progress… and subject to change!

Image of The National Archives by Flickr user ‘electropod’


More Stories By Paul Miller

Paul Miller works at the interface between the worlds of Cloud Computing and the Semantic Web, providing the insights that enable you to exploit the next wave as we approach the World Wide Database.

He blogs at www.cloudofdata.com.
