
Solving the Problem of Cloud Interoperability

There are a number of organizations looking into solving the problem of cloud federation

Reuven Cohen's "Elastic Vapor" Blog


A fundamental challenge in creating and managing a globally decentralized cloud computing environment is maintaining consistent connectivity between untrusted components that are capable of self-organization while remaining fault tolerant. In the next few years, a key opportunity for the emerging cloud industry will be defining a federated cloud ecosystem that connects multiple cloud computing providers through an agreed-upon standard or interface. In this post I will examine some of the work being done in cloud federation, ranging from adaptive authentication to modern P2P botnets.

Cloud computing is undoubtedly a hot topic these days; lately it seems just about everyone is claiming to be a cloud of some sort. At Enomaly our focus is on the so-called "cloud enablers": those daring enough to go out and create their very own computing clouds, either privately or publicly. In our work it has become obvious that the real problems are not in building these large clouds, but in maintaining them. Let me put it this way: deploying 50,000 machines is relatively straightforward; updating 50,000 machines, or worse yet, taking back control after a security exploit, is not.

There are a number of organizations looking into solving the problem of cloud federation. Traditionally, a lot of this work has been done in the grid space. More recently, a notable research project being conducted by Microsoft, called the "Geneva Framework," has been focusing on some of the issues surrounding cloud federation. Geneva is described as a claims-based access platform and is said to help simplify access to applications and other systems with an open and interoperable claims-based model.

In case you're not familiar with the claims authentication model, the general idea is to use claims about a user, such as age or group membership, that are passed along to obtain access to the cloud environment and to systems integrated with that environment. Claims can be built dynamically, picking up information about users and validating existing claims via a trusted source as the user traverses multiple cloud environments. Put more simply, the concept allows multiple providers to interact seamlessly with one another. The model enables developers to incorporate various authentication schemes that work with any corporate identity system, including Active Directory, LDAPv3-based directories, application-specific databases and newer user-centric identity models such as LiveID, OpenID and InfoCard systems, including Microsoft's CardSpace and Novell's Digital Me. For Microsoft, authentication seems to be at the heart of its interoperability focus. For anyone more Microsoft inclined, Geneva is certainly worth a closer look.
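To make the claims model concrete, here is a minimal sketch in Python. The class names, the claim vocabulary and the HMAC-based signing are illustrative assumptions of mine, not the Geneva API; the point is only the shape of the flow: a trusted source issues signed claims once, and any provider that trusts that source can verify them without running its own identity system.

```python
# Minimal sketch of claims-based access across federated providers.
# All names and the HMAC scheme are illustrative assumptions; this is
# not the Geneva Framework API.
import hashlib
import hmac
import json

class TrustedIssuer:
    """Identity provider that issues signed claims about a user."""
    def __init__(self, name: str, secret: bytes):
        self.name, self.secret = name, secret

    def issue(self, user: str, claims: dict) -> dict:
        payload = json.dumps({"iss": self.name, "sub": user, "claims": claims})
        sig = hmac.new(self.secret, payload.encode(), hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

class CloudProvider:
    """Relying party: admits a user if a trusted issuer vouches for the claims."""
    def __init__(self, name: str, trusted_issuers: dict):
        self.name = name
        self.trusted_issuers = trusted_issuers  # issuer name -> shared secret

    def admit(self, token: dict, required_group: str) -> bool:
        data = json.loads(token["payload"])
        secret = self.trusted_issuers.get(data["iss"])
        if secret is None:
            return False  # no federation agreement with this issuer
        expected = hmac.new(secret, token["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(token["sig"], expected):
            return False  # token was tampered with in transit
        return required_group in data["claims"].get("groups", [])

secret = b"shared-federation-secret"
idp = TrustedIssuer("corp-ldap", secret)
cloud_a = CloudProvider("cloud-a", {"corp-ldap": secret})
cloud_b = CloudProvider("cloud-b", {"corp-ldap": secret})

token = idp.issue("alice", {"groups": ["ops"], "age": 34})
print(cloud_a.admit(token, "ops"), cloud_b.admit(token, "ops"))  # True True
```

The user authenticates once against the corporate directory; every provider that trusts that issuer honors the same token, which is exactly the seamless provider-to-provider interaction described above.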

For the more academically focused, I recommend reading a recent paper titled "Decentralized Overlay for Federation of Enterprise Clouds," published by Rajiv Ranjan and Rajkumar Buyya at the University of Melbourne. The team outlines the need for cloud decentralization and federation in order to create a globalized cloud platform. In the paper they argue that a distributed cloud configuration should be considered decentralized if no component in the system is more important than any other: if one component fails, that failure is neither more nor less harmful to the system than the failure of any other component. The paper also outlines the opportunity to use peer-to-peer (P2P) protocols as the basis for these decentralized systems.

The paper is very relevant given the latest discussions in the cloud interoperability realm. It outlines several key problem areas:

  • Large scale – composed of distributed components (services, nodes, applications, users, virtualized computers) that combine to form a massive environment. These days enterprise clouds consisting of hundreds of thousands of computing nodes are common (Amazon EC2, Google App Engine, Microsoft Live Mesh), so federating them together leads to a massive-scale environment;
  • Resource contention – driven by resource demand patterns and a lack of cooperation among end-users' applications, a particular set of resources can get swamped with excessive workload, which significantly undermines the overall utility delivered by the system;
  • Dynamic – the components can leave and join the system at will (a small sketch of this property follows the list).
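The "dynamic" property is the easiest of the three to see in miniature. Below is a toy gossip-membership sketch in Python, my own illustration rather than anything taken from the Ranjan/Buyya paper: every peer keeps only a small partial view of the others, no peer is more important than any other, and the overlay keeps itself connected as nodes vanish.

```python
# Toy gossip-style membership: each peer holds a small partial view and
# there is no coordinator. Illustrative only; not the paper's protocol.
import random

class Peer:
    def __init__(self, pid: int):
        self.pid = pid
        self.view = set()  # partial view: a handful of known neighbours

    def gossip(self, peers: dict, fanout: int = 3) -> None:
        """Swap membership info with one random contact; no peer is special."""
        if not self.view:
            return  # isolated for now; a neighbour's gossip will find us
        other = peers[random.choice(sorted(self.view))]
        other.view.add(self.pid)  # the contact learns about us
        merged = (self.view | other.view | {other.pid}) - {self.pid}
        self.view = set(random.sample(sorted(merged), min(fanout, len(merged))))

def simulate(n: int = 50, rounds: int = 30) -> dict:
    peers = {i: Peer(i) for i in range(n)}
    for p in peers.values():  # bootstrap: one random contact each
        p.view.add(random.choice([i for i in peers if i != p.pid]))
    for _ in range(rounds):
        for p in list(peers.values()):
            p.gossip(peers)
        dead = random.choice(list(peers))  # churn: any node may vanish...
        peers.pop(dead)
        for p in peers.values():  # ...and the survivors simply forget it
            p.view.discard(dead)
    return peers

survivors = simulate()
print(len(survivors), "peers still federated, with no central registry")
```

Because no node plays a distinguished role, killing any one of them is exactly as harmful as killing any other, which is the paper's working definition of decentralization.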

Another topic of the paper is the challenges involved in the design and development of a decentralized, scalable, self-organizing, federated cloud computing system, as well as in applying the characteristics of peer-to-peer resource protocols, which the authors call Aneka-Federation. (I've tried to find other references to Aneka, but it seems to be a term used solely within the University of Melbourne; interesting nonetheless.)

Also interesting were the problems they outline with earlier distributed computing projects such as SETI@home, saying these systems provide no support for multiple applications and programming models. That limitation is a major factor driving some of the more traditional users of grid technologies toward cloud computing.

One of the questions large-scale cloud computing opens up is not how to manage a few thousand machines, but how to manage a few hundred thousand. A lot of the work being done in decentralized cloud computing can be traced back to the emergence of modern botnets. A recent paper titled "An Advanced Hybrid Peer-to-Peer Botnet" by Ping Wang, Sherri Sparks and Cliff C. Zou at the University of Central Florida outlines some of the "opportunities" by examining the creation of a hybrid P2P botnet.

In the paper the UCF team outlines the problems encountered by P2P botnets, which appear surprisingly similar to the problems being encountered by the cloud computing community. The paper lays out the following practical challenges faced by botmasters:

  1. How to generate a robust botnet capable of maintaining control of its remaining bots even after a substantial portion of the botnet population has been removed by defenders?
  2. How to prevent significant exposure of the network topology when some bots are captured by defenders?
  3. How to allow the botmaster to easily monitor and obtain complete information about the botnet?
  4. How to prevent, or at least make it harder for, defenders to detect bots via their communication traffic patterns?

In addition, the design must also consider many network-related issues, such as dynamic or private IP addresses and the diurnal online/offline behavior of bots. A very interesting read.
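Challenge (1) maps almost directly onto federation: how does a node stay connected when most of its known contacts disappear? Here is a deliberately benign sketch of the standard countermeasure, a redundant shuffled contact list with failover. The helper names are hypothetical, and this is my illustration of the pattern, not the paper's design.

```python
# Benign sketch of surviving mass peer removal: keep many redundant
# contacts and fail over to any that still answers. Names are hypothetical.
import random

def connect(addr: str, removed: set) -> bool:
    """Stand-in for a real network call; fails if the peer was taken down."""
    return addr not in removed

def first_reachable(contacts: list, removed: set) -> str | None:
    probe_order = contacts[:]
    random.shuffle(probe_order)  # avoid a predictable probe pattern
    for addr in probe_order:
        if connect(addr, removed):
            return addr  # one live contact is enough to rejoin the overlay
    return None

contacts = [f"peer-{i}" for i in range(20)]
removed = set(random.sample(contacts, 15))  # 75% of known peers taken out
print("still connected via", first_reachable(contacts, removed))
```

A federated cloud client facing a provider outage wants exactly the same property: any surviving entry point is sufficient to re-establish membership.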

I am not condoning the use of botnets, but architecturally speaking we can learn a lot from our more criminally focused colleagues. Don't kid yourselves: they're already looking at ways to take control of your cloud, and federation will be a key aspect of how you protect yourself and your users from being taken for a ride.

More Stories By Reuven Cohen

An instigator, part-time provocateur, bootstrapper, amateur cloud lexicographer, and purveyor of random thoughts, 140 characters at a time.

Reuven is an early innovator in the cloud computing space as the founder of Enomaly in 2004 (acquired by Virtustream in February 2012). Enomaly was among the first to develop a self-service infrastructure-as-a-service (IaaS) platform (ECP), circa 2005, as well as SpotCloud (2011), the first commodity-style cloud computing spot market.

Reuven is also the co-creator of CloudCamp (held in 100+ cities around the globe), an unconference where early adopters of cloud computing technologies exchange ideas; it is the largest of the 'barcamp'-style events.
