The Transformation Toward Universal Utility Computing Is Beginning

When the cloud transforms into universal utility computing

Nick Davis's Blog

One of the most popular themes on the web over the last couple of years has been the much-heralded “cloud computing”. The cloud metaphor, of course, comes from the representation of the Internet in architecture diagrams as a big fluffy cloud to which other, more discrete networks and systems are interconnected. The Wikipedia page on cloud computing defines it as:

a style of computing in which IT-related capabilities are provided “as a service”, allowing users to access technology-enabled services from the Internet (“in the cloud”) without knowledge of, expertise with, or control over the technology infrastructure that supports them.

Google popularized dynamic web-based applications that behaved much like desktop apps, and essentially ushered in the era of what we fondly (sometimes snidely) call “Web 2.0”. In the short span of 3 or 4 years (I’m counting since 2005, when the web community at large became aware of Ajax and similar technology), dynamic web applications have become the de facto UI for end-user interactive software offerings — if it makes sense to use the ubiquitous browser as the frontend, then why not? Instead of forcing users to install, configure, and learn yet another new desktop app, give them an interface with which they’re already comfortable and familiar. As an added benefit, the browser interface is supported on nearly every operating system and platform used today (proprietary plugins and extensions notwithstanding).

Since then, the “cloud” has been touted as the next generation of the web, and as a concept encompasses a few key areas:

  • storage - collecting user settings/preferences, documents, and media
  • computing cycles - harnessing the power of a thousand-node grid of servers for complex problems or CPU-intensive workloads
  • network transparency - masking low-level details such as IP addresses as much as possible

Who’s Who and Challenges

Large vendors with existing market plays involving huge server farms and data centers are eagerly jumping on the bandwagon — like IBM, Google, Amazon, Sun, etc. Software vendors are touting their existing and upcoming apps as “cloud” initiatives. The previously mentioned firms, as well as Salesforce, Zimbra (now owned by Yahoo), Zoho, and a multitude of other startups are all rushing to lay claim to a piece of land in the Cloud Gold Rush. Even Microsoft, notoriously late to the web party, instead relying on its stalwart cash cows of Windows and Office, has made its own bid in the cloud wars with Mesh and announcements of a web version of Office.

Cloud computing promises to change much of how we as developers, designers, and architects currently design and build web applications. For one, concurrency is a big issue that needs to be addressed if the apps of the next decade are going to scale on this cloud infrastructure. Languages, platforms, and tools need to provide solutions for creating apps that scale efficiently across multiple cores, processors, and even whole systems. Architects will have to design solutions that are massively scalable and take advantage of the properties of the cloud. UI specialists and designers will work with browser-based frontends, as well as newer mobile phone interfaces and Internet-enabled devices (such as Nokia’s Maemo Internet tablet).
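To make the concurrency point concrete, here is a minimal sketch (my own, not tied to any particular cloud platform) of spreading a CPU-bound job across cores with Python's standard-library concurrent.futures; the prime-counting workload is just a hypothetical stand-in for real application work:

```python
# A minimal sketch of multi-core scaling using the standard library.
# The workload (counting primes by trial division) is a hypothetical
# stand-in for any CPU-intensive task an application might parallelize.
from concurrent.futures import ProcessPoolExecutor


def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


if __name__ == "__main__":
    # Split one large job into chunks so each core gets a share.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below 100,000: {total}")
```

The same shape of problem recurs at every scale: split the work, run the pieces independently, combine the results — whether the workers are cores, machines, or nodes in a cloud.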

Beyond the Cloud

I envision a future beyond the current cloud computing craze, perhaps 5-10 years out, where computing is a utility service just like power and telephone service are today. Several companies, including Amazon and Sun, already offer some utility-style services, and many distributed computing projects tackling specific problems run on volunteer end-user systems today, but I’m thinking of something much broader. Instead of vendor-specific mini-clouds or utility services, we should aim for what I’ll term universal utility computing (UUC), built on open protocols and standards.

Essentially, the idea is to ensure every node in the cloud is an active member. By “active”, I mean the resources of every device are available for use by others: computing cycles can be shared (when the device is idle, or up to a configurable threshold percentage of total CPU), along with storage and other resources. It’s similar in nature to grid and distributed computing, but takes a general, Internet-wide approach.
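As a rough illustration of that configurable threshold (nothing here comes from a real UUC specification, since none exists yet), an agent's local sharing policy might look something like this:

```python
# Purely illustrative: a guess at the local policy a UUC agent might read
# at startup, capping how much of each resource the owner donates.
# All field names and defaults are invented for this sketch.
from dataclasses import dataclass


@dataclass
class SharingPolicy:
    cpu_threshold: float = 0.20   # donate at most 20% of total CPU when busy
    share_when_idle: bool = True  # donate freely while the device sits idle
    storage_mb: int = 1024        # scratch storage offered to the cloud


policy = SharingPolicy()
print(policy)
```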

So how would such a system work? For starters, a UUC protocol would be required, and agent software written for various operating systems. The protocol would specify the sequence of communication between nodes, allowing true peer-to-peer messaging. The agent would ideally sit in kernel space, interacting with the built-in scheduler, hardware abstraction layer, and storage subsystem.
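Since the protocol itself is hypothetical, so is the sketch below: one guess at the kind of JSON “resource offer” message a UUC agent might send a peer. Every field name and the version number are invented for illustration:

```python
# Hypothetical UUC wire message: a node announcing the resources it is
# willing to share. Nothing here is from a real specification.
import json
import uuid


def make_offer(cpu_share: float, storage_mb: int) -> str:
    """Build a hypothetical UUC 'resource offer' message."""
    message = {
        "uuc_version": "0.1",          # assumed protocol version field
        "type": "resource_offer",      # assumed message type
        "node_id": str(uuid.uuid4()),  # a real agent would keep a stable ID
        "resources": {
            "cpu_share": cpu_share,    # fraction of idle CPU offered (0.0-1.0)
            "storage_mb": storage_mb,  # scratch storage offered to peers
        },
    }
    return json.dumps(message)


if __name__ == "__main__":
    print(make_offer(cpu_share=0.25, storage_mb=512))
```

A real agent would also need authentication, capability negotiation, and a transport; the point is only that nodes advertise what they are willing to share.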

Once a device has been “UUC-enabled”, it could begin to participate in the utility cloud. Every system in the cloud would share its resources for utility computing. Applications would then have the ability to harness as much computing power as required. Nodes that didn’t participate in the utility cloud couldn’t take advantage of utility resources.
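A toy sketch of that reciprocity rule, again my own illustration rather than anything specified:

```python
# Illustrative only: a node may draw on utility resources
# only while it is sharing its own.
participants = {"node-a", "node-b"}   # nodes currently sharing resources


def may_submit_work(node_id: str) -> bool:
    """Non-participating nodes can't tap the utility cloud."""
    return node_id in participants


print(may_submit_work("node-a"))  # True: an active member
print(may_submit_work("node-z"))  # False: not sharing, so not served
```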

Imagine all mobile phones on Earth utilizing a small portion of their resources in protein folding computations, or all servers processing climate forecasting data, or molecular level interactions for medical applications. Internet-enabled gaming consoles, tablets, laptops, desktops, and a plethora of devices that may be idle 90% of the time could then be used for computation. Imagine if the machines available to the average botnet hacker could be used for helpful purposes instead of spam.

Naturally, there are several challenges that must be overcome. Security and privacy would have to be addressed, as they are today, employing encryption and other techniques to ensure confidentiality and integrity. Beyond individual nodes, there must be built-in mechanisms for preventing DDoS-style attacks, as well as preventing malicious users from exhausting the utility-dedicated resources on one or several devices. There must also be a system for prioritizing workloads sent to the cloud, and a way to adjust the priority of a task. Checks and balances could be automatic, ensuring that a particular workload doesn’t use more than a predetermined slice of a system’s available resources.
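One hypothetical way to enforce such a predetermined slice is a token bucket; the interface and numbers below are invented for illustration:

```python
# Illustrative token-bucket limiter: caps how many of a node's donated
# CPU-seconds a single workload may consume, which also blunts attempts
# to exhaust a node's utility-dedicated resources.
import time


class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity          # max CPU-seconds banked per workload
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, cpu_seconds: float) -> bool:
        """Grant the request only if the workload is within its slice."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if cpu_seconds <= self.tokens:
            self.tokens -= cpu_seconds
            return True
        return False


bucket = TokenBucket(capacity=5.0, refill_per_sec=0.5)
print(bucket.try_consume(3.0))  # True: within the workload's slice
print(bucket.try_consume(3.0))  # False: slice exhausted, must wait for refill
```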

Universal utility computing could be the next phase of computing, one where the Internet is a true peer-to-peer system, and all nodes participate and share resources. Instead of having expensive data centers with custom hardware and software solutions, billions of devices with idle processors can be harnessed to help solve a variety of problems affecting the enterprise, health care, the scientific community, and others.


[This post appeared originally here and is reprinted by kind permission of the author, who retains full copyright.]

More Stories By Nick Davis

Nick Davis is an information security professional (CISSP) and software architect with several years of academic and professional experience. He earned an M.S. and B.S. in Computer Science from the University of Tulsa. Previously, he was Founding Software Developer for Vidoop, an Internet security and identity company that provides solutions for managing one’s identity on the web. While at Vidoop he was co-inventor of the company’s flagship patent-pending authentication technology, the ImageShield.


