The Transformation Toward Universal Utility Computing Is Beginning

When the cloud transforms into universal utility computing

Nick Davis's Blog

One of the most popular themes on the web over the last couple of years has been the much-heralded “cloud computing”. The cloud metaphor, of course, comes from architecture diagrams, which represent the Internet as a big fluffy cloud to which other, more discrete networks and systems are interconnected. Wikipedia defines cloud computing as:

a style of computing in which IT-related capabilities are provided “as a service”, allowing users to access technology-enabled services from the Internet (“in the cloud”) without knowledge of, expertise with, or control over the technology infrastructure that supports them.

Google popularized dynamic web-based applications that behaved much like desktop apps, and essentially ushered in the era of what we fondly (sometimes snidely) call “Web 2.0”. In the short span of 3 or 4 years (I’m counting since 2005, when the web community at large became aware of Ajax and similar technology), dynamic web applications have become the de facto UI for end-user interactive software offerings — if it makes sense to use the ubiquitous browser as the frontend, then why not? Instead of forcing users to install, configure, and learn yet another new desktop app, give them an interface with which they’re already comfortable and familiar. As an added benefit, the browser interface is supported on nearly every operating system and platform used today (proprietary plugins and extensions notwithstanding).

Since then, the “cloud” has been touted as the next generation of the web, and as a concept encompasses a few key areas:

  • storage - collect user settings/preferences, documents and media
  • computing cycles - harness the power of a thousand-node grid of servers for complex problems or CPU-intensive workloads
  • network transparency - mask low-level details such as IP addresses and other info as much as possible

Who’s Who and Challenges

Large vendors with existing market plays involving huge server farms and data centers are eagerly jumping on the bandwagon — like IBM, Google, Amazon, Sun, etc. Software vendors are touting their existing and upcoming apps as “cloud” initiatives. The previously mentioned firms, as well as Salesforce, Zimbra (now owned by Yahoo), Zoho, and a multitude of other startups are all rushing to lay claim to a piece of land in the Cloud Gold Rush. Even Microsoft, notoriously late to the web party, instead relying on its stalwart cash cows of Windows and Office, has made its own bid in the cloud wars with Mesh and announcements of a web version of Office.

Cloud computing promises to change much of how we as developers, designers, and architects currently design and build web applications. For one, concurrency is a big issue that needs to be addressed if the apps of the next decade are going to scale on this cloud infrastructure. Languages, platforms, and tools need to provide solutions for creating apps that scale efficiently across multiple cores, processors, and even systems. Architects will have to design solutions that are massively scalable and take advantage of the properties of the cloud. UI specialists and designers will work with browser-based frontends, as well as newer mobile phone interfaces and Internet-enabled devices (such as Nokia’s Maemo Internet tablet).
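To make the concurrency point concrete, here is a minimal sketch (not from the original post) of spreading a CPU-bound stand-in workload across cores with Python’s standard library; the prime-counting function is just a placeholder for real work:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound stand-in workload: count primes strictly below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_parallel(limits):
    """Fan independent workloads out across the available cores."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    # Three independent tasks, scheduled onto however many cores exist.
    print(run_parallel([1000, 2000, 3000]))
```

The same fan-out pattern is what a cloud scheduler would do at a much larger scale, with machines instead of cores.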

Beyond the Cloud

I envision a future beyond the current cloud computing craze, perhaps in 5 - 10 years, where computing is a utility service just like power and telephone service are today. Several companies, including Amazon and Sun, are already offering some utility-style services, and many distributed computing projects tackling specific problems run on volunteer end-user systems today, but I’m thinking of something much broader. Instead of vendor-specific mini-clouds or utility services, we should aim for what I’ll term universal utility computing (UUC), built on open protocols and standards.

Essentially, the idea is to ensure every node in the cloud is an active member. By “active”, I mean the resources of every device are available for use by others. Computing cycles can be shared (when the device is idle, or up to a configurable threshold percentage of total CPU), along with storage and other resources. It’s similar in nature to grid and distributed computing, but takes a general, Internet-wide approach.
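As a toy illustration of the configurable-threshold idea (the `DonationPolicy` class and its numbers are invented here, and a real agent would sample load from the OS scheduler rather than take it as an argument):

```python
class DonationPolicy:
    """Decide what fraction of CPU a node donates to the utility cloud.

    `threshold` is the maximum fraction of total CPU the owner is
    willing to give away; the donation shrinks as local load rises.
    """

    def __init__(self, threshold=0.25):
        self.threshold = threshold

    def donatable_fraction(self, local_load):
        """local_load: current CPU utilization in [0.0, 1.0]."""
        idle = max(0.0, 1.0 - local_load)
        # Never donate more than the configured threshold,
        # and never more than what is actually idle.
        return min(self.threshold, idle)

policy = DonationPolicy(threshold=0.25)
print(policy.donatable_fraction(0.9))  # a heavily loaded node donates little
```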

So how would such a system work? For starters, a UUC protocol would be required, and agent software written for various operating systems. The protocol would specify the sequence of communication between nodes, allowing true peer-to-peer messaging. The agent would ideally sit in the kernel space, interacting with the built-in scheduler, hardware abstraction layer, and storage subsystem.
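The post leaves the UUC protocol itself unspecified, so the following is a purely hypothetical sketch (the message type, field names, and JSON encoding are all invented) of what one peer-to-peer message, a resource offer, might look like:

```python
import json
import uuid

def make_offer(node_id, cpu_fraction, storage_mb):
    """Build a hypothetical peer-to-peer resource-offer message."""
    return {
        "type": "RESOURCE_OFFER",   # invented message type
        "msg_id": str(uuid.uuid4()),
        "node_id": node_id,
        "resources": {"cpu_fraction": cpu_fraction, "storage_mb": storage_mb},
    }

def encode(msg):
    """Serialize a message for the wire."""
    return json.dumps(msg).encode("utf-8")

def decode(raw):
    """Parse a message received from a peer."""
    return json.loads(raw.decode("utf-8"))

offer = make_offer("node-42", 0.25, 512)
assert decode(encode(offer)) == offer  # round-trips cleanly
```

A real protocol would add authentication, versioning, and acknowledgement messages on top of an envelope like this.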

Once a device has been “UUC-enabled”, it could begin to participate in the utility cloud. Every system in the cloud would share its resources for utility computing. Applications would then have the ability to harness as much computing power as required. Nodes that didn’t participate in the utility cloud couldn’t take advantage of utility resources.
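The participation rule above could be read as a simple reciprocity check; this is a toy sketch, with the `shares_cpu` and `shares_storage` flags invented for illustration:

```python
def may_consume(node):
    """A node may draw on utility resources only if it also contributes.

    `node` is a dict of booleans a (hypothetical) agent reports about
    what the device currently shares with the cloud.
    """
    return node.get("shares_cpu", False) or node.get("shares_storage", False)

assert may_consume({"shares_cpu": True}) is True
assert may_consume({}) is False  # non-participants get nothing
```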

Imagine all mobile phones on Earth utilizing a small portion of their resources in protein folding computations, or all servers processing climate forecasting data, or molecular level interactions for medical applications. Internet-enabled gaming consoles, tablets, laptops, desktops, and a plethora of devices that may be idle 90% of the time can now be used for computation. Imagine if the machines available to the average botnet hacker could be used for helpful purposes instead of spam.

Naturally, there are several challenges that must be overcome. Security and privacy would, as today, have to be addressed, employing encryption and other techniques to ensure confidentiality and integrity. Beyond individual nodes, there must be built-in mechanisms for preventing DDoS-style attacks, as well as preventing malicious users from exhausting the available utility-dedicated resources on one or several devices. There must also be a system for prioritizing workloads sent to the cloud, and a way to adjust the priority of a task. Checks and balances could be automatic, ensuring that a particular workload doesn’t use more than a pre-determined slice of the available resources on a system.
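The automatic checks and balances could be sketched as a per-workload quota tracker; the class name, the 20% default slice, and the unit-based accounting below are all assumptions for illustration:

```python
class ResourceQuota:
    """Cap each workload at a pre-determined slice of a node's capacity."""

    def __init__(self, capacity_units, max_slice=0.2):
        self.capacity = capacity_units
        self.max_slice = max_slice   # e.g. no workload may take over 20%
        self.used = {}               # workload id -> units consumed so far

    def request(self, workload_id, units):
        """Grant the request only if it keeps the workload within its slice."""
        allowed = self.capacity * self.max_slice
        if self.used.get(workload_id, 0) + units > allowed:
            return False             # deny: slice exhausted
        self.used[workload_id] = self.used.get(workload_id, 0) + units
        return True

quota = ResourceQuota(capacity_units=100, max_slice=0.2)
assert quota.request("protein-folding", 15) is True
assert quota.request("protein-folding", 10) is False  # 25 would exceed the 20-unit slice
```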

Universal utility computing could be the next phase of computing, one where the Internet is a true peer-to-peer system, and all nodes participate and share resources. Instead of having expensive data centers with custom hardware and software solutions, billions of devices with idle processors can be harnessed to help solve a variety of problems affecting the enterprise, health care, the scientific community, and others.


[This post appeared originally here and is reprinted by kind permission of the author, who retains full copyright.]

More Stories By Nick Davis

Nick Davis is an information security professional (CISSP) and software architect with several years of academic and professional experience. He earned an M.S. and B.S. in Computer Science from the University of Tulsa. Previously, he was a Founding Software Developer at Vidoop, an Internet security and identity company that provides solutions for managing one’s identity on the web. While at Vidoop he was co-inventor of the company’s flagship patent-pending authentication technology, the ImageShield.

