HTML5 Web Sockets: A Quantum Leap in Scalability for the Web

An enormous step forward in the scalability of the real-time web

Lately there has been a lot of buzz around HTML5 Web Sockets, which defines a full-duplex communication channel that operates through a single socket over the Web. HTML5 Web Sockets is not just another incremental enhancement to conventional HTTP communications; it represents a colossal advance, especially for real-time, event-driven web applications.

HTML5 Web Sockets provides such a dramatic improvement over the old, convoluted "hacks" used to simulate a full-duplex connection in a browser that it prompted Google's Ian Hickson - the HTML5 specification lead - to say:

"Reducing kilobytes of data to 2 bytes...and reducing latency from 150ms to 50ms is far more than marginal. In fact, these two factors alone are enough to make Web Sockets seriously interesting to Google."

Let's look at how HTML5 Web Sockets can offer such an incredibly dramatic reduction of unnecessary network traffic and latency by comparing it to conventional solutions.

Polling, Long-Polling, and Streaming - Headache 2.0
Normally when a browser visits a web page, an HTTP request is sent to the web server that hosts that page. The web server acknowledges this request and sends back the response. In many cases - for example, for stock prices, news reports, ticket sales, traffic patterns, medical device readings, and so on - the response could be stale by the time the browser renders the page. If you want to get the most up-to-date "real-time" information, you can constantly refresh that page manually, but that's obviously not a great solution.

Current attempts to provide real-time web applications largely revolve around polling and server-push techniques, the most notable of which is Comet, which delays the completion of an HTTP response to deliver messages to the client. Comet-based push is generally implemented in JavaScript and uses connection strategies such as long-polling or streaming.

With polling, the browser sends HTTP requests at regular intervals and immediately receives a response. This technique was the first attempt to deliver real-time information to the browser. It is a reasonable approach if the exact interval of message delivery is known, because you can synchronize the client requests to occur only when information is available on the server. However, real-time data is rarely that predictable, so unnecessary requests are inevitable and many connections are opened and closed needlessly in low-message-rate situations.
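
To make the polling approach concrete, here is a rough client-side sketch of the technique; the URL and the updatePage function are hypothetical placeholders, not part of the application described later in this article:

// A minimal polling sketch: issue a fresh HTTP request every second.
// The URL and updatePage() are hypothetical placeholders.
function poll() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/app/updates", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            updatePage(xhr.responseText); // render the latest data
        }
    };
    xhr.send(null);
}
setInterval(poll, 1000); // every request carries a full set of HTTP headers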

With long-polling, the browser sends a request to the server and the server keeps the request open for a set period. If a notification is received within that period, a response containing the message is sent to the client. If a notification is not received within the set time period, the server sends a response to terminate the open request. It is important to understand, however, that when you have a high message volume, long-polling does not provide any substantial performance improvements over traditional polling. In fact, it could be worse, because the long-polling might spin out of control into an unthrottled, continuous loop of immediate polls.
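
A rough client-side sketch of long-polling (again, the URL and updatePage function are hypothetical placeholders) makes the problem visible: each completed response immediately triggers the next request, so at high message rates the loop degenerates into continuous polling:

// A minimal long-polling sketch: the server holds the request open until it has
// a message or a timeout expires; the client reconnects as soon as the response completes.
// The URL and updatePage() are hypothetical placeholders.
function longPoll() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/app/long-poll", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 && xhr.responseText) {
                updatePage(xhr.responseText);
            }
            longPoll(); // reissue immediately - with frequent messages this is effectively polling
        }
    };
    xhr.send(null);
}
longPoll();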

With streaming, the browser sends a complete request, but the server keeps the response open indefinitely (or for a set period of time) and continuously updates it. The response is updated whenever a message is ready to be sent, and the server never signals completion, thus keeping the connection open to deliver future messages. However, since streaming is still encapsulated in HTTP, intervening firewalls and proxy servers may choose to buffer the response, increasing the latency of message delivery. Therefore, many streaming Comet solutions fall back to long-polling when a buffering proxy server is detected. Alternatively, TLS (SSL) connections can be used to shield the response from being buffered, but in that case the setup and teardown of each connection taxes the available server resources more heavily.
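
For comparison, here is a rough sketch of how a browser can consume an HTTP streaming (Comet) response by reading the growing responseText as chunks arrive; the URL and updatePage function are hypothetical placeholders, and not every browser of this era fires readyState 3 repeatedly:

// A rough HTTP streaming sketch: the response never completes, so the client
// reads whatever new data has been appended to responseText as it arrives.
// The URL and updatePage() are hypothetical placeholders.
var offset = 0;
var xhr = new XMLHttpRequest();
xhr.open("GET", "/app/stream", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 3) { // partial data has arrived on the still-open response
        var chunk = xhr.responseText.substring(offset);
        offset = xhr.responseText.length;
        if (chunk) {
            updatePage(chunk);
        }
    }
};
xhr.send(null);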

Ultimately, all of these methods for providing real-time data involve HTTP request and response headers, which contain lots of additional, unnecessary header data and introduce latency. On top of that, full-duplex connectivity requires more than just the downstream connection from server to client. In an effort to simulate full-duplex communication over half-duplex HTTP, many of today's solutions use two connections: one for the downstream and one for the upstream. The maintenance and coordination of these two connections introduces significant resource overhead and adds a great deal of complexity. Simply put, HTTP wasn't designed for real-time, full-duplex communication, as you can see in Figure 1, which shows the complexities associated with building a Comet web application that displays real-time data from a back-end data source using a publish/subscribe model over half-duplex HTTP.

Figure 1: The complexity of Comet applications

It gets even worse when you try to scale out those Comet solutions to the masses. Simulating bi-directional browser communication over HTTP is error-prone and complex and all that complexity does not scale. Even though your end users might be enjoying something that looks like a real-time web application, this "real-time" experience has an outrageously high price tag. It's a price that you will pay in additional latency, unnecessary network traffic, and a drag on CPU performance.

HTML5 Web Sockets to the Rescue!
Defined in the Communications section of the HTML5 specification, HTML5 Web Sockets represent the next evolution of web communications - a full-duplex, bidirectional communications channel that operates through a single socket over the Web. HTML5 Web Sockets provides a true standard that you can use to build scalable, real-time web applications. In addition, since it provides a socket that is native to the browser, it eliminates many of the problems Comet solutions are prone to. Web Sockets remove the overhead and dramatically reduce complexity.

To establish a WebSocket connection, the client and server upgrade from the HTTP protocol to the WebSocket protocol during their initial handshake, as shown in the following example:

Example 1: The WebSocket handshake (browser request and server response)

GET /text HTTP/1.1\r\n
Upgrade: WebSocket\r\n
Connection: Upgrade\r\n
Host: www.websocket.org\r\n
...\r\n

HTTP/1.1 101 WebSocket Protocol Handshake\r\n
Upgrade: WebSocket\r\n
Connection: Upgrade\r\n
...\r\n

Once the connection is established, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode; both text and binary frames can flow in either direction at the same time. The data is minimally framed with just two bytes: each text frame starts with a 0x00 byte, ends with a 0xFF byte, and contains UTF-8 data in between. WebSocket text frames use this terminator, while binary frames use a length prefix.

Note: Although the Web Sockets protocol is ready to support a diverse set of clients, it cannot deliver raw binary data to JavaScript, because JavaScript does not support a byte type. Therefore, binary data is ignored if the client is JavaScript, but it can be delivered to other clients that support it.
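
On the client side, the browser exposes this connection through a deliberately simple JavaScript API. The following sketch opens a connection to the server from the handshake example above and listens for pushed messages; the subscribe message and the updatePage function are hypothetical placeholders:

// A minimal WebSocket client sketch: one connection, full-duplex,
// two bytes of framing per text message.
// The subscribe message and updatePage() are hypothetical placeholders.
var ws = new WebSocket("ws://www.websocket.org/text");
ws.onopen = function () {
    ws.send("subscribe"); // upstream traffic uses the same socket
};
ws.onmessage = function (event) {
    updatePage(event.data); // each pushed text frame arrives as event.data
};
ws.onclose = function () {
    // connection closed; a real application would typically reconnect here
};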

The Showdown: Comet vs HTML5 Web Sockets
How dramatic is that reduction in unnecessary network traffic and latency? Let's compare a polling application with a WebSocket application.

For the polling example, I created a simple web application in which a web page requests real-time stock data from a RabbitMQ message broker using a traditional publish/subscribe model. It does this by polling a Java servlet that is hosted on a web server. The RabbitMQ message broker receives data from a fictitious stock price feed with continuously updating prices. The web page connects and subscribes to a specific stock channel (a topic on the message broker) and uses an XMLHttpRequest to poll for updates once per second. When updates are received, some calculations are performed and the stock data is displayed in a table, as shown in Figure 2.

Figure 2: A JavaScript stock ticker application

Note: The back-end stock feed actually produces many stock price updates per second, so polling at one-second intervals is more prudent here than a Comet long-polling solution, which would result in a continuous series of polls. Polling effectively throttles the incoming updates.

It all looks great, but a look under the hood reveals there are some serious issues with this application. For example, in Mozilla Firefox with Firebug (a Firefox add-on that allows you to debug web pages and monitor the time it takes to load pages and execute scripts), you can see that GET requests hammer the server at one-second intervals. Turning on Live HTTP Headers (another Firefox add-on that shows live HTTP header traffic) reveals the shocking amount of header overhead that is associated with each request. The following two examples show the HTTP header data for just a single request and response:

Example 2: HTTP request header

GET /PollingStock//PollingStock HTTP/1.1
Host: localhost:8080
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.example.com/PollingStock/
Cookie: showInheritedConstant=false; showInheritedProtectedConstant=false; showInheritedProperty=false; showInheritedProtectedProperty=false; showInheritedMethod=false; showInheritedProtectedMethod=false; showInheritedEvent=false; showInheritedStyle=false; showInheritedEffect=false

Example 3: HTTP response header

HTTP/1.x 200 OK
X-Powered-By: Servlet/2.5
Server: Sun Java System Application Server 9.1_02
Content-Type: text/html;charset=UTF-8
Content-Length: 21
Date: Sat, 07 Nov 2009 00:32:46 GMT

Just for fun, I counted all the characters. The total HTTP request and response header overhead amounts to 871 bytes, and that doesn't even include any payload data. Of course, this is just an example and you can have fewer than 871 bytes of header data, but I have also seen cases where the header data exceeded 2,000 bytes. In this example application, the data for a typical stock topic message is only about 20 characters long. As you can see, it's effectively drowned out by the excessive header information, which wasn't even required in the first place.

What happens when you deploy this application to a large number of users? Let's take a look at the network throughput for just the HTTP request and response header data associated with this polling application in three different use cases.

  • Use case A: 1,000 clients polling every second:
    Network throughput is (871 x 1,000) = 871,000 bytes per second = 6,968,000 bits per second (6.6 Mbps)
  • Use case B: 10,000 clients polling every second:
    Network throughput is (871 x 10,000) = 8,710,000 bytes per second = 69,680,000 bits per second (66 Mbps)
  • Use case C: 100,000 clients polling every second:
    Network throughput is (871 x 100,000) = 87,100,000 bytes per second = 696,800,000 bits per second (665 Mbps)

That's an enormous amount of unnecessary network throughput. If only we could just get the essential data over the wire. Well, guess what? You can with HTML5 Web Sockets. I rebuilt the application to use HTML5 Web Sockets, adding an event handler to the web page to asynchronously listen for stock update messages from the message broker (check out the many how-tos and tutorials on www.tech.kaazing.com/documentation for more information on how to build a WebSocket application). Each of these messages is a WebSocket frame that has just two bytes of overhead (instead of 871). Take a look at how that affects the network throughput overhead in our three use cases.

  • Use case A: 1,000 clients receive 1 message per second:
    Network throughput is (2 x 1,000) = 2,000 bytes per second = 16,000 bits per second (0.015 Mbps)
  • Use case B: 10,000 clients receive 1 message per second:
    Network throughput is (2 x 10,000) = 20,000 bytes per second = 160,000 bits per second (0.153 Mbps)
  • Use case C: 100,000 clients receive 1 message per second:
    Network throughput is (2 x 100,000) = 200,000 bytes per second = 1,600,000 bits per second (1.526 Mbps)
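
The arithmetic behind these figures is simple enough to reproduce. The sketch below follows the same convention as the numbers above (1 Mbps = 1,048,576 bits per second); the function name is mine, not part of the example application:

// Header overhead per second for N clients, each exchanging one message per second.
// Uses 1 Mbps = 1,048,576 bits per second, matching the figures above.
function overheadMbps(bytesPerMessage, clients) {
    var bitsPerSecond = bytesPerMessage * clients * 8;
    return bitsPerSecond / (1024 * 1024);
}
console.log(overheadMbps(871, 100000)); // polling, use case C: ~665 Mbps
console.log(overheadMbps(2, 100000));   // WebSocket, use case C: ~1.5 Mbps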

As you can see in Figure 3, HTML5 Web Sockets provide a dramatic reduction of unnecessary network traffic compared to the polling solution.

Figure 3: Comparison of the unnecessary network throughput overhead between the polling and the WebSocket applications

What about the reduction in latency? Take a look at Figure 4. In the top half, you can see the latency of the half-duplex polling solution. If we assume, for this example, that it takes 50 milliseconds for a message to travel from the server to the browser, then the polling application introduces a lot of extra latency, because a new request has to be sent to the server when the response is complete. This new request takes another 50 ms, and during this time the server cannot send any messages to the browser; messages that arrive in the meantime have to be queued, which consumes additional server memory.

In the bottom half of the figure, you see the reduction in latency provided by the WebSocket solution. Once the connection is upgraded to WebSocket, messages can flow from the server to the browser the moment they arrive. It still takes 50 ms for messages to travel from the server to the browser, but the WebSocket connection remains open so there is no need to send another request to the server.

Figure 4: Latency comparison between the polling and WebSocket applications

HTML5 Web Sockets and the Kaazing WebSocket Gateway
Today, only Google's Chrome browser supports HTML5 Web Sockets natively, but other browsers will soon follow. To work around that limitation in the meantime, Kaazing WebSocket Gateway provides complete WebSocket emulation for all the older browsers (Internet Explorer 5.5+, Firefox 1.5+, Safari 3.0+, and Opera 9.5+), so you can start using the HTML5 WebSocket APIs today.

WebSocket is great, but what you can do once you have a full-duplex socket connection available in your browser is even greater. To leverage the full power of HTML5 Web Sockets, Kaazing provides a ByteSocket library for binary communication and higher-level libraries for protocols like Stomp, AMQP, XMPP, IRC and more, built on top of WebSocket.

For example, if you use a high-level library for protocols such as Stomp or AMQP (supplied with the Kaazing Gateway), you can talk directly to back-end message brokers like the RabbitMQ broker described in the example. By having a direct connection to services, there is no need for additional application server logic to translate those bi-directional, full-duplex TCP backend protocols to uni-directional, half-duplex HTTP connections; the browser can simply understand these protocols natively.

Figure 5: Kaazing WebSocket Gateway extends TCP-based messaging to the browser with ultra high performance

Summary
HTML5 Web Sockets provides an enormous step forward in the scalability of the real-time web. As you have seen in this article, HTML5 Web Sockets can provide a 500:1 or - depending on the size of the HTTP headers - even a 1000:1 reduction in unnecessary HTTP header traffic and 3:1 reduction in latency. That is not just an incremental improvement; that is a revolutionary jump - a quantum leap.

Kaazing WebSocket Gateway makes HTML5 WebSocket code work in all the browsers today, while providing additional protocol libraries that allow you to harness the full power of the full-duplex socket connection that HTML5 Web Sockets provides and communicate directly to back-end services. For more information about Kaazing WebSocket Gateway, visit www.kaazing.com and the Kaazing technology network at tech.kaazing.com.


More Stories By Peter Lubbers

Peter Lubbers is the Director of Documentation and Training at Kaazing where he oversees all aspects of documentation and training. He is the co-author of the Apress book Pro HTML5 Programming and teaches HTML5 training courses. An HTML5 and WebSocket enthusiast, Peter frequently speaks at international events.

Prior to joining Kaazing, Peter worked as an information architect at Oracle, where he wrote many books. He also develops documentation automation solutions and two of his inventions are patented.

A native of the Netherlands, Peter served as a Special Forces commando in the Royal Dutch Green Berets. In his spare time (ha!) Peter likes to run ultra-marathons. He is the 2007 and 2009 ultrarunner.net series champion and three-time winner of the Tahoe Super Triple marathon. Peter lives on the edge of the Tahoe National Forest and loves to run in the Sierra Nevada foothills and around Lake Tahoe (preferably in one go!).

More Stories By Frank Greco

Frank Greco is the founder of the New York Java Special Interest Group (NYJavaSIG), one of the largest Java Users Groups (JUGs) on the planet with over 8,000 active members in the local Java community. The NYJavaSIG has had some of the most famous Java luminaries speak at their meetings, including Java Champions Brian Goetz, Rod Johnson, Doug Lea and Josh Bloch. Their members are very enthusiastic and, like most New Yorkers, don't hesitate to ask tough questions of their monthly speakers.

Frank has a long history as a "Champion" of the Java Platform; he taught a developer track session at the very first Java Day back in September 1995 in New York and started the NYJavaSIG that afternoon. Frank has been involved with software development for over 10 years and has worked on sophisticated architectures, innovative user interfaces, mobile computing and next-generation collaborative financial systems. Frank is both a Java community leader as well as a luminary technologist specializing in cloud services and enterprise architectures.



Most Recent Comments
PeterLubbers 03/15/10 03:55:00 PM EDT

Thanks Tam!
Sorry about that error in use case B and C. You're right it should be in Mbps. I'll try to get the article updated. Thanks for pointing that out.

An additional problem with streaming is that you can get unexpected results if there are intermediaries such as buffering proxy servers in the way. Streaming solutions therefore often fall back to long-polling if they detect such intermediaries (if they are smart enough to detect them at all). With WebSocket and WebSocket secure you have a better chance of traversing proxy servers and firewalls.

tbayes 03/11/10 10:11:00 AM EST

Hi Peter and Frank,

Thank you for this clear and well-written article, which does much to effectively explain the benefits of the WebSocket protocol.

If I may point out one minor correction: In the WebSocket section of Example 3, Use case B and C should show overhead rates of 0.153Mbps and 1.526Mbps respectively, as opposed to 0.153Kbps and 1.526Kbps.

Perhaps a more natural comparison would be between Web Sockets and Comet-streaming (via, for example, the "Forever IFrame" technique), as I feel it a little unsporting to compare Web Sockets directly to polling or long-polling! At around 20 bytes per message, the overhead associated with clients receiving messages via Comet-style streaming is still more than WebSocket's 2 bytes per message, but is one to two orders of magnitude less than the overhead associated with polling techniques. Similarly, the latency of receiving messages is the same whether you use Comet-streaming or WebSocket. Of course, clients sending messages will gain the advantages you describe only with Web Sockets, as all other approaches do indeed require new HTTP requests.

I completely agree, however, that WebSocket is a definite improvement over the Comet IFrame-approaches, which have an air of string and glue about them.

You mention your usage of various JavaScript APIs built on top of Web Sockets - such as your AmqpClient, or StompClient. This is understandable, since while the WebSocket API is commendably simple, it lacks many useful features. These APIs provide additional functionality, and do not explicitly expose the raw, underlying WebSocket API. We agree with this approach, and having implemented WebSocket interfaces in our Nirvana messaging server, we have opted to retain an existing, feature-rich and robust messaging JavaScript API unchanged except for its support for underlying WebSocket communication. See this link for more details.

I hope that we will see WebSocket support in all production mainstream browsers very soon.

Tam
