WebSocket Technology | @DevOpsSummit #DevOps #APM #Microservices

Considerations and best practices


Introduced alongside HTML5, the WebSocket protocol enables richer, two-way interaction between a browser and a web server, facilitating real-time applications and live content. WebSocket technology creates a persistent connection between the client and server, removing the need for a client-initiated HTTP request to trigger every server response. By providing a full-duplex communication channel over a single TCP connection, WebSocket is one of the most efficient protocols for delivering real-time responses over the web.

If you're utilizing WebSocket technology, performance testing will boil down to simulating the bi-directional nature of your application.

Synchronous vs. Asynchronous Calls
First, you'll need to understand which kind of WebSocket communication your application uses: synchronous or asynchronous calls.

In addition to facilitating real-time applications, WebSockets are also used by web developers as a way of maintaining a faster, longer-lived connection between client and server, even for traditional request-response purposes. This traditional request-response communication via WebSockets results in synchronous calls.

Asynchronous calls, on the other hand, do not require a client request to initiate a server response. The server automatically pushes information and updates over a single TCP connection (which remains open), allowing for an ongoing, bi-directional conversation.
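
To make the distinction concrete, here is a minimal sketch of both styles. The endpoint and message formats are hypothetical, and it uses Python's third-party websockets library purely as an illustration (the article doesn't prescribe a particular client library):

```python
import asyncio
import websockets  # third-party "websockets" library, assumed to be installed

WS_URL = "ws://example.test/feed"  # hypothetical endpoint for illustration


async def synchronous_style(ws):
    # Traditional request/response over the open socket: the client sends a
    # request and waits for the matching reply before doing anything else.
    await ws.send('{"action": "get_quote", "symbol": "ACME"}')
    reply = await ws.recv()
    print("reply:", reply)


async def asynchronous_style(ws):
    # Server push: no request is sent; the client simply listens for whatever
    # the server decides to broadcast, until the connection is closed.
    async for message in ws:
        print("pushed:", message)


async def main():
    async with websockets.connect(WS_URL) as ws:
        await synchronous_style(ws)
        await asynchronous_style(ws)


asyncio.run(main())
```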

Testers must be aware of the differences between the two in order to properly measure response times and validate the performance of their applications.

Considerations
Asynchronous Calls
Things can get a bit tricky when it comes to measuring the response times of asynchronous calls. Traditionally, testers measure the time between a client sending a request and receiving the response. With asynchronous calls, server pushes are driven by events rather than by a specific client request, so it can be difficult to measure the time it takes to transport a message to the client, or latency.

Because messages are generated by external events and the server decides when to send messages to all connected clients, it's in testers' best interest to measure the time it takes for a client to receive a message after the triggering of an external event.
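
In practice, that means the test harness triggers the event itself and times how long the corresponding pushed message takes to arrive. The sketch below assumes a hypothetical endpoint and a stand-in trigger_external_event() function, again using the third-party websockets library:

```python
import asyncio
import time
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/updates"  # hypothetical endpoint


async def trigger_external_event():
    # Hypothetical stand-in for whatever actually causes the server to push
    # (an admin API call, a message on a queue, another user's action, ...).
    pass


async def measure_push_latency():
    async with websockets.connect(WS_URL) as ws:
        start = time.perf_counter()
        await trigger_external_event()
        # Wait for the pushed message that the event is expected to produce.
        pushed = await asyncio.wait_for(ws.recv(), timeout=10)
        latency = time.perf_counter() - start
        print(f"event-to-client latency: {latency * 1000:.1f} ms ({pushed!r})")


asyncio.run(measure_push_latency())
```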

Synchronous Calls
Compared to asynchronous calls, measuring response times for synchronous calls is much easier and more straightforward. It follows a simple question-and-answer approach: testers merely send a request and wait for the response.
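
A minimal sketch of that approach, assuming a hypothetical endpoint and the same websockets library, simply brackets the send/receive pair with a timer:

```python
import asyncio
import time
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/api"  # hypothetical endpoint


async def time_synchronous_call():
    async with websockets.connect(WS_URL) as ws:
        start = time.perf_counter()
        await ws.send('{"action": "ping"}')  # the request
        reply = await ws.recv()              # wait for the matching response
        elapsed = time.perf_counter() - start
        print(f"response time: {elapsed * 1000:.1f} ms, reply: {reply!r}")


asyncio.run(time_synchronous_call())
```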

Designing Tests
Designing test cases for synchronous calls is simple, as testers only need to understand each request/response as it relates to user interaction. The real challenge lies in designing tests for asynchronous calls.

The nature of asynchronous calls changes the logic required in load testing scenarios, and testers will face many of the same issues associated with testing streaming media and long polling.

Limitations
Testers may face hardware and browser compatibility limitations when dealing with WebSockets. An open WebSocket channel maintains a direct, persistent connection between the client and server. If thousands of customers or connections are accessing data via your server, testers will need to size the backend according to the number of sockets a single server can handle.
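
One rough way to probe that limit is to have the load generator open and hold many concurrent connections and count how many survive. The endpoint, connection count, and hold time below are all illustrative assumptions:

```python
import asyncio
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/feed"  # hypothetical endpoint
NUM_CONNECTIONS = 1000             # scale this up to probe per-server socket limits


async def hold_connection(client_id):
    # Each virtual user opens one socket and keeps it open, consuming a file
    # descriptor on the server (and on the load generator) for the whole test.
    async with websockets.connect(WS_URL) as ws:
        await ws.send(f'{{"client": {client_id}, "action": "subscribe"}}')
        await asyncio.sleep(60)  # hold the connection open for a minute


async def main():
    results = await asyncio.gather(
        *(hold_connection(i) for i in range(NUM_CONNECTIONS)),
        return_exceptions=True,
    )
    failures = sum(1 for r in results if isinstance(r, Exception))
    print(f"{NUM_CONNECTIONS - failures}/{NUM_CONNECTIONS} connections held")


asyncio.run(main())
```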

There are also a few browsers that don't support WebSocket communication. When this is the case, the application will replace the WebSocket communication with long polling. For performance engineers, this means creating two user paths for each use case (one using WebSockets, the other using long polling). To ensure realistic load testing, testers must take into account the ratio of browsers that support WebSockets to those that do not.
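
In a scripted load test, that ratio can be applied by weighting which path each virtual user follows. The 85/15 split below is purely an assumed example:

```python
import random

WEBSOCKET_RATIO = 0.85  # assumed share of traffic from WebSocket-capable browsers


def pick_user_path():
    # Each simulated user follows one of the two paths so that the overall mix
    # matches the browser capabilities you expect in production.
    return "websocket" if random.random() < WEBSOCKET_RATIO else "long_polling"


counts = {"websocket": 0, "long_polling": 0}
for _ in range(10_000):
    counts[pick_user_path()] += 1
print(counts)  # roughly 8,500 WebSocket users and 1,500 long-polling users
```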

Tips for Load & Performance Testing WebSockets
Asynchronous Calls
How you measure latency for asynchronous calls depends directly on the application framework. For example, when using Socket.IO, including a timestamp in each WebSocket message should be a requirement: the server stamps the message as it sends it, and the client calculates latency by comparing that timestamp with the time the message arrives. There is no single standard framework for WebSockets, and of the frameworks that do support WebSocket communication, few include a timestamp automatically. Testers may need to work with developers to get this information included in messages. It may be a pain, but it's necessary for testing the performance of WebSockets.
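
The sketch below shows the general timestamp pattern with a plain WebSocket and JSON payloads rather than Socket.IO specifically. The endpoint and the sent_at field are assumptions, and it presumes the client and server clocks are reasonably synchronized (clock skew will distort one-way measurements):

```python
import asyncio
import json
import time
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/stream"  # hypothetical endpoint


async def measure_latency_from_timestamps():
    async with websockets.connect(WS_URL) as ws:
        async for raw in ws:
            msg = json.loads(raw)
            # Assumes the developers added a send-time epoch timestamp
            # (in seconds) under a "sent_at" key in every pushed message.
            sent_at = msg.get("sent_at")
            if sent_at is not None:
                latency = time.time() - sent_at
                print(f"one-way latency: {latency * 1000:.1f} ms")


asyncio.run(measure_latency_from_timestamps())
```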

Synchronous Calls
To measure response times for synchronous calls, first make sure your load testing solution supports WebSocket technology. It should also be able to link each WebSocket request with the corresponding WebSocket response. It's important to note that the capability to test this kind of WebSocket communication is still rare among software testing products - choose your tool wisely.
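
If your tool allows scripting, one common way to link a request with its response is a correlation id that the server is assumed to echo back. This is only a sketch against a hypothetical endpoint:

```python
import asyncio
import json
import time
import uuid
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/api"  # hypothetical endpoint


async def timed_request(ws, payload):
    # Tag the request with a correlation id so the reply can be matched even
    # if unrelated pushed messages arrive on the same socket in the meantime.
    request_id = str(uuid.uuid4())
    payload["request_id"] = request_id  # assumes the server echoes this field
    start = time.perf_counter()
    await ws.send(json.dumps(payload))
    while True:
        msg = json.loads(await ws.recv())
        if msg.get("request_id") == request_id:
            return time.perf_counter() - start
        # Anything else is an unrelated push; keep waiting for our reply.


async def main():
    async with websockets.connect(WS_URL) as ws:
        rtt = await timed_request(ws, {"action": "get_profile"})
        print(f"correlated response time: {rtt * 1000:.1f} ms")


asyncio.run(main())
```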

Designing Tests
For newer testers, and for testers used to designing standard web scenarios, designing tests that handle calls via WebSocket can be confusing. It comes down to understanding your application and the nature of its request-response communication. When designing your tests, make sure you're reproducing the behavior of your application as it communicates with a real browser.

Designing test cases for synchronous calls, again, is fairly simple as these calls employ traditional request/response communication. To measure their performance, you'll need to equip your testing team with a load testing solution that enables testing of synchronous calls over WebSockets.

Designing test cases for asynchronous calls is a bit more challenging. In this case, users connected via WebSockets take a specific action based on the information displayed on the screen. For example, a user might decide to purchase a stock when its price reaches a certain level; otherwise, the user may take no action at all. Keep in mind that the user action included in your use case depends on the information that does or does not arrive via the WebSocket channel.
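
A sketch of such a conditional virtual user, assuming a hypothetical quote feed that pushes JSON messages containing symbol and price fields, might look like this:

```python
import asyncio
import json
import websockets  # third-party library, assumed to be installed

WS_URL = "ws://example.test/quotes"  # hypothetical endpoint
TARGET_PRICE = 100.0                 # threshold at which the virtual user acts


async def wait_for_price(ws, target):
    # Listen to pushed quotes until one meets the condition.
    async for raw in ws:
        quote = json.loads(raw)  # assumed shape: {"symbol": ..., "price": ...}
        if quote.get("price", 0) >= target:
            return quote
    return None  # the server closed the connection first


async def stock_watcher():
    async with websockets.connect(WS_URL) as ws:
        try:
            quote = await asyncio.wait_for(wait_for_price(ws, TARGET_PRICE), timeout=300)
        except asyncio.TimeoutError:
            print("price never reached the threshold; the user took no action")
            return
        if quote:
            # The user only acts because the right information arrived.
            await ws.send(json.dumps({"action": "buy", "symbol": quote["symbol"]}))
            print("user acted on the pushed quote")


asyncio.run(stock_watcher())
```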

Limitations
To address hardware limitations, you'll need to ensure that you have enough servers to balance the load of open WebSocket connections. Unlike HTTP communication, where the connection is closed after a successful request-response interaction, WebSocket connections remain open. These connections will drop if your servers are unable to handle the load, resulting in poor application performance for end users.

To combat browser incompatibility, you can introduce a WebSocket framework with a built-in fallback (Socket.IO, for example) as a workaround. Otherwise, you'll need to design and execute long polling scenarios during your load and performance testing.

The nature of WebSockets also poses challenges - WebSocket is a transport layer, so your application could be exchanging text data, binary data, etc. Performance engineers will need to decode or deserialize WebSocket messages in order to correlate them in testing scenarios.
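
In practice the branching usually comes down to checking whether a frame arrived as text or binary before correlating it; the JSON and binary layouts below are assumptions for illustration:

```python
import json
import struct


def decode_message(raw):
    # The websockets library delivers text frames as str and binary frames as
    # bytes, so branch on the Python type before trying to correlate anything.
    if isinstance(raw, str):
        return json.loads(raw)  # assumed JSON text payload
    # Assumed binary layout for illustration: unsigned int id + 32-bit float price.
    msg_id, price = struct.unpack("!If", raw[:8])
    return {"id": msg_id, "price": price}


# Usage with hypothetical frames of each kind:
print(decode_message('{"id": 7, "price": 101.5}'))
print(decode_message(struct.pack("!If", 7, 101.5)))
```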

Conclusion
WebSockets simply provide a way to exchange data, so this technology isn't going to drastically change the way organizations approach testing. Testing teams just have to understand the challenges they'll face when handling WebSockets, like browser incompatibility and collecting response times for asynchronous calls.

Ultimately, equipping your testing team with a load testing solution that not only provides the ability to test request-response applications that leverage WebSockets, but that can also handle the messages a server pushes without a client request, will result in the most effective, realistic performance testing.

In terms of ensuring a seamless user experience, measuring the latency isn't enough. To truly validate the performance of an application utilizing WebSockets, you should combine your WebSocket load testing scenarios with scenarios on a browser-based tool like Selenium, but that is a topic for another post.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
