How to Test Application Throughput: Keep It Real

The decision to roll out a new release or architectural change relies heavily on the web application sustaining a target TPS

Often we see the workload to a web application measured by throughput: a way of quantifying the volume of requests and responses in relation to time. Transactions per second, or TPS, is the most common ratio used. A performance test plan usually contains specific throughput goals, and the "go or no go" decision for rolling out a new release or architectural change relies heavily on the web application handling a certain TPS. Management wants a "Pass" stamp, but it's your job to make sure that the achieved TPS is realistic, not an illusion built on phony numbers. My advice is to "keep it real" by generating workloads that represent all the true characteristics of production.

Faking it leads to a false positive: a certain TPS was met and the Pass stamp was awarded, but the conditions were unrealistic. For example, you could achieve a 460 TPS result by hitting mostly lightweight transactions, or by running a low load of virtual users with little or no think time. In each of these cases the throughput would be high, but the workload does not remotely represent what is really happening in production. Worse, if you "pass" a performance test using these unrealistic methods, you have no idea whether the deployment will withstand the production workload. If the application falls over, guess who is on the hot seat? This can happen unintentionally, so make sure your tests are set up properly and create a realistic workload.
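To make the point concrete, here is a minimal Python sketch of my own (not tied to the article or to any particular load tool) that computes TPS from a list of completed-transaction timestamps. The two runs and their URLs are hypothetical; they exist only to show that the same throughput number can describe completely different work:

```python
# Illustrative only: a TPS number says nothing about *what* was executed.
from collections import Counter

def transactions_per_second(completions):
    """completions: list of (timestamp_seconds, transaction_name) tuples."""
    if not completions:
        return 0.0, Counter()
    times = [t for t, _ in completions]
    duration = (max(times) - min(times)) or 1.0
    mix = Counter(name for _, name in completions)
    return len(completions) / duration, mix

# Two hypothetical one-second runs with identical TPS but very different work:
static_run = [(i * 0.002, "GET /logo.png") for i in range(500)]
business_run = [(i * 0.002, "POST /checkout") for i in range(500)]
for label, run in (("static-heavy", static_run), ("business", business_run)):
    tps, mix = transactions_per_second(run)
    print(label, round(tps), dict(mix))
```

Both runs report the same throughput, yet only one of them exercises the kind of transactions your production users actually generate.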

How do you accomplish that? It is the design of the test that determines how realistic the generated throughput is. Several key factors come into play; they are all equally important, and the underlying philosophy is, again, keeping it real. Using a load tool to simulate virtual users executing scripts gives you a load test, but you also need to emulate accurate activity, conditions, behaviors, and usage. Each script executed in a load test contains both simple requests and more complex business transactions, and not all transactions are created equal: throughput is affected by the "weight" of a transaction. Lightweight Transaction A can be as simple as serving a static image. Heavyweight Transaction B can be as complicated as a business transaction that runs algorithms over the results of a database query. Transaction A will respond much faster and use fewer resources within the deployment, since it is just a web server response; Transaction B will have a much longer response time and use more resources, including the database. When the load tool executes a script, it waits for the response of one transaction before executing the next, so the response times of transactions directly affect throughput: the faster the responses, the higher the throughput. Because a test can easily be manipulated this way, you must create conditions that truly mimic expected production activity.
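The relationship described above can be made concrete with a standard closed-loop queueing approximation (a back-of-the-envelope formula, not something specific to any one tool): sustained throughput is roughly the number of concurrent virtual users divided by the average time each user spends per iteration, that is, response time plus think time.

```python
def estimated_tps(virtual_users, avg_response_time_s, avg_think_time_s):
    """Rough steady-state throughput for a closed-loop load test.
    Assumes each virtual user loops: send request -> wait for response -> think."""
    return virtual_users / (avg_response_time_s + avg_think_time_s)

# 2500 users with 0.8s responses and 9.2s of think time: ~250 TPS
print(estimated_tps(2500, 0.8, 9.2))   # 250.0
# 250 users hammering a lightweight page with no think time: also ~250 TPS,
# but with a tenth of the sessions, sockets and memory pressure on the backend
print(estimated_tps(250, 1.0, 0.0))    # 250.0
```

The numbers are placeholders, but they show why an identical TPS target can be "met" by a test that exercises only a fraction of the state the real user population would create.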

To accurately mimic expected production activity, first ensure each virtual user represents a real user. If you expect a concurrent load of 2,500 users actively using your web application, then you need a test that ramps to 2,500 virtual users. This is extremely important because every virtual user leaves a unique footprint on the backend servers: sessions, memory usage, open sockets, and so on. Trying to reach a target throughput with fewer virtual users than production will see leads to inaccurate resource usage on the backend. Only a test that uses the true number of users will emulate the right load conditions.

There are typically different types of users per web application: shoppers, buyers, administrators, and so on. When setting up the test, create a population, or transaction mixture, that represents the workload during peak usage in production; for example, 50% shoppers, 40% buyers, 10% admin. The most accurate transaction mixes are determined by log reviews or by business analysis of expected usage in production. The scripts executed by the tool's load generators need to represent true user profiles, following realistic transaction flows: navigation, decisions, inputs, calculations, and so on. These transactions need to use dynamic data to be realistic. Dynamic transaction flows include choosing different products, requesting links, submitting forms, or, more complicated still, extracting response data to be used in subsequent requests. It is dynamic scripts that emulate the diverse activity of real users.
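As one possible sketch of a weighted population with dynamic data in plain Python (the profile names, weights, and product IDs are hypothetical examples, not taken from the article or from any specific tool):

```python
import random

# Hypothetical peak-hour mix: 50% shoppers, 40% buyers, 10% admin
PROFILES = {"shopper": 0.50, "buyer": 0.40, "admin": 0.10}
PRODUCT_IDS = ["sku-1001", "sku-1002", "sku-1003"]  # would normally come from test data files

def next_virtual_user():
    """Pick a user profile according to the production mix."""
    return random.choices(list(PROFILES), weights=PROFILES.values(), k=1)[0]

def shopper_flow():
    """Dynamic flow: each iteration browses a different product."""
    product = random.choice(PRODUCT_IDS)
    return ["GET /catalog", f"GET /product/{product}", f"GET /reviews?id={product}"]

profile = next_virtual_user()
print(profile, shopper_flow() if profile == "shopper" else "...")
```

The point is simply that each iteration should pick its profile and its data the way production traffic does, rather than replaying one hard-coded path.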

To make those robotic virtual users act like human beings, pauses and delays need to be incorporated into the scripts. We all need to think, take in information, process it, make decisions, and type out forms. These "breathers" contribute to accurate load characteristics. During these pauses the servers are still performing housekeeping: closing ports, running garbage collection, tracking timeouts, sweeping sessions, and so on, all of which consumes resources. If you hit the deployment with simultaneous users and simply crank up the throughput, the load characteristics are not realistic.
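A minimal sketch of what randomized think time could look like inside a scripted iteration (the timings and request names here are made-up placeholders; real values should come from observing how long users actually pause):

```python
import random
import time

def think(mean_s=8.0, spread_s=3.0, floor_s=1.0):
    """Pause like a human: a randomized delay around a mean, never below a floor."""
    time.sleep(max(floor_s, random.gauss(mean_s, spread_s)))

def one_iteration(do_request):
    """do_request: whatever function your harness uses to fire a scripted request."""
    do_request("GET /")                 # land on the home page
    think()                             # read the page, decide what to do next
    do_request("GET /search?q=boots")
    think(mean_s=15.0)                  # longer pause while filling in a form
    do_request("POST /order")

one_iteration(print)                    # stand-in "request" function, just to show the pacing
```

Randomizing the pause (rather than using one fixed delay) also avoids every virtual user firing in lockstep, which is itself an unrealistic pattern.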

With today's rich Internet applications, there is also the requirement to incorporate complex behaviors into the scripts: asynchronous updates pushed from servers to browsers and vice versa, independent of full page refreshes. The tool needs to "listen" for updates and recreate a script that emulates this activity. This push challenge is a hurdle in performance testing, but it is also critical to creating the right load characteristics. These rich behaviors affect throughput through polling, streaming, and other push mechanisms, all of which must be accounted for.
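As one possible illustration (my own sketch, using the third-party Python websockets package rather than any particular load tool, with a hypothetical endpoint), a virtual user that has to account for server push needs a long-lived listener alongside its normal request flow:

```python
import asyncio
import websockets  # third-party package: pip install websockets

async def listen_for_push(url="wss://example.test/updates"):  # hypothetical endpoint
    """Keep a connection open and record server-pushed messages,
    the way a browser holding a live data feed would."""
    async with websockets.connect(url) as ws:
        async for message in ws:
            # In a real test you would record arrival time and size, since pushed
            # updates consume server resources between scripted "transactions".
            print("push received:", len(message), "bytes")

# asyncio.run(listen_for_push())  # run alongside the scripted request flow
```

Leaving these connections out of the test means the servers carry far fewer open sockets and far less streaming work than they will in production.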

Another factor to consider is that users connect to web and mobile applications over very different network speeds. The connection speed affects the download rate, and the slower the download, the higher the response time. For your populations of users, choose the bandwidth or bandwidths that truly represent the end users' connections. For example, you may be testing a LAN application with a known bandwidth restriction, a group of users may connect from another country, or a certain percentage may only access the web application via a mobile network.
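One crude way to approximate a constrained connection in a homegrown script is to throttle how fast the client drains the response. This sketch uses the Python requests library and a made-up 1 Mbps target purely as an illustration; commercial load tools typically offer bandwidth simulation as a built-in setting:

```python
import time
import requests  # third-party package: pip install requests

def throttled_get(url, bits_per_second=1_000_000, chunk_bytes=16_384):
    """Download a response no faster than the given line speed."""
    seconds_per_chunk = (chunk_bytes * 8) / bits_per_second
    start = time.monotonic()
    response = requests.get(url, stream=True)
    for chunk in response.iter_content(chunk_size=chunk_bytes):
        time.sleep(seconds_per_chunk)   # crude pacing: hold each chunk to the target rate
    return time.monotonic() - start     # slower "bandwidth" -> longer response time

# print(throttled_get("https://example.test/catalog"))  # hypothetical URL
```

Whatever mechanism you use, the effect to capture is the same: slower connections hold server resources longer per request, which changes both response times and throughput.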

Taking all these factors into consideration (user profiles, accurate numbers of users, transaction mixes, dynamic data, think times, bandwidth simulation, rich behaviors, etc.) sets the stage for creating realistic throughput. Review your scripts and make sure your tests really represent expected production. Once you have designed realistic tests, you can execute them and properly evaluate whether the web application can achieve the set throughput goals.

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies and Timberland, companies spanning the retail, financial services, insurance and manufacturing industries. Her expertise lies in creating realistic load tests and performance tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX Certificate from Worcester Polytechnic Institute.
