How to Test Application Throughput: Keep It Real

The go/no-go decision for rolling out a new release or architectural change relies heavily upon the web application handling a certain TPS

Often we see the workload on a web application measured by throughput: a way of quantifying the volume of requests and responses in relation to time. Transactions per second, or TPS, is the most common ratio used. A performance test plan usually contains specific throughput goals, and the "go or no go" decision for rolling out a new release or architectural change relies heavily upon the web application handling a certain TPS. Management wants a "Pass" stamp, but it's your job to make sure that the achieved TPS is indeed realistic, not an illusion of phony numbers. My advice is to "keep it real" by generating workloads that represent all the true characteristics of production.

Faking it leads to a false-positive test result: a certain TPS was met and the Pass stamp was awarded, but the conditions were unrealistic. For example, you could achieve a 460 TPS result by hitting mostly lightweight transactions, or by running a low load of virtual users with little or no think time. In each of these cases the throughput would be "high," but the workload does not represent what's really happening in production. Not even remotely. What's worse, if you "pass" a performance test using these unrealistic methods, you have no idea whether the deployment is going to withstand the production workload. If the application falls over... guess who is on the hot seat? This could be an unintentional outcome, so be sure your tests are set up properly and create a realistic workload.
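
As a quick illustration of the ratio itself, throughput is simply completed transactions divided by elapsed time. The counts below are made up to show where a figure like the 460 TPS mentioned above could come from:

```python
# Throughput is completed transactions divided by elapsed time.
# The numbers are illustrative, not taken from a real test.
completed_transactions = 82_800
test_duration_seconds = 180          # a 3-minute measurement window
tps = completed_transactions / test_duration_seconds
print(tps)  # 460.0
```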

How do you accomplish that? The design of the test determines how realistic the generated throughput is. There are several key factors to take into consideration. They are all equally important, but the underlying philosophy is, again, keeping it real. Using a load tool to simulate virtual users executing scripts is a load test, but you also need to emulate accurate activity, conditions, behaviors, usage, and so on.

Each script executed in a load test contains simple requests and more complex business transactions, and not all transactions are created equal: throughput is affected by the "weight" of a transaction. For example, lightweight Transaction A can be as simple as serving up a static image, while heavyweight Transaction B can be as complicated as executing a business transaction that runs algorithms over the results of a database query. The response time of Transaction A is going to be much quicker, and it will use fewer resources within the deployment since it is just a web server response. Conversely, Transaction B will have a much longer response time and use more resources, including the database. When the load tool is executing a script, it waits for the response to one transaction before executing the next, so transaction response times directly affect throughput: the faster the response times, the higher the throughput. Because a test can easily be manipulated this way, you must create conditions that truly mimic expected production activity.
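
Here is a rough sketch of how the transaction mix skews average response time, and therefore throughput, for the same virtual-user count. The transaction names, traffic shares and response times are purely illustrative assumptions:

```python
# Illustrative only: how the transaction mix shifts the average response time,
# and therefore throughput, for the same number of virtual users.
def mix_avg_response(mix):
    """mix maps transaction name -> (share_of_traffic, response_time_seconds)."""
    return sum(share * resp for share, resp in mix.values())

realistic_mix = {
    "static_image": (0.30, 0.05),   # lightweight: web server only
    "search_query": (0.50, 1.20),   # moderate: app server + database read
    "checkout":     (0.20, 3.00),   # heavyweight: business logic + database writes
}
mostly_lightweight = {
    "static_image": (0.90, 0.05),
    "checkout":     (0.10, 3.00),
}

print(mix_avg_response(realistic_mix))       # ~1.22 s average per transaction
print(mix_avg_response(mostly_lightweight))  # ~0.35 s average -> inflated TPS
```

The same user count driving mostly lightweight transactions reports a much higher TPS while telling you very little about production behavior.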

To accurately mimic expected production activity, you must first ensure that each virtual user represents a real user. If you expect a concurrent load of 2,500 users actively using your web application, then you need a test that ramps to 2,500 virtual users. This is extremely important because every virtual user has a unique footprint on the backend servers: sessions, memory usage, open sockets, and so on. Trying to hit a throughput target without an accurate number of virtual users leads to inaccurate resource usage on the backend. Only a test that uses the true number of users will emulate the right load conditions.
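
As one concrete illustration, here is a minimal script for the open-source Locust load tool (not the tool discussed in this article). The host, path and ramp numbers are placeholders; the point is that the virtual-user count is an explicit test parameter rather than something inferred from the request rate:

```python
# locustfile.py -- minimal sketch using the open-source Locust framework.
# Host, path and think-time values are placeholders, not from the article.
from locust import HttpUser, task, between

class WebAppUser(HttpUser):
    host = "https://staging.example.com"   # hypothetical test target
    wait_time = between(5, 15)             # seconds of think time per iteration

    @task
    def home_page(self):
        self.client.get("/")

# Ramp to the real expected concurrency instead of chasing a raw request rate:
#   locust -f locustfile.py --headless --users 2500 --spawn-rate 25 --run-time 30m
```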

There are typically different types of users per web application: shoppers, buyers, administrators, and so on. When setting up the test, create a population and transaction mixture that represent the workload during peak usage in production: for example, 50% shoppers, 40% buyers, 10% admins. The most accurate transaction mixes are determined by reviewing logs or by business analysis of expected production usage. The scripts executed by the tool's load generators need to represent true user profiles and follow realistic transaction flows: navigations, decisions, inputs, calculations, and so on. These transactions need to use dynamic data to be realistic. Dynamic transaction flows include choosing different products, requesting links, submitting forms, or, more complicated still, extracting response data to be used in subsequent requests. It is dynamic scripts that emulate the diverse activity of real users.
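
Here is a sketch of such a weighted population with simple dynamic data, again using open-source Locust as a stand-in. The user classes, weights, paths and product IDs are illustrative assumptions, not taken from any real application:

```python
# Sketch of a weighted user population with dynamic data (all values illustrative).
import random
from locust import HttpUser, task, between

PRODUCT_IDS = ["1001", "1002", "1003"]   # placeholder dynamic data

class Shopper(HttpUser):
    weight = 5                           # ~50% of the virtual-user population
    wait_time = between(3, 10)

    @task(3)
    def browse_product(self):
        self.client.get(f"/products/{random.choice(PRODUCT_IDS)}")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": random.choice(["boots", "jacket"])})

class Buyer(HttpUser):
    weight = 4                           # ~40%
    wait_time = between(5, 20)

    @task
    def checkout(self):
        self.client.post("/cart/checkout", json={"product_id": random.choice(PRODUCT_IDS)})

class Admin(HttpUser):
    weight = 1                           # ~10%
    wait_time = between(10, 30)

    @task
    def view_reports(self):
        self.client.get("/admin/reports")
```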

To make those robotic virtual users act like human beings, pauses and delays need to be incorporated into the scripts. Real users think, take in information, process it, make decisions, type out forms, and so on. These "breathers" contribute to accurate load characteristics. During the pauses, the servers are still performing housekeeping: closing ports, running garbage collection, tracking timeouts, sweeping sessions, and so forth, all of which consumes resources. If you hit the deployment with simultaneous users and simply crank up the throughput, the load characteristics are not realistic.
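
One way to see why stripping out think time inflates throughput is the standard closed-workload approximation: TPS is roughly the number of concurrent users divided by the sum of average response time and average think time. The numbers below are illustrative only:

```python
# Closed-workload approximation (numbers are illustrative, not measurements):
#   TPS ~= concurrent_users / (avg_response_time + avg_think_time)
def estimated_tps(users: int, avg_response_s: float, avg_think_s: float) -> float:
    return users / (avg_response_s + avg_think_s)

print(estimated_tps(2500, 0.8, 10.0))  # ~231 TPS with human-like pauses
print(estimated_tps(2500, 0.8, 0.0))   # 3125 TPS with no think time at all
```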

With today's rich Internet applications, there is also a requirement to incorporate complex behaviors into the scripts: asynchronous updates of data pushed from servers to browsers, and vice versa, independent of full page refreshes. The tool needs to "listen" for updates and recreate a script that emulates this activity. This push challenge is a hurdle in performance testing, but it is also critical to creating the right load characteristics. These rich behaviors affect throughput through polling, streaming, and other reactive mechanisms, all of which must be accounted for.
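
As a minimal sketch of what "listening" for server push can look like, assuming the channel is a WebSocket, here is an example using the Python websockets package; the endpoint, message format and handling are hypothetical placeholders:

```python
# Sketch of emulating a server-push channel with the Python "websockets" package.
# The endpoint, message count and handling below are hypothetical placeholders.
import asyncio
import websockets

async def listen_for_updates(url: str, max_messages: int = 10) -> None:
    async with websockets.connect(url) as ws:
        for _ in range(max_messages):
            message = await ws.recv()           # blocks until the server pushes data
            print("push received:", message)    # a real script would validate/correlate this

asyncio.run(listen_for_updates("wss://staging.example.com/updates"))
```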

Another factor to consider is that users connect to web and mobile applications over many different network speeds. The connection speed affects the download rate, and the slower the download, the higher the response time. For your populations of users, choose the bandwidth or bandwidths that genuinely represent the end-user connections. For example, you may be testing a LAN application with a known bandwidth restriction, a group of users may connect from another country, or a certain percentage may only access the web application via a mobile network.
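
A back-of-the-envelope calculation shows how much connection speed alone changes transfer time. The payload size and connection speeds below are illustrative, and latency is ignored:

```python
# Rough effect of connection speed on transfer time (sizes and speeds illustrative,
# network latency ignored).
def download_seconds(payload_kb: float, bandwidth_kbps: float) -> float:
    """Seconds to transfer payload_kb kilobytes at bandwidth_kbps kilobits per second."""
    return (payload_kb * 8) / bandwidth_kbps

page_weight_kb = 2048  # ~2 MB page
for label, kbps in [("LAN 100 Mbps", 100_000), ("DSL 5 Mbps", 5_000), ("3G 1 Mbps", 1_000)]:
    print(label, round(download_seconds(page_weight_kb, kbps), 2), "s")
# LAN 100 Mbps 0.16 s, DSL 5 Mbps 3.28 s, 3G 1 Mbps 16.38 s
```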

Taking all these factors into consideration (user profiles, accurate numbers of users, transaction mixes, dynamic data, think times, bandwidth simulation, behaviors, etc.) sets the stage for creating realistic throughput. Review your scripts and make sure your tests really represent expected production. Once you have designed realistic tests, you can execute them and properly evaluate whether the web application can achieve the set throughput goals.

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies and Timberland, companies spanning the retail, financial services, insurance and manufacturing industries. Her expertise lies in creating realistic load tests and performance tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX Certificate from Worcester Polytechnic Institute.
