How to Test Application Throughput: Keep It Real

The decision to roll out a new release or architectural change relies heavily upon the web application handling a certain TPS

Often we see the workload on a web application measured by throughput. It's a way of quantifying the volume of requests and responses in relation to time, and transactions per second (TPS) is the most common ratio used. A performance test plan usually contains specific throughput goals, and the "GO or NO GO" decision for rolling out a new release or architectural change relies heavily upon the web application handling a certain TPS. Management wants a "Pass" stamp, but it's your job to make sure that the achieved TPS is indeed realistic - not an illusion of phony numbers. My advice is to "keep it real" by generating workloads that represent all the true characteristics of production.

Faking it leads to a false positive test result: a certain TPS was met and the Pass stamp was awarded, but the conditions were unrealistic. For example, you could achieve a 460 TPS result by hitting mostly lightweight transactions, or by running a low load of virtual users with little or no think time. In each of these cases the throughput would be "high," but the workload does not represent what is really happening in production - not even remotely. What's worse, if you "pass" a performance test using these unrealistic methods, you have no idea whether the deployment will withstand the production workload. If the application falls over... guess who is on the hot seat? This can be an unintentional outcome, so be sure your tests are set up properly and create a realistic workload.
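
To see how easily the same throughput number can be produced by very different workloads, it helps to remember the rough steady-state relationship TPS ≈ active users / (response time + think time). A minimal sketch of that arithmetic, with purely illustrative numbers that are assumptions rather than figures from any real test:

    # Approximate steady-state throughput (Little's law); all numbers are illustrative only.
    def tps(users, response_time_s, think_time_s):
        return users / (response_time_s + think_time_s)

    # "Real" profile: 2500 users, 1.5 s average response time, 12 s of think time
    print(round(tps(2500, 1.5, 12.0)))   # ~185 TPS

    # "Phony" profile: 100 users hammering fast static pages with no think time
    print(round(tps(100, 0.2, 0.0)))     # ~500 TPS

The second profile reports a higher TPS while exercising only a tiny fraction of the sessions, memory, and backend work that production will see.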

How do you accomplish that? It's the design of the test that determines how realistic the throughput you generate is. There are several key factors to take into consideration. They are all equally important, but the underlying philosophy is (again) "keeping it real." Using a load tool to simulate virtual users executing scripts is a load test, but you also need to emulate accurate activity, conditions, behaviors, usage, and so on.

Each script executed in a load test contains both simple requests and more complex business transactions, and not all transactions are created equal: throughput is affected by the "weight" of a transaction. Lightweight Transaction A can be as simple as serving up a static image, while heavyweight Transaction B can be as complicated as executing a business transaction that runs algorithms on the results of a database query. Transaction A will respond much more quickly and use fewer resources within the deployment, since it is just a web server response. Conversely, Transaction B will have a much longer response time and use more resources, including the database. When the load tool executes a script, it waits for the response to one transaction before issuing the next, so transaction response times directly affect throughput: the faster the responses, the higher the throughput. Because a test can so easily be manipulated this way, you must create conditions that truly mimic expected production activity.
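
As a sketch of how a scripted virtual user mixes transaction weights, here is a minimal example using the open-source Locust load tool; the endpoints and payload are hypothetical stand-ins for a lightweight and a heavyweight transaction:

    from locust import HttpUser, task, between

    class MixedUser(HttpUser):
        # Each virtual user runs one task at a time and waits for the response
        # before moving on, so heavyweight transactions pull overall TPS down.
        wait_time = between(1, 3)

        @task
        def lightweight(self):
            # Transaction A: static content served straight from the web server
            self.client.get("/images/logo.png")

        @task
        def heavyweight(self):
            # Transaction B: business logic plus a database query behind this call
            self.client.post("/checkout", json={"cart_id": 42})

A run dominated by the first task will report a much higher TPS than a run dominated by the second, even though nothing about the application has changed.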

To accurately mimic expected production activity, you must first ensure that every real user is represented by a virtual user. If you expect a concurrent load of 2500 users actively using your web application, then you need a test which ramps to 2500 virtual users. This is extremely important because every virtual user has its own footprint on the backend servers: sessions, memory usage, open sockets, etc. Trying to get away with a higher-throughput test without an accurate number of virtual users will produce inaccurate resource usage on the backend. Only a test which uses the true number of users will emulate the right load conditions.
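
As a rough illustration of why the user count itself matters (the per-user figures below are assumptions for illustration, not sizing guidance), the footprint of each virtual user multiplies across the whole population:

    # Back-of-the-envelope backend footprint for a test ramped to the true user count.
    # All per-user figures are hypothetical assumptions.
    users = 2500
    session_kb = 50          # session state held in memory per user
    sockets_per_user = 2     # open connections per active user

    print(f"~{users * session_kb / 1024:.0f} MB of session memory")   # ~122 MB
    print(f"~{users * sockets_per_user} open sockets")                # ~5000 sockets

A 100-user test cranked up to the same TPS would leave roughly 96% of that session and socket load untested.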

There are typically different "types" of users per web application: shoppers, buyers, admins, etc. When setting up the test, create a population - a transaction mixture - that represents the workload during peak usage in production: for example, 50% shoppers, 40% buyers, 10% admins. The most accurate transaction mixes are determined from log reviews or from business analysis of expected production usage. The scripts executed by the tool's load generators need to represent "true" user profiles and follow realistic transaction flows: navigations, decisions, inputs, calculations, etc. These transactions need to use dynamic data to be realistic. Dynamic transaction flows include choosing different products, requesting links, submitting forms, or, more complicated still, extracting response data for use in subsequent requests. It is the dynamic scripts that emulate the diverse activity of real users.
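
One way to express such a population, again sketched with Locust and with hypothetical endpoints and product IDs, is to give each user profile a weight matching the expected mix and to feed it dynamic data:

    import random
    from locust import HttpUser, task, between

    PRODUCT_IDS = ["A100", "B200", "C300"]   # hypothetical catalog entries

    class Shopper(HttpUser):
        weight = 5                   # ~50% of the virtual user population
        wait_time = between(5, 15)

        @task
        def browse(self):
            # Dynamic data: a different product on each iteration
            self.client.get(f"/products/{random.choice(PRODUCT_IDS)}")

    class Buyer(HttpUser):
        weight = 4                   # ~40%
        wait_time = between(5, 15)

        @task
        def purchase(self):
            self.client.post("/cart/checkout", json={"sku": random.choice(PRODUCT_IDS)})

    class Admin(HttpUser):
        weight = 1                   # ~10%
        wait_time = between(10, 30)

        @task
        def report(self):
            self.client.get("/admin/reports/daily")

This is a sketch of the mix only; a production-grade script would also extract response data (session tokens, cart IDs) for use in subsequent requests.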

To make those robotic virtual users act like human beings, pauses and delays need to be incorporated into the scripts. We all need to think, take in information, process it, make decisions, type out forms, and so on. These "breathers" contribute to accurate load characteristics. During these pauses the servers are still performing housekeeping - closing ports, collecting garbage, tracking timeouts, sweeping sessions, etc. - all of which takes resources. If you simply hit the deployment with simultaneous users and crank up the throughput, the load characteristics are not realistic.
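
In script terms, think time is usually just a randomized pause between transactions; a minimal sketch, assuming human pauses of 8 to 20 seconds:

    from locust import HttpUser, task, between

    class ThinkingUser(HttpUser):
        # Pause 8-20 seconds between transactions to mimic a person reading,
        # deciding, and typing. During these pauses the servers still spend
        # resources on session sweeps, timeouts, and garbage collection.
        wait_time = between(8, 20)

        @task
        def view_account(self):
            self.client.get("/account")   # hypothetical endpoint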

With today's rich Internet applications, there is also a requirement to incorporate complex behaviors into the scripts: asynchronous updates of data pushed from servers to browsers, and vice versa, independent of full page refreshes. The tool needs to "listen" for these updates and recreate a script which emulates that activity. This push challenge is a hurdle in performance testing, but it is also critical to creating the right load characteristics. These rich behaviors affect throughput through polling, streaming, and other reactive mechanisms, all of which must be accounted for.
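
How push traffic is captured and replayed is highly tool-specific; as a rough sketch of the polling side only, a virtual user might repeatedly check a hypothetical update endpoint independently of any page load:

    from locust import HttpUser, task, constant

    class PollingUser(HttpUser):
        # Rich-client behavior: poll for server-pushed updates every 2 seconds.
        # True streaming (WebSockets, SSE) would need a client that can hold
        # the connection open, which this sketch does not attempt.
        wait_time = constant(2)

        @task
        def poll_updates(self):
            self.client.get("/notifications/poll")   # hypothetical endpoint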

Another factor to consider is that users connect to web and mobile applications over all kinds of network speeds. The connection speed affects the download rate, and the slower the download, the higher the response time. For your populations of users, choose the bandwidth or bandwidths which really represent the end users' connections. For example, you may be testing a LAN application with a known bandwidth restriction, a group of users may connect from another country, or a certain percentage may only access the web application via a mobile network.
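
A back-of-the-envelope calculation (the page size and link speeds are assumptions, not measurements) shows how much a slower connection stretches download time, and with it response time and throughput:

    # Rough effect of connection speed on download time; figures are illustrative only.
    page_kb = 800                      # total page weight in kilobytes

    for label, kbps in [("LAN (100 Mbps)", 100_000),
                        ("DSL (5 Mbps)", 5_000),
                        ("3G mobile (1 Mbps)", 1_000)]:
        seconds = page_kb * 8 / kbps   # kilobytes -> kilobits, divided by kbit/s
        print(f"{label}: ~{seconds:.2f} s to download")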

Taking all of these factors into consideration (user profiles, accurate numbers of users, transaction mixes, dynamic data, think times, bandwidth simulation, behaviors, etc.) sets the stage for creating realistic throughput. Review your scripts and make sure your tests really represent expected production. Once you have designed realistic tests, you can execute them and properly evaluate whether or not the web application can achieve the set throughput goals.

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies and Timberland - companies spanning the retail, financial services, insurance and manufacturing industries. Her expertise lies in creating realistic load tests and performance tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX certificate from Worcester Polytechnic Institute.
