The Importance of Accurately Modeling User Interactions in Performance Testing

Take a closer look at the factors that go into creating a realistic load that will yield more accurate results

Load testing, perhaps more than any other form of testing, is one of those activities that you either choose to do well or risk a result that leaves you worse off than not doing it at all. Half-hearted attempts at load testing yield "results," but too often those results are inaccurate, leading to a false sense of security for anyone who trusts them. This, in turn, leads to the release of applications that are not adequately tested and that experience performance problems soon after entering production.

I was reminded of this not long ago, when I worked with a customer who related an experience that may sound familiar to many of you. This customer was a test engineer for a bank that had recently merged with another bank, effectively doubling their customer base. He was part of a team responsible for load testing a new web application that would serve customers from both of the original banks. Before the application was rolled out, they performed load tests and confirmed that the application could handle the expected number of users with acceptable response times. When the system went live, however, it was slow as molasses - even under user loads less than what the team had tested.

The problem, as you may have guessed, was that the team had not accurately modeled the load. The virtual users used in the testing were a homogeneous group that interacted with the system in roughly the same way, from roughly the same geographic locations, at the same network speed. In reality, the customers who came from Bank A tended to perform certain transactions much more frequently than those who came from Bank B. Most of Bank B's customers lived in a different part of the country than those from Bank A. More important, customers from both banks were accessing the application at widely differing connection speeds across a range of browsers. None of these factors was modeled accurately in the load tests the team had performed. In some cases it was because the team simply had not considered them; in others, it was because the load testing tool they were using provided no way to handle these differences. In either case, the result was the same: the team had given the "go live" signal to an application that was not ready, basing their decision on inaccurate load test results.

Too often, organizations take a shortcut to load testing. They focus on a single number: how many concurrent users their application will support. As a result, they put little effort into script development and end up with an unrealistic test - one of little value. I encourage all load testers to think beyond the concurrent users metric and take a closer look at the other factors that go into creating a realistic load that will yield more accurate results, including:

  • Modeling user activity
  • Modeling different connection speeds
  • Modeling different browsers and mobile devices
  • Modeling geographically distributed users

Parameterizing Scripts to Better Model User Activity
Scripts that simply record a typical user's interaction with a web application and then play it back are not going to yield accurate performance data. As an example, a script that emulates a user logging into a site, searching for a product, placing it in the cart, and checking out does little to test the performance of other user activities such as checking product reviews, accessing detailed specifications, or comparing products.

More important, if the script always logs in as the same user and orders the same product, caching effects will often skew the performance measurements, making response times shorter than they would be under a real-world load. Caching on the web server, application server, and database server all come into play, compounding any caching that is done on the client side.

To minimize caching and similar effects, scripts must be parameterized. In my example above, the script would play back different users searching for different products and purchasing them via different methods. Ideally, the script would use randomization or data customization to fill in every user-editable or selectable element on each form of the web application. This script parameterization, combined with creating multiple scripts to address a variety of user interactions, produces a much more realistic user load, and it's a good idea to have a load testing tool that simplifies these tasks.

Generating a Load with a Mixture of Connection Speeds and Network Characteristics
Many testing teams use the fastest available network connections when load testing a server. The belief is that if the application performs well under those connections, it is guaranteed to perform well in production, where many real-world users will have slower connections. This is a faulty assumption that leads to performance problems when the application is subjected to real-world users accessing it at a variety of network bandwidths.

Testing with only high-speed connections can mask performance problems that surface only when lower-speed connections are used. Slower data speeds require connections to the server to stay open longer, and eventually the server may reach its limit on the maximum number of open connections.

Of course, testing with only low-speed connections is equally problematic. What's needed is a reasonable mixture of virtual users accessing the server at connection speeds representative of everything from 56K modems for dial-up users to T3 lines.

With more and more users accessing the web via mobile devices, it makes sense to include 3G and 4G connection rates in the mix as well. It's also important to take into account disparities in signal strength that can cause packet loss and increased network latency. Built-in support for incorporating these factors in performance testing is increasingly important, particularly for web applications that serve a high percentage of mobile users.

Emulating Different Browsers and Native Mobile Apps
Interestingly enough (and often surprising to some), not all browsers support the same number of concurrent HTTP connections. This needs to be accounted for as well - if a load test models the entire user population accessing a web application with a single browser that supports four connections per server, it neglects the effects of browsers that use twice that number.

This leads to a situation similar to the one that arises with inaccurate modeling of connection speeds - with more concurrent connections, it is not unusual to see slowdowns as a server reaches its limit for simultaneous connections. To minimize these effects, load tests should apply a variety of browser profiles during playback, so that the traffic appears to originate from a realistic mixture of different browsers, including mobile browsers.

Mobile devices, in fact, present a new set of challenges for load testers (see Best Practices for Load Testing Mobile Applications, Part 1 and Best Practices for Load Testing Mobile Applications, Part 2), aside from the network connection issues I've already covered. Many companies now have a separate mobile version of their site, with content tailored specifically for mobile users. Again, to perform a valid load test on such sites, a test engineer must be able to override the browser identification during playback so that the virtual user appears to be using a mobile browser.

What about native mobile applications? There is no browser involved, so you'll need a testing solution that can record, parameterize, and play back the network traffic originating from the mobile device. In some cases this can be done via a proxy, but for some apps that is not an option; these apps may call for a tunneling approach in which the testing tool acts as a DNS server. Even if you're not facing this situation today, you may want to confirm that your testing tool supports this capability so you're prepared when you do need it.

Generating a Geographically Distributed Load
Unless your end-user community is accessing your application from a single location, initiating tests solely from inside your datacenter is unlikely to represent a realistic load. Such tests fail to take into account the effects of third-party servers and content delivery networks that may sit between your users and your web application.

Using the cloud to generate load as part of your testing can better model a geographically distributed user base - one that may include users from around the world - enabling test engineers to run realistic, large-scale tests across multiple regions. Cloud testing complements internal, lab-based tests, and ideally test scripts from one environment are reused in the other. With separate performance metrics for each geographic region in hand, engineers can see where performance issues are likely to arise on a region-by-region basis.

If users are accessing your web site from all over the world, load testing from the cloud helps you model that reality. When this capability is combined with tests that incorporate parameterized scripts, browser differences, support for mobile apps, and a variety of connection speeds and network effects, you can trust the accuracy of your test results.

More Stories By Steve Weisfeldt

Steve Weisfeldt is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, he worked as the President of Engine 1 Consulting, a services firm specializing in all facets of test automation. Prior to his involvement at Engine 1 Consulting, he was a Senior Systems Engineer at Aternity. Prior to that, Steve spent seven years at automated testing vendor Segue Software (acquired by Borland). While spending most of his time at Segue delivering professional services and training, he was also involved in pre-sales and product marketing efforts.

Having worked in the load and performance testing space since 1999, Steve has been involved in load and performance testing projects of all sizes, in industries spanning the retail, financial services, insurance, and manufacturing sectors. His expertise lies in enabling organizations to optimize their ability to develop, test and launch high-quality applications efficiently, on-time and on-budget. Steve graduated from the University of Massachusetts-Lowell with a BS in Electrical Engineering and an MS in Computer Engineering.
