By Steve Weisfeldt
February 11, 2013 05:00 PM EST
Load testing, perhaps more than any other form of testing, is one of those activities that you either choose to do well or risk a result that leaves you worse off than not doing it at all. Half-hearted attempts at load testing yield "results," but too often those results are inaccurate, leading to a false sense of security for anyone who trusts them. This, in turn, leads to the release of applications that are not adequately tested and that experience performance problems soon after entering production.
I was reminded of this not long ago, when I worked with a customer who related an experience that may sound familiar to many of you. This customer was a test engineer for a bank that had recently merged with another bank, effectively doubling its customer base. He was part of a team responsible for load testing a new web application that would serve customers from both of the original banks. Before the application was rolled out, they performed load tests and confirmed that the application could handle the expected number of users with acceptable response times. When the system went live, however, it was slow as molasses - even under user loads lower than those the team had tested.
The problem, as you may have guessed, was that the team had not accurately modeled the load. The virtual users used in the testing were a homogenous group that interacted with the system in roughly the same way, from roughly the same geographic locations, at the same network speed. In reality, the customers who came from Bank A tended to perform certain transactions much more frequently than those who came from Bank B. Most of Bank B's customers lived in a different part of the country than those from Bank A. More important, customers from both banks were accessing the application at widely differing connection speeds across a range of browsers. None of these factors was modeled accurately in the load tests the team had performed. In some cases this was because the team simply had not considered them; in others, the load testing tool they were using provided no way to handle these differences. In either case, the result was the same: the team had given the "go live" signal to an application that was not ready, basing their decision on inaccurate load test results.
Too often, organizations take shortcuts in load testing. They are focused on a single number: how many concurrent users their application will support. As a result, they put little effort into script development, and they end up with an unrealistic test - one of little value. I encourage all load testers to think beyond the concurrent users metric and take a closer look at other factors that go into creating a realistic load that will yield more accurate results, including:
- Modeling user activity
- Modeling different connection speeds
- Modeling different browsers and mobile devices
- Modeling geographically distributed users
Parameterizing Scripts to Better Model User Activity
Scripts that simply record a typical user's interaction with a web application and then play it back are not going to yield accurate performance data. As an example, a script that emulates a user logging into a site, searching for a product, placing it in the cart, and checking out does little to test the performance of other user activities such as checking product reviews, accessing detailed specifications, or comparing products.
More important, if the script always logs in as the same user and orders the same product, caching effects will often skew the performance measurements, making response times shorter than they would be under a real-world load. Caches on the web server, application server, and database server all come into play, compounding any caching that is done on the client side.
To minimize caching and similar effects, scripts must be parameterized. In my example above, the script would play back different users searching for different products and purchasing them via different methods. Ideally, the script would use randomization or data customization to fill in every user-editable or selectable element on each form of the web application. This script parameterization, combined with creating multiple scripts to address a variety of user interactions, produces a much more realistic user load, so it pays to use a load testing tool that simplifies these tasks.
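As a rough sketch of what parameterization looks like in practice, here is a minimal example using the open-source Locust tool - my choice for illustration only, since the article does not name a tool. The credentials file, product SKUs, and URL paths are all hypothetical placeholders.

```python
import csv
import random
from locust import HttpUser, task, between

# Hypothetical test data: each virtual user draws different credentials
# and products, so server-side caches are not artificially warmed.
with open("users.csv") as f:
    CREDENTIALS = list(csv.DictReader(f))  # columns: username,password

PRODUCTS = ["P-1001", "P-1002", "P-2040", "P-3377"]  # placeholder SKUs

class ShopperUser(HttpUser):
    wait_time = between(1, 5)  # realistic think time between actions

    def on_start(self):
        # Each virtual user logs in as a different account.
        creds = random.choice(CREDENTIALS)
        self.client.post("/login", data=creds)

    @task(3)
    def search_and_buy(self):
        product = random.choice(PRODUCTS)
        self.client.get(f"/search?q={product}")
        self.client.post("/cart", data={"sku": product})
        self.client.post("/checkout",
                         data={"payment": random.choice(["card", "invoice"])})

    @task(1)
    def browse_reviews(self):
        # Exercise user activities beyond the main purchase path.
        product = random.choice(PRODUCTS)
        self.client.get(f"/products/{product}/reviews")
```

Note the task weights: three purchase flows for every review lookup, so the mix of activities, not just the user count, is part of the model.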
Generating a Load with a Mixture of Connection Speeds and Network Characteristics
Many testing teams use the fastest available network connections when load testing a server. The belief is that if the application performs well under those connections, it is guaranteed to work well in production, where many real-world users will have slower connections. This is a faulty assumption that leads to performance problems when the application is subjected to real-world users accessing it at a variety of network bandwidths.
Testing with only high-speed connections can mask performance problems that occur only when lower speed connections are used. Slower data speeds require connections to the server to stay open longer, and eventually the server may hit its limit on the number of open connections.
Of course, testing with only low-speed connections is equally problematic. What's needed is a reasonable mixture of virtual users accessing the server at connection speeds representative of everything from 56K modems for dial-up users to T3 lines.
With more and more users accessing the web via mobile devices, it makes sense to include 3G and 4G connection rates in the mix as well. It's also important to take into account disparities in signal strength that can cause packet loss and increased network latency. Built-in support for incorporating these factors in performance testing is increasingly important, particularly for web applications that serve a high percentage of mobile users.
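Tool support for this varies, but the underlying mechanism can be sketched in plain Python: assign each virtual user a bandwidth profile and pace how fast the response body is consumed, which keeps connections open longer just as slow real-world clients do. The profiles and weights below are illustrative assumptions, not measured data.

```python
import random
import time
import requests

# Illustrative bandwidth mix (bytes/sec, weight); real tests should use
# profiles drawn from your own user analytics.
PROFILES = [
    ("dial-up 56K", 7_000, 5),
    ("3G", 240_000, 25),
    ("4G", 1_500_000, 40),
    ("broadband", 6_000_000, 30),
]

def pick_profile():
    return random.choices(PROFILES, weights=[w for _, _, w in PROFILES])[0]

def throttled_get(url, bytes_per_sec, chunk_size=8192):
    """Download url, pacing reads so the effective rate approximates
    bytes_per_sec. Holding the connection open longer is exactly the
    server-side pressure that fast-only tests fail to exercise."""
    start = time.monotonic()
    received = 0
    with requests.get(url, stream=True, timeout=30) as resp:
        for chunk in resp.iter_content(chunk_size=chunk_size):
            received += len(chunk)
            # Sleep until elapsed time matches the target rate.
            expected = received / bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
    return received, time.monotonic() - start

name, rate, _ = pick_profile()
size, secs = throttled_get("https://example.com/", rate)  # placeholder URL
print(f"{name}: {size} bytes in {secs:.1f}s")
```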
Emulating Different Browsers and Native Mobile Apps
Interestingly enough (and often surprising to some), not all browsers support the same number of concurrent HTTP connections. This needs to be accounted for as well - if a load test models the entire user population accessing a web application with a single browser that supports four connections per server, it neglects the effects of browsers that use twice that number.
This leads to a situation similar to the one that arises with inaccurate modeling of connection speeds - with more concurrent connections, it is not unusual to see slowdowns as a server reaches its limit for simultaneous connections. To minimize these effects, load tests should apply a variety of browser profiles during playback, so that the tests identify the traffic as originating from a realistic mixture of different browsers, including mobile browsers.
Mobile devices, in fact, present a new set of challenges for load testers (see Best Practices for Load Testing Mobile Applications, Part 1 and Best Practices for Load Testing Mobile Applications, Part 2), aside from the network connection issues I've already covered. Many companies now have a separate mobile version of their site, with content tailored specifically for mobile users. Again, to perform a valid load test on such sites, a test engineer must be able to override the browser identification during playback so that the virtual user appears to be using a mobile browser.
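In most tools the browser identification comes down to the User-Agent request header, so a mixed-browser test can be sketched as a weighted rotation of header profiles. The strings below are abbreviated placeholders and the weights are illustrative; per-browser connection limits also vary by version.

```python
import random
import requests

# Illustrative browser mix; weights should come from your own traffic
# analytics. User-Agent strings are abbreviated placeholders.
BROWSER_PROFILES = [
    ("Chrome desktop", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome", 45),
    ("Firefox desktop", "Mozilla/5.0 (Windows NT 10.0) Gecko Firefox", 20),
    ("Safari iOS", "Mozilla/5.0 (iPhone; CPU iPhone OS) Mobile Safari", 25),
    ("Chrome Android", "Mozilla/5.0 (Linux; Android) Chrome Mobile", 10),
]

def request_as_random_browser(url):
    name, user_agent, _ = random.choices(
        BROWSER_PROFILES, weights=[w for _, _, w in BROWSER_PROFILES]
    )[0]
    # Overriding the User-Agent is how a virtual user "appears" to be a
    # mobile browser, so sites with mobile-specific content serve it.
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
    return name, resp.status_code

print(request_as_random_browser("https://example.com/"))  # placeholder URL
```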
What about native mobile applications? There is no browser involved, so you'll need a testing solution that can record, parameterize, and play back the network traffic originating from the mobile device. In some cases this can be done via a proxy, but for some apps that option is not available. These apps may call for a tunneling approach in which the testing tool acts as a DNS server. Even if you're not facing this situation today, you may want to verify that your testing tool supports this approach so you're prepared when you do need it.
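As one concrete illustration of the proxy approach (the DNS-tunneling approach is tool-specific), here is a sketch using the open-source mitmproxy - again my own example, not a tool named above. Point the mobile device's Wi-Fi proxy settings at the machine running this addon, and it records each request the app makes for later parameterization and playback.

```python
# record_app_traffic.py -- run with: mitmproxy -s record_app_traffic.py
# The device's proxy must point at this host (default port 8080), and the
# mitmproxy CA certificate must be installed on the device for HTTPS.
import json

from mitmproxy import http

class RecordAppTraffic:
    def __init__(self):
        self.log = open("app_traffic.jsonl", "a")

    def request(self, flow: http.HTTPFlow) -> None:
        # Capture enough of each request to replay and parameterize later.
        self.log.write(json.dumps({
            "method": flow.request.method,
            "url": flow.request.pretty_url,
            "headers": dict(flow.request.headers),
            "body": flow.request.get_text(strict=False),
        }) + "\n")
        self.log.flush()

addons = [RecordAppTraffic()]
```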
Generating a Geographically Distributed Load
Unless your end-user community is accessing your application from a single location, initiating tests solely from inside your datacenter is unlikely to represent a realistic load. Such tests fail to take into account the effects of third-party servers and content delivery networks that may sit between your users and your web application.
Using the cloud to generate load as part of your testing can better model a geographically distributed user base, one that may include users from around the world, enabling test engineers to run realistic, large-scale tests across multiple regions. Cloud testing complements internal, lab-based tests; ideally, test scripts from one environment can be reused in the other. With separate performance metrics for each geographic region in hand, engineers can see where performance issues are likely to arise on a region-by-region basis.
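One way to get those region-by-region metrics is to launch the same load-generator image in each target region and tag the results by geography. The sketch below assumes AWS and boto3 purely for illustration; the AMI IDs, instance type, and regions are placeholders to replace with your own.

```python
import boto3

# Hypothetical values: AMI IDs are region-specific, so each region maps
# to its own copy of the load-generator image.
LOAD_GENERATOR_AMIS = {
    "us-east-1": "ami-0123456789abcdef0",      # placeholder
    "eu-west-1": "ami-0123456789abcdef1",      # placeholder
    "ap-southeast-1": "ami-0123456789abcdef2", # placeholder
}
INSTANCE_TYPE = "c5.large"

def launch_load_generators(instances_per_region=2):
    launched = {}
    for region, ami in LOAD_GENERATOR_AMIS.items():
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.run_instances(
            ImageId=ami,
            InstanceType=INSTANCE_TYPE,
            MinCount=instances_per_region,
            MaxCount=instances_per_region,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "load-generator"},
                         {"Key": "region-label", "Value": region}],
            }],
        )
        launched[region] = [i["InstanceId"] for i in resp["Instances"]]
    return launched

# Each generator runs the same parameterized scripts; tagging by region
# lets you separate response-time metrics per geography afterwards.
print(launch_load_generators())
```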
If users are accessing your web site from all over the world, load testing from the cloud helps you model that reality. When this capability is combined with tests that incorporate parameterized scripts, browser differences, support for mobile apps, and a variety of connection speeds and network effects, you can trust the accuracy of your test results.