The quality of an application is determined by the robustness and scalability of the underlying system, so it is essential to simulate the production environment and test the application for preparedness. Applications built on Web Services need a different methodology for testing in real-world scenarios: the UI-less nature of Web Services presents a significant challenge in testing them, and the behavior of consumer stubs driving different payloads dictates how Web Services load-testing schemes are planned. This paper discusses the different aspects of load testing and the areas of contention that need special attention. This is helpful not only in building a better application but also in compiling a robust, high-quality enterprise architecture.
Web Services are the natural delivery mechanism to achieve SOA. While having the potential to free enterprises from the endless cycle of vendor-specific hardware/software upgrades by ensuring interoperability, they bring in integration complexities and the overhead of maintaining compatibility with the underlying EIS applications/systems. This brings in an absolutely different perspective to testing Web Services.
Web Services applications generally use a lot of data transformation, wrappers, translation, and abstraction to bring about the promised interoperability and portability. Their dependence on verbose, bandwidth-heavy protocols like SOAP means they offer few performance benefits compared to legacy applications (which tend to be very tightly coupled). Parameters like response time, throughput, and CPU utilization per transaction determine the viability of a real-world business application. Extensive testing of Web Services against these parameters brings to the fore the most common performance constraints associated with them. The test results indicate not only whether the associated benchmarks are attained, but also whether the service can scale to meet the demands imposed by concurrent access from multiple users, simulated or otherwise.
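As a concrete illustration of these parameters, the following minimal sketch derives average response time, throughput, and a high percentile from a list of per-transaction timings. The sample data and function name are illustrative only, not taken from the tests described in this paper:

```python
# Sketch: deriving common load-testing metrics from per-transaction
# response times. The sample timings below are illustrative only.

def summarize(response_times_s, test_duration_s):
    """Return (avg response time, throughput in tx/s, 95th percentile)."""
    n = len(response_times_s)
    avg = sum(response_times_s) / n
    throughput = n / test_duration_s          # completed transactions per second
    ordered = sorted(response_times_s)
    p95 = ordered[min(n - 1, int(0.95 * n))]  # simple nearest-rank percentile
    return avg, throughput, p95

# 5 illustrative transactions completed over a 2-second window
avg, tput, p95 = summarize([0.8, 1.1, 0.9, 1.3, 0.9], 2.0)
print(avg, tput, p95)  # → 1.0 2.5 1.3
```

Real load-testing tools report the same figures, usually alongside pass/fail counts and resource utilization on the server under test.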
Web Service endpoints generally also have very high visibility. They have to service multiple clients over the network simultaneously, maintaining robustness and availability at the same time. In such a situation, performance becomes even more crucial. Thus, the significance of proper performance testing for Web Services can't be overemphasized.
A Web Service, like any other application, can be subjected to a wide range of test conditions and testing strategies, including functional testing, regression testing, performance testing, stress testing, and load testing. This paper focuses only on the load testing of Web Services: the expected behavior of a Web Service is evaluated against various performance criteria while concurrent access by multiple clients is simulated. Apart from optimizing design and implementation, it is crucial that Web Services be tested for throughput, efficiency, and response under conditions that simulate the real world as closely as possible. This is where load testing plays a major role. A properly designed load-testing strategy can simulate real-world load and performance scenarios with minimal hassle and cost. User loads and network conditions of varying nature can be created and replicated with little effort, and testing can continue until the output charts show a performance range considered acceptable for an application of its nature. Load-testing results can hence be taken as a strong indicator of application performance in actual business environments.
To ensure optimal testing of Web Services, the test cases have been designed keeping the following parameters in mind:
Load Testing with Reference to Web Services
Load testing of Web Services is significantly different from testing other applications, since their performance depends not just on how robust the underlying architecture is, but also on network overheads, the underlying processing involved, and the performance of the Web server that hosts the service. The behavior of the SOAP engine invariably adds its own overhead on the service provider's side. The major areas of contention in evaluating Web Service performance that will be discussed here are:
Load Testing Metrics and Parameters
The results obtained by load testing Web Services can be expressed in terms of the following parameters.
A client application creates a SOAP message containing the XML payload, which can be either a SOAP-RPC-encoded request or a document-style message. The client sends this message along with the service endpoint URL to the SOAP client runtime, which in turn sends it over the network. Once the SOAP message is delivered to the SOAP runtime at the service, it passes through handlers (if any) that handle the processing of any additional tags for WS-Security, WS-Addressing, etc. Then the SOAP runtime converts the XML message into programming language-specific objects if required by the application. The Web Service processes the request message and formulates a response. The SOAP runtime on the service side takes care of creating a SOAP message and dispatching it back to the client.
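The client's side of the flow described above can be sketched as follows. This minimal example builds a document-style SOAP 1.1 envelope by hand; the operation name, namespace, and endpoint URL are hypothetical, and a real client would POST the envelope to the service endpoint with a `text/xml` Content-Type and a SOAPAction header:

```python
# Sketch of the client side of the message flow: build a document-style
# SOAP 1.1 envelope. The operation, namespace, and endpoint are made up.
from xml.sax.saxutils import escape

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(payload_xml):
    """Wrap an XML payload in a SOAP 1.1 Envelope/Body."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="%s">'
        '<soap:Body>%s</soap:Body>'
        '</soap:Envelope>' % (SOAP_NS, payload_xml)
    )

# Hypothetical document payload for an order-lookup operation
payload = '<getOrder xmlns="urn:example:orders"><id>%s</id></getOrder>' % escape("42")
envelope = build_envelope(payload)

# A real client would then dispatch it over HTTP, e.g. with urllib.request:
#   req = urllib.request.Request("http://localhost:8080/orders",
#                                data=envelope.encode(),
#                                headers={"Content-Type": "text/xml; charset=utf-8",
#                                         "SOAPAction": '""'})
#   resp = urllib.request.urlopen(req)  # SOAP response parsed from resp.read()
print("soap:Envelope" in envelope and "getOrder" in envelope)  # → True
```

A SOAP client runtime (JAX-RPC, Axis, and the like) performs exactly this wrapping and dispatch on the application's behalf, plus the handler processing and object marshalling described above.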
So, apart from the actual processing of the Web Service, there's some additional processing involved before and after the Web Service builds a response. Let's identify the bottlenecks involved in invoking a Web Service:
Our test environment setup is described below: (see Table 1)
In a document-style Web Service, different payloads are passed as SOAP message elements; these documents vary in size so that the response time of a Web Service invocation can be measured. Network congestion, or time spent in the communication pipeline, distorts the Web Service's actual response time. To measure the true response time of a Web Service, the service is hosted locally, eliminating any network-related bottlenecks.
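Test documents at the payload sizes used in these tests (10KB, 100KB, 500KB) can be generated programmatically. A minimal sketch, in which the element names are made up for illustration:

```python
# Sketch: generate an XML document padded to a target size, for use as
# a document-style SOAP payload. Element names are hypothetical.

def make_payload(target_bytes):
    """Build an XML document whose serialized size equals target_bytes."""
    header = '<doc xmlns="urn:example:loadtest"><data>'
    footer = '</data></doc>'
    filler = target_bytes - len(header) - len(footer)
    return header + ('x' * max(0, filler)) + footer

for kb in (10, 100, 500):
    p = make_payload(kb * 1024)
    print(kb, len(p))
```

In practice the filler would be realistic business data rather than repeated characters, since parser behavior can differ on highly repetitive content.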
The Web Services are hosted on the JBoss application server that resides on a machine with the following setup: A Dell server PE 1600SC with an Intel 2.8GHz Xeon CPU and 1GB of RAM. The various performance parameters like response time, throughput, number of transactions passed/failed, and load size are measured against different payloads for RPC and document styles of Web Services and the results are shown on graphs.
Load testing summary of a document/literal-style Web Service
Payload size: 10KB
Number of concurrent users: 25
Total test duration: 10 minutes (see Figure 2)
The graphs above depict the variation in the average response time of a document/literal-style Web Service at a constant payload size of 10KB. Note that the average response time remains low and the performance of the service is stable over the given period of time.
Payload size: 100KB
Number of concurrent users: 50
Total test duration: 15 minutes (see Figure 3)
The same document-style Web Service, when evaluated with a medium payload of 100KB, performed comparably, as the graphs depict.
Payload size: 500KB
Number of concurrent users: 50
Total test duration: 15 minutes (see Figure 4)
When tested with a high payload of 500KB and 50 concurrent users, the document-style Web Service remains stable. The average response time remains low, at around 1.2 seconds, and none of the transactions failed.
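Concurrent load of the kind applied in these tests can be generated by a simple multithreaded driver. The sketch below simulates 25 concurrent users, each recording pass/fail status and response time per call; the service call is a stub so the example is self-contained, and a real run would replace `call_service` with an actual SOAP invocation:

```python
# Sketch of a multithreaded load driver: N concurrent "users" each
# invoke the service and record pass/fail plus response time. The
# service call is stubbed out; names here are illustrative.
import threading
import time

def call_service(payload):
    """Stub standing in for a real SOAP invocation."""
    time.sleep(0.01)  # pretend network + processing latency
    return "<ok/>"

def user(results, iterations, payload):
    for _ in range(iterations):
        start = time.time()
        try:
            call_service(payload)
            results.append(("pass", time.time() - start))
        except Exception:
            results.append(("fail", time.time() - start))

results = []  # list.append is atomic under CPython's GIL
threads = [threading.Thread(target=user, args=(results, 4, "x" * 1024))
           for _ in range(25)]  # 25 concurrent users, 4 calls each
for t in threads:
    t.start()
for t in threads:
    t.join()

passed = sum(1 for status, _ in results if status == "pass")
avg = sum(rt for _, rt in results) / len(results)
print(passed, len(results))  # → 100 100
```

Dedicated tools add what this sketch omits: ramp-up schedules, think times, network-condition simulation, and server-side resource monitoring.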
Load Testing Tools
There are commercial tools like Mercury's LoadRunner and RadView's WebLOAD that are very efficient and detailed for load testing Web Services. There are also various open source alternatives that can serve the purpose of load testing to varying degrees. Some of the more popular tools include soapUI 1.6 beta, The Grinder 3 beta, and OpenSTA. soapUI provides basic functionality to create test cases, execute them, create sample SOAP clients, and so on. The Grinder uses test scripts written in Jython, a Java implementation of the Python language.
Software testing is a crucial phase of the SDLC, and load testing is an integral part of any efficient testing scheme. This paper highlighted the importance of load testing with specific reference to Web Services. The design principles outlined here attempt to lay out a proper plan for testing, the parameters to watch, and the expected results. The strategies contained in this paper can be implemented regardless of the platform on which the application is deployed and the tools used for testing.
© 2008 SYS-CON Media Inc.