How to Automate Performance Testing for GWT and SmartGWT

The next “evolutionary” step is to monitor performance for every end user

This article is based on the experience of Jan Swaelens, Software Architect at Sofico. He is responsible for automated performance testing of the company's new web platform based on GWT and SmartGWT. Sofico specializes in software solutions for automotive finance, leasing, fleet and mobility management companies.

Choosing GWT and SmartGWT over Other Technologies
About two years ago Sofico started a project to replace its rich desktop application (built with PowerBuilder) with a browser-based rich Internet application. The developers selected GWT and SmartGWT as core technologies to leverage their in-house Java expertise, because they believed in the potential of what these (fairly) new technologies had to offer. Their goal was to replace the existing desktop client with a new one that ran in a browser. Their eyes were set on a better user experience and a high degree of customization, to give their customers the flexibility and adaptability they need to run their businesses.

Need End-to-End Visibility into GWT Black Box
GWT was a great choice, as the developers were soon able to deliver a first basic version. The problems started when they tried to figure out what was actually going on inside these frameworks in order to analyze the performance problems reported by the first testers.

Developers started off by using the "usual suspects" - the browser-specific Dev Tools for Chrome, Firefox and IE. Back then, the built-in tools lacked first-class JavaScript performance analysis capabilities, which made it difficult to analyze a complex browser application. Additionally, there were no integration capabilities with server-side performance analysis tools such as JProfiler, which would have allowed them to analyze the impact of and correlation between server-side and client-side GWT code. Taking performance seriously, the performance automation team came up with some key requirements for additional tooling and process support.

Requirement #1: Browser to Database Visibility to "understand" what's going on
Do you know what really happens when a page of a GWT application is loaded? No? Neither did the developers at Sofico. Getting insight into this "black box" was therefore the first requirement, because they wanted to understand: what really happens in the browser, how many resources are downloaded from the web server, which transactions make it to the app server, which requests are cached and where, and how the business logic and data access layer implementations impact the end-user experience.
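To make that "black box" more concrete, below is a minimal, hypothetical sketch of a GWT-RPC round trip. Each call like this turns into an asynchronous request from the compiled JavaScript client to the app server - exactly the kind of browser-to-server transaction an end-to-end trace follows. The service, method and field names are illustrative, not Sofico's actual code; on the server side a servlet extending RemoteServiceServlet would implement the same interface.

// Hypothetical GWT-RPC service: each call to loadContracts() becomes an
// asynchronous XHR from the compiled JavaScript client to the app server.
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import java.util.List;

@RemoteServiceRelativePath("contracts")
interface ContractService extends RemoteService {
    List<String> loadContracts(String customerId);
}

// Async counterpart used on the client side (the pairing GWT requires).
interface ContractServiceAsync {
    void loadContracts(String customerId, AsyncCallback<List<String>> callback);
}

class ContractPanel {
    private final ContractServiceAsync service = GWT.create(ContractService.class);

    void refresh(String customerId) {
        // This is the request an end-to-end trace follows from the browser,
        // through the servlet container, down to business logic and SQL.
        service.loadContracts(customerId, new AsyncCallback<List<String>>() {
            public void onSuccess(List<String> contracts) { /* update the grid */ }
            public void onFailure(Throwable error)        { /* show an error message */ }
        });
    }
}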

The following screenshots show the current implementation using dynaTrace (sign up for the free trial), which gives the developers full visibility from the browser to the web, app and database servers. The Transaction Flow visualizes how individual requests or page loads are processed by the different application tiers and services.

End-to-End Visibility gave the developers more insight into how their GWT Application really works and what happens when pages are loaded or users interact with certain features.

A great view for front-end developers is the timeline view, which shows what happens in the browser when a page gets loaded, when a user clicks a button that executes AJAX requests, or when background JavaScript continuously updates the page. It gives insight into performance problems in JavaScript code and inefficient use of resources (JS, CSS, images...), and highlights whether certain requests simply take a very long time in the server-side implementation:

Developers love the timeline view because it makes it easy to see what work is done by the browser and where the performance hotspots are, and it even provides screenshots at certain events.
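To complement such a timeline view, client-side code can also record its own coarse timings. Here is a minimal, hypothetical sketch using GWT's Duration class and the ContractService from the earlier sketch; the logging is illustrative only - in a real setup the value would be reported to the monitoring or test-automation tooling rather than written to the GWT log.

// Hypothetical sketch: measuring how long an asynchronous GWT-RPC call takes
// from the moment it is issued until the response arrives, using GWT's Duration.
import com.google.gwt.core.client.Duration;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import java.util.List;

class TimedContractLoader {
    private final ContractServiceAsync service = GWT.create(ContractService.class);

    void loadAndMeasure(String customerId) {
        final Duration timer = new Duration();   // starts counting on construction
        service.loadContracts(customerId, new AsyncCallback<List<String>>() {
            public void onSuccess(List<String> contracts) {
                // elapsedMillis() covers the round trip plus client-side processing.
                GWT.log("loadContracts took " + timer.elapsedMillis() + " ms");
            }
            public void onFailure(Throwable error) {
                GWT.log("loadContracts failed after " + timer.elapsedMillis() + " ms");
            }
        });
    }
}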

To read more about additional requirements, please click here for the full article.

Requirement #2: JavaScript Performance Data to Optimize Framework Usage

Requirement #3: Correlated Server-Side Performance Data

Requirement #4: Automation, Automation, Automation

Next Step: Real User Monitoring
Giving developers the tools they need to build optimized and fast websites is great. Having a test framework that automatically verifies that performance metrics are always met is even better. Ultimately you also want to monitor the performance of your real end users. The next "evolutionary" step is therefore to monitor performance for every end user, from all geographical regions and on all the browsers they use. The following shows a dashboard that provides a high-level analytics view of actual users. If there are problems from specific regions, browser types, or specific web site features, you can drill down to the JavaScript error, long-running method, problematic SQL statement or thrown exception.

After test automation comes production: you want to make sure to also monitor your real users and catch problems not found in testing.
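As a rough illustration of the test-automation step mentioned above - verifying that performance metrics are always met - a build can be made to fail when a measured page-load time exceeds an agreed budget. The JUnit sketch below is hypothetical: how the metric is collected (browser timings from an automated run, an exported monitoring report, etc.) is deliberately left abstract and is not dynaTrace's actual API.

// Hypothetical JUnit 4 sketch: fail the build when a measured page-load time
// exceeds a performance budget. The metric source is a placeholder on purpose.
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class PageLoadBudgetTest {

    // Placeholder: a real setup would read the metric collected during an
    // automated browser run instead of returning a hard-coded value.
    private long measuredPageLoadMillis() {
        return 1800L;
    }

    @Test
    public void contractSearchPageStaysWithinBudget() {
        long budgetMillis = 2500L;               // agreed performance budget
        long measured = measuredPageLoadMillis();
        assertTrue("Page load took " + measured + " ms, budget is " + budgetMillis + " ms",
                measured <= budgetMillis);
    }
}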

Read more and test it yourself

If you want to analyze your web site - whether it is implemented in GWT or any other Java, .NET or PHP framework - sign up for the dynaTrace Free Trial (click on "try dynaTrace for free") and get 15 days of full-featured access to the product.

Also - here are some additional blogs you might be interested in

If you happen to be a Compuware APM/dynaTrace customer, also check out the Test Automation features of dynaTrace on our APM Community Portal: Test Automation Video

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor to the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.
