Top Seven Website Performance Indicators to Monitor

Whatever the reason for a website crashing or slowing down, it’s bad for business and for your online reputation

Poorly performing websites, like Twitter's recent fiasco with Ellen's selfie, are a constant source of irritation for users. At first you think it's your computer, or maybe someone on your block is downloading the entire "Game of Thrones" series. But, when nothing changes after refreshing the page once or twice, you give up, mutter under your breath, and move on.

Whatever the reason for a website crashing or slowing down, it's bad for business and for your online reputation. According to a survey conducted by Consumer Affairs, a dissatisfied customer will tell between 9 and 15 people about their experience. And if your website takes longer than about 400 milliseconds to load, many of your customers will go looking for another site.

Understanding how your website performs under pressure is extremely important for any company. But it can be daunting to figure out which website performance indicators you should monitor.

We have compiled a list of the top seven website performance indicators we believe to be important. Make sure to track each of these to guarantee a great customer experience.

Top Seven Website Performance Indicators

1. Uptime
Monitoring the availability of your website is without a doubt the single most important part of website monitoring. Ideally, you should constantly check the uptime of your key pages from different locations around the world. Measure how many minutes your site is down over a period of two weeks or a month, and then express that as a percentage.
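As a minimal sketch, that downtime-to-percentage calculation looks like this (the 43-minute figure in the demo is illustrative):

```python
def uptime_percent(downtime_minutes: float, period_days: int) -> float:
    """Express measured downtime over a monitoring period as an uptime percentage."""
    total_minutes = period_days * 24 * 60
    return round(100.0 * (total_minutes - downtime_minutes) / total_minutes, 3)

# 43 minutes of downtime over a 30-day month lands in "three nines" territory:
print(uptime_percent(43, 30))   # 99.9
print(uptime_percent(0, 14))    # 100.0
```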

2. Initial Page Speed
Consumers' behavior and tolerance thresholds have changed. People who browse a website now expect it to load in the blink of an eye; if it doesn't load quickly, they will leave and turn to a competitor's site. You can check your website's speed using Ping requests (measuring network latency from your location to the server) and loading time measurements, for example, timing how long it takes to download the source code of a web page. Note that this measurement reflects the time it takes for the raw page to load, but that isn't the complete user experience. For that, you must measure...
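One stdlib-only way to take the second kind of measurement, timing the raw source download, is sketched below; the URL in the comment is a placeholder:

```python
import time
import urllib.request

def time_page_download(url: str, timeout: float = 10.0):
    """Return (seconds elapsed, bytes received) for fetching a page's raw source.

    This captures initial page speed only -- not images, scripts, or the
    rendered end-user experience.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    return time.perf_counter() - start, len(body)

# Example (hypothetical URL):
# elapsed, size = time_page_download("https://www.example.com/")
```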

3. Full Page Load Time including images, videos, etc.
This performance indicator is usually called End User Experience testing. It's the amount of time it takes for all the images, videos, dynamically loaded (AJAX) content, and everything else seen by the user to appear on their screen. This is different from the time it takes for the raw file to download to the device it's going to display on (as indicated above).

Both full page load time and page speed are important to measure because you can employ different strategies to optimize for both of them. Images, videos, and other static content can be cached on separate, dedicated systems or content delivery networks (CDNs), while dynamic content might need dedicated servers and fast databases. Knowing how your website behaves as it scales will help you put the right infrastructure in place.
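To illustrate why the two metrics differ, the static assets a raw page pulls in can be enumerated with the standard library's HTML parser; a full-page measurement would then have to time each of these downloads as well. (A real browser additionally executes scripts, so headless-browser tooling gives a truer end-user number.)

```python
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect the static assets (images, scripts, stylesheets) a page references."""

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])

# Feed it the raw page source (sample markup):
collector = AssetCollector()
collector.feed('<img src="/logo.png"><script src="/app.js"></script>'
               '<link rel="stylesheet" href="/site.css">')
print(collector.assets)   # ['/logo.png', '/app.js', '/site.css']
```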

4. Geographic Performance
If you are a globally active company or if you have consumers from different parts of the world, understanding your geographical performance - which is your website's speed and availability in different locations - is extremely important. Your ultimate goal is to make sure your website is easily accessible to all visitors regardless of their location to give them an excellent customer experience.

Many companies ignore this factor, only testing performance in familiar geographies. At a minimum, use your website analytics as a guide to put testing in place that shadows the locations from which your visitors are accessing your site.

5. Website Load Tolerance
Do you know how many visitors it takes to considerably slow down your website? It's an important indicator to understand because if you are running aggressive marketing campaigns or are picked up by the press, your website could be flooded with visitors in a matter of minutes.

Regularly run stress tests and compare the results to your visitor numbers at peak times. Once you understand how much load your website can handle, you can adjust your infrastructure to meet the demand. Look for those "tipping points" so you won't be caught by surprise when traffic spikes.
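A minimal sketch of such a test, assuming a URL you are permitted to load-test, fires concurrent requests and collects per-request latencies; comparing the latency distribution at increasing concurrency levels reveals the tipping point:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def stress_test(url: str, concurrency: int, total_requests: int):
    """Fire concurrent GET requests at a URL; return sorted per-request latencies."""

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(one_request, range(total_requests)))

# Example (hypothetical staging URL): rerun at 10, 50, 100... workers and
# watch the 95th-percentile latency climb:
# latencies = stress_test("https://staging.example.com/", 50, 500)
# p95 = latencies[int(len(latencies) * 0.95)]
```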

6. Web Server CPU Load
CPU usage is a common culprit in website failures. Too much processing bogs down absolutely everything on the server without much indication as to where the problem lies. You can prevent web server failures by monitoring CPU usage regularly. If you cannot install monitoring software on your web servers due to hosting arrangements or other constraints, consider running a script that publishes available disk space and CPU load to a very simple HTML page.
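That script can be as small as the following sketch; note that `os.getloadavg()` is Unix-only, and the output path is an assumption to adapt:

```python
import os
import shutil

def write_health_page(path: str = "health.html") -> str:
    """Publish CPU load averages and free disk space as a minimal HTML page.

    Run it from cron and point an external HTTP monitor at the page.
    Note: os.getloadavg() is only available on Unix-like systems.
    """
    load1, load5, load15 = os.getloadavg()
    free_gib = shutil.disk_usage("/").free // (1024 ** 3)
    html = (
        "<html><body>"
        f"<p>load averages: {load1:.2f} {load5:.2f} {load15:.2f}</p>"
        f"<p>disk free: {free_gib} GiB</p>"
        "</body></html>"
    )
    with open(path, "w") as f:
        f.write(html)
    return html
```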

7. Website Database Performance
Your database can be one of the most problematic parts of your website. A poorly optimized query, for example, can be the difference between a zippy site and an unusable one. It's important to monitor your database logs closely: create alerts that fire when the logs contain certain error messages or when queries deliver results outside expected norms. Use the database's built-in capabilities to see which queries are taking the most time, and identify ways to optimize them through indices and other techniques. Most importantly, monitor the overall performance of the database to make sure it's not a bottleneck.
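As a minimal sketch of the alerting idea (using in-memory SQLite as a stand-in for your production database; the threshold is an assumed value to tune to your own norms):

```python
import sqlite3
import time

SLOW_QUERY_THRESHOLD = 0.5  # seconds; an assumed value -- tune to your norms

def timed_query(conn, sql, params=()):
    """Run a query; print an alert if it exceeds the slow-query threshold."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD:
        print(f"ALERT: slow query ({elapsed:.3f}s): {sql}")
    return rows, elapsed

# In-memory demo:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
rows, elapsed = timed_query(conn, "SELECT name FROM users WHERE id = ?", (1,))
print(rows)   # [('alice',)]
```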

No Downtime = Happy Customers
If you can monitor all seven of these metrics, you should have a good idea of how your website performs and what needs to change when it doesn't perform well. Minimizing website downtime will keep your customers happy. If you have any questions on these metrics or load testing, let me know.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
