100 Years in the Movies: One Evening’s Web Performance

Why one company performed better during this year’s Super Bowl

Both Paramount and Universal celebrated their 100th anniversaries last year, which is a long time to be in the movie business. Arguably, both have made some good, some great, and some bad movies. But during this year's Super Bowl, Paramount showed Universal how to design a ‘fast and furious’ web site that stood up to the flood of visitors during and after the game.

This article will discuss not only how Paramount was able to do it, but will also compare Universal's and Paramount's Super Bowl web site results, which shine a light on the key factors for successful web performance: fewer connections to fewer hosts requesting fewer, smaller objects produces a smaller page, which has a positive impact on page response time.

To begin, Universal and Paramount are near equals when it comes to their age on the Web. Jumping over to http://web.archive.org, I found that Paramount launched its first site 16 years ago in 1997 and Universal's first site came online 15 years ago in 1998. With roughly the same amount of experience on the Web, it's interesting to explore why one company performed better during this year's Super Bowl.

After nearly six years of analyzing web page performance, I've found that the reasons some sites succeed and others don't fall into three general categories - corporate culture, resources, and experience and knowledge. But even today, with so much information available on web site performance fundamentals, I often see companies forgetting the basics.

Why Universal Was Not ‘Fast and Furious’
Looking at Paramount and Universal's site performance for the period from 5 p.m. EST until 11 p.m. EST on Sunday, February 3 (Super Bowl Sunday), I noticed some big performance differences between the two sites. For starters, Paramount's homepage average response time was 966 milliseconds while Universal's was 11.727 seconds.
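
The response times above come from full page-load measurements taken over the evening by a monitoring service. For a rough, do-it-yourself feel for the same idea, the minimal Python sketch below samples only the base HTML document at intervals and averages the timings - the URLs, sample count, and interval are illustrative assumptions, not the setup behind the numbers in this article.

```python
# Minimal sketch: sample a homepage's response time over a window and
# average the results. This times only the base HTML document, not the
# full page load measured in the article, and the URLs are placeholders.
import time
import requests

PAGES = {
    "paramount": "https://www.example.com/paramount-promo",  # hypothetical URL
    "universal": "https://www.example.com/universal-promo",  # hypothetical URL
}

def sample_response_time(url):
    """Return wall-clock seconds to fetch the base HTML document."""
    start = time.perf_counter()
    requests.get(url, timeout=30)
    return time.perf_counter() - start

def average_over_window(url, samples=6, interval_s=60):
    """Take `samples` measurements `interval_s` apart and average them."""
    timings = []
    for _ in range(samples):
        timings.append(sample_response_time(url))
        time.sleep(interval_s)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    for name, url in PAGES.items():
        avg = average_over_window(url, samples=3, interval_s=10)
        print(f"{name}: average response time {avg:.3f} s")
```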

A comparison of the two sites clearly shows that the differences come down to web performance basics and the fundamental construction of a web page: connections, object count and type, page size, and hosts. This can be seen in the following table:

Metric                   Paramount   Universal
Average response time    966 ms      11.727 s
Connections              9           41
Objects                  41†         121*
Page size (compressed)   1625 KB     4995 KB
Hosts/domains            3           27

†3 objects greater than 200KB

*11 objects greater than 200KB

The Fewer Connections the Better
Paramount designed its Super Bowl site using only nine connections, while Universal used 41.

The number of connections was a significant factor in Universal's poor response time: each connection adds setup overhead, and in this case more connections also meant more objects and more bytes transferred. The tradeoffs can be significant, as Universal's response time shows.

After a DNS lookup resolves the IP address, the number of connections generally sets the pace for page loading. Even though modern browsers can open six to eight simultaneous connections to the same host, that doesn't mean you have to use them all. Opening TCP connections takes time and resources, and the milliseconds of overhead that each one requires can quickly add up to seconds, especially on a big game night when massive web traffic is expected.
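
To see how that overhead accumulates, here is a minimal sketch that times a DNS lookup and a single TCP handshake from Python, then extrapolates the handshake cost across the two connection counts in this comparison. The host name is an illustrative placeholder, TLS setup would add further cost on top, and browsers do parallelize connections, so treat the serial total as a worst-case illustration rather than a prediction.

```python
# Minimal sketch: time a DNS lookup and one TCP handshake, then
# extrapolate the handshake cost across different connection counts.
# The host is a placeholder; TLS handshakes would add further overhead.
import socket
import time

def dns_and_connect_ms(host, port=80):
    t0 = time.perf_counter()
    info = socket.getaddrinfo(host, port)[0]                  # DNS lookup
    t1 = time.perf_counter()
    with socket.create_connection(info[4][:2], timeout=10):   # TCP handshake
        pass
    t2 = time.perf_counter()
    return (t1 - t0) * 1000.0, (t2 - t1) * 1000.0

if __name__ == "__main__":
    dns_ms, connect_ms = dns_and_connect_ms("www.example.com")
    print(f"DNS lookup: {dns_ms:.0f} ms, one TCP connect: {connect_ms:.0f} ms")
    for connections in (9, 41):   # Paramount vs. Universal connection counts
        # Browsers open connections in parallel, so this serial total is a
        # worst case - but every handshake still costs time and resources.
        print(f"{connections} connections: up to {connections * connect_ms:.0f} ms of handshake overhead")
```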

Identify Flying Objects
The amount of time spent making HTTP requests for all the objects can have a marked impact on page response time. Paramount's homepage contained only 41 objects, while Universal's homepage contained 121.

Once a connection is established, the objects will, presumably, just fly into your visitors' browsers, right? Not always.

But what if those objects are large files and the servers are straining under increased load (as during a special event like the Super Bowl)? In Universal's case, there were 11 files tipping the scales at well over 200KB each (two files were over 750KB each). Paramount, on the other hand, had only three files exceeding 200KB.
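
A quick way to audit your own page for heavy objects is to export a HAR capture from the browser's developer tools and scan it. The sketch below counts the objects and flags anything over 200KB, mirroring the comparison above; the file name is hypothetical, and the fields follow the standard HAR 1.2 format.

```python
# Minimal sketch: count a page's objects and flag the heavy ones using a
# HAR capture exported from the browser's developer tools. The file name
# is hypothetical; fields follow the standard HAR 1.2 format.
import json

THRESHOLD_BYTES = 200 * 1024   # the 200KB line used in the comparison above

def analyze_har(path):
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    large = [e for e in entries
             if e["response"].get("bodySize", 0) > THRESHOLD_BYTES]
    print(f"{path}: {len(entries)} objects, {len(large)} over 200KB")
    for e in large:
        print(f"  {e['response']['bodySize'] / 1024:7.0f} KB  {e['request']['url']}")

if __name__ == "__main__":
    analyze_har("homepage.har")   # hypothetical capture
```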

Size Matters
You've probably heard this once or twice - page size matters. Page size is calculated by totaling the size of all the files that make up a web page (typically compressed and measured in KB).

Here we see Universal's homepage size coming in at a whopping 4995KB, while Paramount's homepage comes in at only 1625KB.

Typically, file size isn't much of an issue during normal surfing, but Super Bowl Sunday is not a normal traffic day. Everyone can agree that five pounds is heavier than two pounds, and that it takes more effort to lift five pounds. The same concept is true for web sites - some are heavy in KB and others are comparatively lighter.

In this case, Universal's page was not as ‘fast and furious’ as Paramount's because it was 3370KB heavier. The Newtonian Law of the Internet states that it's going to take longer to download heavy pages than lighter ones, so long as the access lines are equal.
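
To put rough numbers on that, the sketch below estimates how long it takes just to move each homepage's bytes at a few assumed access-line speeds. The page weights are the ones quoted above; the line speeds are illustrative assumptions, and real load times also include connection setup, server time, and rendering.

```python
# Minimal sketch of the point above: at the same line speed, a heavier
# page simply takes longer to move. Page weights are the ones quoted in
# the article; the access-line speeds are illustrative assumptions.
PAGE_WEIGHT_KB = {"Paramount": 1625, "Universal": 4995}

def transfer_seconds(weight_kb, line_mbps):
    """Seconds to move the bytes alone, ignoring setup and server time."""
    bits = weight_kb * 1024 * 8
    return bits / (line_mbps * 1_000_000)

for mbps in (1.5, 5, 10):
    for site, kb in PAGE_WEIGHT_KB.items():
        print(f"{site:9} at {mbps:>4} Mbps: {transfer_seconds(kb, mbps):5.1f} s just to move the bytes")
```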

Host Counts Count
Using the HTTP Archive Trends site (http://httparchive.org/trends.php#numDomains&maxDomainReqs), you can find information on many web site design trends. Two such trends that I find interesting are the average number of domains accessed across all websites and the maximum number of requests (Max Reqs) on the most used domain.

Comparing Paramount and Universal to the averages for all websites, the difference between the two studio sites is very clear. Paramount designed a site that was well under the average for both the number of domains and Max Reqs on one domain, at three and 39, respectively, while Universal was well above the average at 27 and 75.

Further, looking at the number of domains/hosts alone, Universal used nine times as many hosts on its homepage as Paramount. Designing for a fast response time is challenging and involves compromises, but keeping the host count low seems to be an obvious tactic to follow.
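
If you already have a HAR capture of your page, counting hosts is straightforward. The sketch below tallies the unique hostnames and reports the busiest one - again, the file name is a hypothetical capture.

```python
# Minimal sketch: count the unique hosts a page pulls from and find the
# busiest one, reading the same kind of HAR capture as above. The file
# name is hypothetical.
import json
from collections import Counter
from urllib.parse import urlparse

def host_stats(path):
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    hosts = Counter(urlparse(e["request"]["url"]).hostname for e in entries)
    busiest, max_reqs = hosts.most_common(1)[0]
    print(f"{path}: {len(hosts)} hosts; busiest is {busiest} with {max_reqs} requests")

if __name__ == "__main__":
    host_stats("homepage.har")   # hypothetical capture
```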

Back to the Basics
Comparing the Universal and Paramount Super Bowl web site results highlights some of the key truths of web performance. Primarily, fewer connections to fewer hosts requesting fewer, smaller objects produces a smaller page, and that has a positive impact on page response time. Placing these areas on high alert before going live - connections, object count and type, page size, and hosts - may be one of the best ways to ensure a successful Super Bowl any day of the year.

About the Author

Gregory Speckhart is a Senior APM Solutions Consultant at Compuware.
