Super Bowl Sunday 2013 – Winners, Losers, and Casualties

Since the late 1990s, Super Bowl advertisers have tried to link their TV ads to their online properties

No matter which team you were cheering for (or if you even watched the game at all), Super Bowl Sunday 2013 was more than a football game. Since the late 1990s, Super Bowl advertisers have tried to link their TV ads to their online properties, sometimes with mixed results. Even 15 years later, companies can't always predict how well their sites will perform on the big day. But unlike the early days of TV/online campaigns, the problems are now more complex than a site simply going down under heavy traffic.

This year, some of the world's premier brands spent millions of dollars on 30-second and one-minute ad blocks during the Super Bowl (plus millions more to produce the ads), all tied directly to online or social media campaigns. However, not all of their sites withstood the onslaught of traffic.

The measurement results from the Compuware network in the periods leading up to, and during, the Super Bowl showed some clear winners and losers in page load time. Events like the Super Bowl require high-frequency measurements, so we set our locations to collect data every five minutes to catch every variation in performance, no matter how fleeting.
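To make the five-minute cadence concrete, here is a minimal sketch of high-frequency synthetic measurement. It is illustrative only: `fetch` stands in for whatever download routine you use, and a real monitoring network (like the one described above) would measure from many geographic locations, not one process.

```python
import time

def measure_load_time(fetch) -> float:
    """Time one fetch of the page; `fetch` is any zero-argument callable
    that downloads the full page (e.g. wrapping urllib or requests)."""
    start = time.perf_counter()
    fetch()
    return time.perf_counter() - start

def monitor(fetch, interval_s: float = 300.0, samples: int = 3):
    """Collect `samples` timings, sleeping `interval_s` between them.
    The 300-second default matches the five-minute frequency above."""
    timings = []
    for i in range(samples):
        timings.append(measure_load_time(fetch))
        if i < samples - 1:
            time.sleep(interval_s)
    return timings
```

Frequent sampling matters because a performance spike that lasts only a few minutes, exactly the kind an ad airing triggers, disappears entirely in hourly averages.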

For the period from 5 p.m. EST until 11 p.m. EST on Sunday, February 3, the top and bottom three sites were:

Top Three Performers

  1. Go Daddy
  2. Paramount
  3. Lincoln Motor Cars

Bottom Three Performers

  1. Doritos
  2. Coca-Cola
  3. Universal Pictures

Top and Bottom Web Performers - Super Bowl 2013

All of these sites chose different approaches to delivering their message online. What we found through our analysis is that the issues they encountered almost perfectly aligned with those that Compuware finds during every major online event.

You're Not Alone
The Super Bowl is often described as the perfect storm for web performance: a six-hour window, with the spotlight on your company for 30-60 seconds (or more if you bought multiple slots). And the halo effect means traffic to your site increases astronomically for the entire six hours as people anticipate your big unveiling.

But your company isn't the only one in that position. And many (if not all) of the infrastructure components you rely on - datacenters, CDNs, ad providers, web analytics, and video streaming platforms - are also being used by other companies advertising during the Super Bowl.

Even if you have tested your entire site to what you think is your peak traffic volume (and beyond), remember that these shared services are all running at their maximum volume during the Super Bowl. All of the testing you did on your site can be undone by a third party that can't handle a peak load coming from two, three, or more customers simultaneously.

Lesson: Verify that your third-party services can effectively handle the maximum load from all of their customers all at once without degrading the performance of any of them.
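This lesson can be sketched in code. The snippet below is a hypothetical load-test harness, not a real tool: `call_service` is a stand-in for a request to the shared service, and the key idea is that the target load is the sum of every customer's individual peak, since the Super Bowl puts all of them at maximum volume at once.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def combined_peak(per_customer_peaks: list) -> int:
    """The load a shared third party must survive is the sum of each
    customer's peak, because on game day they all hit it together."""
    return sum(per_customer_peaks)

def run_load(call_service, concurrency: int, requests: int) -> float:
    """Fire `requests` calls with `concurrency` parallel workers and
    return the worst observed latency in seconds."""
    def timed_call(_):
        start = time.perf_counter()
        call_service()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_call, range(requests)))
    return max(latencies)
```

Comparing the worst-case latency at single-customer load against combined-peak load shows whether the shared service degrades exactly when everyone needs it most.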

Lose a Few Pounds
The performance solution isn't just on the third parties. It also relies on companies taking steps to focus on the most important aspect of Super Bowl Sunday online campaigns - getting people to your site. Sometimes this means that you have to make some compromises, perhaps streamline the delivery a little more than you otherwise would.

While the total amount of content is a key indicator of potential trouble - yes, big pages do tend to load more slowly than small pages - Compuware data showed that two of the three slowest sites drew content from more than 20 hosts and had over 100 objects on the page (with the slowest having over 200!). This complexity increases the likelihood that something will go wrong, and when something does, it can lead to a serious degradation in performance.
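The host and object counts above are easy to compute yourself. The sketch below is a minimal, illustrative example: it takes a flat list of resource URLs (for instance, extracted from a HAR capture of your page) and flags the complexity thresholds this analysis observed; the `risky` cutoffs are taken from the numbers in this article, not an industry standard.

```python
from urllib.parse import urlparse

def page_complexity(resource_urls: list) -> dict:
    """Count distinct hosts and total objects for one page load.
    Thresholds mirror this article: the slowest Super Bowl sites
    pulled from 20+ hosts and served 100+ objects."""
    hosts = {urlparse(u).netloc for u in resource_urls}
    return {
        "objects": len(resource_urls),
        "hosts": len(hosts),
        "risky": len(hosts) > 20 or len(resource_urls) > 100,
    }
```

Every additional host is another DNS lookup, another connection, and another party whose bad day becomes yours.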

Lesson: While having a cool, interactive site for customers to come to is a big win for a massive marketing event like the Super Bowl, keeping a laser focus on delivering a successful experience sometimes means leaving stuff out.

Have a Plan B (and Plan C, and Plan D...)
I know Murphy well. I have seen his work on many a customer site, whether they hired him or not. And when the inevitable red square (or flashing light or screaming siren) appears to announce a web performance problem, his name will always appear.

If you plan for a problem, when it happens, it's not a problem. If your selected CDN becomes congested due to a massive traffic influx that was not expected, have the ability to dynamically balance load between CDN providers. If an ad service or messaging platform begins to choke your site, have the ability to easily disable the offending hosts. If your cloud provider begins to rain on your parade, transfer load to the secondary provider you set up "just in case." If your dynamic page creation begins to crash your application servers, switch to a static HTML version that can be more easily delivered by your infrastructure.
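The fallback logic described above can be sketched as code. This is a hypothetical illustration, not a real failover API: `providers` is an ordered list of endpoints (primary CDN first), `fetch` is whatever routine actually serves from an endpoint, and `DISABLED_HOSTS` is an invented kill switch for a misbehaving ad or analytics host.

```python
def serve(providers, fetch):
    """Try providers in priority order; fall back on any failure.
    Returns (endpoint, result) from the first provider that works."""
    last_error = None
    for endpoint in providers:
        try:
            return endpoint, fetch(endpoint)
        except Exception as err:
            last_error = err  # congested or down: fall through to Plan B
    raise RuntimeError("all providers failed") from last_error

# Kill switch for an offending third-party host (illustrative name):
DISABLED_HOSTS = set()

def allow_third_party(host: str) -> bool:
    """Page templates check this before embedding a third-party tag,
    so an ad or analytics host can be disabled without a redeploy."""
    return host not in DISABLED_HOSTS
```

The important property is that each fallback is a configuration change you can make in seconds during the game, not an engineering project you start after it.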

If you have fallen back to Plan J, have an amusing error message that lets your customers share a laugh at your expense. Heck, create a Twitter hashtag that says "#[your company]GoesBoom" and realize that any publicity is better than not being talked about at all.

Lesson: Murphy always puts his eggs in one basket. Learn from his mistake and plan for problems. Then test your plans. Then plan again. And test again. Wash, rinse, repeat until you have caught 95% of the possible scenarios. Then, have a plan to handle the remaining 5%.

Now What?
What have we learned from Super Bowl 2013? We have learned that during a period of peak traffic and high online interest, the performance issues that sites encounter are very consistent and predictable, with only the affected sites changing. But by taking some preventative steps, and having an emergency response plan, most of the performance issues can be predicted, planned for, and responded to when (not if) they appear.

When your company goes into its next big event, be it the Super Bowl or a one-day online sale, planning for the three items listed here will make you better prepared to bask in the success of the moment. Over the next few days, we will dig deeper into the performance of some of the top brand rivalries in the Compuware version of the AdBowl.

More Stories By Stephen Pierzchala

With more than a decade in the web performance industry, Stephen Pierzchala has advised organizations from Fortune 500 companies to startups on improving the performance of their web applications, helping them develop and evolve the speed, conversion, and customer experience metrics needed to effectively measure and manage web and mobile applications that perform well and increase revenue. Working on projects for top companies in the online retail, financial services, content delivery, ad delivery, and enterprise software industries, he has developed new approaches to web performance data analysis. Stephen has led web performance methodology, CDN assessment, SaaS load testing, technical troubleshooting, and performance assessments, demonstrating the value of web performance. He is noted for his technical analyses and his knowledge of web performance from the outside in.
