Super Bowl Sunday 2013 – Winners, Losers, and Casualties

Since the late 1990s, Super Bowl advertisers have tried to successfully link their TV ads to their online properties

No matter which team you were cheering for (or if you even watched the game at all), Super Bowl Sunday 2013 was more than a football game. Since the late 1990s, Super Bowl advertisers have tried to successfully link their TV ads to their online properties, sometimes with mixed results. Even 15 years later, companies can't always predict how well their sites will perform on the big day. But unlike in the early days of TV/online campaigns, the problems now go beyond a site simply going down under heavy traffic.

This year, some of the world's premier brands spent millions of dollars on 30-second and one-minute ad slots during the Super Bowl (plus millions more to produce the ads), all of them tied directly to online or social media campaigns. However, not all of the sites successfully withstood the onslaught of traffic.

The measurement results from the Compuware network in the periods leading up to, and during, the Super Bowl showed some clear winners and losers in page load time. Events like the Super Bowl require high-frequency measurements, so we set our locations to collect data every five minutes to catch every variation in performance, no matter how fleeting.
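
As a rough illustration of that cadence, the sketch below polls a page on a fixed five-minute interval and records how long each fetch takes. It is a minimal, hypothetical example: the URL is a placeholder, and a plain HTTP fetch stands in for the full browser-based page-load measurements that distributed agents actually perform.

    import time
    import urllib.request

    # Hypothetical target; real measurements run from distributed agent locations.
    TARGET_URL = "https://www.example.com/"
    INTERVAL_SECONDS = 5 * 60  # the five-minute cadence used during the game window

    def measure(url):
        """Return seconds needed to fetch the full response body once."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        while True:
            try:
                elapsed = measure(TARGET_URL)
                print(f"{time.strftime('%H:%M:%S')} {TARGET_URL} {elapsed:.2f}s")
            except Exception as exc:
                # Record the failure and keep polling; a dead check is worse than a slow one.
                print(f"{time.strftime('%H:%M:%S')} {TARGET_URL} ERROR {exc}")
            time.sleep(INTERVAL_SECONDS)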

For the period from 5 p.m. EST until 11 p.m. EST on Sunday, February 3, the top and bottom three sites were:

Top Three Performers

  1. Go Daddy
  2. Paramount
  3. Lincoln Motor Cars

Bottom Three Performers

  1. Doritos
  2. Coca-Cola
  3. Universal Pictures

Top and Bottom Web Performers - Super Bowl 2013
All of these sites chose different approaches to delivering their message online. What our analysis found is that the issues they encountered align almost perfectly with those Compuware sees during every major online event.

You're Not Alone
The Super Bowl is often referred to as the perfect storm for web performance - a six-hour window, with the spotlight on your company for 30-60 seconds (or more if you bought multiple slots). Meanwhile, the halo effect drives traffic to your site up astronomically for the entire six hours while people prepare for your big unveiling.

But your company isn't the only one doing this. And many (if not all) of the infrastructure components - data centers, CDNs, ad providers, web analytics services, and video streaming platforms - that you use are also being used by other companies advertising during the Super Bowl.

Even if you have tested your entire site to what you think is your peak traffic volume (and beyond), remember that these shared services are all running at their maximum volume during the Super Bowl. All of the testing you did on your site can be undone by a third party that can't handle a peak load coming from two, three, or more customers simultaneously.

Lesson: Verify that your third-party services can effectively handle the maximum load from all of their customers all at once without degrading the performance of any of them.
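
One way to sanity-check that before the event is to hit the shared service with the combined concurrency of several "customers" at once and watch the latency distribution. The sketch below is only a rough stand-in, with a hypothetical tag URL and arbitrary concurrency numbers; a real verification would use a proper load-testing setup scaled to the provider's true aggregate peak.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical shared third-party endpoint and load levels.
    THIRD_PARTY_URL = "https://tags.example-analytics.com/beacon.js"
    CONCURRENT_CLIENTS = 50    # stand-in for several customers peaking at once
    REQUESTS_PER_CLIENT = 20

    def one_client(_):
        """Simulate one customer's burst of requests; return per-request timings."""
        timings = []
        for _ in range(REQUESTS_PER_CLIENT):
            start = time.perf_counter()
            with urllib.request.urlopen(THIRD_PARTY_URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_CLIENTS) as pool:
            all_timings = [t for batch in pool.map(one_client, range(CONCURRENT_CLIENTS)) for t in batch]
        all_timings.sort()
        print(f"median {statistics.median(all_timings) * 1000:.0f} ms, "
              f"p95 {all_timings[int(len(all_timings) * 0.95)] * 1000:.0f} ms")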

Lose a Few Pounds
The performance burden doesn't fall on the third parties alone. Companies themselves need to focus on the most important aspect of a Super Bowl Sunday online campaign - getting people to your site. Sometimes this means making compromises, perhaps streamlining the delivery a little more than you otherwise would.

While the total amount of content is a key indicator of potential trouble - yes, big pages do tend to load more slowly than small pages - Compuware data showed that two of the three slowest sites drew content from more than 20 hosts and had over 100 objects on the page (the slowest had over 200!). This complexity increases the likelihood that something will go wrong, and when it does, that it will cause a serious degradation in performance.

Lesson: While having a cool, interactive site for customers to visit is a big win for a massive marketing event like the Super Bowl, keeping a laser focus on delivering a successful experience sometimes means leaving stuff out.
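
A quick way to see how a candidate page stacks up against those numbers is to count the objects and distinct hosts it references. The sketch below only inspects what appears in the static HTML (anything injected by JavaScript is missed), and the landing-page URL is a placeholder:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    PAGE_URL = "https://www.example.com/"  # hypothetical campaign landing page

    class AssetCollector(HTMLParser):
        """Collect URLs of objects the page references in its static HTML."""
        def __init__(self):
            super().__init__()
            self.assets = set()

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("script", "img", "iframe", "source", "video", "audio", "embed") and attrs.get("src"):
                self.assets.add(urljoin(PAGE_URL, attrs["src"]))
            elif tag == "link" and attrs.get("href"):
                self.assets.add(urljoin(PAGE_URL, attrs["href"]))

    if __name__ == "__main__":
        with urllib.request.urlopen(PAGE_URL, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        collector = AssetCollector()
        collector.feed(html)
        hosts = {urlparse(u).netloc for u in collector.assets}
        print(f"{len(collector.assets)} referenced objects across {len(hosts)} hosts")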

Have a Plan B (and Plan C, and Plan D...)
I know Murphy well. I have seen his work on many a customer site, whether they hired him or not. And when the inevitable red square (or flashing light or screaming siren) appears to announce a web performance problem, his name will always appear.

If you plan for a problem, when it happens, it's not a problem. If your selected CDN becomes congested due to a massive traffic influx that was not expected, have the ability to dynamically balance load between CDN providers. If an ad service or messaging platform begins to choke your site, have the ability to easily disable the offending hosts. If your cloud provider begins to rain on your parade, transfer load to the secondary provider you set up "just in case." If your dynamic page creation begins to crash your application servers, switch to a static HTML version that can be more easily delivered by your infrastructure.
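
The decision logic behind those fallbacks boils down to a health-checked preference list. The sketch below is a toy version with hypothetical primary and secondary CDN hostnames and a static-HTML origin as the last resort; in practice the switch would live in your DNS or traffic-management layer rather than in application code.

    import urllib.request

    # Hypothetical delivery paths, in order of preference.
    PRIMARY_CDN = "https://cdn-a.example.com/campaign/index.html"
    SECONDARY_CDN = "https://cdn-b.example.com/campaign/index.html"
    STATIC_FALLBACK = "https://origin.example.com/static/index.html"

    def healthy(url, budget_seconds=2.0):
        """Treat a delivery path as healthy if it answers 200 within the time budget."""
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=budget_seconds) as resp:
                return resp.status == 200
        except Exception:
            return False

    def choose_delivery_path():
        for candidate in (PRIMARY_CDN, SECONDARY_CDN):
            if healthy(candidate):
                return candidate
        return STATIC_FALLBACK  # the static HTML your own infrastructure can serve cheaply

    if __name__ == "__main__":
        print("Serving traffic from:", choose_delivery_path())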

If you have fallen back to Plan J, have an amusing error message that allows your customers to participate in the failure of your success. Heck, create a Twitter hashtag that says "#[your company]GoesBoom" and realize that any publicity is better than not being talked about at all.

Lesson: Murphy always puts his eggs in one basket. Learn from his mistake and plan for problems. Then test your plans. Then plan again. And test again. Wash, rinse, repeat until you have caught 95% of the possible scenarios. Then, have a plan to handle the remaining 5%.

Now What?
What have we learned from Super Bowl 2013? We have learned that during a period of peak traffic and high online interest, the performance issues that sites encounter are very consistent and predictable, with only the affected sites changing. But by taking some preventive steps and having an emergency response plan in place, you can predict, plan for, and respond to most performance issues when (not if) they appear.

When your company heads into its next big event, be it the Super Bowl or that one-day online sale, planning for the three items listed here will leave you better prepared to bask in the success of the moment. Over the next few days, we will dig deeper into the performance of some of the top brand rivalries in the Compuware version of the AdBowl.

More Stories By Stephen Pierzchala

With more than a decade in the web performance industry, Stephen Pierzchala has advised organizations from the Fortune 500 to startups on improving the performance of their web applications, helping them develop the speed, conversion, and customer experience metrics needed to measure and manage online and mobile applications that perform better and increase revenue. Working on projects for top companies in online retail, financial services, content delivery, ad delivery, and enterprise software, he has developed new approaches to web performance data analysis. Stephen has led web performance methodology, CDN assessment, SaaS load testing, technical troubleshooting, and performance assessments, demonstrating the value of web performance. He is noted for his technical analyses and his outside-in knowledge of web performance.
