Super Bowl Sunday 2013 – Winners, Losers, and Casualties

Since the late 1990s, Super Bowl advertisers have tried to successfully link their TV ads to their online properties

No matter which team you were cheering for (or if you even watched the game at all), Super Bowl Sunday 2013 was more than a football game. Since the late 1990s, Super Bowl advertisers have tried to link their TV ads to their online properties, with mixed results. Even 15 years later, companies can't always predict how well their sites will perform on the big day. But unlike in the early days of TV/online campaigns, the problems now go beyond a site simply going down under heavy traffic.

This year, some of the world's premier brands spent millions of dollars on 30-second and one-minute ad slots during the Super Bowl (plus millions more to produce the ads), all tied directly to online or social media campaigns. However, not all of their sites withstood the resulting onslaught of traffic.

The measurement results from the Compuware network in the periods leading up to, and during, the Super Bowl showed some clear winners and losers in page load time. Events like the Super Bowl require high-frequency measurements, so we set our locations to collect data every five minutes to catch every variation in performance, no matter how fleeting.
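
As a rough illustration of that kind of high-frequency check, the sketch below (Node/TypeScript, assuming a runtime with a global fetch) times a page fetch every five minutes. The URL and interval are placeholders, and real synthetic monitoring measures full page rendering from many geographic locations rather than a single HTTP request.

```typescript
// Minimal synthetic check: fetch a page every five minutes and log how
// long the response took. Illustrative only -- not a monitoring product.
import { performance } from "node:perf_hooks";

const TARGET = "https://www.example.com/"; // placeholder URL, not a real target
const INTERVAL_MS = 5 * 60 * 1000;         // five-minute sampling interval

async function measureOnce(): Promise<void> {
  const start = performance.now();
  try {
    const res = await fetch(TARGET);       // global fetch (Node 18+)
    await res.arrayBuffer();               // make sure the full body is downloaded
    const elapsed = performance.now() - start;
    console.log(`${new Date().toISOString()} ${res.status} ${elapsed.toFixed(0)} ms`);
  } catch (err) {
    console.error(`${new Date().toISOString()} request failed:`, err);
  }
}

measureOnce();
setInterval(measureOnce, INTERVAL_MS);
```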

For the period from 5 p.m. EST until 11 p.m. EST on Sunday, February 3, the top and bottom three sites were:

Top Three Performers

  1. Go Daddy
  2. Paramount
  3. Lincoln Motor Cars

Bottom Three Performers

  1. Doritos
  2. Coca-Cola
  3. Universal Pictures

Chart: Top and Bottom Web Performers - Super Bowl 2013

All of these sites chose different approaches to delivering their message online. What we found through our analysis is that the issues that they encountered almost perfectly aligned with those that Compuware finds during every major online event.

You're Not Alone
The Super Bowl is often referred to as the perfect storm for web performance - a six-hour window, with the spotlight on your company for 30-60 seconds (or more if you bought multiple slots). Meanwhile, the halo effect sends traffic to your site soaring for the entire six hours as people anticipate your big unveiling.

But your company isn't the only one doing this. And many (if not all) of the infrastructure components you rely on - datacenters, CDNs, ad providers, web analytics, and video streaming platforms - are also being used by other companies advertising during the Super Bowl.

Even if you have tested your entire site to what you think is your peak traffic volume (and beyond), remember that these shared services are all running at their maximum volume during the Super Bowl. All of the testing you did on your site can be undone by a third party that can't handle a peak load coming from two, three, or more customers simultaneously.
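
To get a rough feel for how a shared dependency behaves when several customers hit it at once, you could run something like the hypothetical TypeScript sketch below. The endpoint and request counts are invented for illustration; a real load test needs far more realism (and the provider's cooperation).

```typescript
// Rough probe: send the combined concurrency of several "customers" at a
// shared third-party endpoint and report latency percentiles. Hypothetical
// endpoint and volumes -- illustration only, not a real test plan.
const ENDPOINT = "https://tags.example-adprovider.com/beacon"; // hypothetical host
const SIMULATED_CUSTOMERS = 3;
const REQUESTS_PER_CUSTOMER = 50;

async function timedRequest(): Promise<number> {
  const start = Date.now();
  await fetch(ENDPOINT).catch(() => undefined); // a failed call still costs time
  return Date.now() - start;
}

async function main(): Promise<void> {
  const total = SIMULATED_CUSTOMERS * REQUESTS_PER_CUSTOMER;
  const latencies = await Promise.all(
    Array.from({ length: total }, () => timedRequest()),
  );
  latencies.sort((a, b) => a - b);
  const pct = (q: number) => latencies[Math.floor(q * (latencies.length - 1))];
  console.log(`median ${pct(0.5)} ms, p95 ${pct(0.95)} ms, p99 ${pct(0.99)} ms`);
}

main();
```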

Lesson: Verify that your third-party services can effectively handle the maximum load from all of their customers all at once without degrading the performance of any of them.

Lose a Few Pounds
The performance solution doesn't rest solely with third parties. It also depends on companies staying focused on the most important aspect of a Super Bowl Sunday online campaign - getting people to your site. Sometimes that means making compromises, perhaps streamlining delivery a little more than you otherwise would.

The total amount of content is one key indicator of potential trouble - yes, big pages do tend to load more slowly than small pages - but complexity matters just as much: Compuware data showed that two of the three slowest sites drew content from more than 20 hosts and had over 100 objects on the page (the slowest had over 200!). That complexity increases the likelihood that something will go wrong, and when it does, performance can degrade seriously.
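
One quick way to audit that complexity on your own page is to export a HAR file from your browser's developer tools and count the objects, hosts, and bytes it records. The TypeScript sketch below (run with the HAR file path as its argument) is a minimal version of that audit, not a full analysis tool.

```typescript
// Count objects, distinct hosts, and total content size in a HAR capture.
import { readFileSync } from "node:fs";

interface HarEntry {
  request: { url: string };
  response: { content: { size: number } };
}

const har = JSON.parse(readFileSync(process.argv[2], "utf8"));
const entries: HarEntry[] = har.log.entries;

const hosts = new Set(entries.map((e) => new URL(e.request.url).hostname));
const totalBytes = entries.reduce(
  (sum, e) => sum + Math.max(e.response.content.size, 0), // size is -1 when unknown
  0,
);

console.log(`objects: ${entries.length}`);
console.log(`distinct hosts: ${hosts.size}`);
console.log(`total content size: ${(totalBytes / 1024).toFixed(0)} KB`);
```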

Lesson: While having a cool, interactive site for customers to visit is a big win for a massive marketing event like the Super Bowl, keeping a laser focus on delivering a successful experience sometimes means leaving things out.

Have a Plan B (and Plan C, and Plan D...)
I know Murphy well. I have seen his work on many a customer site, whether they hired him or not. And when the inevitable red square (or flashing light or screaming siren) appears to announce a web performance problem, his name will always appear.

If you plan for a problem, when it happens, it's not a problem. If your CDN becomes congested under an unexpected surge in traffic, have the ability to dynamically balance load across CDN providers. If an ad service or messaging platform begins to choke your site, have the ability to easily disable the offending hosts. If your cloud provider begins to rain on your parade, transfer load to the secondary provider you set up "just in case." If your dynamic page creation begins to crash your application servers, switch to a static HTML version that your infrastructure can deliver more easily.
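
As one sketch of what "easily disable the offending hosts" can look like in the page itself, the illustrative TypeScript snippet below loads a third-party tag with a timeout and a simple kill switch. The function names, timeout value, and configuration approach are assumptions for illustration, not any specific vendor's API.

```typescript
// Load a third-party tag asynchronously with a timeout, so a slow or failing
// provider degrades gracefully instead of blocking the page. Illustrative only.
function loadThirdParty(src: string, timeoutMs = 3000): Promise<boolean> {
  return new Promise((resolve) => {
    const script = document.createElement("script");
    script.src = src;
    script.async = true;

    const timer = window.setTimeout(() => {
      script.remove();                     // give up; the page works without the tag
      resolve(false);
    }, timeoutMs);

    script.onload = () => { clearTimeout(timer); resolve(true); };
    script.onerror = () => { clearTimeout(timer); script.remove(); resolve(false); };

    document.head.appendChild(script);
  });
}

// Simple kill switch: skip hosts you have flagged as misbehaving (hypothetical
// list -- in practice this might be populated from a remote config you control).
const DISABLED_HOSTS = new Set<string>([]);

async function loadTag(src: string): Promise<void> {
  if (DISABLED_HOSTS.has(new URL(src, location.href).hostname)) return;
  const ok = await loadThirdParty(src);
  if (!ok) console.warn(`third-party tag skipped or timed out: ${src}`);
}
```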

If you have fallen back to Plan J, have an amusing error message that allows your customers to participate in the failure of your success. Heck, create a Twitter hashtag that says "#[your company]GoesBoom" and realize that any publicity is better than not being talked about at all.

Lesson: Murphy always puts his eggs in one basket. Learn from his mistake and plan for problems. Then test your plans. Then plan again. And test again. Wash, rinse, repeat until you have caught 95% of the possible scenarios. Then, have a plan to handle the remaining 5%.

Now What?
What have we learned from Super Bowl 2013? We have learned that during a period of peak traffic and high online interest, the performance issues that sites encounter are very consistent and predictable, with only the affected sites changing. But teams that take some preventive steps and have an emergency response plan can predict, plan for, and respond to most performance issues when (not if) they appear.

When your company goes into its next big event, be it the Super Bowl or that one-day online sale, planning for the three items discussed here will leave you better prepared to bask in the success of the moment. Over the next few days, we will dig deeper into the performance of some of the top brand rivalries, in the Compuware version of the AdBowl.

More Stories By Stephen Pierzchala

With more than a decade in the web performance industry, Stephen Pierzchala has advised many organizations, from Fortune 500 companies to startups, on improving the performance of their web applications, helping them develop the speed, conversion, and customer experience metrics needed to measure and manage online and mobile applications that perform well and increase revenue. Working on projects for top companies in the online retail, financial services, content delivery, ad delivery, and enterprise software industries, he has developed new approaches to web performance data analysis. Stephen has led web performance methodology, CDN assessment, SaaS load testing, technical troubleshooting, and performance assessment engagements, demonstrating the value of web performance. He is noted for his technical analyses and his knowledge of web performance from the outside in.
