Third-Party Content Management Applied

Four steps to gain control of your Page Load Performance

Today's web sites are often cluttered up with third-party content that slows down page load and rendering times, hampering user experience. In my first blog post, I discussed how third-party content impacts your website's performance and identified common problems with its integration. Today I want to share the experience I have had as a developer and consultant with the management of third-party content. In the following, I will show you best practices for integrating third-party content and for convincing your business that they will benefit from establishing third-party management.

First the bad news: as a developer, you have to get the commitment for establishing third-party management and changing the integration of third-party content from the highest business management level possible - the best is CEO level. Otherwise you will run into problems trying to implement improvements. The good news is that, from my experience, this is an achievable goal - you just have to bring the problems up the right way with hard facts. Let's start our journey toward implementing third-party content management from two possible starting points that I've seen in the past. The first one is triggered if someone from the business has a bad user experience and wants to find out who is responsible for the slow pages. The second one is that you as the developer know that your page is slow. No matter where you are starting, the first step you should make is to get the correct hard facts.

Step 1: Detailed third-party content impact analysis
For a developer this isn't really difficult. The only thing we have to do is use the Web Performance Optimization Tool of our choice and take a look at the page load timing. What we get is a picture like the screenshot below. We as developers immediately recognize that we have a problem - but for the business this is a diagram that requires an explanation.

As we want to convince them, we should make it easier for them to understand. Something that from my experience works well is to take the time and implement a URL-Parameter that turns off all the third-party content for a webpage. Then we can capture a second timeline from the same page without the third-party requests (see Figure 2). Everybody can now easily see that there are huge differences:
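The switch described above can be sketched in a few lines. This is a minimal illustration, not a finished implementation: the parameter name `tpc` and the `loadThirdParty` helper are assumptions for this example, and your templating layer will determine where the guard actually lives.

```javascript
// Query-parameter switch that disables all third-party content.
// "tpc" is a hypothetical parameter name; pick one that does not
// collide with parameters your application already uses.
function isThirdPartyEnabled(search) {
  // search is the query string, e.g. "?tpc=off&foo=bar"
  var params = new URLSearchParams(search);
  return params.get('tpc') !== 'off';
}

// In the page template, guard every third-party include with the switch:
function loadThirdParty(doc, src) {
  if (!isThirdPartyEnabled(doc.location.search)) return null;
  var script = doc.createElement('script');
  script.src = src;
  script.async = true;
  doc.head.appendChild(script);
  return script;
}
```

With this in place, capturing the "clean" timeline is just a matter of appending `?tpc=off` to the URL before recording.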

We can present these timelines to the business as well, but we still have to explain what all the boxes, timings, etc., mean. We should invest some more time and create a table like Table 1, where we compare some main Key Performance Indicators (KPIs).

Table 1: KPI comparison with and without third-party content (TPC)

KPI                    | Page with TPC | Page without TPC
-----------------------|---------------|-----------------
First Impression Time  |               |
Onload Time            |               |
Total Load Time        |               |
JavaScript Time        |               |
Number of Domains      |               |
Number of Resources    |               |
Total Page Size        |               |
As a single page is not representative, we prepare similar tables for the five most important pages. Which pages these are depends on your website. Landing pages, product pages and pages on the "money path" are potentially interesting. Our Web Analytics Tool can help us find the most interesting pages.

Step 2: Inform business about the impact
During step 1 we discovered that the impact is significant, we collected facts, and we still think we have to improve the performance of our application. Now it's time to present the results of the first step to the business. From my experience the best way to do this is a face-to-face meeting with the high-level business executives. CEOs, CTOs and other business unit executives are the appropriate attendees.

The presentation we give during this meeting should cover the following three major topics:

  • Case Study facts from other companies
  • The hard facts we have collected
  • Recommendations for improvements

Google, Bing, Amazon, etc., have done some case studies that show the impact of slow pages on the revenue and the users' interaction with the website. Amazon, for example, found out that a 100 ms slower page reduces revenue by 1%. I have attached an example presentation to this blog, which should provide some guidance for our presentation and contains some more examples.

After this general information we can show the hard facts about our system, and as the business is now aware of the relationship between performance and revenue they normally listen carefully. Now we're no longer talking about time but money.

At the end of our presentation we make some recommendations on how we can improve the integration of third-party content. Don't be shy - no third-party content plugin is untouchable at this point. Some of the recommendations can only be decided by the business, not by the development. Our goals for this meeting are that we have the commitment to proceed, that we get support from the executives when discussing the implementation alternatives, and that we have a follow-up meeting with the same attendees to show improvements. What would be nice is the consent of the executives to the recommended improvements but from my experience they seldom commit.

Step 3: Check third-party content implementation alternatives
Now that we have the commitments, we can start thinking about integration alternatives. If we stick to the standard implementation the provider recommends, we won't be able to make any improvements. We have to be creative and always try to create win-win situations. In this article I want to talk about four best practices I have encountered in the past.

Best Practice 1: Remove it
Every developer will now say "Okay, that's easy," and every businessman will say "That's not possible because we need it!" But do you really need it? Let's take a closer look at social media plugins, tracking pixels and ads.

A lot of websites have integrated social media plugins like Twitter or Facebook, as they are very popular these days. Have you ever checked how often your users really use one of these plugins? A customer of ours had integrated five of them. After six months they checked how often each was used and found out that only one was used by people other than the QA department, which verifies after each release that all of them are working. With a little investigation they concluded that four of the five plugins could be removed because nobody used them.

What about tracking pixels? I have seen a lot of pages out there that have integrated not just one tracking pixel but five, seven or even more. Again, the question is: do we really need all of them? No matter whom we ask, we'll always get a good explanation as to why a particular pixel is needed, but stick to the goal of reducing it down to one. Find out which one can deliver most or even all of the data that each department needs, and remove all the others. Problems we might run into are user privileges and business objectives that are defined for teams and individuals on specific statistics. It takes some time to handle this but, in the end, things will get easier as we have only one source for our numbers; we will stop discussing which statistic delivers the correct values and which one to take the numbers from, as there is only one left. At one customer we removed five tracking pixels in a single blow. As this led to incredible performance improvements, their marketing department made an announcement to let customers know they care about their experience - a textbook example of the win-win situation mentioned earlier.

Other third-party content that is a candidate for removal is banner ads. Businessmen will now say this guy is crazy to make such a recommendation, but if your main business is not earning money by displaying ads then it might be worth taking a look. Take the Amazon figure that 100 ms of additional page load time reduces revenue by one percent, and think of the example page where ads consume about 1000 ms of page load time - 10 times as much. This would mean that we lose 10 * 1% = 10% of our revenue just because of ads. The question now is: "Are you really earning 10% or more of your total revenue with ads?" If not, you should consider removing ads from your page.
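The back-of-the-envelope math above can be captured in a tiny model. The 100 ms / 1% ratio is the Amazon figure quoted earlier; the 1000 ms of ad time is the illustrative example from this article, not a measurement of your site.

```javascript
// Rough revenue-impact model based on the Amazon figure:
// every additional 100 ms of load time costs roughly 1% of revenue.
// Purely illustrative - plug in your own measured numbers.
function estimatedRevenueLossPct(extraLoadTimeMs) {
  return (extraLoadTimeMs / 100) * 1;
}

// Ads adding ~1000 ms would cost an estimated 10% of revenue:
var adLossPct = estimatedRevenueLossPct(1000); // 10
```

If the ads earn less than that estimated percentage of total revenue, removal is worth discussing with the business.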

Best Practice 2: Move loading of resources to after the onload event
Now that we have removed all unnecessary third-party content, some still remains. For the user experience, apart from the total load time, the first impression time and the onload time are the most important timings. To improve these timings we can implement lazy loading, where parts of the page are loaded after the onload event via JavaScript; several libraries are available that help you implement this. There are two things you should be aware of. First, you are only moving the starting point for the download of the resources, so you are not reducing the download size of your page or the number of requests. Second, lazy loading only works when JavaScript is available in the user's browser, so you have to make sure that your page is usable without JavaScript. Candidates for deferred loading are plugins that only work if JavaScript is available or that are not vital to the usage of the page. Ads, social media plugins, maps, etc., are in most cases such candidates.
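The core mechanism behind lazy loading is small enough to sketch here. The example below is a minimal illustration (the widget URL is a placeholder); real lazy-loading libraries add error handling, batching and more.

```javascript
// Defer a piece of third-party work until after the onload event.
// "win" is passed in rather than using the global window directly,
// which keeps the function easy to test.
function deferThirdParty(win, loader) {
  if (win.document.readyState === 'complete') {
    loader(); // page has already finished loading - run immediately
  } else {
    win.addEventListener('load', loader);
  }
}

// Usage in the browser (placeholder URL):
// deferThirdParty(window, function () {
//   var s = document.createElement('script');
//   s.src = 'https://example.com/social-widget.js';
//   document.head.appendChild(s);
// });
```

Note that the `readyState` check matters: if the deferral code itself runs after onload, waiting for a `load` event that already fired would mean the loader never runs.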

Best Practice 3: Load on user click
This is an interesting option if you want to integrate a social media plugin. The standard implementation of such a plugin, for example, looks like the figure below. It consists of a button to trigger the like/tweet action and the number of likes/tweets.

To improve this, the question that has to be answered is: Do the users really need to know how often the page was tweeted, liked, etc.? If the answer is no, we can save several requests and download volume. All we have to do is deliver a link that looks like the action button and, if the user clicks on the link, we can open a popup window or an overlay where the user can perform the necessary actions.
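A sketch of this approach: instead of embedding the provider's widget, render a plain link styled like the button and build the share URL only when needed. Twitter's web intent URL is used as an example here; verify the exact URL and parameters against the provider's current documentation.

```javascript
// Build a Twitter web-intent URL for sharing a page.
// No third-party JavaScript is downloaded until the user clicks.
function tweetIntentUrl(pageUrl, text) {
  return 'https://twitter.com/intent/tweet' +
    '?url=' + encodeURIComponent(pageUrl) +
    '&text=' + encodeURIComponent(text);
}

// In the page: style the link like the usual button and open a popup on click.
// <a href="#" onclick="window.open(
//   tweetIntentUrl(location.href, document.title),
//   'share', 'width=550,height=420'); return false;">Tweet</a>
```

The widget's requests are thus paid only by the small fraction of users who actually share, not by every visitor.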

Best Practice 4: Maps vs. static Maps
This practice focuses on the integration of maps like Google Maps or Bing Maps on our page. What can be seen all around the Web are map integrations where the maps are very small and only used to give the user a hint as to where the point of interest is located. To show the user this hint, several JavaScript files and images have to be downloaded. In most cases the user doesn't need to zoom or reposition the map, and as the map is small it's also hard to use. Why not use the static map implementation that Bing Maps and Google Maps offer? To figure out the advantages of the static implementation, I have created two HTML pages that show the same map: one uses the standard implementation and the other the static implementation.
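The static variant boils down to requesting a single image. The sketch below builds a Google Static Maps URL; the parameter names follow that API, but check Google's current documentation before relying on them (recent versions also require an API key, omitted here).

```javascript
// Build a Google Static Maps image URL - one image request instead of
// the full JavaScript Maps API. Coordinates and size are caller-supplied.
function staticMapUrl(lat, lng, zoom, width, height) {
  return 'https://maps.googleapis.com/maps/api/staticmap' +
    '?center=' + lat + ',' + lng +
    '&zoom=' + zoom +
    '&size=' + width + 'x' + height +
    '&markers=' + lat + ',' + lng;
}

// Usage: <img src="..."> with the generated URL, e.g.
// staticMapUrl(47.26, 11.39, 14, 300, 200)
```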

After capturing the timings we get the following results:

KPI                    | Standard Google Maps | Static Google Maps | Difference in %
-----------------------|----------------------|--------------------|----------------
First Impression Time  | 493 ms               | 324 ms             | -34%
Onload Time            | 473 ms               | 368 ms             | -22%
Total Load Time        | 1801 ms              | 700 ms             | -61%
JavaScript Time        | 563 ms               | 0 ms               | -100%
Number of Domains      | 6                    | 1                  | -83%
Number of Resources    | 43                   | 2                  | -95%
Total Page Size        | 636 KB               | 77 KB              | -88%

When we take a closer look at the KPIs we can see that every KPI for the static Google Maps implementation is better. Looking especially at the timing KPIs, the first impression time and the onload time improve by 34% and 22%. The total load time decreases by more than a second - 61% less - which really has a big impact on the user's experience.

Some people will argue that this approach is not applicable as they want to offer the map controls to their customers. But remember Best Practice 3 - load on user click: as soon as the user states his intention of interacting with the map by clicking on it, we can offer him a bigger and easier-to-use map by opening a popup, an overlay or a new page. The only thing the development team has to do is surround the static image with a link tag.

Step 4: Monitor the performance of your web application / third-party content
As we need to show improvements in our follow-up meeting with business executives, it's important to monitor how the performance of our website evolves over time. There are three things that should be monitored by business, operations and development:

  1. Third-party content usage by customers and generated business value - business monitoring
  2. The impact of new added third-party content - development monitoring
  3. The performance of third-party content in the client browser - operations monitoring

Business Monitoring
An essential part of the business monitoring should be a check as to whether the requested third-party features contribute to the business value. Is the feature used by the customer or does it help us to increase our revenue? We have to ask this question again and again - not only once at the beginning of the development, but every time when business, development and operations meet to discuss web application performance. If we ever can state: No, the feature is not adding value - remove it as soon as possible.

Operations Monitoring
There are only a few tools that help us monitor the impact of third-party content for our users. What we need is either a synthetic monitoring system like Gomez and Keynote provide, or a monitoring tool that really sits in our users' browsers and collects the data there like dynaTrace UEM.

Synthetic monitoring tools allow us to monitor the performance from specified locations all over the world. The only downside is that we are not getting data from our real users. With dynaTrace UEM we can monitor the third-party content performance of all our users wherever they are situated, and we get the timings they really experienced. The figure below shows a dashboard from dynaTrace UEM that contains all the important data from the operations point of view. The pie chart and the table below it indicate which third-party content provider has the biggest impact on the page load time, and the distribution. The three line charts on the right side show the request trend, the total page load time and onload time, and the average time that third-party content contributes to your page performance.

Development Monitoring
It is very important that the development team can compare the KPIs between two releases, as well as the differences between the pages with and without third-party content. We have already established functional web tests that integrate with a web performance optimization tool delivering the necessary values for the KPIs. We just have to reuse the switch we established during Step 1 and run automatic tests on the pages we have identified as the most important. From this moment on we will always be able to automatically find regressions caused by third-party content.
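The regression check at the heart of such tests can be sketched as a simple comparison against a recorded baseline. All names and numbers below are illustrative assumptions; the KPI values would come from your web performance optimization tool.

```javascript
// Compare current KPI measurements against a baseline and return the
// names of KPIs that got worse by more than tolerancePct percent.
// Assumes all KPIs are "lower is better" (times, counts, sizes).
function findKpiRegressions(baseline, current, tolerancePct) {
  var regressions = [];
  for (var kpi in baseline) {
    var allowed = baseline[kpi] * (1 + tolerancePct / 100);
    if (current[kpi] > allowed) {
      regressions.push(kpi);
    }
  }
  return regressions;
}

// Example: onload time regressed beyond a 10% tolerance.
var baseline = { onloadMs: 400, totalLoadMs: 1800, requests: 43 };
var current  = { onloadMs: 520, totalLoadMs: 1750, requests: 43 };
var worse = findKpiRegressions(baseline, current, 10); // ['onloadMs']
```

Running this once per release, for each important page, with and without third-party content, makes third-party regressions visible automatically.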

We may also consider enhancing our switch and making each of the third-party plugins individually switchable. This allows us to check the overhead a new plugin adds to our page. It also helps us when we have to decide which plugin to keep if there are two or more similar ones.
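One way to enhance the single on/off switch into a per-plugin switch is to let the parameter carry a list of plugin names. The parameter name `tpc` and the plugin identifiers below are assumptions for illustration.

```javascript
// Per-plugin switch: "?tpc=ads,maps" enables only those plugins,
// "?tpc=off" keeps the global off switch, no parameter enables all.
function enabledPlugins(search, allPlugins) {
  var match = /[?&]tpc=([^&]*)/.exec(search);
  if (!match) return allPlugins.slice();   // no parameter: everything on
  if (match[1] === 'off') return [];       // global off switch
  var requested = match[1].split(',');
  return allPlugins.filter(function (p) {
    return requested.indexOf(p) !== -1;
  });
}

// enabledPlugins('?tpc=ads,maps', ['ads', 'maps', 'social'])
//   returns ['ads', 'maps']
```

This makes measuring the overhead of any single plugin a matter of capturing two timelines with different parameter values.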

Last but not least, now that business, operations and development have all the necessary data to improve the user experience, we should meet regularly to check the performance trend of our page and find solutions to upcoming performance challenges.

Conclusion
It is not a big deal to start improving the third-party content integration. If we want to succeed it's necessary that business, development and operations work together. We have to be creative, we have to make compromises and we have to be ready to use different methods of integration - never stop aiming for a top performing website. If we take things seriously, we can improve the experience of our users and therefore increase our business.

More Stories By Klaus Enzenhofer

Klaus Enzenhofer has several years of experience and expertise in the field of Web Performance Optimization and User Experience Management. He works as Technical Strategist in the Center of Excellence Team at dynaTrace Software. In this role he influences the development of the dynaTrace Application Performance Management Solution and the Web Performance Optimization Tool dynaTrace AJAX Edition. He mainly gathered his experience in web and performance by developing and running large-scale web portals at Tiscover GmbH.


