Real User #Monitoring | @DevOpsSummit #APM #DevOps #ContinuousDelivery

Enterprises want to understand how analyzing performance can positively impact business metrics

With online viewership and sales growing rapidly, enterprises want to understand how analyzing performance can positively impact business metrics. Deeper insight into the user experience is needed to understand why conversions are dropping or bounce rates are rising - or, preferably, what has been helping these metrics improve.

The digital performance management industry has evolved as application performance management companies have broadened their scope beyond synthetic testing - which simulates users loading specific pages at regular intervals - to include web and mobile testing and real user monitoring (RUM). As synthetic monitoring gained popularity, performance engineers realized that the variation real end users introduce was not being captured. This led to the introduction of RUM - the process of capturing, analyzing, and reporting data from a real end user's interactions with a website. RUM has been around for more than a decade, but the technology is still in its infancy.

Five factors contributing to the shift towards RUM to complement synthetic testing

Ability to measure third-party resources
Websites are complex, with many different resources affecting performance. While there is no way to reliably detect the number of third-party scripts, the number of third-party components is growing, with the average web page now requesting over 30% of its resources from third-party domains, as shown in Figure 1. These components serve many purposes, including user tracking, ad insertion, and A/B testing. Understanding the impact these components have on the end user experience is critical.

Figure 1 - Growth in third-party vs. first-party resources per page, 2011-2015
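
A rough version of this measurement can be computed in the browser with the Resource Timing API (covered later in this article). A minimal sketch, assuming anything served from a hostname other than the page's own counts as third party - real classification would need to account for subdomains and first-party CDNs:

  // Estimate the share of third-party requests on the current page.
  function thirdPartyShare(): number {
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
    if (entries.length === 0) return 0;
    const thirdParty = entries.filter((e) => {
      const host = new URL(e.name, location.href).hostname;
      // Resources with no hostname (e.g. data: URIs) count as first party.
      return host !== "" && host !== location.hostname;
    });
    return thirdParty.length / entries.length;
  }

  console.log(`Third-party resources: ${(thirdPartyShare() * 100).toFixed(1)}%`);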

Mobile matters
With more users accessing applications primarily on mobile devices, understanding mobile performance is increasingly important. Metrics must be captured from desktop and mobile devices alike. Just because an application performs well on a desktop does not mean it will perform well on a mobile device. If you have or want to have mobile customers, ensure you are able to capture metrics from them. Mobile presents unique challenges, such as congestion and latency, that can have significant impacts on page performance.

With a growing mobile user base, RUM data is frequently correlated with last-mile bandwidth measurements to determine whether poor performance is the result of unpredictable last-mile conditions. This need is especially visible in many major Asian economies, where a mobile phone is the primary means of internet access for a large proportion of consumers. Major eCommerce players in Asia report that over 65% of transactions are made from mobile devices. With such a big customer base on mobile, monitoring performance on the mobile web and understanding the carrier's influence on performance is critical to doing business. Some businesses have therefore built the ability to profile the expected level of user experience by carrier.

Validate performance for specific users or geographies
Synthetic measurements may not be available from all geographies. To understand why a service level agreement in a specific region is not being met, the only way to capture information may be through real users in that location. Real user measurements also let customers validate whether issues reported by synthetic testing are widespread across the user base, localized to particular geographies, or artifacts of the synthetic test tools themselves.

Continuous Delivery
As more organizations move to a continuous delivery model, synthetic tests may need to be frequently re-scripted. As the time to deliver and release content decreases, organizations are looking at ways to quickly gather performance data. Some have decided the fastest way to gather performance metrics on a just-released page or feature is through data from real users.

Native applications
As organizations evolve from mobile websites to native apps, the need to gather metrics from these applications becomes increasingly important.

What features should you look for in a RUM solution?
Knowing that you need a RUM solution is the first step. The second step is identifying what features are required to meet your business needs. With a variety of solutions available in the market, identifying the must-have and nice-to-have features is important to find the best fit. Here are a few features you should consider.

Real-time and actionable data
Most RUM tools display insights in a dashboard in near real time. This information can be coupled with near-real-time tracking data from business analytics tools like Google Analytics. Performance data from RUM solutions should be cross-checked against metrics such as site visits, conversions, user location, and device/browser insights. Many website operators continuously monitor changes in business metrics, since they are often indicative of performance problems; cross-checking also helps them weed out false positives and isolated performance issues.
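
As one illustration of this coupling, Google Analytics (analytics.js) supports user timings, so a RUM measurement can be reported into the same property that tracks visits and conversions. A minimal sketch, assuming analytics.js is already on the page; the category and variable names are arbitrary:

  // Assumes the page already loads analytics.js and exposes the
  // global `ga` command queue; declared here so TypeScript compiles.
  declare const ga: (...args: unknown[]) => void;

  window.addEventListener("load", () => {
    // loadEventEnd is only populated after the load handler returns,
    // so defer the read by one tick.
    setTimeout(() => {
      const t = performance.timing;
      // Report the measurement as a GA user timing so it can be
      // segmented alongside visits, conversions, and device data.
      ga("send", "timing", "RUM", "TimeToPageLoad", t.loadEventEnd - t.requestStart);
    }, 0);
  });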

User experience timings
Trends in performance optimization testing have moved away from metrics like time to first byte (TTFB) and page load time towards measurements that more accurately reflect the user experience - such as start render and speed index. A user does not necessarily care when content at the bottom of the page has loaded; what matters is when critical resources have loaded and the page appears usable. Ensure the metrics you are gathering accurately reflect what you are attempting to measure and optimize.
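
In browsers that implement the Paint Timing API (a later addition, so support should be checked), a start-render approximation can be read directly; speed index itself requires filmstrip analysis and cannot be derived from these entries. A minimal sketch:

  // first-contentful-paint approximates "start render" far better
  // than page load time does.
  for (const entry of performance.getEntriesByType("paint")) {
    // entry.name is "first-paint" or "first-contentful-paint"
    console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
  }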

Granular information
While page-level metrics are a good start, they don't reveal precisely which resources are causing content to load slowly, nor the relevance of each metric. Combining resource timing on specific elements with where each resource sits (above or below "the fold") can help organizations filter out the noise and collect actionable information. Intersection Observer can help you identify which resources load above or below the fold and prioritize remediation accordingly, as sketched below.
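
A minimal sketch of that idea, flagging which images start in the viewport so their timings can be weighted more heavily; a real implementation would also handle late-loading elements:

  // Record which images begin above the fold.
  const aboveFold = new Set<string>();
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        aboveFold.add((entry.target as HTMLImageElement).currentSrc);
      }
      observer.unobserve(entry.target); // one initial reading per image is enough
    }
  });
  document.querySelectorAll("img").forEach((img) => observer.observe(img));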

Impact of ads
With large numbers of pages being populated with ads, understanding the impact of the ads is important. RUM tools can identify both the performance impact of an ad in terms of when the ad was fetched and how long it took to download, as well as user engagement - such as how many users watched a video ad in its entirety.

Correlation to business metrics
While there have been many articles describing the impact of performance on business at eCommerce companies - for example, the impact on conversions - the same isn't true for media companies. Media companies are more interested in scroll depth, virality of content, and session length. Soasta recently announced an Activity Impact Score as a way to correlate web performance to session length. Measurements like the Activity Impact Score help non-eCommerce companies measure and monitor engagement and how performance can positively or negatively impact it. Further, with bonuses tied to metrics such as page views, organizations are increasingly scrutinizing RUM metrics and insisting on verifying the integrity of these tools.

End device support & ease of measurement
With the plethora of device types and browsers on the market, you need to ensure the RUM solution implemented will capture traffic from the majority of your users. In some Asian countries, over 35% of browsers and devices are unknown, which presents an interesting challenge: should you just forget about these users, or find a way to reliably measure performance on these unknown devices?

Another important factor to consider is how easy it is to enable RUM measurements. Does it require manual instrumentation of every web page, or is it done automatically by injecting a script?
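
The script-injection approach can be illustrated with a short, hypothetical beacon: one snippet added site-wide, no per-page work. The /rum-collect endpoint below is an assumption, not a real service:

  window.addEventListener("load", () => {
    // Defer one tick so loadEventEnd is populated.
    setTimeout(() => {
      const t = performance.timing;
      // sendBeacon survives page navigation better than XHR/fetch.
      navigator.sendBeacon("/rum-collect", JSON.stringify({
        url: location.pathname,
        ttfb: t.responseStart - t.requestStart,
        pageLoad: t.loadEventEnd - t.requestStart,
        agent: navigator.userAgent,
      }));
    }, 0);
  });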

End to end perspective
Performance issues can originate anywhere between the server and the end user. Zeroing in on the problem quickly requires correlating metrics from the end user, the last mile, the delivery network, and the server.

Dynamic thresholds and alerts
The connectivity of an end user's device can change throughout the day. At work, they may be browsing the internet on a high-speed connection; on the commute home, they may be on their mobile device with high latency and congestion; and at night, they may be at home on a DSL or fiber connection. Expecting the same level of performance at all times is unrealistic. Setting variable thresholds produces alerts that better reflect the real user experience.
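
One way to approximate this on the client is to vary the alerting budget by connection quality using the Network Information API (non-standard and Chromium-only, so a fallback is needed); the budgets below are arbitrary examples:

  // Pick a page-load budget based on the current connection type.
  const budgetsMs: Record<string, number> = {
    "4g": 2000,
    "3g": 5000,
    "2g": 10000,
    "slow-2g": 15000,
  };
  const conn = (navigator as any).connection; // undefined outside Chromium
  const budgetMs = (conn && budgetsMs[conn.effectiveType]) ?? 4000;

  window.addEventListener("load", () => {
    setTimeout(() => {
      const t = performance.timing;
      const loadMs = t.loadEventEnd - t.requestStart;
      if (loadMs > budgetMs) {
        console.warn(`Page load ${loadMs} ms exceeded ${budgetMs} ms budget`);
      }
    }, 0);
  });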

What solutions exist today?
In addition to commercial solutions like Soasta, New Relic, and Google Analytics' Site Speed, there are three specifications from the W3C that enable you to build your own solution - navigation timing, resource timing, and user timing. Browser support for these specifications varies, with navigation timing having the greatest adoption, since it has been available the longest.

Navigation timing captures the timing of various events as a page loads, from the HTTP request until all content has been received, parsed, and executed by the browser. This provides high-level information on the overall page load time and can be used to get details on items such as DNS lookups and latency.

Figure 2 shows the various timings available from the navigation timing API:

Figure 2 - Navigation timing events

Of the many metrics that can be computed from the navigation timing events, the following are used most often (a sketch computing them follows the list):

  • TimeToFirstByte = responseStart - requestStart
  • TimeToInteractive = domInteractive - requestStart
  • TimeToPageLoad = loadEventEnd - requestStart
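
A minimal sketch computing all three from performance.timing (Navigation Timing Level 1); it should run after the load event has fully completed:

  // Defer one tick after window load so loadEventEnd is populated.
  window.addEventListener("load", () => {
    setTimeout(() => {
      const t = performance.timing;
      console.table({
        timeToFirstByte: t.responseStart - t.requestStart,
        timeToInteractive: t.domInteractive - t.requestStart,
        timeToPageLoad: t.loadEventEnd - t.requestStart,
      });
    }, 0);
  });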

While page-level information is helpful, you may want to know how various resources on a page perform. This is where the resource timing specification comes in. Resource timing enables you to collect complete timing information for any resource within a page, with some restrictions for security purposes. The resource timings available for the request and response are shown in Figure 3.
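
A minimal sketch breaking each resource's network time into phases; note that cross-origin entries report zeros for the detailed timestamps unless the third-party server sends a Timing-Allow-Origin header (the security restriction mentioned above):

  const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  for (const r of resources) {
    console.log(r.name, {
      dnsMs: r.domainLookupEnd - r.domainLookupStart,
      tcpMs: r.connectEnd - r.connectStart,
      ttfbMs: r.responseStart - r.requestStart,
      downloadMs: r.responseEnd - r.responseStart,
    });
  }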

Figure 3 - Resource timing events

Once the navigation and resource timing specifications made standard timings available for all resources, the next step was the ability to gather custom metrics to understand where an application spends the most time. The user timing specification allows marks to be inserted in code, enabling measurement of the time deltas between marks. This makes it possible to determine things like when a hero image is displayed, when fonts are loaded, and when scripts are done blocking.
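
A minimal sketch using hypothetical mark names around a hero-image step:

  performance.mark("heroStart");
  // ... application code that fetches and displays the hero image ...
  performance.mark("heroRendered");
  performance.measure("heroImageTime", "heroStart", "heroRendered");

  const [m] = performance.getEntriesByName("heroImageTime");
  console.log(`Hero image displayed in ${m.duration.toFixed(0)} ms`);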

Evolving quality measurements
As quality measurements evolve, they will become better at providing actionable insights that recommend specific improvements to mitigate performance bottlenecks - not only at the browser end point, but from an end-to-end perspective.

Increasingly, RUM measurements will leverage machine learning to more deeply understand traffic patterns and dynamically adapt as those patterns change.

RUM measurements will evolve to include the time a given resource starts to execute and completes execution in the browser.

Also, device-agnostic solutions will no doubt emerge. Metrics need to be captured across the entire spectrum of user endpoints. Failing to gather statistics from the large percentage of users whose browsers don't support the technology leaves gaping blind spots in your visibility into the end user experience.

*    *    *

RUM gives organizations the ability to isolate and identify the cause of performance degradation in a web application, whether it is related to the browser, third-party content, the network provider, the CDN, or infrastructure. RUM is one piece of the puzzle; used in conjunction with other tools and analytics, it can quickly point to web application optimizations.

More Stories By Krishnan Manjeri

Krishnan is a seasoned product manager and is currently a Director of Product Management at InstartLogic, responsible for Data Platform, Analytics, and Performance. He has nearly two decades of experience leading and delivering solutions, in capacities ranging from engineering to marketing and product management, for a variety of Fortune 500 companies in the areas of analytics, telecommunication networks, application delivery, and security. He has extensive experience leading cross-functional teams and delivering multiple millions of dollars in revenue in both the enterprise and service provider markets. He has an MS in Computer Science from Case Western Reserve University and an MBA from Santa Clara University. He holds a couple of patents in the areas of networking and security.
