Evolving an APM Strategy for the 21st Century

How the approach to APM has evolved to adapt to a complex ecosystem

I started in the web performance industry - well before Application Performance Management (APM) existed - during a time when external, single-page measurement ruled the land. In an ecosystem where no other solutions existed, it sat at the top of the data chain, supporting the rapidly evolving world of web applications. This was an effective approach to APM, as most online applications were self-contained and, compared to the modern era, relatively simple in their design.

A state-of-the-art web application, circa 2000

Soon, a new solution rose to the top of the ecosystem - the synthetic, multi-step business process, played back either in a browser or a browser simulator. By evolving beyond the single-page measurement, this more complex data collection methodology was able to provide a view into the most critical business processes, delivering repeatable baseline and benchmark data that could be used by operations and business teams to track the health of the applications and identify when and where issues occurred.
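
To make the idea concrete, a scripted business process is essentially a repeatable sequence of user actions replayed on a fixed schedule. Here is a minimal sketch of what such a script might look like using Selenium WebDriver in Python - the URL, selectors, and steps are hypothetical illustrations, not the actual tooling of any particular monitoring product:

```python
# A hypothetical multi-step business process: home page -> search -> product -> cart.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com/")                     # Step 1: home page
    search_box = driver.find_element(By.NAME, "q")
    search_box.send_keys("widget")                             # Step 2: search
    search_box.submit()
    driver.find_element(By.CSS_SELECTOR, ".result a").click()  # Step 3: open a result
    driver.find_element(By.ID, "add-to-cart").click()          # Step 4: add to cart
finally:
    driver.quit()
```

A synthetic monitoring agent replays a script like this at regular intervals, timing each step, so the results are repeatable and comparable over time.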

These multi-step processes ruled the ecosystem for nearly a decade, evolving to include finer detail, deeper analytics, wider browser selection, and greater geographic coverage. But, like anything at the apex of an ecosystem, even this approach began to show that it couldn't answer every question.

In the modern online application environment, companies are delivering data to multiple browsers and mobile devices while creating increasingly sophisticated applications. These applications are developed using a combination of in-house code, commercial and open source packages and servers, and outside services to extend the application beyond what the in-house team specializes in.

This growth and complexity means that traditional, stand-alone tools are no longer "smart" enough to help customers actually solve the problems they face in their online applications. A new approach - the next step in APM evolution - is needed to displace the current technologies at the top of the pyramid.

A state-of-the-art online application, circa 2013

This ecosystem, with multiple, sometimes competing, data streams, makes it extremely difficult to answer the seemingly simple question of "What is happening?", and sometimes nearly impossible to answer the important question of "And why does it matter to us?"

Let's walk through a performance issue to show how the approach to APM has evolved to adapt to this complex ecosystem, and why it requires a sophisticated, integrated approach to turn the flood of data into a concentrated stream of actionable information.

Starting with synthetic data, we already have two unique perspectives that provide a broader scope of data than the traditional datacenter-only approach. By combining the Backbone (traditional datacenter synthetic monitoring) with data from the Last Mile (data collected from end-user computers running the same scripts that are run from the Backbone), clear differences in performance appear, showing companies that the datacenter-only approach needs to be extended by collecting data from a source much closer to the customers who use the monitored application.

Outside-In Data Capture Perspectives used to provide the user experience data for online applications
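
As a rough illustration of how the two perspectives can be contrasted, assume the measurements are exported into a table with a perspective label and a response time per measurement - the file and column names below are assumptions, not a real product schema:

```python
# Summarize response times per measurement perspective (hypothetical schema).
import pandas as pd

df = pd.read_csv("measurements.csv")  # assumed columns: 'perspective', 'response_time'
print(df.groupby("perspective")["response_time"].describe())
# Last Mile times typically run higher and more variable than Backbone times,
# since they include real consumer connections rather than datacenter links.
```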

Using a real-world scenario, let's follow the diagnostic process of a detected issue from the initial synthetic errors to the deepest level of impact, and see how a new, integrated APM solution can help resolve issues in an effective, efficient, and actionable way.

Starting with a three-hour snapshot of synthetic data, it's apparent that an issue occurs almost halfway through the period, primarily affecting the Backbone measurements.

Examination of Individual Synthetic Measurements to identify outliers and errors

The clear cluster of errors (red squares in the scatter plot) around 17:30 affects only the Backbone perspective. Filtering out the blue Last Mile measurements and zooming in quickly confirms that these errors are concentrated in the Backbone measurements.

Filtered Scatter Plot Data Showing the Backbone Perspective, Focusing on the Errors
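
The filtering and zooming described above amounts to a simple query over the measurement data. A sketch, again assuming the hypothetical schema from earlier plus a timestamp column and a boolean error flag:

```python
# Isolate Backbone errors in the window around 17:30 (hypothetical schema).
import pandas as pd

df = pd.read_csv("measurements.csv", parse_dates=["timestamp"])
window = df.set_index("timestamp").between_time("17:00", "18:00")
backbone_errors = window[(window["perspective"] == "Backbone") & window["error"]]
print(f"{len(backbone_errors)} Backbone errors around 17:30")
```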

Examining the errors shows that they are all script playback failures caused by an element missing from the page, preventing the next action in the script from being executed.

A waterfall chart showing that the script execution failed due to an expected page element not appearing
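
This failure mode is easy to reproduce in any scripting tool: the agent waits for the element the next action depends on, and when it never appears, playback stops with an error. A minimal Selenium sketch - the locator and timeout here are hypothetical, chosen to mirror the scenario:

```python
# Why a missing element halts playback: the wait for it times out.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com/report")
    # The next scripted action can only run once the "Chart" link is clickable.
    WebDriverWait(driver, 60).until(
        EC.element_to_be_clickable((By.LINK_TEXT, "Chart"))
    ).click()
except TimeoutException:
    print("Script playback error: expected element 'Chart' never appeared")
finally:
    driver.quit()
```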

But two questions still need to be answered: Why? And does this matter? What's interesting is that, as good as the synthetic tool is, this is as far as it can go. Teams are forced to investigate the issue further and replicate it using other tools, wasting precious time.

But an evolved APM strategy doesn't stop here. By isolating the time period and error, the modern, integrated toolset can now ask and answer both those questions, and extend the information to: Who else was affected?

In the above instance, we know that the issue occurred from Pennsylvania. By using a user-experience monitoring (UEM) tool that captures data from all incoming visitors, we can filter the data to examine just the synthetic test visit.
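
Conceptually, this is a filter over the captured visit data. A sketch, assuming the UEM tool can export visits with location and user-agent fields - all of the names below are assumptions for illustration:

```python
# Narrow captured UEM visits down to the synthetic agent's visit (hypothetical schema).
import pandas as pd

visits = pd.read_csv("uem_visits.csv", parse_dates=["start_time"])
synthetic = visits[
    (visits["geo_region"] == "Pennsylvania")
    & (visits["user_agent"].str.contains("SyntheticAgent", na=False))
]
print(synthetic[["visit_id", "start_time", "geo_region"]])
```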

Already, we have extended the data provided by the synthetic measurement. By drilling down further, it immediately becomes clear what the issue was.

Click on "Chart" takes over 60 seconds of Server Time

And then, the final step: what was happening on the server side? It's clear that one layer of the application was causing the issue and that the server eventually timed out.

Issue is in the Crystaldecision API - Something to pass to the developers and QA team!

So, the element that was needed to make the script move forward wasn't there because the process generating it had timed out. When the agent attempted the action, the missing element caused the script to fail.
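
The chain of cause and effect is straightforward to model: a slow backend call times out, the page renders without the element, and the scripted click then has nothing to act on. A purely illustrative sketch - the endpoint and timeout are assumptions, not the actual application's API:

```python
# Server-side view: the report layer times out, so the "Chart" element is never rendered.
import requests

try:
    resp = requests.get("https://internal.example.com/report-api/chart", timeout=60)
    resp.raise_for_status()
    chart_html = resp.text   # only on success is the "Chart" element rendered
except requests.exceptions.Timeout:
    chart_html = None        # page ships without the element; the scripted click fails
```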

This integrated approach has identified the Click on "Chart" action as one of potential concern, and we can now go back and look at all instances of this action over the past 24 hours to see whether other visits encountered a similar issue. It's clear that this is a serious issue that needs to be investigated. The following screenshot shows all Click on "Chart" actions that experienced this problem, including those from real users who were also impacted.

A list of all visitors - Synthetic and Real - affected by the click on "Chart" issue in a 24-hour period, indicating a high priority issue
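
The 24-hour lookback itself is a straightforward query once all visitor actions, synthetic and real, land in one dataset. A sketch under assumed field names:

```python
# Find every slow Click on "Chart" action in the past 24 hours (hypothetical schema).
import pandas as pd

actions = pd.read_csv("user_actions.csv", parse_dates=["timestamp"])
cutoff = actions["timestamp"].max() - pd.Timedelta("24h")
slow_chart_clicks = actions[
    (actions["timestamp"] >= cutoff)
    & (actions["action"] == 'Click on "Chart"')
    & (actions["server_time_ms"] > 60_000)
]
print(slow_chart_clicks.groupby("visitor_type").size())  # synthetic vs. real visitors
```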

From an error on a synthetic chart, we have quickly been able to drill down to an issue that has recurred multiple times over the past 24 hours, affecting not only synthetic agents but also real users. Exporting all of this data and sending it to the QA and development teams will allow them to focus their efforts on the critical area.

This integrated approach has shown what has been proven in ecosystems all throughout the world, whether they are in nature or in applications: a tightly integrated group that seamlessly works together is far more effective than an individual. With many eyes, perspectives, and complementary areas of expertise, the team approach has provided far more data to solve the problem than any one of the perspectives could have on its own.

More Stories By Stephen Pierzchala

With more than a decade in the web performance industry, Stephen Pierzchala has advised many organizations, from Fortune 500 companies to startups, on how to improve the performance of their web applications by helping them develop and evolve the unique speed, conversion, and customer experience metrics needed to effectively measure, manage, and evolve online web and mobile applications that improve performance and increase revenue. Working on projects for top companies in the online retail, financial services, content delivery, ad-delivery, and enterprise software industries, he has developed new approaches to web performance data analysis. Stephen has led web performance methodology, CDN assessment, SaaS load testing, technical troubleshooting, and performance assessments, demonstrating the value of web performance. He is noted for his technical analyses and knowledge of web performance from the outside in.


