By Stephen Pierzchala
April 22, 2013 09:00 AM EDT
I started in the web performance industry - well before Application Performance Management (APM) existed - during a time when external, single-page measurement ruled the land. In an ecosystem where no other solutions existed, it sat at the top of the data chain, supporting the rapidly evolving world of web applications. It was an effective approach to APM, as most online applications were self-contained and, compared to the modern era, relatively simple in their design.
A state-of-the-art web application, circa 2000
Soon, a new solution rose to the top of the ecosystem - the synthetic, multi-step business process, played back either in a browser or a browser simulator. By evolving beyond the single-page measurement, this more complex data collection methodology was able to provide a view into the most critical business processes, delivering repeatable baseline and benchmark data that could be used by operations and business teams to track the health of the applications and identify when and where issues occurred.
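To make the idea concrete, here is a minimal sketch of what such a multi-step synthetic script might look like, using Selenium purely as a stand-in for a vendor's playback agent; the URL, selectors, and step names are hypothetical.

```python
# A sketch of a multi-step synthetic business process: each step is played
# back in a real browser and timed, producing the repeatable baseline data
# described above. All URLs and selectors are illustrative assumptions.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

STEPS = [
    ("Home",        lambda d: d.get("https://www.example.com/")),
    ("Login",       lambda d: d.find_element(By.ID, "login").click()),
    ("Search",      lambda d: d.find_element(By.NAME, "q").send_keys("widgets\n")),
    ("Add to cart", lambda d: d.find_element(By.CSS_SELECTOR, ".add-to-cart").click()),
]

driver = webdriver.Firefox()
try:
    for name, action in STEPS:
        start = time.perf_counter()
        action(driver)                    # play back this step of the business process
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.2f}s")  # per-step timings feed the baseline/benchmark
finally:
    driver.quit()
```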
These multi-step processes ruled the ecosystem for nearly a decade, evolving to include finer detail, deeper analytics, wider browser selection, and greater geographic coverage. But, like anything at the apex of an ecosystem, even this approach began to show that it couldn't answer every question.
In the modern online application environment, companies are delivering data to multiple browsers and mobile devices while creating increasingly sophisticated applications. These applications are developed using a combination of in-house code, commercial and open source packages and servers, and outside services to extend the application beyond what the in-house team specializes in.
This growth and complexity means that the traditional, stand-alone tools are no longer sophisticated enough to help customers actually solve the problems they face in their online applications. A new approach - the next step in the evolution of APM - is needed to displace the current technologies at the top of the pyramid.
A state-of-the-art online application, circa 2013
This ecosystem, with multiple, sometimes competing, data streams, makes it extremely difficult to answer the seemingly simple question of "What is happening?" - and sometimes nearly impossible to answer the more important question of "And why does it matter to us?"
Let's walk through a performance issue and show how the approach to APM has evolved to adapt to this complex ecosystem, and why it requires a sophisticated, integrated approach to turn the flood of data into a concentrated stream of actionable information.
Starting with synthetic data, we already have two unique perspectives that provide a broader scope of data than the traditional datacenter-only approach. Combining the Backbone (traditional datacenter synthetic monitoring) with the Last Mile (data collected from end-user computers running the same scripts that run from the Backbone) exposes clear differences in performance, showing companies that the datacenter-only approach needs to be extended with data collected from a source much closer to the customers who use the monitored application.
Outside-In Data Capture Perspectives used to provide the user experience data for online applications
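As an illustration of that comparison, here is a small sketch that summarizes response times per perspective; the record shape and sample values are assumptions, not real measurement data.

```python
# A sketch of comparing Backbone and Last Mile perspectives: the same
# transaction, measured from two vantage points, summarized per perspective.
# Sample values are illustrative only.
from statistics import median, quantiles

measurements = [
    ("backbone",  1.2), ("backbone",  1.3), ("backbone",  1.1),
    ("last_mile", 3.8), ("last_mile", 5.1), ("last_mile", 4.4),
]

for perspective in ("backbone", "last_mile"):
    times = sorted(t for p, t in measurements if p == perspective)
    p95 = quantiles(times, n=20)[-1]   # last cut point ~ 95th percentile
    print(f"{perspective}: median={median(times):.2f}s p95={p95:.2f}s")
```

The gap between the two summaries is the signal: datacenter-only numbers understate what customers actually experience.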
Using a real-world scenario, let's follow the diagnostic process of a detected issue from the initial synthetic errors to the deepest level of impact, and see how a new, integrated APM solution can help resolve issues in an effective, efficient, and actionable way.
Starting with a three-hour snapshot of synthetic data, it's apparent that there is an issue almost halfway through this period, affecting primarily the Backbone measurements.
Examination of Individual Synthetic Measurements to identify outliers and errors
Filtering out the blue Last Mile measurements shows that the clear cluster of errors (red squares in the scatter plot) around 17:30 affects the Backbone only. After this filtering, zooming in confirms that the errors are concentrated in the Backbone measurement perspective.
Filtered Scatter Plot Data Showing the Backbone Perspective, Focusing on the Errors
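The filtering step itself is conceptually simple. A sketch, assuming each measurement carries a perspective, a timestamp, and an error flag (hypothetical field names):

```python
# A sketch of isolating Backbone errors in a window around the incident,
# dropping the Last Mile measurements that cluttered the scatter plot.
from datetime import datetime, timedelta

def backbone_errors(measurements, around, window_minutes=15):
    """Keep only Backbone errors near the incident time."""
    lo = around - timedelta(minutes=window_minutes)
    hi = around + timedelta(minutes=window_minutes)
    return [m for m in measurements
            if m["perspective"] == "backbone"
            and m["error"]
            and lo <= m["timestamp"] <= hi]

incident = datetime(2013, 4, 17, 17, 30)
sample = [
    {"perspective": "backbone",  "timestamp": datetime(2013, 4, 17, 17, 28), "error": True},
    {"perspective": "last_mile", "timestamp": datetime(2013, 4, 17, 17, 29), "error": False},
]
print(backbone_errors(sample, incident))  # only the Backbone error survives
```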
Examining the data shows that these are all script playback failures caused by an element missing from the page, which prevented the next action in the script from being executed.
A waterfall chart showing that the script execution failed due to an expected page element not appearing
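In scripting terms, the failure mode looks roughly like the sketch below: the agent waits for the element that drives the next step, and the wait times out because the element never renders. Selenium is used here only for illustration, and the URL and selector are hypothetical.

```python
# A sketch of how "expected page element not appearing" surfaces during
# playback: the explicit wait for the next step's element times out.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Firefox()
try:
    driver.get("https://www.example.com/reports")
    chart_link = WebDriverWait(driver, 60).until(
        EC.element_to_be_clickable((By.LINK_TEXT, "Chart"))
    )
    chart_link.click()
except TimeoutException:
    # The element never appeared, so the scripted action cannot run -
    # this is the error recorded in the synthetic scatter plot.
    print("step failed: expected page element did not appear")
finally:
    driver.quit()
```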
But two questions still need to be answered: Why? And does this matter? What's interesting is that, as good as the synthetic tool is, this is as far as it can go. Teams are forced to investigate further and replicate the issue using other tools, wasting precious time.
But an evolved APM strategy doesn't stop here. By isolating the time period and the error, the modern, integrated toolset can ask and answer both of those questions, and extend the investigation to a third: Who else was affected?
In the above instance, we know that the issue occurred from Pennsylvania. By using a user-experience monitoring (UEM) tool that captures data from all incoming visitors, we can filter the data to examine just the synthetic test visit.
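A sketch of that filtering step, assuming visits are tagged with a traffic source and a geographic region (hypothetical field names; a real UEM tool exposes equivalent filters in its UI or API):

```python
# A sketch of narrowing UEM visit data down to the synthetic test visit
# from Pennsylvania. Records and field names are illustrative assumptions.
visits = [
    {"id": 101, "source": "synthetic", "region": "Pennsylvania", "action": "Click on 'Chart'"},
    {"id": 102, "source": "real",      "region": "Ohio",         "action": "Login"},
]

synthetic_pa = [v for v in visits
                if v["source"] == "synthetic" and v["region"] == "Pennsylvania"]
print(synthetic_pa)  # the one visit we drill into next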
Already, we have extended the data provided by the synthetic measurement. By drilling down further, it immediately becomes clear what the issue was.
Click on "Chart" takes over 60 seconds of Server Time
And then the final step: what was happening on the server side? It's clear that one layer of the application was causing the issue, and that the server eventually timed out.
Issue is in the CrystalDecisions API - something to pass to the developers and QA team!
So the element needed to move the script forward wasn't there because the process generating it had timed out. When the agent attempted the action, the missing element caused the script to fail.
This integrated approach has identified the Click on 'Chart' action as one of potential concern, and we can now go back and look at all instances of this action over the past 24 hours to see whether other visits encountered a similar issue. The following screenshot shows all Click on 'Chart' actions that experienced this problem, including those from real users who were also impacted. It's clear that this is a serious issue that needs to be investigated.
A list of all visitors - Synthetic and Real - affected by the click on "Chart" issue in a 24-hour period, indicating a high priority issue
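The lookback itself amounts to a simple query over the combined synthetic and real-user data. A sketch, using hypothetical records and a hypothetical 60-second server-time threshold:

```python
# A sketch of the 24-hour lookback: collect every visit, synthetic or real,
# whose "Click on 'Chart'" action exceeded a server-time threshold.
from datetime import datetime, timedelta

THRESHOLD_S = 60
now = datetime(2013, 4, 17, 18, 0)

actions = [
    {"visit": 101, "source": "synthetic", "when": now - timedelta(hours=1),  "server_time_s": 61.2},
    {"visit": 202, "source": "real",      "when": now - timedelta(hours=5),  "server_time_s": 74.9},
    {"visit": 303, "source": "real",      "when": now - timedelta(hours=30), "server_time_s": 70.0},  # outside window
]

affected = [a for a in actions
            if a["server_time_s"] > THRESHOLD_S
            and now - a["when"] <= timedelta(hours=24)]
for a in affected:
    print(f"visit {a['visit']} ({a['source']}): {a['server_time_s']:.1f}s server time")
```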
From an error in a synthetic measurement, we have quickly drilled down to an issue that has repeated multiple times over the past 24 hours, affecting not only synthetic agents but also real users. Exporting all of this data and sending it to the QA and development teams will allow them to focus their efforts on the critical area.
This integrated approach has shown what has been proven in ecosystems all throughout the world, whether they are in nature or in applications: a tightly integrated group that seamlessly works together is far more effective than an individual. With many eyes, perspectives, and complementary areas of expertise, the team approach has provided far more data to solve the problem than any one of the perspectives could have on its own.