APM Tools | @DevOpsSummit #Agile #APM #DevOps #ContinuousDelivery

Imagine you are an expert in a highly customizable platform that has been adapted to the customer's needs

Just last week a senior Hybris consultant shared the story of a customer engagement he was working on. This customer had problems, serious problems. We're talking about response times far beyond even the most liberal acceptable standard, and they were unable to solve the issue in their eCommerce platform - specifically Hybris. Although the eCommerce project was delivered by a system integrator / implementation partner, the vendor still gets involved when things go really wrong. After all, the vendor knows best, right?

So when he started working with this customer, his first question was:

Do you have an APM tool in place?

Why? Imagine you are an expert in a highly customizable platform that has been adapted to the customer's needs. Within a very short time you are expected to get a complete overview of a mostly unknown environment in order to solve a pressing issue. So you need information: accurate information, the best information available. Just facts, no rumors or hearsay. It's like when your child gets hurt at the playground. You take them to the hospital, and one of the first things the doctor does is take an X-ray to get a clear image of the injury - perhaps a broken bone.

"Yes we do have an APM solution." the customer replied. "Good" the expert consultant said. "Let's take a look at this problem in your staging environment." Customer: "It's only monitoring our production environment... and we already tried using it to solve the problem." Expert consultant (confidently):"Oh, okay, then let's work on production data to investigate the problem," and then asked for access.

Looking at production data provides the benefit of using "the real data and the real problem" for the investigation, not data replicated by a "close to real" test. Don't get me wrong, I'm not saying that performance analysis and diagnosis have to happen in production, but often it's the quicker way to resolve, well, production problems. Preventing these problems from ever hitting production by applying APM best practices first is a whole other topic. More on that later.

Soon after, he logged into the customer's "dynamic" monitoring solution, with its nice dashboards and alerts blinking on violated average response times. He saw an overview of the environment and even identified one specific business-relevant transaction that was extremely slow. What he saw confirmed the issue the customer was complaining about. The problem was obvious, but the solution wasn't. He needed details. He found database statements that were executed frequently, but each of them ran fast enough and seemed fine. So what was making the transaction take so long? A deep investigation of the individual transaction executions would be needed.
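This is a familiar pattern in database-backed eCommerce platforms: every statement looks fast in isolation, yet a single transaction may fire hundreds of them. The article never reveals this customer's actual root cause, so the following is only a hypothetical sketch in plain Java of how per-entry lookups can add up; the class name, method names, and timings are illustrative assumptions, not details from the engagement.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration (not the customer's actual code): many individually
// fast database calls inside one request can still produce a slow transaction.
public class CartPricingSketch {

    // Assume each lookup takes roughly 5 ms on its own,
    // so every statement looks "fast enough" in isolation.
    static double loadPriceFromDatabase(String productCode) {
        // e.g. SELECT price FROM prices WHERE code = ?
        return 9.99;
    }

    // One business transaction: price a cart with many entries.
    static double priceCart(List<String> productCodes) {
        double total = 0.0;
        for (String code : productCodes) {
            // One query per entry: 500 entries x ~5 ms = ~2.5 s spent in the
            // database, even though no single statement ever stands out.
            total += loadPriceFromDatabase(code);
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> cart = Arrays.asList("SHIRT-001", "SHOE-042", "HAT-007");
        System.out.println("Cart total: " + priceCart(cart));
    }
}
```

Spotting this kind of behavior is exactly why transaction-level detail matters: averages and per-statement timings look healthy, and only the full execution trace of a single slow transaction shows where the time actually goes.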

Can you export this live data...?

"...so I can take it to our lab for deeper investigation?" the consultant asked? The day had been long and he wanted to analyze the data offline, while on the commute home. A 45-minute ride should be enough to find the root cause, and he would be in time for family dinner.

"Export production performance data for offline analysis... how would that be possible?" a young and genuinely surprised system administrator asked. "I don't think that's even possible" - and it wasn't possible with their monitoring tool. So the Hybris expert stayed a bit longer, missed his train home, but eventually gave up investigating the problem for the day. Fortunately he was home in time for dinner, and his wife wasn't angry. Peace at home, but none for the customer who had to live another day with the persisting problem.

The next day he went back to the customer, eager to solve the issue. It hadn't gone away overnight, and the analysis was still hindered by missing facts - facts that their APM solution couldn't identify and report.


More Stories By Reinhard Brandstädter

Over the past 15 years, Reinhard Brandstädter has worked as a system architect, performance analyst, and consultant. As Product Manager for Dynatrace he has been driving product enhancements and capabilities for production monitoring. Now he focuses on performance management for eCommerce applications and supports companies on their APM journey.


