Deploying APM in the Enterprise | Part 5

Alerts – Storm of the Century – Every Week!

Welcome back to my series on Deploying APM in the Enterprise. In Part 4: Path of the Rockstar, we discussed how to deploy your new monitoring tool and get maximum value from your time and monetary commitment. This post will cover one of the most important aspects of monitoring: alerting. This is the topic that can make or break your entire implementation. Get it wrong and you've wasted a bunch of time and money on mediocre results. Get it right and your time and money investment will be multiplied by the value you derive every day.

App Man wrote a great blog post earlier this year about behavioral learning and analytics as they apply to alerts. If you haven't already done so, I suggest you go read it after you finish this post. Instead of repeating what was covered in that post, we will explore the issues that I saw out in real enterprise operations centers.

Traditional Alerting Methods Don't Work Well
Do any of these sound familiar?

  • "I got paged at 3 AM with a high CPU alert. It was backups running and consuming the CPU. This happens almost every week! Maybe we should turn change the threshold setting and timing."
  • "We just got a notification of high disk and network I/O rates. Is that normal? Does anyone know if our app is still working right?"
  • "We just got an alert on high JVM memory usage. Can someone use the app to see if anything is wrong?"
  • "We just got a call from a user complaining that the website is slow but there were no alerts."

Comments like these are a way of life when you set static thresholds (e.g., CPU utilization > 90% for 5 minutes) on metrics that aren't direct indicators of application performance. It's the equivalent of taking a person's heart rate while they are exercising to see if they are performing as expected. A really high heart rate might indicate that a person is performing well or that they are about to die of a heart attack. Heart rate would be a supporting metric to something more meaningful, like how long it took them to run the last quarter mile. The same holds true for application performance. We will explore this concept further a little later.

Storm of the Century ... Again!

One of the most important lessons I learned while working in large enterprise environments is that you will almost always set static thresholds wrong. Set them too high and you run the risk of missing a real problem. Set them too low and you will get so many alerts that they become irrelevant as you spend all of your time chasing "problems" that don't really exist. Getting massive amounts of alerts in a short period of time is referred to as an "alert storm," and it is deeply despised in the IT operations world. Alert storms send masses of people scrambling to determine whether there is any real impact to the business and, if so, how severe it is.
Alert Storms are so detrimental to operations that companies spend a lot of money on systems designed to prevent alert storms. These systems become a central aggregation point for alerts and rules are written that try to intelligently address alert storm conditions. This method just adds to the overhead costs and complexity of your overall monitoring environment and should ideally never have to be considered.

Alerts Done Right - Business Impact

The right question to ask now is: "How can alerting be done the right way without spending more time and money than it costs to develop and run my applications?"

Your most critical, intelligent, trusted (or whatever other buzzwords make sense here) alerts should be based on metrics that directly represent business impact. Following are a few examples:

  • End user response time (good indicator of regional issues)
  • Business transaction response time (good indicator of systemic issues)
  • Business transaction throughput rate (do we see the same amount of traffic as usual?)
  • Number of widgets sold (is there a problem preventing users from buying?)

Now that you know what type of metrics should be the triggers for your alerts, you need to know what the proper alerting method is for these metrics. By now you should know that I am going to discourage the use of static thresholds. Your monitoring tool needs to support behavioral baselining and alerts based upon deviation from baselines. Simply put, your monitoring tool needs to automatically learn normal behavior for each metric and only alert if there is a large enough deviation from that normal behavior.
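
To make the contrast concrete, here is a minimal, hypothetical sketch of deviation-based alerting: it learns a rolling baseline for a metric and only fires when a sample strays well outside normal behavior. This is purely illustrative and not how any particular APM product implements baselining; the window size, warm-up count, and tolerance are made-up values.

```python
from collections import deque
from statistics import mean, stdev

class BaselineAlerter:
    """Toy behavioral baseline: alert only when a metric deviates
    significantly from its own recent history."""

    def __init__(self, window=120, tolerance=3.0):
        self.history = deque(maxlen=window)  # rolling window of recent samples
        self.tolerance = tolerance           # allowed deviation, in standard deviations

    def observe(self, value):
        """Record a sample; return True if it deviates from the learned baseline."""
        deviates = False
        if len(self.history) >= 10:          # need some history before judging
            baseline = mean(self.history)
            spread = stdev(self.history) or 1e-9
            deviates = abs(value - baseline) > self.tolerance * spread
        self.history.append(value)
        return deviates

# Feed business transaction response times (ms) into the alerter.
alerter = BaselineAlerter()
samples = [210, 195, 220, 205, 198, 212, 201, 208, 215, 199, 2400]  # last one is an anomaly
for response_ms in samples:
    if alerter.observe(response_ms):
        print(f"ALERT: response time {response_ms} ms deviates from baseline")
```

Real monitoring tools learn far richer baselines (time of day, day of week, seasonality), but the principle is the same: the alert is defined relative to learned behavior, not a hard-coded number.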

Now let me point out that I do not hate static thresholds. On the contrary, I find them useful in certain situations. For example, if I've promised a 300 ms response time from the service that I manage, I really want an alert if ANY transaction takes longer than 300 ms so I can identify the root cause and make sure it never happens again. That is a perfect time to set up a static threshold, but it is more of an outlier case when it comes to alerting.
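
For comparison, the SLA case above really is just a hard-coded check. A hypothetical sketch (the 300 ms figure and function name are illustrative, not part of any product):

```python
SLA_MS = 300  # promised response time for this service (illustrative)

def check_sla(transaction_id, response_ms):
    """Static threshold: flag any single transaction that breaks the promised SLA."""
    if response_ms > SLA_MS:
        print(f"SLA breach: transaction {transaction_id} took {response_ms} ms (limit {SLA_MS} ms)")

check_sla("checkout-42", 512)  # -> SLA breach: transaction checkout-42 took 512 ms (limit 300 ms)
```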

Here is a real-world example of how powerful behavior-based alerts are compared to static ones. When I was working for the investment banking division of a global financial services firm, the operations center received an alert that was based upon deviation from normal behavior. The alert was routed to the application support team, who quickly identified the issue and were able to avoid an outage of their trading platform. A post-event analysis revealed that the behavior-based alert triggered 45 minutes before an old static threshold alert would have been sent out. This 45-minute head start enabled the support team to completely avoid business impact, which equated to saving millions of dollars per hour in lost revenue for that particular application.

I love it when you recoup the cost of your monitoring tools by avoiding a single outage!!!

Integration, Not Segregation
Now that we know about behavioral learning and alerting, and that we need to focus on metrics that directly correlate to business impact, what else is important when it comes to alerts?

Integration and analysis of alerts and data can help reduce your MTTR (mean time to repair) from hours/days/weeks to minutes. When your operations center receives an alert, they usually just forward it on to the appropriate support team and wait to hear back on the resolution. If done right, your operations center can pass along a full set of meaningful information to the proper support team so that they can act almost immediately. Imagine sending an email to support that contains a link to a slow "checkout" business transaction plus charts of all of the supporting metrics (CPU, garbage collection, network I/O, etc.) that deviated from normal behavior before, during, and after the time of the slow transaction. That's way more powerful than sending an alert from ops to app support that complains of high CPU utilization on a given host.
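
As a rough illustration, that enriched hand-off might look something like the sketch below. Everything here is hypothetical: the transaction record, the trace URL, and the way deviating metrics are flagged are stand-ins for whatever your monitoring tool actually exposes.

```python
import json
from datetime import datetime, timedelta

def build_enriched_alert(transaction, supporting_metrics):
    """Bundle a slow business transaction with every supporting metric that
    deviated from its baseline around the same time, so app support can act
    immediately instead of chasing a bare 'high CPU' page."""
    window_start = transaction["timestamp"] - timedelta(minutes=15)
    window_end = transaction["timestamp"] + timedelta(minutes=15)
    deviating = [m["name"] for m in supporting_metrics if m["deviated"]]
    return {
        "summary": f"Slow '{transaction['name']}' transaction: {transaction['response_ms']} ms",
        "trace_link": transaction["trace_url"],          # deep link to the slow transaction
        "analysis_window": [window_start.isoformat(), window_end.isoformat()],
        "deviating_metrics": deviating,                  # CPU, GC, network I/O, ...
    }

# Hypothetical inputs for illustration only.
txn = {
    "name": "checkout",
    "response_ms": 4800,
    "timestamp": datetime(2013, 4, 2, 9, 30),
    "trace_url": "https://apm.example.com/traces/abc123",
}
metrics = [
    {"name": "host CPU %", "deviated": True},
    {"name": "JVM GC time", "deviated": True},
    {"name": "network I/O", "deviated": False},
]
print(json.dumps(build_enriched_alert(txn, metrics), indent=2))
```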

You Can't Afford to Live in the Past
The IT world is constantly changing. What once was "cutting edge" has transitioned through "good enough" and is now full-blown "you still use that?" Alerts from static thresholds based upon metrics that have no relationship to business impact are costing your organization time and money. Monitoring Rockstars are constantly adapting to the changing IT landscape and making sure their organization takes advantage of the strategies and technologies that enable competitive advantage.

When you use the right monitoring tools with the proper alerting strategy, you help your organization improve customer service, focus on creating new and better products, and increase profits, all by reducing the number and length of application outages. So implement the strategies discussed here, document your success, and then go ask for a raise!

Thanks for taking the time to read this week's installment in my continuing series. Next week I'll share my thoughts and experience on increasing adoption of your monitoring tools across organizational silos to really crank up the value proposition.


More Stories By Sandi Mappic

Sandi Mappic has a passion for making apps go faster. She works with AppDynamics around the clock to help customers resolve performance pain and master application performance management.