
Predictive Analytics for IT – Filling the Gaps in APM

Predictive analytics solutions for IT can detect, trace and predict performance issues and their root cause

Application Performance Management (APM) grew out of the movement to better align IT with real business concerns. Instead of monitoring a lot of disparate components, such as servers and switches, APM would provide improved visibility into mission-critical application performance and the user experience. Today, APM solutions help IT track end-to-end application response time and troubleshoot coding errors across application components that have an impact on performance.

APM has a rightful place in the arsenal of monitoring tools that IT uses to keep its applications and systems up and running. However, today's APM solutions have some serious gaps and challenges when it comes to providing IT with the entire application performance picture.

Hardware Visibility
Most APM solutions provide minimal information about the hardware and network components underlying application performance, other than showing which components are involved in each part of the transaction. Those that do a better job usually require users to shift to another screen or monitoring system to get more hardware visibility. As with the blind men touching different parts of an elephant, this approach makes it difficult to correlate hardware performance with all the other components driving the application.

The Virtual, Distributed Environment
Most of today's APM solutions were created before virtualization, the cloud, and complex, composite applications took off in the IT environment. With virtual machines migrating back and forth among physical servers at different times of the day or week, and applications dependent on scores of components and cloud services, APM vendors are hard-pressed to provide visibility into the entire scope of a single application.

Predictive Capabilities
As 24/7/365 uptime becomes increasingly critical to business success, enterprises need to predict and address issues before they affect the business, rather than after. APM has had mixed success in this area. A recent survey by TRAC Research[1] found that among organizations deploying APM solutions, 60 percent report a success rate of less than half in identifying performance issues before they have an impact on end users.

Enter Predictive Analytics for IT
Filling these gaps is where Big Data and predictive analytics for IT can play a significant, highly beneficial role in IT's efforts to maintain application performance. Today, when IT encounters performance issues, it typically has to gather its server, storage, network, and APM specialists into a war room to search through mountains of hardware and APM logs and correlate information manually to isolate the root cause. This resource-intensive process can frequently take hours or even days.

IT has plenty of alerts and thresholds to analyze, but those are only as good as the knowledge, experience, and insight of the IT staff who configured them. Just because a server surpassed its CPU utilization threshold doesn't mean that event had anything to do with the root cause of an application issue. Often the real issue is hidden deep in the delicate interactions among multiple hardware and software components and may not be reflected in any individual threshold. The same TRAC Research study shows that IT spends an average of 46.2 hours each month in these war rooms searching for root cause. Even more discouraging, the root cause is often never found, so IT simply reboots everything and hopes it all works until the same problem rears its ugly head again.

Predictive analytics take over where APM leaves off, harnessing third-generation machine learning and Big Data analysis techniques to efficiently plow through mountains of log data. They discover the behavior patterns and interrelationships among the IT software and hardware components driving today's mission-critical applications. Over several hours or days, the best solutions baseline the normal behavior of all those components, relationships, and events, and use complex algorithms to detect any anomalies that are the early warning signs of developing performance issues. Better yet, because the analytics understand the chain of events involved in the developing anomaly, IT support staff are immediately provided with not only an alert that something is going wrong, but also the behavior of every component involved. This information can shave hours or even days off those war room scenarios. For example, thanks to a predictive analytics for IT solution, a major retailer was able to trace periodic gift card application outages to a misconfigured VLAN. Similarly, a predictive analytics solution reduced the time it took to diagnose a financial content management performance issue from six hours in the war room to ten minutes.
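The baselining idea described above can be illustrated with a deliberately simple sketch. This is not any vendor's actual algorithm; it just learns a per-metric baseline (mean and standard deviation) from historical samples and flags current values that deviate far from it. The metric names and sample values are hypothetical:

```python
import statistics

def learn_baseline(history):
    """Learn a per-metric baseline (mean, standard deviation)
    from historical samples, e.g. {"cpu_pct": [20, 22, ...], ...}."""
    return {
        metric: (statistics.mean(vals), statistics.stdev(vals))
        for metric, vals in history.items()
    }

def detect_anomalies(baseline, sample, z_threshold=3.0):
    """Flag any metric whose current value deviates from its learned
    baseline by more than z_threshold standard deviations."""
    anomalies = {}
    for metric, value in sample.items():
        mean, stdev = baseline[metric]
        z = abs(value - mean) / stdev if stdev > 0 else 0.0
        if z > z_threshold:
            anomalies[metric] = round(z, 1)
    return anomalies

# Baseline learned from a quiet week of hourly samples (made-up numbers).
history = {
    "cpu_pct": [20, 22, 19, 21, 23, 20, 22],
    "disk_io": [110, 115, 108, 112, 111, 114, 109],
}
baseline = learn_baseline(history)

# A new sample: CPU looks normal, but disk I/O has spiked well outside
# its learned range, so only disk_io is flagged.
print(detect_anomalies(baseline, {"cpu_pct": 21, "disk_io": 300}))
```

A real product would of course model time-of-day seasonality and cross-metric correlations rather than treating each metric independently, but the core idea, learn what "normal" looks like and alert on statistically significant deviations, is the same.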

Another advantage of predictive analytics solutions is that because they self-learn the normal behavior patterns of underlying components, they drastically reduce the educated guessing that usually goes along with IT staff identifying and setting thresholds against key performance indicators. The inflexibility of these thresholds results in large numbers of false-positive alerts. But with predictive analytics, highly sophisticated algorithms compute the probability of certain behaviors and can therefore generate much more accurate alerts. Some users of predictive analytics solutions have called them the Donald Rumsfelds of IT management tools because they point IT to infrastructure issues they never even knew existed and never looked for. Rumsfeld called these the "unknown unknowns."
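The contrast between a fixed threshold and a probability-based alert can be sketched as follows (an illustrative example with made-up numbers, not a particular vendor's method). On a batch server that routinely runs hot, a static 80% CPU rule pages constantly, while a rule that fits a distribution to observed behavior alerts only on improbably extreme values:

```python
from statistics import NormalDist

# Observed CPU utilization on a server that normally runs around 85%.
observed = [82, 85, 88, 84, 86, 83, 87, 85, 84, 86]

STATIC_THRESHOLD = 80.0

# Learn what "normal" looks like from the observations.
dist = NormalDist.from_samples(observed)

def static_alert(value):
    # Fires on every sample above the hand-picked threshold,
    # even though 85% is routine for this server.
    return value > STATIC_THRESHOLD

def probabilistic_alert(value, tail_prob=0.001):
    # Fires only when the value sits improbably far into the
    # upper tail of the learned distribution.
    return (1.0 - dist.cdf(value)) < tail_prob

for value in (85.0, 99.0):
    print(value, static_alert(value), probabilistic_alert(value))
```

Here a routine reading of 85% triggers the static rule but not the probabilistic one, while a genuine spike to 99% triggers both, which is exactly the false-positive reduction the paragraph above describes.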

However, it is in their ability to be "predictive" that these advanced analytics solutions really shine. By detecting small anomalies early, predictive analytics can alert IT to performance issues and provide enough information to address their root cause before IT or application users even notice them. This can have a dramatic effect on application uptime and performance, and a direct impact on user satisfaction and even enterprise revenue. In the case of the financial content management application mentioned earlier, predictive analytics discovered a developing performance issue, and its root cause, the night before it would have affected users placing the application under load on Monday morning.
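One simple form of prediction, far cruder than the machine-learning approaches described above but useful for intuition, is extrapolating a trend to estimate when a resource will be exhausted. The least-squares sketch below (hypothetical function name and sample data) estimates hours until a disk fills:

```python
def hours_until_full(samples, capacity_pct=100.0):
    """Fit a least-squares line to hourly disk-usage percentages and
    extrapolate to estimate hours until capacity is reached."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage flat or shrinking: no predicted exhaustion
    return (capacity_pct - samples[-1]) / slope

usage = [70.0, 71.0, 72.0, 73.0, 74.0]  # growing 1% per hour
print(hours_until_full(usage))           # -> 26.0
```

Production solutions predict across many correlated metrics and nonlinear patterns rather than a single trend line, but the payoff is the same: an actionable warning hours or days before users feel the problem.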

APM tools have their place in the enterprise, but predictive analytics solutions for IT can kick the effectiveness of those and other IT monitoring tools up a notch by detecting, tracing, and predicting performance issues and their root cause long before any IT war room can.

Resource:

  1. TRAC Research, "2013 Application Performance Management Spectrum," March 4, 2013.

More Stories By Rich Collier

Rich Collier is a Principal Solutions Architect with Prelert, a provider of 100% self-learning predictive analytics solutions that augment IT expertise with machine intelligence to dramatically improve IT Operations.


