The Rise of the Machine: Intelligent Log Analysis Is Here

Developers use log data to troubleshoot and investigate what affects or causes a problem with their applications

Application logs are a massive repository of events and come in many different formats. They hold valuable information, but gaining useful insight from them is difficult without machine learning to help surface critical problems.

Transaction logs can contain gigabytes of data and come in proprietary formats. Some applications even have separate consoles, and the events captured differ by organization depending on compliance requirements and other considerations. Centralized log management has made it easier to troubleshoot applications and investigate security incidents from one location, but the data still must be interpreted. That often involves complex mapping of key-value structures.
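
To make that mapping concrete, here is a minimal sketch of the kind of key-value extraction centralized log management typically requires; the log format, field names, and regular expression are hypothetical and stand in for the per-format mappings an organization would actually have to maintain.

```python
import re

# Hypothetical log line in one of many possible proprietary formats.
RAW = '2017-09-14T10:32:01Z level=ERROR svc=checkout msg="payment timeout" latency_ms=5042'

# A simple key=value extractor. Real transaction logs would need one such
# mapping per format, which is exactly the manual effort described above.
KV_PATTERN = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_line(line: str) -> dict:
    """Map a raw log line into a key-value structure."""
    fields = {key: value.strip('"') for key, value in KV_PATTERN.findall(line)}
    fields["timestamp"] = line.split(" ", 1)[0]  # whatever precedes the first field
    return fields

print(parse_line(RAW))
# {'level': 'ERROR', 'svc': 'checkout', 'msg': 'payment timeout',
#  'latency_ms': '5042', 'timestamp': '2017-09-14T10:32:01Z'}
```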

Log management used to be a dirty word in the enterprise. Just four years ago, a Verizon study found that nearly 70 percent of security-breach victims were sitting on logs that contained sufficient evidence of active exploits; the breaches went undetected primarily because analysis was delayed or never produced effective insights. Without the right tools, log analysis is a burdensome undertaking.

Developers use log data to troubleshoot and investigate what affects or causes problems with their applications, both in testing and in production. That means processing a huge volume of data and searching events to find a needle in a haystack. Logs might have information about where a problem occurred, which component crashed, or which system events had an effect on the application. Previously, much effort went into managing and analyzing searchable logs.
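
That needle-in-a-haystack search usually starts with something no smarter than the sketch below: stream the log, keep a few lines of context, and stop at every error. The function and file names are illustrative, not part of any particular tool.

```python
from collections import deque
from typing import Iterable, Iterator, List

def errors_with_context(lines: Iterable[str], before: int = 3) -> Iterator[List[str]]:
    """Yield each ERROR line together with the lines that preceded it,
    which is often where the component that actually failed shows up."""
    window: deque = deque(maxlen=before)
    for line in lines:
        if "ERROR" in line:
            yield list(window) + [line]
        window.append(line)

# Stream a multi-gigabyte file without loading it all into memory.
# with open("app.log") as handle:
#     for hit in errors_with_context(handle):
#         print("".join(hit))
```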

Security Information and Event Management (SIEM) solutions then evolved to make it easier to correlate log information and identify some types of notable events with simple search and visualization tools. There are still many options available in this category; some are free and others are commercial. Log analysis remains a very time-consuming and exacting process, because the onus is on the developer or information security analyst to know exactly what they are looking for. A search query in this generation of SIEM tools often returns a flat list of results without prioritizing what's important to the application or network. Just imagine using Google without PageRank - the important results would be lost in the noise.
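
A rough illustration of what "PageRank for logs" might look like: score each hit by severity and rarity instead of returning it in arrival order. The weights and event templates below are invented for the example.

```python
import math
from collections import Counter

SEVERITY = {"DEBUG": 0.1, "INFO": 0.2, "WARN": 1.0, "ERROR": 3.0, "FATAL": 5.0}

def rank_events(events):
    """Order search hits by severity and rarity instead of a flat result list."""
    freq = Counter(e["template"] for e in events)
    def score(e):
        rarity = math.log(len(events) / freq[e["template"]])  # rarer templates score higher
        return SEVERITY.get(e["level"], 0.5) * (1.0 + rarity)
    return sorted(events, key=score, reverse=True)

events = [
    {"level": "INFO",  "template": "request handled"},
    {"level": "INFO",  "template": "request handled"},
    {"level": "ERROR", "template": "db connection refused"},
    {"level": "WARN",  "template": "cache miss"},
]
for event in rank_events(events):
    print(event["level"], event["template"])  # ERROR first, routine INFO last
```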

The Rise of the Machine
The latest generation of SIEM tools has more built-in intelligence to expedite the most time-consuming work. Semantic search automates the troubleshooting process by using advanced algorithms to uncover errors, risk factors, and other signs of problems. That is accomplished through a combination of text and semantic processing, statistical models and machine learning technologies.
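
As one deliberately simple stand-in for those statistical models, the sketch below flags time buckets whose error counts deviate sharply from the historical norm; the threshold and sample data are invented for illustration.

```python
import statistics

def anomalous_buckets(counts, threshold=3.0):
    """Flag time buckets whose counts sit far outside the historical norm."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid dividing by zero on flat data
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Errors per minute over an hour; the spike is the kind of "unknown problem"
# an analyst would otherwise have to hunt for by hand.
per_minute = [2, 3, 1, 2, 2, 4, 3, 2] * 7 + [2, 3, 95, 2]
print(anomalous_buckets(per_minute))  # -> [58]
```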

A pre-tuned information model, derived from user searches and decision-making during analysis, can be created for each SIEM scenario - from operations to compliance and testing. User searches are augmented by machine learning analytics to surface meaningful events and insights in the log data, saving time.

That's because augmented search profiles the data and delivers instant insight and intelligence, giving developers a bead on where to start and what happened. While augmented search can deliver useful information out of the box, it keeps getting better with more user searches. The most advanced SIEM solutions will even work with any home-grown or third-party application logs without any mapping.
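
That feedback loop can be pictured as a search index that accumulates interest from every query and reorders later results accordingly. The toy class below illustrates the idea only; it is not how XpoLog or any other vendor implements augmented search.

```python
from collections import defaultdict

class AugmentedIndex:
    """Each search teaches the index which terms analysts care about, so later
    result lists are ordered by that accumulated interest."""

    def __init__(self):
        self.interest = defaultdict(float)

    def search(self, query, lines):
        terms = query.lower().split()
        for term in terms:                     # learn from every query
            self.interest[term] += 1.0
        hits = [l for l in lines if any(t in l.lower() for t in terms)]
        weight = lambda l: sum(w for t, w in self.interest.items() if t in l.lower())
        return sorted(hits, key=weight, reverse=True)

logs = ["INFO user login ok", "ERROR payment timeout",
        "WARN payment retry", "ERROR login failed"]
idx = AugmentedIndex()
idx.search("payment", logs)          # analysts keep chasing payment issues...
idx.search("payment timeout", logs)
print(idx.search("error", logs)[0])  # ...so "ERROR payment timeout" now ranks first
```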

Expect to see new entrants, because a semantic revolution is now unfolding. Gartner's 2013 Magic Quadrant report for SIEM concluded, "We continue to see large companies that are re-evaluating SIEM vendors to replace SIEM technology associated with partial, marginal or failed deployments." Gartner recognized that intelligence matters, and suggested that analytics should uncover both known and unknown problems.

SIEM is evolving alongside semantics so that organizations can obtain value from the first event analyzed. It can take hours to find errors in log data manually, but automated search tools can pinpoint critical events within seconds, in context and with high accuracy.

More Stories By Haim Koshchitzky

Haim Koshchitzky is the Founder and CEO of XpoLog and has over 20 years of experience in complex technology development and software architecture. Prior to XpoLog, he spent several years as the tech lead for Mercury Interactive (acquired by HP) and other startups. He has a passion for data analytics and technology, and is also an avid marathon runner and Judo black belt.
