Babies, Big Data, and IT Analytics

Machine learning and IT analytics can be just as beneficial to IT operations as they are to monitoring the vital signs of premature babies, where they identify danger signs too subtle or abnormal for a human to detect. But to get meaningful results from machine learning, an enterprise must be willing to implement monitoring and instrumentation that gathers data across organizational silos and incorporates business activity.

Machine learning is a topic that has gone from obscure niche to mainstream visibility over the last few years. High-profile software companies like Splunk have tapped into the Big Data "explosion" to highlight the benefits of building systems that use algorithms and data to make decisions and evolve over time.

A recent machine learning article on the O'Reilly Radar blog caught my attention by drawing a connection between web operations and medical care for premature infants. In "Operations, machine learning, and premature babies," Mike Loukides describes how machine learning is used to analyze data streamed from dozens of monitors connected to each baby. The algorithms can detect dangerous infections a full day before any symptoms are noticeable to a human.

An interesting point from the article is that the machine learning system is not looking for spikes or irregularities in the data; it is actually looking for the opposite. Babies who are about to become sick stop exhibiting the normal variations in vital signs shown by healthy babies. It takes a machine learning system to detect changes in behavior too subtle for a human to notice.

Mike Loukides then wonders whether machine learning can be applied to web operations, where typical performance monitoring relies on thresholds to identify a problem. "But what if crossing a threshold isn't what indicates trouble, but the disappearance (or diminution) of some regular pattern?" Machine learning could catch symptoms that a human misses because the human is only watching for thresholds to be crossed.
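
To make that idea concrete, here is a minimal sketch of a "loss of variation" detector. It is my own illustration, not anything from the article, and the window sizes and ratio are arbitrary assumptions. Rather than alerting when a metric crosses a threshold, it compares the standard deviation of a recent window against a long-run baseline and raises a flag when the metric's normal variability collapses:

```python
from collections import deque
from statistics import stdev

class VariationLossDetector:
    """Flag a metric stream whose normal variation disappears.

    Instead of alerting on threshold crossings, compare the standard
    deviation of a recent sliding window against a long-run baseline
    and alarm when variability collapses below a fraction of it.
    """

    def __init__(self, window=60, baseline=600, min_ratio=0.25):
        self.recent = deque(maxlen=window)     # short-term samples
        self.history = deque(maxlen=baseline)  # long-run samples
        self.min_ratio = min_ratio             # alarm below this stdev ratio

    def observe(self, value):
        self.recent.append(value)
        self.history.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False                       # still warming up
        baseline_sd = stdev(self.history)
        if baseline_sd == 0:
            return False                       # metric is naturally flat
        # Alarm when the metric's "heartbeat" flattens out, even though
        # every individual value may still be inside its thresholds.
        return stdev(self.recent) / baseline_sd < self.min_ratio
```

Fed one sample per polling interval, a detector like this stays quiet on healthy, noisy data and alarms precisely when the stream goes unnaturally still.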

Mike's conclusion sums up much of the state of the IT industry concerning machine learning:

At most enterprises, operations have not taken the next step. Operations staff doesn't have the resources (neither computational nor human) to apply machine intelligence to our problems. We'd have to capture all the data coming off our servers for extended periods, not just the server logs that we capture now, but every kind of data we can collect: network data, environmental data, I/O subsystem data, you name it.

As someone who works for a company that applies a form of machine learning (Behavior Learning for predictive analytics) to IT operations and application performance management, I read this with great interest. I didn't necessarily disagree with his conclusion, but I did try to unpack the reasons why more companies aren't applying algorithms to their IT data to look for problems.

There are at least three requirements for companies that want to move ahead in this area:

1. Establish a mature monitoring infrastructure. This is the most fundamental point: if you want to apply machine intelligence to IT operations, you first need to add instrumentation and monitoring. Monitoring products and approaches abound, but you have to capture the data before you can analyze it (a minimal collection sketch follows this list).

2. Coordinate multiple enterprise silos. Modern IT applications are increasingly complex and may cross multiple enterprise silos such as server virtualization, networking, databases, application development, and middleware. Enterprises must be willing to coordinate among these groups to gather monitoring data and to perform cross-functional troubleshooting when performance or uptime issues arise.

3. Incorporate business activity monitoring (BAM). Business activity data provides the "vital signs" of a business; examples in retail include the number of units sold and the total gross and net sales for a time period. Knowing the true business impact of an application performance problem requires correlating this business data. When a 20-minute outage occurs, how many fewer units are sold? What is the reduction in gross and net sales? (A sketch of this correlation also follows below.)
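
On the first point, you cannot analyze data you never captured. As a minimal illustration (the output file and sampling interval are arbitrary choices of mine; the library calls are standard psutil), the sketch below samples a few host-level metrics and appends them to a CSV for later analysis:

```python
import csv
import time

import psutil  # third-party: pip install psutil

def collect(path="metrics.csv", interval=10):
    """Append basic host metrics to a CSV every `interval` seconds."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            io = psutil.disk_io_counters()
            writer.writerow([
                time.time(),                      # sample timestamp
                psutil.cpu_percent(),             # CPU utilization, %
                psutil.virtual_memory().percent,  # memory utilization, %
                io.read_bytes,                    # cumulative disk reads
                io.write_bytes,                   # cumulative disk writes
            ])
            f.flush()
            time.sleep(interval)
```

A real deployment would ship these samples to a time-series store rather than a local file, but the principle is the same: instrument first, analyze second.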
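
On the third point, here is one simple way to answer the "how many fewer units were sold?" question. The data shape is an assumption for illustration, not a real BAM API: sales arrive as (timestamp, units) records, and the outage window is compared against the same window one week earlier as a baseline.

```python
from datetime import timedelta

def units_sold(sales, start, end):
    """Sum units from (timestamp, units) records falling in [start, end)."""
    return sum(units for ts, units in sales if start <= ts < end)

def outage_impact(sales, outage_start, outage_minutes=20):
    """Estimate units lost to an outage by comparing the outage window
    against the same window one week earlier (a naive baseline)."""
    end = outage_start + timedelta(minutes=outage_minutes)
    week = timedelta(days=7)
    during = units_sold(sales, outage_start, end)
    baseline = units_sold(sales, outage_start - week, end - week)
    return baseline - during  # positive => units likely lost
```

Multiply the lost units by average selling price and cost, and you have the reduction in gross and net sales that the outage caused.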

An organization that can fulfill these requirements is positioned to apply analytics successfully and achieve real benefits in IT operations. Gartner's ITScore maturity model rates an organization's sophistication in availability and performance monitoring; here is the description of level 5, the top tier:

Behavior Learning engines, embedded knowledge, advanced correlation, trend analysis, pattern matching, and integrated IT and business data from sources such as BAM provide IT operations with the ability to dynamically manage the IT infrastructure in line with business policy.

Applying machine learning to IT operations isn't easy. Most enterprises don't do it because it means overcoming organizational inertia and gathering data from multiple groups scattered throughout the enterprise. Organizations willing to do this, however, will see tangible business benefits. Just as a hospital can algorithmically detect the failing health of a premature infant, an enterprise willing to use machine learning will see clearly how abnormal behavior within IT operations impacts revenue.

More Stories By Richard Park

Richard Park is Director of Product Management at Netuitive. He currently leads Netuitive's efforts to integrate with application performance and cloud monitoring solutions. He has nearly 20 years of experience in network security, database programming, and systems engineering. Some past jobs include product management at Sourcefire and Computer Associates, network engineering and security at Booz Allen Hamilton, and systems engineering at UUNET Technologies (now part of Verizon). Richard has an MS in Computer Science from Johns Hopkins, an MBA from Harvard Business School, and a BA in Social Studies from Harvard University.
