A New Breed of Actionable Analytics

Being smart about your monitoring data collection allows you to isolate and resolve problems much faster

Data is dumb but we can't seem to get enough of it these days. An entire industry has evolved to make sense of the massive amounts of data being generated every day. Massive data collection by itself does not guarantee the context required to solve business and IT problems... If we are smart about data, however, it will lead us in the right direction.

Today's businesses are defined by the software they run on, enabling them to innovate and create new services faster. Apple re-invented the music industry through software; Netflix changed the way we consume movies through software; grocery shopping, taxi-hire, parcel-logistics, car rental, travel booking... these are all industries that have been completely disrupted by software.

As more and more organisations depend upon the software they run on, consumer demand for superior services, faster innovation, and improved performance continues to accelerate. As a result, the software required to compete in this unforgiving market only continues to grow in complexity.

It was only a few short years ago that, when I wanted to book my family vacation, we all made the trip down to the local travel agent and sat around a desk while the agent tapped away on her green-screen terminal searching for available flights and hotels. The process of booking our hotels has obviously changed dramatically, and so have the applications that support it.

For example, below is a screenshot of the Business Transaction flow through a global travel site when a user makes a simple search for a hotel. All the user does is select a destination, some dates and maybe one or two preferences, but the resulting transaction kicks off more than 200 service requests before the results are returned and the user can select a preferred hotel.

[Screenshot: Hotel Search business transaction flow]

This rise in application complexity means we generate ever more structured and unstructured data from our applications: log files, email, alerts, infrastructure stats, network stats and so on. Close to 200 billion emails will be sent in 2014; I wonder how many of those are false alerts from monitoring tools.

Keep nothing?

I have been helping companies build monitoring strategies for nearly a decade, and the problem I used to regularly face was dealing with organisations that just didn't have enough data. IT departments didn't have the required information to be able to quickly diagnose and remediate problems when they occurred.

Keep everything?

As organisations grow ever more dependent on the software they run on and application complexity continues to increase, there is a danger we swing the other way: collecting and storing so much data that we can't make sense of it in a timely manner. My colleague Jim Hirschauer often refers to this as the "home hoarders effect," drawing a parallel with people who simply never throw anything out of their homes. The bigger the piles get, the harder it is to find what you need, and at some point the pile is going to topple over.

Keep what you need

There are lots of good reasons to keep data, but "just in case we need it" is not one of them. It is important to think about why the data is being captured and stored and how it will help solve problems in the future. By keeping the relevant data and throwing away the clutter, you become more efficient and effective at troubleshooting and resolving problems.

But data alone isn't good enough to solve your business and IT problems. Data in and of itself is one-dimensional and dumb. Data doesn't tell you when there are problems, it doesn't tell you when business is going well, it doesn't tell you anything meaningful without some help and a lot more context. Let's explore an example to illustrate my point.

Let's say we have a couple of data points about a person. The data points are...

Heart Rate = 150 bpm

Blood Pressure = 200 over 100

Now tell me, is this person performing well? With these data points we have no idea. We need more data. The table below shows a list of some possible data points we can collect.

[Table: candidate data points we could collect about the person; the last attribute is the person's current activity]

Notice the last attribute in the table. The activity provides us with the context we need to focus on the proper data points to figure out if the person is performing well or not. Here are some more relevant data points...

Distance Run = 100 meters

Time = 9.58s

Now can we determine whether the person is performing well or not? I'm not much of a track and field aficionado, so I have no idea whether this is a good performance. I need a point of comparison to determine if 9.58 seconds in the 100-meter dash is good. So here is our baseline, for comparison's sake...

100-Meter World Record Time = 9.69s

Well, it looks like the person was performing really well: they set a world record in the 100-meter dash. Individually, the data points didn't tell us anything. We required correlation (context) and analytics (comparison to a baseline) to turn data into information. I like to refer to this concept as creating Smart Data.
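To make the idea concrete, here is a minimal sketch, in Python, of the same reasoning: raw data points only become meaningful once they are correlated with context (the activity) and compared against a baseline. The names, structures, and thresholds are purely illustrative, not AppDynamics code.

```python
# A minimal, illustrative sketch of turning raw data points into "smart data":
# correlate them with context (the activity) and compare against a baseline.
# All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class Observation:
    metrics: dict   # raw data points, e.g. {"time_s": 9.58}
    context: str    # what the person was doing, e.g. "100m sprint"

# Baselines keyed by context; comparison only makes sense once we know the activity.
BASELINES = {
    "100m sprint": {"time_s": 9.69},   # the world-record time used as the baseline above
}

def assess(obs: Observation) -> str:
    baseline = BASELINES.get(obs.context)
    if baseline is None:
        return "no baseline for this context - the raw data tells us nothing"
    verdicts = []
    for metric, value in obs.metrics.items():
        if metric in baseline:
            better = value < baseline[metric]   # for a sprint time, lower is better
            verdicts.append(
                f"{metric}: {value} vs baseline {baseline[metric]} "
                f"({'beats the baseline' if better else 'within the baseline'})"
            )
    return "; ".join(verdicts)

print(assess(Observation(metrics={"time_s": 9.58}, context="100m sprint")))
# -> time_s: 9.58 vs baseline 9.69 (beats the baseline)
```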

Smart Data Defined

Smart Data is actionable, intelligent information.

Smart Data is created by performing correlation and analytics on data sets. AppDynamics correlates end user business transaction details with completion status (success, error, exception), response times, and all other data points measured at any given time. It automatically analyzes the entire data set to provide information from which to draw conclusions and take the appropriate action. This information is called Smart Data.
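As a rough illustration of the same idea (not the AppDynamics implementation, and with entirely made-up transaction names and numbers), the sketch below baselines response times per business transaction from the collected records and flags only the records that warrant action:

```python
# Illustrative only: correlate each transaction record with its business
# transaction type, build a per-transaction baseline from successful samples,
# and surface only the records that deviate enough to be actionable.
from statistics import mean, stdev

# Hypothetical records: (transaction name, completion status, response time in ms)
records = [
    ("hotel-search", "success", 220), ("hotel-search", "success", 240),
    ("hotel-search", "success", 230), ("hotel-search", "error",   950),
    ("checkout",     "success", 410), ("checkout",     "success", 395),
]

def baseline(samples):
    """Mean and standard deviation of successful response times."""
    return mean(samples), (stdev(samples) if len(samples) > 1 else 0.0)

# Group successful response times by transaction name to form the baselines.
by_txn = {}
for name, status, ms in records:
    if status == "success":
        by_txn.setdefault(name, []).append(ms)

# Assumes every transaction type has at least one successful sample.
baselines = {name: baseline(times) for name, times in by_txn.items()}

# "Smart data": each record is judged against its own transaction's baseline.
for name, status, ms in records:
    avg, sd = baselines[name]
    slow = ms > avg + 3 * sd            # simple 3-sigma rule, purely illustrative
    if status != "success" or slow:
        print(f"actionable: {name} status={status} {ms}ms (baseline ~{avg:.0f}ms)")
```

The point is the same as in the athlete example: no single record is meaningful on its own; the per-transaction baseline supplies the comparison that makes the data actionable.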

Being smart about your monitoring data collection allows you to isolate and resolve problems much faster, and with a much lower cost of ownership and overhead.

I have recently been working with a company to replace their legacy monitoring tool with AppDynamics. The environment they are monitoring is sizeable: about 1,200 servers, with their main application processing approximately 300,000 transactions per minute. They had invested in a monitoring tool to help manage the performance of their applications, one that captured and stored all the data it could "just in case" it was needed. Unfortunately, this approach required an additional 92 servers to be provisioned for the monitoring tool itself, which consumed approximately 80TB of storage per year. The growing investments this customer needed to make in hardware, storage, people, and maintenance were too much to manage, and they decided to look for a different approach.

AppDynamics "smart data" approach to analytics means this particular customer now only requires two reporting servers and the storage requirements were reduced to just 1TB per year. Collecting only the data required to make smart decisions gave them both a 98% reduction in hardware costs and  more effective analytics in the process.

Adding business context

Smart data is not just about resolving problems faster, though. In October last year we introduced Real-time Business Metrics and described how AppDynamics customers can use it to extract and present business metrics directly from within their applications. These business metrics provide business context, enabling customers to turn smart data into actionable information. Our customers, for example, can see the exact revenue impact of performance problems, the end user experience during an app upgrade, or the real-time impact of marketing campaigns. Smart analytics not only clearly show the business benefits of making immediate improvements, they also help direct where limited resources should be invested for further business and application improvement.

AppDynamics is focused on delivering actionable intelligence that solves problems for IT operations and development teams as well as business owners. To learn more about Real-time Business Metrics, read here.

The post A New Breed of Actionable Analytics appeared first on the Application Performance Monitoring Blog from AppDynamics.

