By AppDynamics Blog
April 2, 2014 10:30 AM EDT
Data is dumb, but we can't seem to get enough of it these days. An entire industry has evolved to make sense of the massive amounts of data being generated every day. Massive data collection by itself does not guarantee the context required to solve business and IT problems. If we are smart about data, however, it will lead us in the right direction.
Today's businesses are defined by the software they run on, enabling them to innovate and create new services faster. Apple re-invented the music industry through software; Netflix changed the way we consume movies through software; grocery shopping, taxi-hire, parcel-logistics, car rental, travel booking... these are all industries that have been completely disrupted by software.
As more and more organisations depend upon the software they run on, consumer demand for superior services, faster innovation, and improved performance continues to accelerate. As a result, the software required to compete in this unforgiving market only continues to increase in complexity.
Only a few short years ago, when I wanted to book my family vacation, we all made the trip down to the local travel agent and sat around a desk while the agent tapped away at her green-screen terminal, searching for available flights and hotels. The process of booking our hotels has obviously changed dramatically, and so have the applications that support it.
For example, below is a screenshot of the Business Transaction flow through a global travel site when a user makes a simple search for a hotel. All the user does is select a destination, some dates, and maybe one or two preferences, but the resulting transaction kicks off more than 200 service requests before the results are returned so the user can choose a preferred hotel.
This rise in application complexity means we generate ever more structured and unstructured data from our applications: log files, email, alerts, infrastructure stats, network stats, and so on. Close to 200 billion emails will be sent in 2014; I wonder how many of those are false alerts from monitoring tools?
I have been helping companies build monitoring strategies for nearly a decade, and the problem I used to regularly face was dealing with organisations that just didn't have enough data. IT departments didn't have the required information to be able to quickly diagnose and remediate problems when they occurred.
As organisations continue to depend more heavily on the software they run on and application complexity continues to increase, there is a danger we swing the other way: collecting and storing too much data to be able to make sense of it in a timely manner. My colleague Jim Hirschauer often refers to this as the "home hoarders effect," drawing parallels with people who simply never throw anything out of their homes. The bigger the piles get, the harder it is to find what you need, and at some point that pile is going to topple over.
Keep what you need
There are lots of good reasons to keep data, but "just in case we need it" is not one of them. It is important to think about why the data is being captured and stored, and how it will help solve problems in the future. By keeping the relevant data and throwing away the clutter, you become more efficient and effective at troubleshooting and resolving problems.
But data alone isn't good enough to solve your business and IT problems. Data in and of itself is one-dimensional and dumb. Data doesn't tell you when there are problems, it doesn't tell you when business is going well, it doesn't tell you anything meaningful without some help and a lot more context. Let's explore an example to illustrate my point.
Let's say we have a couple of data points about a person. The data points are...
Heart Rate = 150 bpm
Blood Pressure = 200 over 100
Now tell me, is this person performing well? With these data points we have no idea. We need more data. The table below shows a list of some possible data points we can collect.
Notice the last attribute in the table. The activity provides us with the context we need to focus on the proper data points to figure out if the person is performing well or not. Here are some more relevant data points...
Distance Run = 100 meters
Time = 9.58s
Now can we determine if the person is performing well or not? I'm not much of a track and field aficionado, so I have no idea whether this is a good performance. I need a point of comparison to determine if 9.58 seconds in the 100-meter dash is good. So here is our baseline, for comparison's sake...
100-Meter World Record Time = 9.69s
Well, it looks like the person was performing really well: they set a world record in the 100-meter dash. None of the data points individually told us anything. We required correlation (context) and analytics (comparison to a baseline) to turn data into information. I like to refer to this concept as creating Smart Data.
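The sprint example above, where raw numbers only become meaningful once correlated with context and compared against a baseline, can be sketched in a few lines of code. The function name, baseline constant, and verdict strings are invented for this illustration; they are not part of any monitoring product.

```python
# Illustrative sketch: two dumb data points (distance, time) become
# actionable information only when correlated with context and compared
# to a baseline. All names and values here are invented for this example.

WORLD_RECORD_100M = 9.69  # baseline: the prior 100-meter world record, in seconds

def assess_sprint(distance_m: int, time_s: float) -> str:
    """Turn raw data points into information via a baseline comparison."""
    if distance_m != 100:
        return "no baseline available for this distance"
    if time_s < WORLD_RECORD_100M:
        return "new world record"
    return "below world-record pace"

print(assess_sprint(100, 9.58))  # prints "new world record"
```

Without the baseline, the function could only echo the numbers back; the comparison is what produces the verdict.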
Smart Data Defined
Smart Data is actionable, intelligent information.
Smart Data is created by performing correlation and analytics on data sets. AppDynamics correlates end user business transaction details with completion status (success, error, exception), response times, and all other data points measured at any given time. It automatically analyzes the entire data set to provide information from which to draw conclusions and take the appropriate action. This information is called Smart Data.
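As a rough sketch of that correlation-plus-analytics idea, assuming a simplified flat record format and an invented three-sigma threshold rather than AppDynamics' actual data model or algorithms: correlation groups measurements by business transaction, and analytics compares each measurement to a baseline learned from the healthy ones.

```python
# Hedged sketch only: record shape, transaction name, and the 3-sigma
# threshold are assumptions for illustration, not a real product's API.
from statistics import mean, stdev

transactions = [
    {"name": "search-hotel", "status": "success", "ms": 420},
    {"name": "search-hotel", "status": "success", "ms": 450},
    {"name": "search-hotel", "status": "error",   "ms": 3100},
    {"name": "search-hotel", "status": "success", "ms": 430},
]

# Correlation: group response times by business transaction, healthy only.
healthy = [t["ms"] for t in transactions
           if t["name"] == "search-hotel" and t["status"] == "success"]

# Analytics: flag anything far outside the learned baseline.
baseline, spread = mean(healthy), stdev(healthy)
for t in transactions:
    if t["ms"] > baseline + 3 * spread:
        print(f"anomaly: {t['name']} took {t['ms']}ms "
              f"(baseline {baseline:.0f}ms)")
```

The raw 3100ms value is just a number; it becomes information only once the baseline from the other measurements gives it context.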
Being smart about your monitoring data collection allows you to isolate and resolve problems much faster, and with a much lower cost of ownership and overhead.
I have recently been working with a company to replace their legacy monitoring tool with AppDynamics. The environment they are monitoring is a good size, consisting of about 1,200 servers, and their main application processes approximately 300,000 transactions per minute. They had invested in a monitoring tool to help manage the performance of their applications, one that captured and stored all the data it could "just in case" it was needed. Unfortunately, this approach required an additional 92 servers to be provisioned for the monitoring tool itself, which consumed approximately 80TB of storage per year. The increasing investments this customer needed to make in hardware, storage, people, and maintenance were too much to manage, and they decided to look for a different approach.
AppDynamics' "smart data" approach to analytics means this particular customer now requires only two reporting servers, and the storage requirement was reduced to just 1TB per year. Collecting only the data required to make smart decisions gave them roughly a 98% reduction in hardware costs and more effective analytics in the process.
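For the record, the reductions quoted above work out as follows from the figures in this post; the ~98% number is an approximation across both dimensions.

```python
# Arithmetic check on the figures quoted in this post.
legacy_servers, new_servers = 92, 2
legacy_storage_tb, new_storage_tb = 80, 1

server_reduction = 1 - new_servers / legacy_servers         # about 97.8%
storage_reduction = 1 - new_storage_tb / legacy_storage_tb  # 98.75%

print(f"server reduction:  {server_reduction:.1%}")   # prints "97.8%"
print(f"storage reduction: {storage_reduction:.1%}")  # prints "98.8%"
```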
Adding business context
Smart data is not just about resolving problems faster though. In October last year we introduced Real-time Business Metrics and described how AppDynamics customers can use Real-time Business Metrics to extract and present business metrics directly from within their applications. These business metrics provide business context enabling customers to turn smart data into actionable information. Our customers, for example, can see the exact revenue impact of performance problems, the end user experience during an app upgrade, or the real-time impact of marketing campaigns. Smart analytics not only clearly show the business benefits of making immediate improvements, they help direct where limited resources should be invested for further business and application improvement going forward.
AppDynamics is focused on delivering actionable intelligence to solve problems for IT operations and development teams as well as business owners. To learn more about Real-time Business Metrics, read here.
The post A New Breed of Actionable Analytics written by Tom Levey appeared first on Application Performance Monitoring Blog from AppDynamics.