Microservices Expo: Article

End-User Experience Management Drives ROI

Five ways for CIOs to convince their CFOs

Enterprise CFOs are taking an increasingly active role in the IT decision-making process. As a result, IT professionals need to be prepared to show quickly how innovative technology investments translate into real business value. According to a recent industry report, only 5 percent of CIOs have the power to authorize IT investments on their own. The same study reports that IT departments are now, more than ever, reporting directly to the CFO (42 percent) rather than to the CEO (33 percent). A company can't stay competitive and productive without the right tools, technology and infrastructure. And today's CIOs and IT leaders need the approval of the finance department, which means technology investments must be tightly aligned with company goals and business value.

It has also become increasingly important to prove how new investments can drive additional ROI from the company's existing infrastructure, reduce overall business costs, optimize business processes and improve business productivity, while also demonstrating overall interoperability with existing and future IT investments. Common questions finance asks IT are: "Why do we need another XYZ tool when we already have several throughout the enterprise?" or "How will this product add value to the business and when?"

Making the Case for End User Experience Management
When it comes to application performance management (APM) and end user experience, enterprises may have anywhere from five to more than 100 management tools that they use in tandem to manage their IT services. However, these tools are typically data center-centric: they are unable to monitor, detect and pinpoint the cause of application performance problems from the desktop vantage point. So despite substantial investment in these APM tools, they cannot give IT visibility into how end users are actually experiencing the IT services they consume. As a result, IT is unable to proactively manage and address performance issues before business productivity is impacted.

Below are some suggested ways for IT to bridge the ROI use-case gap and to describe the real business value that adopting real end user experience monitoring and management tools can deliver, especially tools focused on the desktop vantage point, which is increasingly recognized as essential to a truly user-centric approach to proactive IT management.

According to Will Cappelli, research vice president of Gartner, Inc., "Technologies are required which will be able to penetrate the increasingly complex and opaque Internet edge and the correlation between edge events and host processes. In fact, we believe that such technologies that gather and analyze data from the point where users and customers actually access the data could, in some cases, replace APM technologies that are reliant on data-center-bound instrumentation points."

1. Define and Communicate the Limitations of Existing Products
Industry-leading analysts have established that in 74 percent of reported cases, IT first learns about performance and availability problems when users call the help desk. That's because existing application performance management products are data center-focused and provide very little visibility into real end user experience. As mentioned earlier, companies typically have at least five APM products in place; many have as many as 25 different products, and more than 100 is not uncommon. While each of these products may be critical for managing certain components of the IT infrastructure, they do not address the effective management of the end user experience, thereby curtailing both user and business productivity.

The recently published Gartner Magic Quadrant for Application Performance Monitoring (by Will Cappelli and Jonah Kowall, September 19, 2011) has identified end user experience monitoring as one of the central components of the application performance management process because it "is precisely where business process execution interfaces with the IT stack; any monitoring effort that fails to capture the end user's experience is ultimately blind at the most direct point of encounter between IT and the business."

The problem seems to be a lack of the right tools - those that support business optimization, improvements to IT processes, and increased user productivity. User-centric, proactive IT management of the end user experience addresses this critical problem - the visibility gap - at its core.

2. Share How Current and Future IT Investments Impact Real End User Experience and Business Productivity
In the application world, companies make continued investments to improve application adoption and how they perform for their end users. And it's no secret that when key business applications aren't performing optimally, the impact can be felt both on the business and on corporate morale. According to analyst firm Enterprise Management Associates, large companies report that "downtime can cost in excess of $15,000 per minute for technology-dependent organizations, as applications drive revenue, productivity and brand value."

With precise insight into how users actually use their applications, real end user performance data can be leveraged to increase the ROI on these investments. For example, if IT management can validate that users are actively adopting and successfully using an application and that their productivity is meeting expectations, this insight can eliminate unnecessary and costly investment in additional licenses or "premium" packages that usage data shows aren't needed. It can also highlight the need for improved training for increased productivity, and even reduce the deadly "shelfware" sin: 57 percent of global enterprises own more software licenses than they actually deploy.
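As a minimal sketch of the kind of usage analysis described above (the record fields, dates and the 30-day idle threshold are all illustrative assumptions, not taken from any specific product):

```python
# Hypothetical sketch: flag "shelfware" candidates from application
# usage records. Field names and the 30-day threshold are assumptions.
from datetime import date, timedelta

licenses = [
    {"user": "alice", "app": "CRM Pro", "last_used": date(2011, 9, 1)},
    {"user": "bob",   "app": "CRM Pro", "last_used": date(2011, 5, 2)},
    {"user": "carol", "app": "CRM Pro", "last_used": None},  # never launched
]

def shelfware(records, today, idle_days=30):
    """Return licenses unused for more than idle_days (or never used)."""
    cutoff = today - timedelta(days=idle_days)
    return [r for r in records
            if r["last_used"] is None or r["last_used"] < cutoff]

unused = shelfware(licenses, today=date(2011, 10, 1))
print(f"{len(unused)} of {len(licenses)} licenses look like shelfware")
```

Even a simple report like this gives IT something concrete to bring to finance before a license renewal.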

3. Be Prepared with Empirical Evidence to Prove Your Case
Desktop and infrastructure investments are critical to any IT organization. It is a challenge to keep up with the plethora of devices, protocols and innovations to understand which is better for your unique business and IT environment. The cost of selecting, purchasing and deploying new infrastructure technologies can be exorbitant. On top of that, you enter very risky waters if you lack the empirical evidence that the performance improvements will actually improve the end user experience.

Understanding the potential impact of software and application upgrades on end user experience is critical for the enterprise, as a single vulnerability can severely impact business continuity. The same applies to testing the configuration management of infrastructure components and various operating systems. Comprehensive before-and-after comparisons validate performance and functionality prior to rolling an application out enterprise-wide. Visibility into the actual end user experience, proving whether or not proposed changes substantially impact performance, ensures that investment decisions are based on empirical evidence rather than assumption.

4. Evangelize the "Visibility Gap" and Reveal How It Threatens the Business
To understand the impact real end user experience has on business productivity, and ultimately the return on your existing and future IT investments, you must be able to measure and monitor the three primary components that interact to shape, in real time, how end users experience the IT services they consume. Otherwise, decisions will be made with a significant "visibility gap" in performance knowledge that can lead to unnecessary and costly missteps.

The first component is application performance. Latency, response time and end-to-end transaction time are all key elements in both the experience and measurement of application performance.

The second component is device performance. Even if the end-to-end transaction time of a particular application is excellent, if the underlying platform is sluggish - perhaps due to CPU power, memory availability or other background processes that are resource hogs - the end user's experience with what should be a high-performing application will be poor.

Ultimately, businesses deliver applications to enhance the productivity of end users, making user productivity the third component of end user experience. How many trades, calls or emails can the end user complete using a particular application running on a specific desktop platform? Productivity is impacted by error messages, non-responding and crashed applications, boot time, and the actual usability of the application.

Measuring and monitoring these three primary components (application, device and user) as they interact in real time is critical in order to close the "visibility gap" between how IT believes services are being consumed and how they are actually being experienced on the frontline.
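The three components above can be sketched as a single experience snapshot. This is an illustrative toy model only; the metric names and thresholds are assumptions, not a reference to any particular monitoring product:

```python
# Hypothetical sketch: combine application, device and user metrics
# into one end-user-experience check. All thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ExperienceSample:
    response_time_ms: float   # application: end-to-end transaction time
    cpu_util_pct: float       # device: CPU saturation
    mem_free_mb: float        # device: available memory
    app_errors: int           # user productivity: errors/crashes observed

def experience_issues(s: ExperienceSample) -> list[str]:
    """Flag which component of the end user experience is degraded."""
    issues = []
    if s.response_time_ms > 2000:
        issues.append("application: slow transactions")
    if s.cpu_util_pct > 90 or s.mem_free_mb < 256:
        issues.append("device: resource exhaustion")
    if s.app_errors > 0:
        issues.append("user productivity: errors/crashes")
    return issues

sample = ExperienceSample(response_time_ms=3500, cpu_util_pct=40,
                          mem_free_mb=1024, app_errors=2)
print(experience_issues(sample))
```

The point of modeling all three together is that a fast application on a starved desktop, or a healthy desktop running a crashing application, still produces a poor experience; no single metric closes the visibility gap on its own.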

5. Comprehensive Visibility Delivers Rapid Response and Cost Reduction
More often than not, the first and only indication of a problem on the frontline is when end users call the help desk - if they call at all - and by this time business and end user productivity has already been disrupted. Moreover, when users start calling it is very difficult for them to accurately describe the problems they are experiencing, while at the same time IT can't determine whether the issues are isolated or endemic.

Immediate awareness and rapid response to end user problems, before end users call the help desk, can reduce the costs associated with manual monitoring and with dedicating help desk and IT operations resources to severity 1 and 2 problems. The ability to monitor "key-to-glass" performance for any business activity running on any type of desktop, autonomic performance profiling, and proactive incident detection all contribute to cost reduction.
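The core idea of proactive incident detection, alerting before users feel compelled to call, can be sketched as a baseline comparison. The sample data and the mean-plus-three-standard-deviations threshold are illustrative assumptions:

```python
# Hypothetical sketch: proactive detection by comparing current response
# times against a learned baseline (mean + 3 standard deviations).
import statistics

def build_threshold(samples_ms):
    """Learn an alert threshold from a history of normal response times."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.pstdev(samples_ms)
    return mean + 3 * stdev

def detect_incident(current_ms, threshold_ms):
    """True if the current measurement warrants an alert."""
    return current_ms > threshold_ms

history = [420, 460, 440, 455, 430, 445]  # normal response times (ms)
threshold = build_threshold(history)

print(detect_incident(450, threshold))    # normal operation
print(detect_incident(1800, threshold))   # degradation flagged early
```

A monitoring agent applying this kind of check per application lets IT open the incident before the first help desk ticket arrives.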

Summary
Today's CFO is "increasingly becoming the top technology investment decision maker in many organizations," but even though one report discussed earlier indicates this shift, another report, CIO magazine's 2010 State of the CIO, found that 43 percent of CIOs still report to CEOs and just 19 percent report to CFOs. Whether the number is 19 percent or 42 percent, the more important takeaway is that CIOs and IT leaders are indeed strengthening communication and transparency with their financial leaders.

To do so, CIOs are demonstrating their understanding of the direct impact IT tools and technologies have on improving business performance by delivering effective control, efficiency and business insight. They are making this crystal clear to their finance counterparts, and in doing so bridging the gap between IT as an enigma and the real, ongoing benefits and ROI it delivers. This is how CIOs and CFOs will achieve common business goals. True end user experience management and monitoring is just one example of how an IT investment can be pivotal in driving more ROI from existing and planned IT infrastructure investments - something that everyone can rally behind.

More Stories By Donna Parent

Donna Parent is vice president of marketing for Aternity Inc. She has held a number of senior marketing positions at emerging software vendors spanning real-time business intelligence solutions for SOAs, Sales Force Automation software, high-performance Complex Event Processing (CEP) solutions, and online intelligence applications for monitoring business activity through open Internet resources.
