
Microservices Expo: Article

End-User Experience Management Drives ROI

Five ways for CIOs to convince their CFOs

Enterprise CFOs are taking an increasingly active role in IT decision making. As a result, IT professionals must be prepared to show quickly how innovative technology investments translate into real business value. According to a recent industry report, only 5 percent of CIOs have the power to authorize IT investments on their own. The same study reports that IT departments are now, more than ever, reporting directly to the CFO (42 percent), compared with 33 percent reporting to the CEO. A company can't stay competitive and productive without the right tools, technology and infrastructure. Yet today's CIOs and IT leaders need the approval of the finance department, which means technology investments must be more tightly aligned with company goals and business value.

It has also become increasingly important to prove how new investments can drive additional ROI from the company's existing infrastructure, reduce overall business costs, optimize business processes and improve business productivity, while also demonstrating overall interoperability with existing and future IT investments. Common questions finance asks IT are: "Why do we need another XYZ tool when we already have several throughout the enterprise?" or "How will this product add value to the business and when?"

Making the Case for End User Experience Management
When it comes to application performance management (APM) and end user experience, enterprises may have anywhere from five to more than 100 management tools that they use in tandem to help manage their IT services. However, these tools are typically data center-centric: they are unable to monitor, detect and pinpoint the cause of application performance problems from the desktop vantage point. So despite the substantial investment in these APM tools, they cannot give IT visibility into how end users are actually experiencing the IT services they consume. As a result, IT is unable to proactively manage and address performance issues before business productivity is impacted.

Below are some suggested ways for IT to bridge the ROI use-case gap and to describe the real business value that real end user experience monitoring and management tools can deliver - especially those focused on the desktop vantage point, which is becoming more widely recognized as a necessity for a truly user-centric approach to proactive IT management.

According to Will Cappelli, research vice president of Gartner, Inc., "Technologies are required which will be able to penetrate the increasingly complex and opaque Internet edge and the correlation between edge events and host processes. In fact, we believe that such technologies that gather and analyze data from the point where users and customers actually access the data could, in some cases, replace APM technologies that are reliant on data-center-bound instrumentation points."

1. Define and Communicate the Limitations of Existing Products
Industry-leading analysts have established that in 74 percent of the reported help desk cases, IT first learns about performance and availability problems when users call the help desk. That's because existing application performance management products are data center-focused and provide very little visibility into real end user experience. As mentioned earlier, the average company has five APM products in place, many have as many as 25 different products, and 100+ is not uncommon. While each of these products may be critical for managing certain components of the IT infrastructure, they do not address the effective management of the end user experience, thereby curtailing both user and business productivity.

The recently published Gartner Magic Quadrant for User Application Performance Monitoring (by Will Cappelli and Jonah Kowall, September 19, 2011) has identified end user experience monitoring as one of the central components to the application performance management process because it "is precisely where business process execution interfaces with the IT stack; any monitoring effort that fails to capture the end user's experience is ultimately blind at the most direct point of encounter between IT and the business."

The problem seems to be a lack of the right tools - those that support business optimization, improvements to IT processes, and increased user productivity. User-centric, proactive IT management of the end user experience addresses this critical problem - the visibility gap - at its core.

2. Share How Current and Future IT Investments Impact Real End User Experience and Business Productivity
In the application world, companies make continued investments to improve application adoption and how they perform for their end users. And it's no secret that when key business applications aren't performing optimally, the impact can be felt both on the business and on corporate morale. According to analyst firm Enterprise Management Associates, large companies report that "downtime can cost in excess of $15,000 per minute for technology-dependent organizations, as applications drive revenue, productivity and brand value."

When precise insights are available into how users utilize their applications, real end user performance data can be leveraged to increase the ROI on these investments. For example, if IT management can validate that users are actively adopting and successfully using the application and that their productivity is meeting expectations, this insight can eliminate the unnecessary and costly purchase of additional licenses or "premium" packages based on usage. It can also highlight the need for improved training to increase productivity, and even reduce the deadly "shelfware" sin: 57 percent of global enterprises own more software licenses than they have actually deployed.

3. Be Prepared with Empirical Evidence to Prove Your Case
Desktop and infrastructure investments are critical to any IT organization. It is a challenge to keep up with the plethora of devices, protocols and innovations to understand which is better for your unique business and IT environment. The cost of selecting, purchasing and deploying new infrastructure technologies can be exorbitant. On top of that, you enter very risky waters if you lack the empirical evidence that the performance improvements will actually improve the end user experience.

Understanding the potential impact of software and application upgrades on end user experience is critical for the enterprise, as a single vulnerability can severely impact business continuity. The same applies to testing configuration management of infrastructure components and various operating systems. Comprehensive before-and-after comparisons validate performance and functionality prior to rolling out the application enterprise-wide. Visibility into the actual end user experience - proof that proposed changes substantially impact performance (or do not) - ensures that investment decisions are based on empirical evidence rather than assumption.

4. Evangelize the "Visibility Gap" and Reveal How It Threatens the Business
To understand the impact real end user experience has on business productivity, and ultimately the return on your existing and future IT investments, you must be able to measure and monitor the three primary components that dynamically interact, in real time, to shape how end users experience the IT services they consume. Otherwise, decisions will be made with a significant "visibility gap" in performance knowledge that can lead to unnecessary and costly missteps.

The first component is application performance. Latency, response time and end-to-end transaction time are all key elements in both the experience and measurement of application performance.

The second component is device performance. Even if the end-to-end transaction time of a particular application is excellent, if the underlying platform is sluggish - perhaps due to CPU power, memory availability or other background processes that are resource hogs - the end user's experience with what should be a high-performing application will be poor.

Ultimately, businesses deliver applications to enhance the productivity of end users, making user productivity the third component of end user experience. How many trades, calls or emails can the end user complete using a particular application running on a specific desktop platform? Productivity is impacted by error messages, non-responding and crashed applications, boot time, and the actual usability of the application.

Measuring and monitoring the three primary components (application, device and user) that dynamically interact to constantly shape end users' experience in real time is critical in order to close the "visibility gap" between how IT believes services are being consumed and how they are actually being experienced by the frontline.
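As a purely illustrative sketch (not a description of any particular vendor's scoring method), the three components above could be rolled into a single end user experience score. The function name, thresholds and equal weighting below are all hypothetical assumptions chosen for clarity:

```python
# Hypothetical end user experience score combining the three components
# discussed above: application performance, device performance, and
# user productivity. All thresholds and weights are illustrative only.

def experience_score(app_response_ms, cpu_util_pct, errors_per_hour):
    """Return a 0-100 score; higher means a better end user experience."""
    # Application performance: penalize response times above a 200 ms target.
    app = max(0.0, 1.0 - max(0.0, app_response_ms - 200) / 2000)
    # Device performance: penalize sustained CPU utilization above 70%.
    device = max(0.0, 1.0 - max(0.0, cpu_util_pct - 70) / 30)
    # User productivity: penalize errors/crashes the user actually sees.
    user = max(0.0, 1.0 - errors_per_hour / 10)
    # Equal weighting of the three components (an assumption).
    return round(100 * (app + device + user) / 3, 1)

# A healthy desktop: fast app, idle CPU, no errors.
print(experience_score(150, 40, 0))    # 100.0
# A degraded desktop: slow app, saturated CPU, frequent errors.
print(experience_score(1200, 95, 5))   # 38.9
```

The point of such a composite is the "visibility gap" argument itself: an application-only metric would miss the saturated CPU, and a device-only metric would miss the slow transactions, so only measuring all three components together reflects what the frontline actually experiences.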

5. Comprehensive Visibility Delivers Rapid Response and Cost Reduction
More often than not, the first and only indication of a problem on the frontline is when end users call the help desk - if they call at all - and by this time business and end user productivity has already been disrupted. Moreover, when users start calling it is very difficult for them to accurately describe the problems they are experiencing, while at the same time IT can't determine whether the issues are isolated or endemic.

Immediate awareness and rapid response to end user problems before end users call the help desk can reduce the costs associated with the need for manual monitoring and dedicated help desk/IT operations resources for severity type 1 and 2 problems. The ability to monitor the "Key-to-Glass" for any business activity running on any desktop (type), autonomic performance profiling and proactive incident detection all help contribute to cost reduction.

Summary
Today's CFO is "increasingly becoming the top technology investment decision maker in many organizations," but even though one report discussed earlier indicates this shift, another report, CIO magazine's 2010 State of the CIO, found that 43 percent of CIOs still report to CEOs, and just 19 percent report to CFOs. Whether the number is 19 percent or 42 percent, the more important takeaway is that CIOs and IT leaders are indeed increasing the communication path and their transparency with financial leaders.

To do so, CIOs are demonstrating their understanding of the direct impact IT tools and technologies have on improving business performance by delivering effective control, efficiency and business insight. They are making this crystal clear to their financial leader counterpart(s) and therefore bridging the gap between the enigma of IT and the real benefits and ROI being witnessed on an ongoing basis. This is how CIOs and CFOs will achieve common business goals. Gaining true end user experience management and monitoring is just one example of how an IT investment can be pivotal in driving more ROI from existing and planned IT infrastructure investments - something that everyone can rally behind.

More Stories By Donna Parent

Donna Parent is vice president of marketing for Aternity Inc. She has held a number of senior marketing positions at emerging software vendors spanning real-time business intelligence solutions for SOAs, Sales Force Automation software, high-performance Complex Event Processing (CEP) solutions, and online intelligence applications for monitoring business activity through open Internet resources.


