A Formula for Quantifying Productivity of Web Applications

Ever wanted to prove or understand how the network impacts productivity? There is a formula for that…

We often talk in abstract terms about the effects of application performance on productivity. It seems to make sense that if an application is performing poorly – or is unavailable – it will certainly affect the productivity of those who rely upon it. But it’s hard enough to justify the investment in application acceleration or optimization without being able to demonstrate a real impact on the organization. And right now justification is more of an issue than it’s ever been.

So let’s take the example of a call center to begin with. It could be customer service supporting customers/users, a help desk supporting internal users, or even a phone-based order-entry department. Any “call center” that relies on a combination of the telephone and an application to support its processes is sensitive to both delays in application delivery and application outages.

This excellent article from Call Center Magazine details some of the essential call center KPIs – the metrics by which call center efficiency, and thus productivity, is measured.

The best measure of labor efficiency is agent utilization. Because labor costs represent the overwhelming majority of call center expenses, if agent utilization is high, the cost per call will inevitably be low. Conversely, when agent utilization is low, labor costs, and hence cost per call, will be high.

That all makes sense, but what we want – and need – is a formula for determining “agent utilization.”

The formula for determining agent utilization is somewhat complicated. It factors in the length of the work day, break times, vacation and sick time, training time and a number of other factors. But there is an easy way to approximate agent utilization without going to all this trouble:

    agent utilization = (calls per month x average handle time in minutes) / (days worked per month x hours per day x 60)

Let's say, for example, that the agents in a particular call center handle an average of 1,250 calls per month at an average handle time of 5 minutes. Additionally, these agents work an average of 21 days per month, and their work day is 7.5 hours after subtracting lunch and break times. The simplified utilization formula above works out to the following:

    (1250 x 5) / (21 x 7.5 x 60) = 6250 / 9450 = 66.1%

Once again, this is not a perfect measure of agent utilization, but it is quick and easy, and gets you within 5% of the true agent utilization figure.
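For those who prefer to check the math in code, here is a minimal Python sketch of that approximation. The function name and default values are illustrative, not from the original article:

    def agent_utilization(calls_per_month, handle_time_min,
                          days_per_month=21, hours_per_day=7.5):
        # Total working minutes available per agent per month (9450 here)
        available_min = days_per_month * hours_per_day * 60
        return (calls_per_month * handle_time_min) / available_min

    print(f"{agent_utilization(1250, 5):.1%}")  # 66.1%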

Okay, again that makes sense. And now that we’ve got a formula from which to work we can look at the impact of application performance – both negative and positive – on “agent utilization.”


HIGHER UTILIZATION NOT ALWAYS DESIRABLE


You’ve heard it, I’m sure. The plaintive “my computer is slow today, please hang on a moment…” coming from the other end of the phone is a dead ringer for “application performance has degraded.” Those of us intimately familiar with data centers and application delivery understand it isn’t really the “computer” that’s slow, but the application – and likely the underlying application delivery network responsible for ensuring things go smoothly.

The explanation is plaintive because call center employees of every kind know exactly how they’re rated and measured, and understand that a “slow computer” necessarily adds time to their average call handle time. And the higher the average call handle time, the lower their utilization, which brings down the overall efficiency of the call center. But just how much does application performance affect average call handle time?

Let’s assume that the number of “screens” or “pages” a call center agent has to navigate during a call to retrieve relevant information is five. If the average handle time is five minutes, that’s one minute per page. If application performance problems increase the average time per page to one minute and twelve seconds, that’s 72 seconds across each of five pages – bringing our total time per call up to six minutes.

    (1250 x 6) / 9450 = 79.4%
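In code, the same degradation scenario looks something like this (the page counts and timings are the example figures above):

    pages_per_call = 5
    base_sec_per_page = 60        # one minute per page at a 5-minute handle time
    degraded_sec_per_page = 72    # one minute, twelve seconds per page

    degraded_handle_min = pages_per_call * degraded_sec_per_page / 60  # 6.0 minutes
    available_min = 21 * 7.5 * 60                                      # 9450
    print(f"{1250 * degraded_handle_min / available_min:.1%}")         # 79.4%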

Hey, that’s actually better, isn’t it? Higher utilization of agents means lower cost per call, which certainly makes it appear as though we ought to introduce some latency into the network to make the numbers look better. There are a couple of reasons why that conclusion is wrong. First and foremost is the effect of high utilization on people. As the aforementioned article points out:

Whenever utilization numbers approach 80% - 90%, that call center will see relatively high agent turnover rates because they are pushing the agents too hard.

Turnover, of course, is bad because it incurs costs in terms of employee acquisition and training, during which time the efficiency of the call center is reduced. There is also the potential for a cascading effect, in which the bulk of calls lands on the shoulders of experienced call center workers, which increases their utilization and leads to even higher turnover. Like a snowball, the effect of turnover on a call center is quickly cumulative.

Secondly, increasing call handle time also adversely affects the total number of calls an agent can deal with in any given time period. As handle time per call increases, the total number of calls per month decreases, which actually changes the equation. There are 9450 working minutes available per month, which means at 5 minutes per call a maximum of 1890 calls can be handled. At 6 minutes per call that decreases to 1575 – roughly a 17% drop in capacity from a single one-minute increase in average handle time. No call center handles 100% of the calls it theoretically could, but the number of calls possible will still be decreased by an increase in the average call handle time due to poor application performance.
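A quick sketch of that capacity math, using the same figures:

    available_min = 21 * 7.5 * 60            # 9450 working minutes per month
    calls_at_5_min = available_min / 5       # 1890 theoretical maximum
    calls_at_6_min = available_min / 6       # 1575
    drop = (calls_at_5_min - calls_at_6_min) / calls_at_5_min
    print(f"{drop:.1%}")                     # 16.7% fewer calls possible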


GENERALIZING THE FORMULA


What this ultimately means is that worsening application performance reduces a call center’s efficiency by decreasing the number of calls it can handle. That’s productivity in a call center. Applying the same theory to other applications should yield unsurprisingly similar results: degrading application performance means degrading productivity, which means less work gets done. Any role within the organization that relies upon an application can essentially be measured in terms of the number of “processes” that can be completed in a given time interval. Using that figure, it then becomes a matter of decomposing the process into steps (pages, screens, etc…) and determining how much time is spent per step. Application performance affects the whole, but is especially detrimental to individual steps in a process, as lengthening one draws out the entire process and thus reduces the total number of “processes” that can be completed.

So we can generalize into a formula that is:

    (total # of processes per month x average number of minutes to complete a process) / 9450

where 9450 is the total number of minutes available per month. Adjust as necessary.
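As a generalized sketch in Python (the function and parameter names are mine, not the article’s):

    def process_utilization(processes_per_month, minutes_per_process,
                            available_min_per_month=9450):
        # Fraction of available working time consumed by the process load
        return processes_per_month * minutes_per_process / available_min_per_month

    print(f"{process_utilization(1000, 6):.1%}")  # 63.5% for a hypothetical workload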

To determine the impact of degrading application performance, lengthen the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be carried out in a month. Try not to exceed a 70% utilization rate; just as with call center employees, burnout from too many back-to-back processes can result in a higher turnover rate.



THE IMPACT OF APPLICATION DELIVERY



Finally, we can examine whether or not application delivery can improve the productivity of those who rely on the applications you are charged with delivering. To determine the impact of application delivery, this time shorten the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be handled per month. Again, try not to exceed a 70% utilization rate.

Alternatively, you could use the base formula to determine what kind of improvements in application performance are necessary in order to increase productivity or, in some cases, maintain it. Many folks have experienced an “upgrade” in an enterprise application that causes productivity to plummet because the newer system may have more bells and whistles, but it’s slower for some reason. Basically, you need to determine the number of processes you need to handle per month and the utilization rate you’re trying to achieve, and use the following formula to determine exactly how much time each process can take before you miss that mark:

    (9450 x utilization rate) / # of processes = process time

This allows you to work backward and understand how much time any given process can take before it starts to adversely affect productivity. You’ll need to understand how much of the process time should be allotted to mundane steps in the process, i.e. taking information from customers, entering the data, etc…, and factor that out to determine how much time can be spent traversing the network and in application execution. Given that number, you can then figure out what kind of application delivery solutions will help you meet that target and ensure that IT is not a productivity bottleneck. Whether it’s acceleration, optimization, or scaling out to meet higher capacity, you are likely to find what you need to meet your piece of the productivity puzzle in an application delivery solution.
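A sketch of that backward calculation, again with illustrative names:

    def max_process_time(processes_per_month, utilization_target=0.70,
                         available_min_per_month=9450):
        # Longest a single process can take, in minutes, before the
        # utilization target is exceeded
        return available_min_per_month * utilization_target / processes_per_month

    print(f"{max_process_time(1250):.2f} minutes per process")  # 5.29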

This also means that you can be confident that “the computer was slow” is not a valid excuse when productivity metrics are missed, and probably more importantly, you can prove it.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
