Microservices Expo: Blog Feed Post

A Formula for Quantifying Productivity of Web Applications

Ever wanted to prove or understand how the network impacts productivity? There is a formula for that…

We often talk in abstract terms about the effects of application performance on productivity. It seems intuitive that if an application is performing poorly – or is unavailable – it will certainly affect the productivity of those who rely upon it. But it’s hard enough to justify the investment in application acceleration or optimization without being able to demonstrate a real impact on the organization. And right now justification is more of an issue than it’s ever been.

So let’s take the example of a call center to begin with. Could be customer service supporting customers/users, or a help desk supporting internal users, or even a phone-based order-entry department. Any “call center” that relies on a combination of the telephone and an application to support its processes is sensitive to delays in delivering and outages of applications. 

This excellent article from Call Center Magazine details some of the essential Call Center KPIs, the metrics upon which call center efficiency and thus productivity is measured.

The best measure of labor efficiency is agent utilization. Because labor costs represent the overwhelming majority of call center expenses, if agent utilization is high, the cost per call will inevitably be low. Conversely, when agent utilization is low, labor costs, and hence cost per call, will be high.

That all makes sense, but what we want – and need – is a formula for determining “agent utilization.”

The formula for determining agent utilization is somewhat complicated. It factors in the length of the work day, break times, vacation and sick time, training time and a number of other factors. But there is an easy way to approximate agent utilization without going to all this trouble:

          (calls handled per month x average handle time in minutes) / (work days per month x hours per work day x 60)

Let's say, for example, that the agents in a particular call center handle an average of 1,250 calls per month at an average handle time of 5 minutes. Additionally, these agents work an average of 21 days per month, and their work day is 7.5 hours after subtracting lunch and break times. The simplified utilization formula above would work out to the following:

          1250 x 5 / 9450 = 66.1%

Once again, this is not a perfect measure of agent utilization, but it is quick and easy, and gets you within 5% of the true agent utilization figure.
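The approximation above can be sketched in a few lines of Python; the function and parameter names are illustrative, not from the article:

```python
def agent_utilization(calls_per_month, handle_minutes, work_days=21, hours_per_day=7.5):
    """Approximate agent utilization: minutes spent on calls / minutes available."""
    available_minutes = work_days * hours_per_day * 60  # 21 x 7.5 x 60 = 9450
    return (calls_per_month * handle_minutes) / available_minutes

# The article's example: 1,250 calls per month at 5 minutes each
print(f"{agent_utilization(1250, 5):.1%}")  # prints 66.1%
```

Swap in your own work-day parameters as needed; the point is simply minutes consumed over minutes available.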

Okay, again that makes sense. And now that we’ve got a formula from which to work we can look at the impact of application performance – both negative and positive – on “agent utilization.”


HIGHER UTILIZATION NOT ALWAYS DESIRABLE


You’ve heard it, I’m sure. The plaintive “my computer is slow today, please hang on a moment…” coming from the other end of the phone is a dead ringer for “application performance has degraded.” Those of us intimately familiar with data centers and application delivery understand it isn’t really the “computer” that’s slow, but the application – and likely the underlying application delivery network responsible for ensuring things are going smoothly.

The reason the explanation is plaintive is because call center employees of every kind know exactly how they’re rated and measured and understand that a “slow computer” necessarily adds time to their average call handle time. And the higher the average call handle time, the lower their utilization, which brings down the overall efficiency of the call center. But just how much does application performance affect average call handle time?

Let’s assume that the number of “screens” or “pages” a call center handler has to navigate during a call to retrieve relevant information is five. If the average handle time is five minutes, that’s one minute per page. If application performance problems increase the average time per page to one minute and twelve seconds, that’d bring our total time per call up to six minutes.

          1250 x 6 / 9450 = 79.3%

Hey, that’s actually better, isn’t it? Higher utilization of agents means lower costs per call, which certainly makes it appear as though we ought to introduce some latency into the network to make the numbers look better. There are a couple of reasons why this is not true. First and foremost is the effect of high utilization on people. As is pointed out by the aforementioned article:

Whenever utilization numbers approach 80% - 90%, that call center will see relatively high agent turnover rates because they are pushing the agents too hard.

Turnover, of course, is bad because it incurs costs in terms of employee acquisition and training, during which time the efficiency of the call center is reduced. There is also the potential for a cascading effect from turnover in which the bulk of calls are placed upon the shoulders of experienced call center workers which increases their utilization and leads to even higher turnover rates. Like a snowball, the effect of turnover on a call center is quickly cumulative.

Secondly, increasing call handle time also adversely affects the total number of calls a handler can deal with in any given time period. As handle time per call increases, the total number of calls per month decreases, which actually changes the equation. There are 9450 minutes available in a month, which means at 5 minutes per call a maximum of 1890 calls can be handled. At 6 minutes per call that maximum drops to 1575 – a 17% decrease from a single one-minute increase in average handle time. No call center handles 100% of the calls it theoretically could, but the ceiling on the number of calls possible is still lowered by any increase in average call handle time due to poor application performance.
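A quick sketch of the capacity math above, assuming the same 9,450 available minutes per month (the constant and names are illustrative):

```python
AVAILABLE_MINUTES = 21 * 7.5 * 60  # 9450 minutes per agent per month

def max_calls(handle_minutes):
    """Theoretical ceiling on calls handled per month at a given handle time."""
    return AVAILABLE_MINUTES / handle_minutes

baseline = max_calls(5)  # 1890.0
degraded = max_calls(6)  # 1575.0
drop = (baseline - degraded) / baseline
print(f"{drop:.1%}")  # prints 16.7%, which the article rounds to 17%
```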


GENERALIZING THE FORMULA


What this ultimately means is that worsening application performance reduces the efficiency of a call center by decreasing the number of calls it can handle. That’s productivity in a call center. Applying the same theory to other applications should yield unsurprisingly similar results: degraded application performance means degraded productivity, which means less work gets done. Any role within the organization that relies upon an application can essentially be measured in terms of the number of “processes” that can be completed in a given time interval. From there it becomes a matter of decomposing each process into steps (pages, screens, etc.) and determining how much time is spent per step. Application performance affects the whole, but is especially detrimental to individual steps, as lengthening any one step draws out the entire process and thus reduces the total number of “processes” that can be completed.

So we can generalize into a formula that is:

    ((total # of processes per month) * (average number of minutes to complete a process)) / 9450

where 9450 is the total number of minutes available per month. Adjust as necessary.
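As a minimal sketch, the generalized formula maps directly to code; the default of 9,450 minutes and all names are placeholders to adjust for your environment:

```python
def utilization(processes_per_month, minutes_per_process, available_minutes=9450):
    """Fraction of available time consumed by completed processes."""
    return (processes_per_month * minutes_per_process) / available_minutes

# Degrading performance: same workload, but each process takes one minute longer
print(f"{utilization(1250, 5):.1%}")  # prints 66.1%
print(f"{utilization(1250, 6):.1%}")  # prints 79.4% (the article truncates to 79.3%)
```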

To determine the impact of degrading application performance, lengthen the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be carried out in a month. Try not to exceed a 70% utilization rate; just as with call center employees, burnout from too many back-to-back processes can result in a higher turnover rate.

 


THE IMPACT OF APPLICATION DELIVERY


 

Finally, we can examine whether or not application delivery can improve the productivity of those who rely on the applications you are charged with delivering. To determine the impact of application delivery, this time shorten the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be handled per month. Again, try not to exceed a 70% utilization rate.

Alternatively, you could use the base formula to determine what kind of improvements in application performance are necessary in order to increase productivity or, in some cases, maintain it. Many folks have experienced an “upgrade” in an enterprise application that causes productivity to plummet because the newer system may have more bells and whistles, but it’s slower for some reason. Basically, you need to determine the number of processes you need to handle per month and the utilization rate you’re trying to achieve, and then use the following formula to determine exactly how much time each process can take before you miss that mark:

    (9450 x Utilization Rate) / # of processes = process time

This allows you to work backward and understand how much time any given process can take before it starts to adversely affect productivity. You’ll need to understand how much of the process time should be allotted to mundane steps in the process, i.e. taking information from customers, entering the data, etc…, and factor that out to determine how much time can be spent traversing the network and in application execution time. Given that number you can then figure out what kind of application delivery solutions will be able to help you meet that target number and ensure that IT is not a productivity bottleneck. Whether it’s acceleration or optimization, or scaling out to meet higher capacity you are likely to find what you need to meet your piece of the productivity puzzle in an application delivery solution.
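The backward calculation can be sketched the same way (hypothetical names; the 9,450-minute month is the article's call center example):

```python
def max_process_time(target_utilization, processes_per_month, available_minutes=9450):
    """Longest a single process can take before the utilization target is missed."""
    return (available_minutes * target_utilization) / processes_per_month

# e.g. a 70% utilization target with 1,250 processes per month
budget = max_process_time(0.70, 1250)
print(f"{budget:.2f} minutes per process")  # prints 5.29 minutes per process
```

Subtract the time you must allot to the human steps (talking to customers, entering data) from that budget, and what remains is the allowance for network traversal and application execution time.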

This also means that you can be confident that “the computer was slow” is not a valid excuse when productivity metrics are missed, and probably more importantly, you can prove it.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
