A Formula for Quantifying Productivity of Web Applications

Ever wanted to prove or understand how the network impacts productivity? There is a formula for that…

We often talk in abstract terms about the effects of application performance on productivity. It seems obvious that if an application is performing poorly – or is unavailable – it will affect the productivity of those who rely upon that application. But it’s hard enough to justify the investment in application acceleration or optimization without being able to demonstrate a real impact on the organization. And right now justification is more of an issue than it’s ever been.

So let’s take the example of a call center to begin with. It could be customer service supporting customers and users, a help desk supporting internal users, or even a phone-based order-entry department. Any “call center” that relies on a combination of the telephone and an application to support its processes is sensitive to delays in application delivery and to application outages.

This excellent article from Call Center Magazine details some of the essential call center KPIs – the metrics by which call center efficiency, and thus productivity, is measured.

The best measure of labor efficiency is agent utilization. Because labor costs represent the overwhelming majority of call center expenses, if agent utilization is high, the cost per call will inevitably be low. Conversely, when agent utilization is low, labor costs, and hence cost per call, will be high.

That all makes sense, but what we want – and need – is a formula for determining “agent utilization.”

The formula for determining agent utilization is somewhat complicated. It factors in the length of the work day, break times, vacation and sick time, training time and a number of other factors. But there is an easy way to approximate agent utilization without going to all this trouble:

    (average calls handled per month x average handle time in minutes) / (work days per month x hours per day x 60)

Let's say, for example, that the agents in a particular call center handle an average of 1,250 calls per month at an average handle time of 5 minutes. Additionally, these agents work an average of 21 days per month, and their work day is 7.5 hours after subtracting lunch and break times. The simplified utilization formula above would work out to the following:

          1250 x 5 / 9450 = 66.1%

Once again, this is not a perfect measure of agent utilization, but it is quick and easy, and gets you within 5% of the true agent utilization figure.
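If it helps to see that arithmetic in executable form, here’s a minimal Python sketch of the simplified approximation using the numbers above; the function and variable names are my own, not the article’s:

    # A minimal sketch of the simplified utilization approximation quoted above.
    # Function and variable names are illustrative, not from the original article.
    def agent_utilization(calls_per_month, avg_handle_minutes,
                          work_days_per_month=21, work_hours_per_day=7.5):
        """Approximate utilization as time spent on calls over time available."""
        available_minutes = work_days_per_month * work_hours_per_day * 60  # 9450 here
        return (calls_per_month * avg_handle_minutes) / available_minutes

    # The example above: 1,250 calls per month at 5 minutes per call.
    print(f"{agent_utilization(1250, 5):.1%}")  # 66.1%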

Okay, again that makes sense. And now that we’ve got a formula from which to work we can look at the impact of application performance – both negative and positive – on “agent utilization.”


HIGHER UTILIZATION NOT ALWAYS DESIRABLE


You’ve heard it, I’m sure. The plaintive “my computer is slow today, please hang on a moment…” coming from the other end of the phone is a dead ringer for “application performance has degraded.” Those of us intimately familiar with data centers and application delivery understand it isn’t really the “computer” that’s slow, but the application – and likely the underlying application delivery network responsible for ensuring things are going smoothly.

The reason the explanation is plaintive is that call center employees of every kind know exactly how they’re rated and measured, and understand that a “slow computer” necessarily adds time to their average call handle time. And the higher the average call handle time, the fewer calls they can handle, which brings down the overall efficiency of the call center. But just how much does application performance affect average call handle time?

Let’s assume that the number of “screens” or “pages” a call center handler has to navigate during a call to retrieve relevant information is five. If the average handle time is five minutes, that’s one minute per page. If application performance problems increase the average time per page to one minute and twelve seconds, that’d bring our total time per call up to six minutes.

          1250 x 6 / 9450 = 79.4%
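To make the arithmetic behind that figure explicit, here’s a small Python sketch of the scenario; the five-page workflow, the twelve-second slowdown and the 1,250 calls per month are the numbers from above, while the helper function is my own:

    # Per-page time translated into handle time and utilization.
    def handle_time_minutes(pages_per_call, seconds_per_page):
        return pages_per_call * seconds_per_page / 60

    for seconds_per_page in (60, 72):                      # normal vs. degraded page time
        handle = handle_time_minutes(5, seconds_per_page)  # 5 pages per call
        print(f"{seconds_per_page}s/page -> {handle:.0f} min/call "
              f"-> {1250 * handle / 9450:.1%} utilization")
    # 60s/page -> 5 min/call -> 66.1% utilization
    # 72s/page -> 6 min/call -> 79.4% utilization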

Hey, that’s actually better, isn’t it? Higher utilization of agents means lower cost per call, which certainly makes it appear as though we ought to introduce some latency into the network to make the numbers look better. There are a couple of reasons why that conclusion is wrong. First and foremost is the effect of high utilization on people. As is pointed out by the aforementioned article:

Whenever utilization numbers approach 80% - 90%, that call center will see relatively high agent turnover rates because they are pushing the agents too hard.

Turnover, of course, is bad because it incurs costs in terms of employee acquisition and training, during which time the efficiency of the call center is reduced. There is also the potential for a cascading effect from turnover, in which the bulk of calls are placed upon the shoulders of experienced call center workers, which increases their utilization and leads to even higher turnover rates. Like a snowball, the effect of turnover on a call center is quickly cumulative.

Secondly, increasing call handle time also adversely affects the total number of calls a handler can deal with in any given time period. As handle time per call increases, the total number of calls per month decreases, which changes the equation. There are 9450 working minutes available in a month, which means at 5 minutes per call a maximum of 1890 calls can be handled. At 6 minutes per call that decreases to 1575 – a 17% drop in total capacity from a single one-minute increase in average call handle time. No call center handles 100% of the calls it theoretically could, but the number of calls it can handle will still be decreased by an increase in the average call handle time due to poor application performance.
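The capacity math is just as easy to check (the variable names here are mine; the numbers are the same as above):

    # Maximum theoretical call capacity at each average handle time.
    minutes_per_month = 9450
    calls_at_5_min = minutes_per_month / 5    # 1890 calls
    calls_at_6_min = minutes_per_month / 6    # 1575 calls
    drop = (calls_at_5_min - calls_at_6_min) / calls_at_5_min
    print(f"{calls_at_5_min:.0f} -> {calls_at_6_min:.0f} calls, a {drop:.0%} drop in capacity")
    # 1890 -> 1575 calls, a 17% drop in capacity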


GENERALIZING THE FORMULA


What this ultimately means is that worsening application performance reduces the efficiency of a call center by decreasing the number of calls it can handle. That’s productivity in a call center. Applying the same theory to other applications should yield unsurprisingly similar results: degradation of application performance means degraded productivity, which means less work is getting done. Any role within the organization that relies upon an application can essentially be measured in terms of the number of “processes” that can be completed in a given time interval. Using that figure, it then becomes a matter of decomposing the process into steps (pages, screens, etc…) and determining how much time is spent per step. Application performance affects the whole, but it is especially detrimental to individual steps in a process, as lengthening one draws out the entire process and thus reduces the total number of “processes” that can be completed.

So we can generalize into a formula that is:

    ((total # of processes per month) * (average number of minutes to complete a process)) / 9450

where 9450 is the total number of working minutes available per month (21 days x 7.5 hours x 60 minutes, from the example above). Adjust as necessary for your own organization.
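If you prefer it in executable form, here’s a minimal Python sketch of the generalized formula; the function and parameter names are my own choices, not part of any standard:

    # Generalized utilization: (processes per month * minutes per process) / minutes available.
    def utilization(processes_per_month, minutes_per_process, minutes_available=9450):
        return (processes_per_month * minutes_per_process) / minutes_available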

To determine the impact of degrading application performance, lengthen the process completion time in minutes appropriately while simultaneously adjusting downward the total number of processes that can be carried out in a month. Try not to exceed a 70% utilization rate; just as with call center employees, burnout from too many back-to-back processes can result in a higher turnover rate.
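Reusing the utilization sketch above with purely hypothetical numbers, a one-minute slowdown per process might look like this:

    # Hypothetical numbers: a 4-minute process degrades to 5 minutes and monthly
    # volume is adjusted down accordingly.
    print(f"before degradation: {utilization(1500, 4):.1%}")  # 63.5%
    print(f"after degradation:  {utilization(1300, 5):.1%}")  # 68.8% - busier, yet less work done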

 


THE IMPACT OF APPLICATION DELIVERY


 

Finally, we can examine whether application delivery can improve the productivity of those who rely on the applications you are charged with delivering. To determine the impact of application delivery, this time shorten the process completion time in minutes appropriately while simultaneously adjusting upward the total number of processes that can be handled per month. Again, try not to exceed a 70% utilization rate.
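Continuing with the same hypothetical numbers and the utilization sketch from above, an improvement that trims the process back down to four minutes frees up capacity for more processes at the same utilization rate:

    # Hypothetical again: the 5-minute process is brought back down to 4 minutes,
    # allowing more processes per month at the same utilization as the degraded case.
    print(f"after optimization: {utilization(1625, 4):.1%}")  # 68.8%, with 325 more processes completed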

Alternatively, you could use the base formula to determine what kind of improvements in application performance are necessary in order to increase productivity or, in some cases, maintain it. Many folks have experienced an “upgrade” to an enterprise application that causes productivity to plummet because the newer system may have more bells and whistles, but is slower for some reason. Basically, you need to determine the number of processes you need to handle per month and the utilization rate you’re trying to achieve, and then use the following formula to determine exactly how much time each process can take before you miss that mark:

(9450 x Utilization Rate) / # of processes = process time

This allows you to work backward and understand how much time any given process can take before it starts to adversely affect productivity. You’ll need to understand how much of the process time should be allotted to the mundane steps in the process – taking information from customers, entering the data, etc… – and factor that out to determine how much time can be spent traversing the network and in application execution. Given that number, you can then figure out what kind of application delivery solutions will help you meet that target and ensure that IT is not a productivity bottleneck. Whether it’s acceleration, optimization, or scaling out to meet higher capacity, you are likely to find what you need to meet your piece of the productivity puzzle in an application delivery solution.
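Here’s a sketch of that backward calculation in Python; the 1,500-process volume, the 70% target and the mundane-time figure are purely illustrative assumptions:

    # Work backward from a utilization target to a per-process time budget.
    def max_process_time(target_utilization, processes_per_month, minutes_available=9450):
        return (minutes_available * target_utilization) / processes_per_month

    budget = max_process_time(0.70, 1500)   # ~4.4 minutes per process
    mundane_minutes = 3.5                   # hypothetical talk/data-entry time per process
    remaining = budget - mundane_minutes
    print(f"{remaining * 60:.0f} seconds left for network and application time per process")
    # 55 seconds left for network and application time per process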

This also means that you can be confident that “the computer was slow” is not a valid excuse when productivity metrics are missed, and probably more importantly, you can prove it.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
