Microservices Expo: Blog Feed Post

A Formula for Quantifying Productivity of Web Applications

Ever wanted to prove or understand how the network impacts productivity? There is a formula for that…

We often talk in abstract terms about the effects of application performance on productivity. It seems intuitive that a poorly performing – or unavailable – application will affect the productivity of those who rely upon it. But it’s hard enough to justify an investment in application acceleration or optimization without being able to demonstrate a real impact on the organization, and right now justification is more of an issue than it’s ever been.

So let’s take the example of a call center. It could be customer service supporting customers/users, a help desk supporting internal users, or even a phone-based order-entry department. Any “call center” that relies on a combination of the telephone and an application to support its processes is sensitive to application delays and outages.

This excellent article from Call Center Magazine details some of the essential call center KPIs, the metrics by which call center efficiency – and thus productivity – is measured.

The best measure of labor efficiency is agent utilization. Because labor costs represent the overwhelming majority of call center expenses, if agent utilization is high, the cost per call will inevitably be low. Conversely, when agent utilization is low, labor costs, and hence cost per call, will be high.

That all makes sense, but what we want – and need – is a formula for determining “agent utilization.”

The formula for determining agent utilization is somewhat complicated. It factors in the length of the work day, break times, vacation and sick time, training time and a number of other factors. But there is an easy way to approximate agent utilization without going to all this trouble:

          (calls handled per month x average handle time in minutes) / (total work minutes per month)

Let's say, for example, that the agents in a particular call center handle an average of 1,250 calls per month at an average handle time of 5 minutes. Additionally, these agents work an average of 21 days per month, and their work day is 7.5 hours after subtracting lunch and break times – 9450 working minutes per month. The simplified utilization formula above would work out to the following:

          1250 x 5 / 9450 = 66.1%

Once again, this is not a perfect measure of agent utilization, but it is quick and easy, and gets you within 5% of the true agent utilization figure.
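As a quick sanity check, the approximation above can be expressed in a few lines of code. This is a minimal sketch – the function and parameter names are illustrative, not from the article:

```python
def agent_utilization(calls_per_month, avg_handle_minutes,
                      work_days_per_month=21, hours_per_day=7.5):
    """Approximate agent utilization: time spent on calls / time available."""
    available_minutes = work_days_per_month * hours_per_day * 60  # 9450 for the defaults
    return calls_per_month * avg_handle_minutes / available_minutes

# 1,250 calls/month at 5 minutes each against 9,450 available minutes
print(round(agent_utilization(1250, 5) * 100, 1))  # 66.1
```

Changing the handle time or work schedule is just a matter of passing different arguments, which makes "what if" comparisons easy.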

Okay, again that makes sense. And now that we’ve got a formula from which to work we can look at the impact of application performance – both negative and positive – on “agent utilization.”


HIGHER UTILIZATION NOT ALWAYS DESIRABLE


You’ve heard it, I’m sure. The plaintive “my computer is slow today, please hang on a moment…” coming from the other end of the phone is a dead ringer for “application performance has degraded.” Those of us intimately familiar with data centers and application delivery understand it isn’t really the “computer” that’s slow, but the application – and likely the underlying application delivery network responsible for ensuring things go smoothly.

The explanation is plaintive because call center employees of every kind know exactly how they’re rated and measured, and understand that a “slow computer” necessarily adds time to their average call handle time. The higher the average call handle time, the lower their utilization, which brings down the overall efficiency of the call center. But just how much does application performance affect average call handle time?

Let’s assume that the number of “screens” or “pages” a call center handler has to navigate during a call to retrieve relevant information is five. If the average handle time is five minutes, that’s one minute per page. If application performance problems increase the average time per page to one minute and twelve seconds, that’d bring our total time per call up to six minutes.

          1250 x 6 / 9450 = 79.4%

Hey, that’s actually better, isn’t it? Higher utilization of agents means lower costs per call, which certainly makes it appear as though we ought to introduce some latency into the network to make the numbers look better. There are a couple of reasons why this is not true. First and foremost is the effect of high utilization on people. As is pointed out by the aforementioned article:

Whenever utilization numbers approach 80% - 90%, that call center will see relatively high agent turnover rates because they are pushing the agents too hard.

Turnover, of course, is bad because it incurs costs in employee acquisition and training, during which the efficiency of the call center is reduced. There is also the potential for a cascading effect, in which the bulk of calls falls on the shoulders of experienced call center workers, which increases their utilization and leads to even higher turnover. Like a snowball, the effect of turnover on a call center quickly compounds.

Secondly, increasing call handle time also reduces the total number of calls a handler can deal with in any given time period. As handle time per call increases, the total number of calls per month decreases, which changes the equation. There are 9450 working minutes in a month, which means at 5 minutes per call a maximum of 1890 calls can be handled. At 6 minutes per call that drops to 1575 – a decrease of nearly 17% from a single one-minute increase in average handle time. No call center handles 100% of the calls it theoretically could, but the number of calls possible will still be reduced by an increase in average call handle time due to poor application performance.
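The capacity math above can be checked directly. Again, this is a sketch and the names are illustrative:

```python
def max_calls(avg_handle_minutes, available_minutes=9450):
    """Theoretical ceiling on calls per month at a given average handle time."""
    return available_minutes // avg_handle_minutes

at_5 = max_calls(5)  # 1890 calls
at_6 = max_calls(6)  # 1575 calls
drop_pct = (at_5 - at_6) / at_5 * 100
print(at_5, at_6, round(drop_pct, 1))  # 1890 1575 16.7
```

Floor division is used because a partial call doesn't count toward the ceiling.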


GENERALIZING THE FORMULA


What this ultimately means is that worsening application performance reduces the efficiency of call centers by decreasing the number of calls it can handle. That’s productivity in a call center. Applying the same theory to other applications should yield unsurprisingly similar results: degradation of application performance means degrading productivity which means less work is getting done. Any role within the organization that relies upon an application can be essentially measured in terms of the number of “processes” that can be completed in a given time interval. Using that figure it then becomes a matter of decomposing the process into steps (pages, screens, etc…) and determining how much time is spent per step. Application performance affects the whole, but is especially detrimental to individual steps in a process as lengthening one draws out the entire process and thus reduces the total number of “processes” that can be completed.

So we can generalize into a formula that is:

    ((total # of processes per month) * (average number of minutes to complete a process)) / 9450

where 9450 is the total number of available minutes per month (21 days x 7.5 hours x 60 minutes). Adjust as necessary for your own environment.
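The generalized formula translates directly into a small helper. A minimal sketch – the name `utilization` is mine, not the article's:

```python
def utilization(processes_per_month, minutes_per_process,
                available_minutes=9450):
    """Busy minutes divided by available minutes, per the generalized formula."""
    return processes_per_month * minutes_per_process / available_minutes

# The call center numbers from earlier, treated as generic "processes":
print(round(utilization(1250, 5) * 100, 1))  # 66.1
print(round(utilization(1250, 6) * 100, 1))  # 79.4
```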

To determine the impact of degrading application performance, lengthen the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be carried out in a month. Try not to exceed a 70% utilization rate; just as with call center employees, burnout from too many back-to-back processes can result in a higher turnover rate.

 


THE IMPACT OF APPLICATION DELIVERY


 

Finally, we can examine whether application delivery can improve the productivity of those who rely on the applications you are charged with delivering. To determine the impact of application delivery, this time shorten the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be handled per month. Again, try not to exceed a 70% utilization rate.

Alternatively, you could use the base formula to determine what kind of improvements in application performance are necessary in order to increase productivity or, in some cases, maintain it. Many folks have experienced an “upgrade” in an enterprise application that causes productivity to plummet because the newer system may have more bells and whistles, but it’s slower for some reason. Basically, you need to determine the number of processes you need to handle per month and the utilization rate you’re trying to achieve, then use the following formula to determine exactly how much time each process can take before you miss that mark:

    (9450 x utilization rate) / # of processes = process time

This allows you to work backward and understand how much time any given process can take before it starts to adversely affect productivity. You’ll need to understand how much of the process time should be allotted to mundane steps in the process – taking information from customers, entering the data, and so on – and factor that out to determine how much time can be spent traversing the network and in application execution. Given that number, you can then figure out what kind of application delivery solutions will help you meet that target and ensure that IT is not a productivity bottleneck. Whether it’s acceleration, optimization, or scaling out to meet higher capacity, you are likely to find what you need to meet your piece of the productivity puzzle in an application delivery solution.
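Working backward can be sketched the same way. A minimal example under stated assumptions: `max_process_minutes` is an assumed name, and the 70% target comes from the guidance earlier in the post:

```python
def max_process_minutes(target_utilization, processes_per_month,
                        available_minutes=9450):
    """Longest a single process can take before the utilization target is missed."""
    return available_minutes * target_utilization / processes_per_month

# 1,250 processes/month at a 70% utilization target:
print(round(max_process_minutes(0.70, 1250), 2))  # 5.29 minutes per process
```

Subtract the time spent on the mundane steps from that result and what remains is your budget for network traversal and application execution time.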

This also means that you can be confident that “the computer was slow” is not a valid excuse when productivity metrics are missed, and probably more importantly, you can prove it.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
