A Formula for Quantifying Productivity of Web Applications

Ever wanted to prove or understand how the network impacts productivity? There is a formula for that…

We often talk in abstract terms about the effects of application performance on productivity. It seems intuitive that if an application is performing poorly – or is unavailable – it will affect the productivity of those who rely upon that application. But it’s hard enough to justify the investment in application acceleration or optimization without being able to demonstrate a real impact on the organization. And right now justification is more of an issue than it’s ever been.

So let’s take the example of a call center to begin with. It could be customer service supporting customers/users, a help desk supporting internal users, or even a phone-based order-entry department. Any “call center” that relies on a combination of the telephone and an application to support its processes is sensitive to delays in application delivery and to application outages.

This excellent article from Call Center Magazine details some of the essential Call Center KPIs, the metrics by which call center efficiency – and thus productivity – is measured.

The best measure of labor efficiency is agent utilization. Because labor costs represent the overwhelming majority of call center expenses, if agent utilization is high, the cost per call will inevitably be low. Conversely, when agent utilization is low, labor costs, and hence cost per call, will be high.

That all makes sense, but what we want – and need – is a formula for determining “agent utilization.”

The formula for determining agent utilization is somewhat complicated. It factors in the length of the work day, break times, vacation and sick time, training time and a number of other factors. But there is an easy way to approximate agent utilization without going to all this trouble:

Let’s say, for example, that the agents in a particular call center handle an average of 1,250 calls per month at an average handle time of 5 minutes. Additionally, these agents work an average of 21 days per month, and their work day is 7.5 hours after subtracting lunch and break times. The simplified utilization formula – total handle time divided by total available time – works out to the following:

          (1250 x 5) / (21 x 7.5 x 60) = 6250 / 9450 = 66.1%

Once again, this is not a perfect measure of agent utilization, but it is quick and easy, and gets you within 5% of the true agent utilization figure.
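The simplified calculation above can be sketched in a few lines of Python. This is not from the original article – the function name and defaults are illustrative – but it implements the same talk-time-over-available-time ratio:

```python
def agent_utilization(calls_per_month, avg_handle_minutes,
                      work_days_per_month=21, work_hours_per_day=7.5):
    """Simplified agent utilization: total handle time / total available time."""
    available_minutes = work_days_per_month * work_hours_per_day * 60
    return (calls_per_month * avg_handle_minutes) / available_minutes

# The article's example: 1,250 calls per month at 5 minutes each
print(round(agent_utilization(1250, 5) * 100, 1))  # 66.1
```

Swapping in your own work calendar (days per month, hours per day) adjusts the 9,450-minute denominator automatically.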

Okay, again that makes sense. And now that we’ve got a formula from which to work we can look at the impact of application performance – both negative and positive – on “agent utilization.”


HIGHER UTILIZATION NOT ALWAYS DESIRABLE


You’ve heard it, I’m sure. The plaintive “my computer is slow today, please hang on a moment…” coming from the other end of the phone is a dead ringer for “application performance has degraded.” Those of us intimately familiar with data centers and application delivery understand it isn’t really the “computer” that’s slow, but the application – and likely the underlying application delivery network responsible for ensuring things are going smoothly.

The reason the explanation is plaintive is that call center employees of every kind know exactly how they’re rated and measured, and understand that a “slow computer” necessarily adds time to their average call handle time. And the higher the average call handle time, the lower their utilization, which brings down the overall efficiency of the call center. But just how much does application performance affect average call handle time?

Let’s assume that the number of “screens” or “pages” a call center handler has to navigate during a call to retrieve relevant information is five. If the average handle time is five minutes, that’s one minute per page. If application performance problems increase the average time per page to one minute and twelve seconds, that’d bring our total time per call up to six minutes.

          (1250 x 6) / 9450 = 79.4%

Hey, that’s actually better, isn’t it? Higher utilization of agents means lower costs per call, which certainly makes it appear as though we ought to introduce some latency into the network to make the numbers look better. There are a couple of reasons why this is not true. First and foremost is the effect of high utilization on people. As is pointed out by the aforementioned article:

Whenever utilization numbers approach 80% - 90%, that call center will see relatively high agent turnover rates because they are pushing the agents too hard.

Turnover, of course, is bad because it incurs costs in terms of employee acquisition and training, during which time the efficiency of the call center is reduced. There is also the potential for a cascading effect from turnover, in which the bulk of calls is placed upon the shoulders of experienced call center workers, which increases their utilization and leads to even higher turnover rates. Like a snowball, the effect of turnover on a call center quickly becomes cumulative.

Secondly, increasing call handle time also adversely affects the total number of calls a handler can deal with in any given time period. As handle time per call increases, the total number of calls per month decreases, which actually changes the equation. There are 9450 working minutes in a month, which means at 5 minutes per call there is a maximum of 1890 calls that can be handled. At 6 minutes per call that decreases to 1575 – a nearly 17% drop in theoretical capacity from a single additional minute of average handle time. No call center handles 100% of the calls it theoretically could, but the number of calls it can handle will still decrease as average call handle time rises due to poor application performance.
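The capacity ceiling described above is a one-line calculation. A minimal sketch (function name illustrative, 9,450 minutes assumed as in the article):

```python
def max_calls_per_month(avg_handle_minutes, available_minutes=9450):
    """Theoretical ceiling on calls one agent could handle in a month."""
    return available_minutes // avg_handle_minutes

print(max_calls_per_month(5))  # 1890
print(max_calls_per_month(6))  # 1575
```

The drop from 1890 to 1575 is the near-17% capacity loss that one extra minute of handle time causes.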


GENERALIZING THE FORMULA


What this ultimately means is that worsening application performance reduces the efficiency of call centers by decreasing the number of calls they can handle. That’s productivity in a call center. Applying the same theory to other applications should yield unsurprisingly similar results: degraded application performance means degraded productivity, which means less work is getting done. Any role within the organization that relies upon an application can essentially be measured in terms of the number of “processes” that can be completed in a given time interval. Using that figure, it then becomes a matter of decomposing the process into steps (pages, screens, etc…) and determining how much time is spent per step. Application performance affects the whole, but is especially detrimental to individual steps in a process, as lengthening one draws out the entire process and thus reduces the total number of “processes” that can be completed.

So we can generalize into a formula that is:

    ((total # of processes per month) * (average number of minutes to complete a process)) / 9450

where 9450 is the total number of minutes available per month. Adjust as necessary.

To determine the impact of degrading application performance, lengthen the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be carried out in a month. Try not to exceed a 70% utilization rate; just as with call center employees, burnout from too many back-to-back processes can result in a higher turnover rate.
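The generalized formula translates directly into code. A minimal sketch (names are illustrative; the 9,450-minute month and the 70% comfort threshold come from the article):

```python
def process_utilization(processes_per_month, avg_process_minutes,
                        available_minutes=9450):
    """Generalized utilization: process time consumed / time available."""
    return (processes_per_month * avg_process_minutes) / available_minutes

# Degraded performance example: 1,250 processes at 6 minutes each
u = process_utilization(1250, 6)
print(f"{u:.1%}")  # 79.4%
if u > 0.70:
    print("Warning: above the 70% burnout threshold")
```

Lengthening the per-process time (and re-running) quantifies how much headroom a performance degradation consumes.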

 


THE IMPACT OF APPLICATION DELIVERY


 

Finally, we can examine whether or not application delivery can improve the productivity of those who rely on the applications you are charged with delivering. To determine the impact of application delivery, this time shorten the process completion time in minutes appropriately while simultaneously adjusting the total number of processes that can be handled per month. Again, try not to exceed a 70% utilization rate.

Alternatively, you could use the base formula to determine what kind of improvements in application performance are necessary in order to increase productivity or, in some cases, maintain it. Many folks have experienced an “upgrade” in an enterprise application that causes productivity to plummet because the newer system may have more bells and whistles, but it’s slower for some reason. Basically you need to determine the number of processes you need to handle per month and the utilization rate you’re trying to achieve, and then use the following formula to determine exactly how much time each process can take before you miss that mark:

    (9450 x Utilization Rate) / # of processes = process time

This allows you to work backward and understand how much time any given process can take before it starts to adversely affect productivity. You’ll need to understand how much of the process time should be allotted to mundane steps in the process, i.e. taking information from customers, entering the data, etc…, and factor that out to determine how much time can be spent traversing the network and in application execution time. Given that number you can then figure out what kind of application delivery solutions will help you meet that target and ensure that IT is not a productivity bottleneck. Whether it’s acceleration, optimization, or scaling out to meet higher capacity, you are likely to find what you need to meet your piece of the productivity puzzle in an application delivery solution.
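The backward calculation above can be sketched the same way (again, names are illustrative and the 9,450-minute month is the article's assumption):

```python
def max_process_minutes(target_utilization, processes_per_month,
                        available_minutes=9450):
    """Work backward: the longest a process can take and still hit the target."""
    return (available_minutes * target_utilization) / processes_per_month

# At a 70% utilization target and 1,250 processes per month:
print(round(max_process_minutes(0.70, 1250), 2))  # 5.29
```

So in the running example each process has a budget of roughly 5 minutes 17 seconds; subtracting the unavoidable human steps from that budget leaves the time the network and application are allowed to consume.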

This also means that you can be confident that “the computer was slow” is not a valid excuse when productivity metrics are missed, and probably more importantly, you can prove it.



More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
