By Toddy Mladenov
June 16, 2014 11:28 AM EDT
Last week's Joyent outage got us thinking about how few IT teams make the effort to determine what amount of downtime will not have a significant impact on their business. In this post I will not discuss that particular outage, although it is yet another good example of why IT practices and processes need improvement; instead I will concentrate on an important step in the Business Impact Analysis (BIA) that is a prerequisite for Disaster Recovery - namely the Cost of Downtime.
Very often, because of a lack of understanding of the overall IT application portfolio, the cost of downtime is calculated using made-up numbers or numbers for the whole company. Each application must be considered separately and the analysis must be done per application - there is no one size fits all. For now, though, let's take a simple example: a company that has revenue of $500M and 1,500 employees, and experiences an outage that impacts 50% of its employees and 30% of its revenue during the downtime.
The first thing you need to do in order to calculate the cost per hour of downtime is to calculate the labor cost per hour. Let's assume that the fully loaded annual cost (salary and benefits) per employee for our fictitious company is $75K and that each employee works on average 1,920 hours per year (assuming 2 weeks of vacation and 10 holidays per year). The hourly labor cost for the company will be:
1,500 employees × $75K = $112.5M; $112.5M / 1,920h ≈ $58,600/h
Because we previously said that only 50% of the employees will be affected, the labor cost per hour of downtime will be:
$58,600 X 50% = $29,300
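As a quick sketch, the labor-cost arithmetic above can be reproduced in a few lines of Python (the constants are the fictitious example's assumptions, not real data):

```python
# Assumptions from the example above (fictitious company)
EMPLOYEES = 1_500
ANNUAL_COST_PER_EMPLOYEE = 75_000   # fully loaded cost in $
WORK_HOURS_PER_YEAR = 1_920         # 2 weeks vacation + 10 holidays
AFFECTED_SHARE = 0.50               # share of employees hit by the outage

# Company-wide labor cost per working hour
hourly_labor_cost = EMPLOYEES * ANNUAL_COST_PER_EMPLOYEE / WORK_HOURS_PER_YEAR

# Labor loss per hour of downtime, given only half the staff is affected
labor_loss_per_hour = hourly_labor_cost * AFFECTED_SHARE

print(f"${hourly_labor_cost:,.0f}/h company-wide")     # ~$58,594/h
print(f"${labor_loss_per_hour:,.0f}/h downtime loss")  # ~$29,297/h
```

The exact figures come out to $58,593.75 and $29,296.88; the article rounds them to $58,600 and $29,300 for readability.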
The next thing that you need to do is to calculate the revenue loss per hour of downtime. Another assumption that we need to make here is that our company generates revenue 5 days a week and is closed only for the holidays, which means that the company generates revenue 2,000 hours a year (250 business days × 8 hours). From here our calculation is:
$500M / 2,000h = $250K/h
and we need to multiply this by the assumed 30% loss of revenue:
$250K * 30% = $75,000
The total loss to our company for one hour of downtime is determined by combining both numbers above:
$29,300 labor loss per hour + $75,000 revenue loss per hour = $104,300
However, we do not stop here! Our ultimate goal is to determine how much downtime we can afford for this application. For that you need to do some more financial calculations, which involve not only the revenue but also the profits, unless you want to wipe out all the profits with this one downtime. Let's assume that the company's profit margin is 10% and that management has decided the acceptable loss of profits from availability issues with this particular application is 0.1%:
$500M revenue × 10% margin = $50M profits; $50M × 0.1% = $50,000
Dividing the two numbers (acceptable loss and cost of downtime per hour) gives us the maximum tolerable downtime (MTD) for the application:
$50,000 / $104,300 ≈ 0.48h ≈ 29 min
If you are curious, to satisfy the above requirements such an application must have an uptime of about 99.9945%, treating those 29 minutes as the total tolerable downtime per year.
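Putting the whole worked example together, here is a minimal Python sketch that recomputes each step from the stated assumptions (all figures are the fictitious example's, not real data):

```python
# Fictitious example figures from this post
REVENUE = 500_000_000           # $ per year
REVENUE_HOURS = 2_000           # revenue-generating hours per year
REVENUE_IMPACT = 0.30           # share of revenue lost during the outage
LABOR_LOSS_PER_HOUR = 29_297    # $ per hour, 50% of staff affected
PROFIT_MARGIN = 0.10            # 10% profit margin
ACCEPTABLE_PROFIT_LOSS = 0.001  # management accepts losing 0.1% of profits

# Revenue loss per hour of downtime, then total cost of downtime per hour
revenue_loss_per_hour = REVENUE / REVENUE_HOURS * REVENUE_IMPACT
cost_per_hour = LABOR_LOSS_PER_HOUR + revenue_loss_per_hour

# Acceptable loss in dollars, and the maximum tolerable downtime it buys
acceptable_loss = REVENUE * PROFIT_MARGIN * ACCEPTABLE_PROFIT_LOSS
mtd_minutes = acceptable_loss / cost_per_hour * 60

# Implied availability, treating the MTD as a yearly downtime budget
minutes_per_year = 365 * 24 * 60
uptime_pct = (1 - mtd_minutes / minutes_per_year) * 100

print(f"cost of downtime: ${cost_per_hour:,.0f}/h")      # $104,297/h
print(f"maximum tolerable downtime: {mtd_minutes:.0f} min")  # 29 min
print(f"required uptime: {uptime_pct:.4f}%")             # 99.9945%
```

Changing any single assumption (affected staff, revenue impact, acceptable profit loss) shifts the MTD immediately, which is exactly why this analysis must be repeated per application.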
The overall lesson here is that determining the cost of downtime is not only application-dependent but also requires solid knowledge of the company's financials, and therefore good collaboration between IT and business owners.