Recession-Proofing IT via Virtualization and Cloud Computing

Recessions are about as appealing as a root canal; but they do force us to think differently


The National Bureau of Economic Research recently declared that the U.S. has been in a recession since December 2007. The news would be darkly amusing if it weren’t so utterly painful. But now that the recession is official, this seemed to be the ideal time to explore how virtualization and cloud computing can help recession-proof IT. Consider the following four tips:

1. Virtualize infrastructure to increase capacity utilization.

Traditional server infrastructure tightly couples applications to hardware, wasting computing capacity whenever applications use less than 100 percent of system resources. Virtualized infrastructure decouples applications from hardware, freeing excess capacity for use by other applications. A single virtualized server can often support five times the workload of a non-virtualized server. This allows IT to consolidate server infrastructure, reducing both the capital costs of server acquisition and datacenter build-out and the operating costs of management, maintenance, and energy consumption.
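The consolidation math behind this tip is simple capacity arithmetic. The sketch below uses hypothetical utilization figures (15 percent average load per application, hosts driven to 75 percent), not numbers from the article, to show how a roughly five-to-one consolidation falls out:

```python
import math

# Back-of-the-envelope server consolidation estimate.
# avg_util and target_util are illustrative assumptions:
# today each app idles on its own server at low utilization;
# a virtualized host can be loaded much closer to capacity.

def hosts_needed(app_count: int, avg_util: float, target_util: float) -> int:
    """Physical hosts required when each workload consumes avg_util of one
    host and virtualized hosts can be safely loaded to target_util."""
    total_demand = app_count * avg_util           # aggregate capacity consumed
    return math.ceil(total_demand / target_util)  # hosts after consolidation

before = 100                                      # one app per server today
after = hosts_needed(100, avg_util=0.15, target_util=0.75)
print(before, "->", after, "hosts")               # 100 -> 20 hosts
```

With these assumed figures, 100 lightly loaded servers collapse onto 20 virtualized hosts: the five-to-one ratio cited above.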

2. Use external clouds to offset capital infrastructure expense.

While virtualized infrastructure can reduce capital expenses, IT may have the opportunity to eliminate those expenses altogether by using the variable compute model of external clouds like Amazon’s Elastic Compute Cloud (Amazon EC2). In this model, compute capacity becomes elastic, allowing lines of business to align the cost of application consumption to actual demand. Swapping traditional datacenter capacity for external cloud capacity provides effectively unlimited, on-demand scale and the ability to align cost to value received.
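The cost-alignment argument can be made concrete with a toy comparison. All rates and demand figures below are invented for illustration (they are not real EC2 pricing): owned capacity must be sized for the peak, while elastic capacity is paid for only when demand exists.

```python
# Fixed (own the peak) vs. elastic (pay per use) monthly cost sketch.
# Every number here is a hypothetical assumption, not real cloud pricing.

PEAK_SERVERS = 40                # capacity you must own for the busiest hour
OWNED_COST_PER_SERVER = 300.0    # assumed monthly all-in cost per owned server
CLOUD_RATE_PER_HOUR = 0.50       # assumed per-instance-hour rate

# Hypothetical daily demand: quiet overnight, a midday peak, a busy afternoon.
hourly_demand = [5] * 8 + [40] * 4 + [15] * 12   # instances needed each hour

fixed_monthly = PEAK_SERVERS * OWNED_COST_PER_SERVER
elastic_monthly = sum(hourly_demand) * CLOUD_RATE_PER_HOUR * 30  # 30-day month

print(f"fixed: ${fixed_monthly:.0f}/mo, elastic: ${elastic_monthly:.0f}/mo")
```

Because the peak lasts only four hours a day in this assumed profile, the elastic model costs less than half the owned model; the spikier the demand, the wider that gap becomes.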

3. Virtualize applications to accelerate and simplify deployment.

Packaging and deploying application workloads as virtual images can close the “deployment gap” that adds cost and delay to the deployment of enterprise applications. The virtualized application is separated from its operating infrastructure and packaged as a self-contained unit that includes just enough operating system (JeOS), plus the databases and middleware required to run the software in production. These bits travel with the application package and allow it to run as an image in any virtualized or cloud-based execution environment without manual setup, tuning, configuration, or certification. Suddenly, applications are set free and deployment cycles are compressed from months to minutes. This equates to cost savings and improved business agility.
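The idea of a self-contained image can be sketched as a declarative definition plus a completeness check. The field names below are invented for illustration; they are not rPath's actual appliance format or any real packaging standard:

```python
# Hypothetical sketch of a self-contained application image definition.
# Everything the app needs travels inside the image, so no manual setup
# is required wherever it runs.

appliance = {
    "app": "order-service",
    "jeos": "minimal-linux",                    # just enough operating system
    "middleware": ["java-runtime", "tomcat"],   # bundled, not installed on site
    "database": "postgresql",
    "config": {"port": 8080},                   # configuration ships with the image
}

def is_self_contained(image: dict) -> bool:
    """An image is deployable anywhere only if every layer it needs
    is carried inside it."""
    required = ("jeos", "middleware", "database", "config")
    return all(image.get(layer) for layer in required)

print(is_self_contained(appliance))   # True
```

The check is the point: if any required layer is missing from the package, the image would depend on the target environment, reopening the deployment gap the tip describes.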

4. Construct virtual applications for simplified management, automated maintenance.

 

The reality is that this new approach to application delivery can create new costs and risks. Taking the friction out of application deployment will lead to an onslaught of volume and demand, resulting in what is often called “VM sprawl.” What organizations must recognize is that they may be exchanging one cost and management burden for another, as physical machines become virtual machines. In fact, virtual sprawl is likely to far outstrip any physical sprawl you’ve witnessed heretofore. As such, organizations need a scalable approach for managing and maintaining application images. Adding headcount isn’t an option, so the answer is finding ways to do more with less. In this case, this means architecting application images for management and control, trading manual one-at-a-time updates for seamless changes that are implemented en masse. It also means complete lifecycle control and transparency wherever the application is being run — datacenter or cloud, internal or external.
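Trading one-at-a-time patching for changes "implemented en masse" amounts to updating the image definitions in one pass rather than touching each running machine. The data model below is hypothetical, used only to illustrate the fleet-wide update pattern:

```python
# Sketch of an en-masse image update. Instead of patching running VMs one
# at a time, rewrite the shared component version in every image definition
# that carries it. The fleet structure is an invented example.

fleet = [
    {"app": "billing", "components": {"openssl": "0.9.8g", "tomcat": "6.0"}},
    {"app": "catalog", "components": {"openssl": "0.9.8g"}},
    {"app": "reports", "components": {"tomcat": "6.0"}},
]

def mass_update(images: list, component: str, new_version: str) -> int:
    """Update `component` to `new_version` in every image that includes it;
    return the number of images changed."""
    changed = 0
    for image in images:
        if component in image["components"]:
            image["components"][component] = new_version
            changed += 1
    return changed

print(mass_update(fleet, "openssl", "0.9.8h"), "images updated")  # 2 images updated
```

One function call touches every affected image, and the same pass works whether those images run in the datacenter or in an external cloud, which is the lifecycle control the paragraph above calls for.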

Recessions are about as appealing as a root canal. But they do force us to think differently — to take an inventory of costs, retool, reinvent. The reality is that this recession is coincident with a fundamental inflection point in IT. The friction and the economics of traditional computing models no longer work. This is why organizations must embrace virtualization and cloud — both to weather the storm of a down economy and to transform yesterday’s costly and rigid computing model to one that puts costs under control and sets applications free.

 

More Stories By Jake Sorofman

Jake Sorofman is chief marketing officer of rPath, an innovator in system automation software for physical, virtual and cloud environments. Contact Jake at [email protected]

