The Benefits of Virtualization and Cloud Computing

Whether you’re an enterprise or small to medium business, you’ll soon be benefiting from the cloud

What’s all the buzz about? Cloud computing is one of Gartner’s top 10 strategic technology trends for 2009 – #2, right behind virtualization. Analysts say the economics of cloud for customers are truly compelling, with expected savings for business applications of 3-5x. That’s not chump change – particularly in today’s recessionary economy.

But the most compelling benefits of the cloud aren't just cost-savings. They're the increased flexibility, elasticity and scalability available to optimize efficiency and best serve the needs of the business.

What is cloud computing?
Whether you're an enterprise or a small to medium business, you'll soon be benefiting from the cloud. But what exactly is cloud computing?

Cloud computing is essentially the ability to acquire or deliver a resource on demand, configured however the user chooses, and paid for according to consumption. From a supplier's perspective, whether an internal IT group or a service provider, it means delivering and managing resource pools and applications in a multi-tenant environment, so that the user receives an on-demand, pay-per-use service. A cloud service can be infrastructure for hosting applications or data storage, a development platform, or even an on-demand application, delivered either off-site by a provider such as SunGard or Salesforce, or built on-site within IT.
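The acquire-on-demand, pay-per-consumption model described above can be sketched in a few lines of code. The sketch below is purely illustrative: the `CloudProvider` class, its methods, and the rates are hypothetical, not any real vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """An on-demand resource, configured however the user chooses."""
    name: str
    cpus: int
    memory_gb: int
    hours_used: float = 0.0

@dataclass
class CloudProvider:
    """Hypothetical provider: pools resources, bills per consumption."""
    rate_per_cpu_hour: float
    resources: list = field(default_factory=list)

    def acquire(self, name: str, cpus: int, memory_gb: int) -> Resource:
        # The user specifies the configuration; the provider draws
        # from a shared, multi-tenant resource pool.
        r = Resource(name, cpus, memory_gb)
        self.resources.append(r)
        return r

    def bill(self) -> float:
        # Pay according to consumption, not according to capacity owned.
        return sum(r.cpus * r.hours_used * self.rate_per_cpu_hour
                   for r in self.resources)

provider = CloudProvider(rate_per_cpu_hour=0.25)
vm = provider.acquire("web-01", cpus=4, memory_gb=8)
vm.hours_used = 100.0
print(provider.bill())  # 4 CPUs * 100 h * $0.25/CPU-hour = 100.0
```

The essential point the sketch captures is that billing is a function of metered usage, which is what distinguishes the cloud model from owning (and paying for) idle capacity.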

It's important to note that while many view cloud computing as services consumed externally, innovative CIOs have taken steps to transform their IT groups into internal service providers. This strategic shift gives them control and accountability for usage and resources, while providing a dynamic, self-service model that accommodates the needs and SLAs of the business units. To see how one enterprise did this, view its video story online at: www.vmware.com/cloud.

For those of us who remember the good old dot com days, before the bust, we saw the concept of hosted services emerge.  Everyone jumped on the ASP, ISP, MSP (application service provider, internet service provider, managed service provider, respectively) bandwagons and built offerings to deliver online services or variants thereof, such as on-demand software and software-as-a-service (SaaS).

Remembering back to the xSP days, however, we must also remember that the hosting model had its issues. Few were comfortable with having their information hosted outside of their immediate control, and many feared being locked into a relationship with a particular vendor.

So, as the new concept of the cloud emerges, many are asking how it's different this time around and what we should expect. Unlike those previous hosting models, we now see well-established companies diversifying their business models to offer new services based on established core competencies. This fundamental difference will help shape and stabilize the new concept of the cloud.

[Photo: VMware CTO Steve Herrod keynoting SYS-CON's 3rd Virtualization Conference & Expo in New York.]

But even more importantly, we have seen new technologies evolve over the past decade that are essential to the notion of the cloud. The key technology is virtualization. Beyond some amazing cost savings and benefits for the environment, virtualization's ability to separate the OS and application from the hardware gives it ideal properties for delivering these on-demand cloud services. Charles King, Principal Analyst at Pund-IT, put it succinctly: "Without virtualization there is no cloud - that's what enabled the emergence of this new, sustainable industry."

Challenges of the cloud
Today, new and established vendors are vying to deliver cloud services. The challenge for users becomes choosing the right offering. Many offerings are really designed to encourage development on the vendor's proprietary platform, limiting the ability to switch and confining applications to that external cloud alone. This appeals to the development community because it provides quick access to infrastructure and development platforms on which to create a cloud application. But it can become a nightmare for IT when the application has to come back into the enterprise for production-level support, not to mention the SOX and IP risks involved. The popularity of these offerings may in fact expose a more significant problem: the inability of IT to deliver infrastructure on demand to meet the dynamic needs of these groups. And unless you're building an application from scratch, most businesses don't have the time or resources to rewrite their production applications to run in the cloud on a proprietary platform.

Users should choose a cloud strategy that enables the fastest development time for new applications, with the broadest support for various OSs and development environments, as well as the ability to support production-level applications on- and off-premise as needed.

The other challenge is mobility and choice of location for running applications, internally in a private cloud or externally in a public cloud. Another approach we see in the market is the "superstore phenomenon": organizations such as Amazon, Microsoft and Google all plan to battle it out over whose superstore datacenter will be the place your developers build and house their cloud applications. It is true that these are all stable brands and their infrastructure will likely be a safe place to run your applications; however, in the event of outages, downtime or the inability to access your applications, what options will you have? Additionally, how will you manage these instances and where they live long term, and what risks are imposed by keeping them off-site? Users should be able to move their applications at will from one cloud to another, whether internal or external.

Obviously, the encapsulation offered by virtualization and the mobility found in technology like VMware VMotion - which enables a live virtual machine to be moved with no downtime for the application - increase a user's ability to move virtual machines as needed. VMware's approach to the cloud is not about vendor lock-in; it is about enabling its ecosystem of partners to build and deliver services on a common platform, allowing users to simply federate clouds, on- or off-premise as needed, across a broad base of service providers.

Lastly, you'll want to look at innovation and stability in the technology that lets you leverage your virtualization investments into internal or external cloud options. If your production environments run on VMware, and you chose that platform for its robust innovation cycles, reliability and technology advancements, you should expect nothing less from an off-premise cloud services provider. Say you want to establish a relationship with a service provider for flex capacity at the end of the quarter for financial reporting activities. You'll still want the reliability of your production system, control of that environment and the ability to move your VMs when and where you want. Also, as you build your internal clouds, look for visionary vendors that are building for the future - vendors that are future-proofed for whatever new technologies and application infrastructures come along, and that have proven they can deliver technology innovation in a timely manner.

Why does virtualization matter when building or selecting cloud services/vendors?
Clearly, there's a new trend emerging with lots of options, but also many challenges that could cost big money to reverse. How does virtualization address these challenges and allow a seamless transition to a cloud strategy, either on- or off-premise?

As mentioned above, the key requirements you should demand from your cloud providers are broad application support without lock-in, easy mobility of environments, broad choice of location (internal or external), and innovation that drives simplified federation of on- and off-premise clouds. Additionally, as an enterprise you'll want to look for innovation in building the internal (private) cloud to evolve your ability to offer dynamic services.

As noted, virtualization is the key. Most companies' first step on the virtualization path is to consolidate their servers, using virtualization to run multiple applications on each server instead of just one, increasing the utilization rate of (and getting more value from) every server and, thus, dramatically reducing the number of servers they need to buy, rack, power, cool, and manage.
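The consolidation savings can be shown with back-of-the-envelope arithmetic. The utilization figures below are hypothetical, chosen only to illustrate the calculation, not drawn from any benchmark:

```python
import math

# Hypothetical starting point: one application per physical server,
# with each server mostly idle.
servers_before = 100
avg_utilization_before = 0.10   # each server only 10% busy

# Consolidation target: pack those workloads onto virtualized hosts
# run at a higher (but still safe) utilization level.
target_utilization = 0.70

total_work = servers_before * avg_utilization_before        # ~10 server-equivalents of real work
servers_after = math.ceil(total_work / target_utilization)  # hosts needed after consolidation

print(servers_after)                    # 15 hosts instead of 100
print(servers_before / servers_after)   # roughly 6.7x fewer servers to buy, rack, power, cool
```

Even with generous headroom in the target utilization, the number of machines to purchase and manage drops by a large multiple, which is where the capital and operating savings come from.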

Having consolidated servers, you realize that not only have you substantially cut the capital and operating costs of your server environments, but as a result the entire datacenter has become far more flexible. Along the way, you may have started to think about and to use IT resources - including servers, storage, networks, desktops, and applications - not as isolated silos that must be managed individually but as pools of resources that can be managed in the aggregate.

This means that you can now move resources around at will across the network, from server to server, datacenter to datacenter, and even out into the cloud, to balance loads and use compute capacity more efficiently across the entire global IT environment.
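The kind of placement decision this enables, assigning workloads to the least-loaded host in a pool, can be sketched as a toy greedy scheduler. This is an illustration of the idea only, not VMware's actual placement algorithm:

```python
def balance(vm_loads, num_hosts):
    """Greedily place each VM on the currently least-loaded host.

    vm_loads:  per-VM load figures (e.g. CPU demand, arbitrary units).
    Returns the resulting total load on each host in the pool.
    """
    hosts = [0.0] * num_hosts
    # Placing the biggest workloads first gives a better spread.
    for load in sorted(vm_loads, reverse=True):
        target = hosts.index(min(hosts))  # least-loaded host in the pool
        hosts[target] += load
    return hosts

# Ten VMs of varying demand spread across a three-host pool.
print(balance([8, 7, 6, 5, 4, 3, 2, 2, 1, 1], 3))  # -> [13.0, 13.0, 13.0]
```

Because the VMs are decoupled from specific hardware, the scheduler is free to treat the pool in aggregate; the same logic extends from one cluster to datacenters and, ultimately, to external clouds.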

In other words, users can treat compute power as a centralized resource that they allocate to business units on demand, while still maintaining control and operational excellence. Leveraging virtualization to better serve users not only lowers TCO; it also provides accountability for usage, simplifies the fulfillment of on-demand infrastructure requests, and improves your ability to serve, control and manage SLAs.
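Accountability for usage often takes the form of simple chargeback: metering each business unit's consumption and billing it back. A minimal sketch, with invented business units, rates and usage figures purely for illustration:

```python
# Hypothetical metered usage per business unit: (cpu_hours, storage_gb_months)
usage = {
    "finance":   (1200, 500),
    "marketing": (300, 200),
    "r_and_d":   (2500, 900),
}

# Illustrative internal rates, not real prices.
RATE_CPU_HOUR = 0.05
RATE_GB_MONTH = 0.10

def chargeback(usage):
    """Bill each unit for what it actually consumed."""
    return {unit: round(cpu * RATE_CPU_HOUR + gb * RATE_GB_MONTH, 2)
            for unit, (cpu, gb) in usage.items()}

print(chargeback(usage))
# {'finance': 110.0, 'marketing': 35.0, 'r_and_d': 215.0}
```

A report like this gives the internal service provider the same usage-based accounting that an external cloud vendor would offer, which is what makes the "IT as internal service provider" model credible to the business units.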

Hence, virtualization has played and will continue to play a huge role in cloud computing. It is the technology that has allowed service providers to deliver lower-cost hosting environments to businesses of all sizes today. Just as virtualization enabled you to consolidate your servers and do more with less hardware, it also lets you support more users per piece of hardware, and deliver applications - and the servers on which they run - faster to those users.

As the leader in virtualization, VMware recently launched its vCloud initiative. With its proven, reliable platform deployed in over 120,000 customer environments today, VMware is committed to working with enterprises that want to build internal clouds with the ability to federate to external providers to meet the changing needs of their business. VMware's virtual datacenter operating system enables internal clouds with features such as self-service provisioning, chargeback, and many other advanced automation and management capabilities.

In addition, VMware is leveraging its huge ecosystem to bring new cloud offerings, such as security for clouds, to market. The virtualization market leader's approach draws on the infrastructure and expertise of hundreds of partners worldwide, including brand names such as Verizon, Hosting.com, SunGard, Terremark and Savvis, to deliver the VMware platform and cloud services. This, in combination with the technology for internal clouds, lets enterprises run their applications where they want, when they want.

With the largest choice of location and interoperability of platforms, the broadest application and OS support, and leading virtualization and cloud technologies, VMware and its cloud strategy offer users a safe, reliable, and robust on-ramp to the cloud, whether on or off premise.

So, if you're a VMware user, you're in good hands and you've already taken steps toward the cloud simply by virtualizing your servers on a proven platform that offers rich management and automation features. You will see VMware continue to lead the market in delivering cloud innovation for both on- and off-premise clouds.

If you're not a VMware user but want reliable infrastructure on demand, many service providers offer VMware Infrastructure 3 with pay-per-use models. For more information about VMware vCloud, or to find a partner that can help you realize the benefits of the cloud, visit: www.vmware.com/vcloud.

More Stories By Wendy Perilli

Wendy Perilli is director of product marketing for cloud computing at VMware. She gathers market insight from analysts, customers, service and technology partners and many others. With almost 20 years in high-tech fields, Wendy's broad range of experience with various technologies offers unique insight into the role that virtualization plays in emerging markets and trends.
