Challenges in Virtualization

Companies looking at virtualization solutions need storage solutions that are flexible

By Sue Poremba

Virtualization has been a boon to the enterprise because it makes IT operations more efficient. Some like its green qualities, since virtualization cuts energy consumption, while others appreciate the storage capacity as well as the data recovery options it offers when disaster strikes.

However, the virtual environment is invisible, and with that invisibility come more challenges in making sure it runs smoothly. The cloud might be simple to set up, but it becomes more complex over time. In addition, the more machines and data involved, the more difficult it can be to monitor for space, CPU spikes, network security and other indicators.

“If there is a bug or a discrepancy, I need to know that there’s a problem before my customer does. And though that is the biggest challenge, it’s also a great opportunity,” said Russ Caldwell, CTO of Emcien Corporation.

One of those challenges is making sure storage in the virtualized environment is adequate. “We focus on storage and database environments that scale as the customers grow,” said Caldwell. “Determining how fast customers grow and change is the biggest factor for determining the adequate storage size.”

Companies looking at virtualization solutions need storage solutions that are flexible, so they can add or remove storage as needed. Even though the storage may have been sized correctly at the beginning of a project, things change, and a flexible virtualization tool gives peace of mind when they do. For example, when we’re working with slow-moving manufacturing data, we can determine the adequate storage size more easily than when we’re working with hundreds of millions of bank nodes, where the growth is much more dramatic.
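
As one hedged illustration of that flexibility, the sketch below assumes AWS EBS and the boto3 library (neither is named in the article) and grows a block volume in place as the data outgrows it; the volume ID and target size are purely hypothetical:

```python
import boto3

# Hypothetical sketch: grow an EBS volume in place as data outgrows it.
# The volume ID and target size are illustrative, not from the article.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=500)  # size in GiB
state = response["VolumeModification"]["ModificationState"]
print(f"Volume modification state: {state}")

# The filesystem inside the guest still has to be extended afterwards,
# e.g. with growpart and resize2fs on a Linux VM.
```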

The key, according to John Ross with virtual solution company Phantom Business Development at Net Optics, is to truly assess the performance of the servers and the requirements of the virtual machines. This requires monitoring to be in place for the life of the systems to predict utilization and to modify placement based on performance. “When this is not accounted for, it can appear as though there is high CPU utilization on the hosts as well as the VMs,” said Ross. “With the use of protocols such as NFS and iSCSI, it can put quite a load on the network.”
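
To illustrate the kind of life-of-the-system monitoring Ross describes, here is a minimal sketch assuming Python and the psutil library (the article names no tooling). It samples host CPU and network throughput, the two places where utilization trends and NFS or iSCSI load tend to show up first:

```python
import psutil

# Hypothetical monitoring loop: sample host CPU and network throughput so
# utilization trends (and storage-protocol traffic such as NFS/iSCSI) can
# be tracked over time.
INTERVAL = 5  # seconds between samples

last = psutil.net_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=INTERVAL)  # blocks for INTERVAL seconds
    now = psutil.net_io_counters()
    sent_mbps = (now.bytes_sent - last.bytes_sent) * 8 / (INTERVAL * 1e6)
    recv_mbps = (now.bytes_recv - last.bytes_recv) * 8 / (INTERVAL * 1e6)
    last = now
    print(f"cpu={cpu:.1f}%  net_out={sent_mbps:.2f} Mbit/s  net_in={recv_mbps:.2f} Mbit/s")
```

In practice these samples would feed a time-series store rather than stdout, but the point is that placement decisions need this history, not a one-time sizing exercise.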

Companies moving to the cloud also have to change how they think about networking. “It can be hard to understand how a network connection works when there aren’t wires to simply plug into a box, but instead virtual, invisible connections that need to be managed through APIs or online interfaces,” said Caldwell. One of the challenges for a company with multiple clients is keeping client data separate. Grouping machines together and isolating them in their own network is the best approach to tackling this challenge, and using good monitoring tools wisely helps ensure the network is as reliable as possible.
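
As a hedged sketch of managing those “virtual, invisible connections” through an API, the example below assumes AWS and the boto3 library (the article names no provider or tool); it gives each client its own isolated virtual network:

```python
import boto3

# Hypothetical sketch: give each client its own isolated virtual network
# so their machines and traffic never share a network segment.
ec2 = boto3.client("ec2", region_name="us-east-1")

def create_client_network(client_name: str, cidr: str) -> str:
    """Create a dedicated VPC for one client and tag it for tracking."""
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id],
                    Tags=[{"Key": "Client", "Value": client_name}])
    return vpc_id

# Non-overlapping address ranges keep the client networks cleanly separated;
# the client names and CIDRs here are illustrative only.
print(create_client_network("acme", "10.10.0.0/16"))
print(create_client_network("globex", "10.20.0.0/16"))
```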

“Network connectivity comes down to whether the network connection is a single point of failure: If your virtualization solution is off-site, it’s only as good as the quality of the Internet connection between you and your provider,” said William L. Horvath with DoX Systems. If you have a single connection between you and the Internet, that’s one problem. (You can reduce the risks by contracting with two or more ISPs and getting routers that support trunking.) Likewise, if your virtualization provider’s facility is in a single geographical location (say, Manhattan) that loses functionality for an extended period of time due to a natural disaster, you’re hosed. Our Chamber of Commerce lost access to a cloud-based service not too long ago because someone in the data center, which wasn’t owned by the service provider, forgot to disable the fire suppression system during emergency testing, which unexpectedly destroyed most of the hard drives in the servers.

To avoid the challenges involved in virtualization, Ross provided the following tips:

1. Plan on virtualizing everything — not just the servers but the network, the storage, the security … everything!

2. Standardize everything, from the operating systems on upwards through middleware and applications. The more uniformity exists within configurations, the easier it will be to scale and move these workloads optimally around the environment.

3. Ensure network requirements are met. The network will change and consolidate dynamically, and there will be huge shifts in traffic flows as utilization grows and cloud services are adopted.

4. Implement resource monitoring. Existing legacy tools will not provide the data or detail needed.

5. Implement a decommissioning process. Ross says he repeatedly finds unused machines left running. In a virtual environment this can become a major issue, consuming resources and driving up costs (a sketch that speaks to tips 4 and 5 follows this list).

6. Plan for backup and disaster recovery. Both change drastically with virtualization and must be addressed.

7. Train your team on what ongoing management will look like, not just on the migration.
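
To make tips 4 and 5 a bit more concrete, here is a minimal sketch assuming a KVM host and the libvirt Python bindings (the article names no hypervisor or tooling); it flags running guests with almost no recent CPU activity as decommissioning candidates:

```python
import time
import libvirt

SAMPLE_SECONDS = 60    # how long to watch each guest
IDLE_THRESHOLD = 0.01  # under ~1% CPU over the window counts as idle

# Hypothetical sketch: connect to the local hypervisor and measure how much
# CPU time each running guest consumes over a short window.
conn = libvirt.open("qemu:///system")

before = {dom.name(): dom.info()[4] for dom in conn.listAllDomains() if dom.isActive()}
time.sleep(SAMPLE_SECONDS)

for dom in conn.listAllDomains():
    if not dom.isActive() or dom.name() not in before:
        continue
    cpu_ns = dom.info()[4] - before[dom.name()]      # nanoseconds of CPU used
    utilization = cpu_ns / (SAMPLE_SECONDS * 1e9)
    if utilization < IDLE_THRESHOLD:
        print(f"{dom.name()}: ~{utilization:.2%} CPU - decommissioning candidate?")

conn.close()
```

The threshold and sampling window are placeholders that would need tuning to the environment; the point is simply that the detail legacy tools lack can be pulled straight from the hypervisor and fed into both monitoring and decommissioning decisions.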

The cloud solves certain problems really well, and it allows SMBs to have the flexible infrastructures they require without heavy capital, hardware, or payroll costs. Used wisely and with the right tools, the cloud can give companies a leg up.

Sue Poremba is a freelance writer who focuses primarily on security and technology issues and occasionally blogs for Rackspace Hosting.


