Challenges in Virtualization

Companies looking at virtualization solutions need storage solutions that are flexible

By Sue Poremba

Virtualization has been a boon to the enterprise because it makes IT operations more efficient. Some like its green qualities, since virtualization saves on energy consumption; others appreciate the storage capacity, as well as the data recovery options available should disaster strike.

However, the virtual environment is invisible, and with that invisibility come more challenges in keeping it running smoothly. The cloud might be simple to set up, but it becomes more complex over time. In addition, the more machines and data involved, the harder it becomes to monitor disk space, CPU spikes, network security and other indicators.

“If there is a bug or a discrepancy, I need to know that there’s a problem before my customer does. And though that is the biggest challenge, it’s also a great opportunity,” said Russ Caldwell, CTO of Emcien Corporation.

One of those challenges is making sure storage in the virtualized environment is adequate. “We focus on storage and database environments that scale as the customers grow,” said Caldwell. “Determining how fast customers grow and change is the biggest factor for determining the adequate storage size.”

Companies looking at virtualization need storage solutions that are flexible, so they can add or remove storage as needed. Even if the storage was sized correctly at the beginning of a project, things change, and a flexible virtualization tool provides peace of mind when they do. For example, Caldwell said, when his team works with slow-moving manufacturing data, it can determine the adequate storage size more easily than when working with hundreds of millions of bank nodes, where growth is much more dramatic.

The key, according to John Ross with virtual solution company Phantom Business Development at Net Optics, is to truly assess the performance of the servers and the requirements of the virtual machines. This requires monitoring to be in place for the life of the systems, both to predict utilization and to adjust placement based on performance. “When this is not accounted for, it can appear as though there is high CPU utilization on the hosts as well as the VMs,” said Ross. “With the use of protocols such as NFS and iSCSI, it can put quite a load on the network.”
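The sustained monitoring Ross describes can be as simple as watching for hosts that stay hot across several samples rather than reacting to a single spike. Here is a minimal sketch of that idea; the host names, sample data and 80 percent threshold are illustrative, and a real deployment would pull the readings from the hypervisor's monitoring API rather than hard-coding them:

```python
# Sketch: flag hosts whose CPU stays above a threshold for several
# consecutive samples, ignoring transient spikes. All data here is
# hypothetical.

def flag_hot_hosts(samples, threshold=80.0, sustained=3):
    """samples maps host name -> list of CPU-utilization percentages,
    oldest first. A host is 'hot' if it exceeds `threshold` for
    `sustained` consecutive samples."""
    hot = []
    for host, readings in samples.items():
        run = 0
        for pct in readings:
            run = run + 1 if pct > threshold else 0
            if run >= sustained:
                hot.append(host)
                break
    return hot

samples = {
    "esx-01": [55.0, 62.0, 58.0, 60.0],   # healthy
    "esx-02": [85.0, 91.0, 88.0, 93.0],   # sustained spike
    "esx-03": [95.0, 40.0, 96.0, 42.0],   # transient spikes only
}
print(flag_hot_hosts(samples))  # ['esx-02']
```

Requiring consecutive hot samples is what separates a placement problem worth acting on from ordinary bursty load.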

Companies moving to the cloud also have to change how they think about networking. “It can be hard to understand how network connection works when there aren’t wires to simply plug into a box, but instead virtual, invisible connections that need to be managed through APIs or online interfaces,” said Caldwell. One challenge for a company with multiple clients is keeping each client’s data separate from the others’. Grouping machines together and isolating them in their own network is the best approach to this challenge, and good monitoring tools, used wisely, help keep the network as reliable as possible.
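The grouping-and-isolation idea can be modeled very simply: every VM attaches to a network owned by exactly one client, and cross-tenant attachments are refused. This sketch is illustrative only; the class and names are hypothetical and not tied to any hypervisor's API:

```python
# Sketch of per-client network isolation: each network belongs to one
# client, so a VM from another tenant can never be attached to it.

class ClientNetwork:
    def __init__(self, client):
        self.client = client
        self.vms = []

    def attach(self, vm_name, vm_client):
        # Refuse to place a VM on another client's network.
        if vm_client != self.client:
            raise ValueError(f"{vm_name} belongs to {vm_client!r}, "
                             f"not {self.client!r}")
        self.vms.append(vm_name)

net_a = ClientNetwork("acme")
net_a.attach("acme-web-1", "acme")         # fine
try:
    net_a.attach("globex-db-1", "globex")  # cross-tenant: rejected
except ValueError as err:
    print(err)
```

Enforcing the rule at attach time, rather than auditing traffic after the fact, is what makes the isolation dependable.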

“Network connectivity comes down to whether the network connection is a single point of failure: If your virtualization solution is off-site, it’s only as good as the quality of the Internet connection between you and your provider,” said William L. Horvath with DoX Systems. A single connection between you and the Internet is one problem; you can reduce the risk by contracting with two or more ISPs and using routers that support trunking. Likewise, if your virtualization provider’s facility is in a single geographical location (say, Manhattan) that loses functionality for an extended period due to a natural disaster, you’re hosed. Horvath recalled his local Chamber of Commerce losing access to a cloud-based service not long ago because someone in the data center, which wasn’t owned by the service provider, forgot to disable the fire suppression system during emergency testing, destroying most of the hard drives in the servers.

To avoid the challenges involved in virtualization, Ross provided the following tips:

1. Plan on virtualizing everything — not just the servers but the network, the storage, the security … everything!

2. Standardize everything, from the operating systems on up through middleware and applications. The more uniformity there is across configurations, the easier it will be to scale workloads and move them optimally around the environment.

3. Ensure network capacity is adequate. Traffic flows will change dramatically, expanding and collapsing as virtualization and cloud are adopted, so plan for that churn.

4. Implement resource monitoring. Existing legacy tools will not provide the data or detail needed.

5. Implement a decommissioning process. In assessments, Ross repeatedly finds unused machines left running. In a virtual environment this can become a major issue, consuming resources and driving up costs.

6. Plan for backup and disaster recovery. This will drastically change in virtualization and must be addressed.

7. Train your team for managing the virtualized environment day to day, not just for the migration.
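The decommissioning process in tip 5 can start as a periodic sweep that flags VMs with no recorded activity beyond a grace period, so they can be reviewed before being powered off. A minimal sketch follows; the inventory format, VM names and 30-day cutoff are all assumptions for illustration:

```python
# Sketch of a decommissioning sweep: flag VMs with no recorded
# activity for a grace period. Data and cutoff are hypothetical.

from datetime import datetime, timedelta

def stale_vms(inventory, now, grace=timedelta(days=30)):
    """inventory: list of (vm_name, last_activity_datetime) pairs.
    Returns names of VMs idle longer than `grace`."""
    return [name for name, last_seen in inventory
            if now - last_seen > grace]

now = datetime(2016, 6, 1)
inventory = [
    ("build-agent-7", datetime(2016, 5, 30)),  # recently active
    ("demo-env-old", datetime(2016, 2, 14)),   # forgotten
    ("test-db-3", datetime(2016, 1, 2)),       # forgotten
]
print(stale_vms(inventory, now))  # ['demo-env-old', 'test-db-3']
```

Flagging for review rather than deleting outright keeps the sweep safe to run on a schedule.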

The cloud solves certain problems very well, and it gives SMBs the flexible infrastructure they require without heavy capital, hardware or payroll costs. Used wisely and with the right tools, the cloud can give companies a leg up.

Sue Poremba is a freelance writer focusing primarily on security and technology issues and occasionally blogs for Rackspace Hosting.
