
Building a Cloud Factory

A process empowering companies to more efficiently migrate workloads to the cloud

Few areas of human endeavor can match the pace of change in IT. Even by IT standards, the change being driven by cloud computing sometimes seems surprising. To refer to a virtual environment that has only recently been deployed as "legacy," as some organizations are now doing, underscores the fact that the only thing constant in the data center is change. To deal with change of this magnitude, which can involve transforming the workload hosting model of an entire organization, some industrial-strength thinking is required.

In order to tackle this challenge, it's important to properly frame the cloud transformation problem. Many associate cloud with agility, flexibility, cost transparency and other end-user-oriented benefits. But many of these attributes are primarily associated with new infrastructure requests, and specifically with the use of self-service portals to "spin up" infrastructure to host new applications or absorb transient processing demands. When it comes to migrating hundreds or thousands of existing workloads into cloud infrastructure, agility is not a benefit that is typically experienced. In fact, the opposite is often the case: because clouds require a higher degree of standardization (i.e., a finite catalog of sizes and software options), migrating existing physical and virtual servers into cloud models can actually be quite difficult. In other words, the very features that make clouds agile for new workload deployments can make them less agile from a transformation perspective.

This is where the notion of a factory comes in. In industrial processes, factories are the epitome of scalability, repeatability and productivity. Although they may take some effort to "tool up," once they are up and running they can handle a higher flow of activity, efficiently processing inputs to provide consistent output. This notion is also key to large-scale transformation. By applying a common approach that has been properly engineered to give repeatable results, organizations can greatly reduce the time and effort required to migrate to cloud infrastructure.

Within this concept, it is important to expand on what is meant by "properly engineered." Many organizations tackle these kinds of problems from a grassroots perspective, using spreadsheets and smart people to determine a course of action. The problem with this approach is that it rarely evolves to the point where it can generate truly accurate answers, mainly because the problem is too complex. Migrating workloads into clouds requires processing volumes of historical data, analyzing configuration information on the servers and applications being migrated, modeling target instance sizes and software stacks, enforcing corporate and regulatory requirements, honoring SLA and data protection rules, etc. Spreadsheets are not well suited to this, in much the same way that they are ill suited for use as corporate accounting platforms. Even if they can be coaxed into giving a decent answer for simple environments, they will not generate the reports needed to satisfy stakeholders, management, engineering and operations, all of whom need significant detail on the decisions being made in order to ensure benefits are achieved and risk is minimized.

Buried in the list of migration analysis requirements is a key concept linking them all together. This is the notion of policy, which represents the ground rules for how workloads should be hosted: where they should and should not go, how many resources they should be allocated, etc. Without properly modeled policies, hosting decisions are left to the practitioner performing the migration, and it can be hit-or-miss whether they do the right thing (or even follow the same policy twice in a row). Planning and managing cloud infrastructure without proper policies is like trying to fill out a tax return without instructions - there are just too many variables to get it right.
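To make the notion of policy concrete, here is a minimal sketch in Python of what such a set of ground rules might look like as a data structure. All of the field names and defaults are illustrative assumptions, not a reference to any particular product; real policies would be far richer:

    from dataclasses import dataclass

    @dataclass
    class HostingPolicy:
        """Illustrative ground rules for cloud hosting decisions."""
        history_window_days: int = 90          # how much utilization history to analyze
        target_cpu_utilization: float = 0.65   # sizing headroom target
        cpu_overcommit_ratio: float = 1.0      # 1.0 = strict reservation; >1.0 packs tighter
        max_iops: int = 5000                   # quantitative qualification limit
        allowed_data_classes: tuple = ("public", "internal")  # qualitative limit
        require_nplus1: bool = True            # resiliency rule for clusters

Capturing the rules in one place like this is what lets the same decision be made the same way twice, regardless of who runs the analysis.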

With all of these concepts in mind, the exact nature of the cloud factory becomes clearer. It divides the problem into a series of logical steps that combine data, target models, and cloud planning and management policies to automate the decisions about exactly where things go and how big to make them. The steps that make up the factory are listed below, and a brief illustrative code sketch for each follows the list:

  1. Candidate Qualification: This process determines whether a given set of workloads is suitable to be hosted in a given cloud environment. It is both qualitative and quantitative in nature, and is designed to separate true candidates from the workloads that are better suited to go elsewhere (more on this in step 6). Examples of quantitative criteria include maximum I/O rates, context-switching limitations, maximum CPU and memory sizes, etc. Qualitative criteria include data sensitivity, SLA requirements, backup strategy and other considerations. By applying a policy capturing all of these factors, a rapid and accurate assessment can be made (sketched below).
  2. Sizing: This takes the qualified candidates and determines which cloud instances are best suited to host them given their historical levels and patterns of utilization. This again is subject to policy, which governs how much history is considered, target utilization levels, etc. The result is a detailed specification of the instance sizes needed and the projected utilization levels in the "to be" environment. Note that the use of benchmarks is critical in this step, as the translation of CPU utilization from the current environment to the cloud depends on the relative speeds of the CPUs employed in each (sketched below).
  3. Load Balancing: Also a sizing step, this one focuses on the load balancers and clusters being migrated. Because cloud environments offer different sizing options, and can even offer more advanced "elasticity" features, it is not always desirable to do a straight one-to-one translation of these servers into cloud capacity. For example, an 8-way IIS cluster might translate into 12 small, 6 medium or 3 large instances. Of these options, the one that meets the policy criteria (e.g., size for yearly peak activity, allow for N+1 resiliency) at the lowest cost will be the winner (sketched below). This result is combined with the general sizing results from the previous step to provide a complete sizing plan.
  4. Software Stack Mapping: This step considers the OS and software configurations of the source servers and maps them onto the "closest" configuration available in the cloud (sketched below). Because cloud catalogs only offer a finite set of software options, this is effectively a standardization analysis. For Infrastructure-as-a-Service (IaaS), this step is typically limited to the OS-level configuration, matching the OS attributes of the existing servers and VMs to the operating systems on offer in the cloud (typically a much shorter list). For Platform-as-a-Service (PaaS), it also includes scrutiny of the actual software inventory and applications installed. The result may say "server X looks the most like an IIS v6 server, but differs from the standard image in the following ways..." This not only provides the optimal stack to deploy, but also generates a remediation list that is critical for reducing risk during implementation.
  5. Placement: Once the final specification is arrived at (through sizing, balancing and software mapping), the next step for internal cloud environments is determining exactly where the workloads should be placed in the infrastructure actually hosting the cloud environment. Because most clouds are based on virtual environments, the key is to fit the new VMs into the environment in a way that optimally leverages server resources (sketched below). This step looks somewhat similar to the placement of workloads in virtual environments (which tends to resemble placing Tetris blocks in available server capacity), but the policy regarding overcommit has a large influence on the resulting placements. If the policy is to strictly reserve the capacity for each cloud instance, then the environment will be very safe but relatively inefficient, as the workload density will be quite low (think of playing Tetris with the blocks wrapped in bubbles). If the policy is to fully overcommit resources, then the end customer may face a higher risk of contention if they place unanticipated demands on the environment, but the resulting higher density can significantly lower costs (think of Tetris blocks packed tightly together, requiring far less capacity).
  6. Exception Handling: Going back to step 1, there are typically components of an application or business service that are not suitable for hosting in the cloud. For these systems, it is necessary to evaluate other hosting options in order to determine what to do with them. Because there is often an order of precedence among the hosting options, this step involves systematically qualifying the rejected workloads against an ordered set of hosting strategies (sketched below). These strategies can include using cloud instances with customized allocations, using dedicated cloud servers, hosting in a virtual environment, using dedicated blades, using dedicated rack-mount servers or leaving the workloads alone (a last resort). By passing the rejected candidates through this gauntlet of options, each will arrive at a viable outcome.
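To illustrate step 1, here is a minimal qualification gate, reusing the HostingPolicy sketch above. The workload fields (peak_iops, cpu_cores, memory_gb, data_class), the catalog limits and the workloads list are all illustrative assumptions:

    def qualify(workload, policy):
        # Quantitative gates: reject anything beyond the cloud's limits.
        if workload["peak_iops"] > policy.max_iops:
            return False
        if workload["cpu_cores"] > 16 or workload["memory_gb"] > 128:
            return False  # larger than the biggest catalog instance (assumed)
        # Qualitative gate: data sensitivity must be allowed in this cloud.
        if workload["data_class"] not in policy.allowed_data_classes:
            return False
        return True

    candidates = [w for w in workloads if qualify(w, policy)]
    rejected   = [w for w in workloads if not qualify(w, policy)]  # handled in step 6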
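Step 2's benchmark translation is the subtle part: utilization observed on one CPU cannot be compared directly to capacity on another. A sketch of how normalized demand might drive instance selection, again with hypothetical field names:

    def required_target_utilization(source_util, source_benchmark, target_benchmark):
        # Demand in benchmark units = utilization x source capacity;
        # dividing by target capacity gives projected utilization there.
        return (source_util * source_benchmark) / target_benchmark

    def pick_instance(workload, catalog, policy):
        # Cheapest instance that keeps projected CPU under the policy
        # target and has enough memory; None means route to step 6.
        for inst in sorted(catalog, key=lambda i: i["hourly_cost"]):
            projected = required_target_utilization(
                workload["p95_cpu_util"],    # drawn from the policy's history window
                workload["host_benchmark"],
                inst["benchmark"])
            if projected <= policy.target_cpu_utilization \
                    and workload["memory_gb"] <= inst["memory_gb"]:
                return inst, projected
        return None, None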
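For step 3, the cluster example above (12 small vs. 6 medium vs. 3 large) reduces to ranking each viable configuration by cost. A sketch, assuming peak demand and instance capacity are expressed in the same benchmark units:

    import math

    def cluster_options(peak_demand, catalog, policy):
        # For each instance size: nodes needed to carry peak load at the
        # policy's target utilization, plus a spare if N+1 is required.
        options = []
        for inst in catalog:
            usable = inst["benchmark"] * policy.target_cpu_utilization
            nodes = math.ceil(peak_demand / usable)
            if policy.require_nplus1:
                nodes += 1
            options.append((nodes * inst["hourly_cost"], nodes, inst["name"]))
        return sorted(options)  # cheapest viable configuration first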
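Step 4 is essentially a nearest-match problem. A toy version might score each standard image against the server's attributes and emit the differences as the remediation list (the attribute names are assumptions):

    def map_stack(server, images):
        # Score each standard image against the server's attributes;
        # return the closest match plus the differences to remediate.
        keys = ("os", "os_version", "patch_level", "middleware")
        def score(img):
            return sum(server.get(k) == img.get(k) for k in keys)
        best = max(images, key=score)
        remediation = [k for k in keys if server.get(k) != best.get(k)]
        return best, remediation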
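Step 5 is a bin-packing exercise in which the overcommit policy scales the bins. A first-fit sketch, assuming each host dict tracks its used and total capacity:

    def place(instances, hosts, policy):
        # First-fit decreasing: biggest instances first, onto the first
        # host with room. The overcommit ratio scales effective CPU
        # capacity: 1.0 reserves fully (safe, low density); higher
        # values pack tighter at some risk of contention.
        placements = {}
        for inst in sorted(instances, key=lambda i: -i["cpu"]):
            for host in hosts:
                cpu_cap = host["cpu_total"] * policy.cpu_overcommit_ratio
                if (host["cpu_used"] + inst["cpu"] <= cpu_cap and
                        host["mem_used"] + inst["mem"] <= host["mem_total"]):
                    host["cpu_used"] += inst["cpu"]
                    host["mem_used"] += inst["mem"]
                    placements[inst["name"]] = host["name"]
                    break
            else:
                placements[inst["name"]] = None  # no room: more hosts needed
        return placements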
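Finally, step 6's gauntlet can be expressed as an ordered walk through hosting strategies, where each strategy has its own qualification test and the last resort always succeeds. The strategy names simply mirror the list above:

    STRATEGIES = [               # assumed order of precedence, per the text
        "custom_cloud_instance",
        "dedicated_cloud_server",
        "virtual_environment",
        "dedicated_blade",
        "dedicated_rack_server",
        "leave_alone",           # last resort
    ]

    def route_exception(workload, fits):
        # 'fits' maps a strategy name to a predicate testing suitability.
        # Walk the ordered strategies; fall through to the last resort,
        # so every rejected workload arrives at a viable outcome.
        for strategy in STRATEGIES[:-1]:
            if fits[strategy](workload):
                return strategy
        return "leave_alone"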

The result of applying these steps is a methodical, exhaustive and rapid process for planning cloud migrations. By taking a data-centric, policy-driven approach, fewer mistakes are made, less rework is required, and application owners and other stakeholders have much higher confidence that their workloads will arrive on the other end unscathed. This transparency, combined with the detailed specifications and implementation details that emerge, can greatly accelerate cloud initiatives. This not only reduces time-to-value, but also enables IT organizations to keep up with the pace of technology innovation, which shows no sign of letting up.

More Stories By Andrew Hillier

Andrew Hillier is CTO and co-founder of CiRBA, Inc., a data center intelligence analytics software provider that determines optimal workload placements and resource allocations required to safely maximize the efficiency of Cloud, virtual and physical infrastructure. Reach Andrew at [email protected]


