

Virtual Strategy - Virtually Right

With a private cloud strategy and dynamic data center you can quickly respond to rapid business fluctuations. But how do you get there?

This post was originally published as a Thanksgiving weekend special at virtual-strategy.com.
In the article I discussed some approaches for building a dynamic data center that not only addresses complexity and reduces cost, but also accelerates business response time, ensuring the organization realizes the true promise of cloud computing: business agility and customer responsiveness.

Cloud computing presents an appealing model for offering and managing IT services through shared and often virtualized infrastructure. It’s great for new business start-ups who don’t want the risk of a large on-premise technology investment, or organizations who can’t easily predict what the future demand will be for their services. But for most of us with existing infrastructure and resources, the picture is very different. We want to capitalize on the benefits of the cloud ― on demand, low risk, affordable computing ― but we’ve spent years investing in rooms stacked high with hardware and software to run our daily mission critical jobs and services.

So how do organizations in this situation make the shift from straightforward server consolidation to a dynamic, self-service virtualized data center? How do they reach the peak of standardized IT service delivery and agility that is in step with the needs of the business? Many virtualization deployments stall as organizations stop to deal with challenges like added complexity, staffing requirements, SLA management, or departmental politics. This “VM stall” tends to coincide with different stages in the virtualization maturity lifecycle, such as the transition from tier 2/3 server consolidation to mission-critical tier 1 applications, and from basic provisioning automation to a private/hybrid cloud approach.

The virtualization maturity lifecycle
The simple answer is to take it step-by-step, learning as you go, building maturity at every step. This will earn you the skills, knowledge, and experience needed to progress from an entry-level virtualization project to a mature dynamic data center and private cloud strategy.

It’s called the virtualization maturity lifecycle, and it builds in four steps. Just like pilots start their training on small planes (going full cycle from take-off to landing) before they move on to large commercial jets, it is advisable for organizations to implement these virtualization maturity steps iteratively. For example, start a full maturity cycle on test and development servers before moving to mission-critical servers and applications.
Start easy, by consolidating servers, to increase utilization and reduce your current carbon footprint. To ensure deep insight and continuity in support of the migration from physical to virtual, you might want to leverage image backup and physical-to-virtual restore tools that allow you to move your physical IBM, Dell and HP images directly to ready-to-run VM images for VMware, Sun, Citrix and Microsoft.
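At its core, that first consolidation step is a packing problem: fit many under-utilized physical workloads onto as few virtualization hosts as possible. As a purely illustrative sketch (the server names, CPU figures and first-fit-decreasing heuristic are all hypothetical, not a real capacity-planning tool):

```python
# Illustrative only: first-fit-decreasing packing of physical workloads
# onto virtualization hosts. Names and capacity figures are invented.

def consolidate(workloads, host_capacity):
    """Pack (name, cpu_demand) workloads onto the fewest hosts."""
    hosts = []  # each host tracks its placed VMs and remaining capacity
    for name, demand in sorted(workloads, key=lambda w: w[1], reverse=True):
        for host in hosts:
            if host["free"] >= demand:       # first host with room wins
                host["vms"].append(name)
                host["free"] -= demand
                break
        else:                                # no host fits: add a new one
            hosts.append({"vms": [name], "free": host_capacity - demand})
    return hosts

servers = [("mail", 30), ("web1", 20), ("web2", 20), ("crm", 45), ("batch", 10)]
plan = consolidate(servers, host_capacity=100)
print(len(plan), "hosts instead of", len(servers))  # 2 hosts instead of 5
```

Real consolidation tools weigh memory, I/O and licensing as well, but the utilization gain comes from the same basic idea.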

The next step involves optimizing the infrastructure. Apart from maintaining consistency, efficiency, and compliance across the virtual resources (which is fast proving to be even more complex in virtual than in physical environments), we analyze, monitor, (re-)distribute and tune our applications and services.

While optimizing, we also discover and document the rules we will automate in the next phase: rules about which applications best fit together, which areas are suitable for self-service and which types of services are most important. As you can imagine, the answers to this last question will be very different for a nuclear plant (safety first) compared to an online video rental service (customers first), which is why it is such an important step. If you skip this stage and go straight into automation, you’ll likely end up in the same situation that you’re in today, just automated.
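One way to picture the outcome of this discovery phase: the rules stop living in people’s heads and become data that the automation phase can evaluate mechanically. A minimal sketch, with entirely made-up service names and rule tables:

```python
# Hypothetical sketch: placement and priority rules captured during the
# optimization phase as plain data. All names and weights are invented.

AFFINITY_RULES = [
    # (service_a, service_b, rule): "together" = co-locate, "apart" = keep separated
    ("web", "cache", "together"),
    ("db-primary", "db-replica", "apart"),
]

SELF_SERVICE = {"test-vm", "dev-vm"}   # areas deemed safe for user self-service
PRIORITY = {"safety-monitor": 1, "billing": 2, "video-frontend": 3}  # 1 = highest

def allowed_on_same_host(a, b):
    """Check the recorded affinity rules for a pair of services."""
    for x, y, rule in AFFINITY_RULES:
        if {a, b} == {x, y}:
            return rule == "together"
    return True  # no rule recorded means no constraint

print(allowed_on_same_host("web", "cache"))             # True
print(allowed_on_same_host("db-primary", "db-replica")) # False
```

The nuclear plant and the video rental shop would fill these tables very differently; the point is that both end up with explicit, reviewable rules rather than tribal knowledge.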

A successful cloud strategy is all about agility and flexibility, and the next step in the virtualization maturity lifecycle helps take care of automation and the orchestration of your (now) virtual services. You can empower users to help themselves ― industrialize processes ― without calling IT for every service request. Automation has many advantages here. It is the catalyst to standardize your virtual infrastructure, integrate and orchestrate processes across IT silos, and accelerate the provisioning of virtual cloud services. Once the industrialized provisioning process is live, automation technologies can then also be used to monitor demand volumes, utilization levels and application response times and to assist root-cause analytics to help isolate and remediate virtual environment issues.
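To make the “industrialized provisioning” idea concrete, here is a hedged sketch of such a pipeline: a self-service request is accepted only if it matches a standard catalog item, and then a fixed sequence of orchestration steps runs without human intervention. The catalog entries and step names are assumptions for illustration, not any particular product’s API:

```python
# Hedged sketch of an automated, standardized provisioning pipeline.
# Catalog templates and orchestration step names are invented.

CATALOG = {"small-web": {"cpu": 2, "ram_gb": 4},
           "large-db":  {"cpu": 8, "ram_gb": 32}}

def provision(request):
    # Standardization: only catalog items can be self-served;
    # anything else goes back to IT for review.
    if request["template"] not in CATALOG:
        raise ValueError("non-standard request - route to IT for review")
    spec = CATALOG[request["template"]]
    steps = ["allocate", "configure_network", "apply_security_policy", "monitor"]
    log = [f"{step}:{request['name']}" for step in steps]  # stand-ins for real actions
    return {"name": request["name"], "spec": spec, "log": log}

vm = provision({"name": "web-01", "template": "small-web"})
print(vm["log"][-1])  # monitor:web-01
```

Note that monitoring is the last step of the pipeline itself, matching the point above: once provisioning is industrialized, the same automation watches demand, utilization and response times.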

The final stage is the centerpiece of a cloud strategy, a position which allows you to manage the definition, demand, and deployment of IT services: the dynamic data center. Your now agile infrastructure, delivered from a secure, highly available data center, enables you to quickly respond to rapid business fluctuations. To reach a dynamic data center, you need to automate the entire process of service delivery from request to fulfilment. This includes centralized service requests, automating the approval process so that department heads can quickly approve or reject requests, a standard and repeatable provisioning process, and standard configurations.
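The request-to-fulfilment chain described above can be sketched end to end. This is an illustration only; the auto-approval threshold and status names are assumptions, chosen to show how approval becomes a fast, largely automated gate rather than a bottleneck:

```python
# Illustrative request-to-fulfilment flow for a dynamic data center:
# centralized request -> automated approval -> standard provisioning.
# The "auto-approve small requests" policy is an invented example.

def fulfil(request, approver):
    ticket = {"request": request, "status": "submitted"}
    if request["cpu"] <= 4:
        ticket["status"] = "auto-approved"       # small requests skip the queue
    elif approver(request):
        ticket["status"] = "approved"            # department head signs off
    else:
        ticket["status"] = "rejected"
        return ticket
    ticket["status"] = "provisioned"             # standard, repeatable final step
    return ticket

print(fulfil({"name": "dev-box", "cpu": 2}, approver=lambda r: False)["status"])
# provisioned
```

Only the large, expensive requests ever reach a human approver; everything else flows straight through the standard process.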

This goes much further than the traditional dream of a “lights out” data center, which basically was a static conveyor belt-like factory where all labor was automated away. The dynamic data center is like a modern car factory, where robots perform almost all tasks, but in ever changing sequences and configurations, guided by supply-chain-lead orchestration.

The new normal
As we all know, technology changes fast. This advancement in technology is creating a “new normal” where relationships with customers are increasingly in a digital form and technology is no longer an enabler or accelerator of the business― it has become the business.

This is a theme picked up by Peter Hinssen, one of Europe's thought leaders on the impact of technology on our society. He evangelizes this new normal, arguing that in a digital world there will be new rules that define what is acceptable for IT, including zero tolerance for digital failure, an era of “good enough” functionality (60% functionality in six weeks rather than 90% in six months), and the need to move your architectures―including your new cloud architecture―from “built to last” to “designed to change”.
The lifecycle approach described earlier may be just what you need to help align your IT organization to what Hinssen calls the new normal. First you determine where opportunities exist for consolidation and rationalization across your physical and virtual environments ― assessing what you have in your data center environment and establishing a baseline for making decisions that take you to the next stage. Next, to achieve agility, you have to automate the provisioning and de-provisioning of virtualized resources, including essential elements such as identities and other management policies such as access rights.

The next step in delivering an on-time, risk-free (zero failure) cloud computing strategy is service assurance. You need to manage IT service quality and delivery based on business impact and priority — top-to-bottom and end-to-end. That includes, for example, delivering a superior online end-user experience with low-overhead application performance management, and end-to-end visibility into traffic flows and device performance. The new normal also needs to be secure. IT security management technologies must be applied in line with current regulations and end-user needs, enabling the virtual layer to be more secure.
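“Based on business impact and priority” is easy to say and easy to skip in practice. As a small sketch (the impact weights and services are invented), it simply means incidents are worked in order of business cost, not arrival order:

```python
# Sketch only: triaging service incidents by business impact rather than
# first-come-first-served. Impact weights and service names are invented.

BUSINESS_IMPACT = {"online-checkout": 100, "reporting": 20, "intranet": 5}

def triage(incidents):
    """Order incidents by (business impact x affected users), highest first."""
    return sorted(incidents,
                  key=lambda i: BUSINESS_IMPACT.get(i["service"], 1) * i["users"],
                  reverse=True)

queue = triage([
    {"service": "intranet", "users": 500},
    {"service": "online-checkout", "users": 40},
    {"service": "reporting", "users": 100},
])
print([i["service"] for i in queue])
# ['online-checkout', 'intranet', 'reporting']
```

Even with only 40 affected users, checkout jumps the queue, which is exactly the top-to-bottom, business-first ordering service assurance is meant to enforce.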

All these factors combined ultimately lead to agile IT service delivery. With agility, you can build and optimize scalable, reliable resources and entire applications quickly. By embarking on the virtualization maturity roadmap, you can move closer to a dynamic data center and successful cloud strategy.

Any shortcuts?
This evolutionary approach may sound very procedural (and safe). You may also be thinking: is this the only way? What if I need it now? Is there no revolutionary approach to help me get to a private cloud much more quickly? Just like developing countries, which have skipped the wired POTS phone system and moved directly to a 100% wireless infrastructure, a revolutionary approach does exist. The secret lies in the fact that ― in addition to the application itself ― the infrastructure required to deploy an application can be virtualized: load balancers, firewalls, NAS gateways, monitoring tools, etc. This entire entity ― the application and the infrastructure it needs to be successfully deployed ― can then be managed as a single object. Want to deploy a copy of the application? Simply load the object and all of the associated virtual appliances are automatically loaded, networked, secured and made ready. This is called an application-centric cloud.

With traditional virtualization, the servers are the parts that are virtualized, but afterward these virtual servers, networks, routers, load balancers and more still need to be managed and configured to work with the other parts of the data center, a task as complex and daunting as it was before. This is an infrastructure-centric cloud. With a fully application-centric cloud, the whole business service (with all its involved components) is virtualized, becoming a virtual service (instead of a bunch of virtual servers), which significantly reduces the complexity of managing these services.

As a result, application-centric clouds can now model, configure, deploy and manage complex, composite applications as if they were a single object. This enables operators to use a visual model of an application and the required infrastructure, and to store that model in the integrated repository.  Users or customers can then pull that model out of the repository, reuse it and deploy it to any data center around the world with the click of a button.  Interestingly, users deploy these services to a private cloud, or to an MSP, depending on who happens to offer the best conditions at that moment.  Sound too futuristic?  Far from it.  Several innovative service providers, like DNS Europe, Radix Technologies, and ScaleUp, are already doing exactly this on a daily basis.
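The “single object” idea can be sketched in a few lines. This is a hypothetical model, not any vendor’s actual repository format: the application and every virtual appliance it depends on live in one blueprint, and deploying it to a new target brings the whole set up together:

```python
# Hypothetical model of an application-centric cloud object: the application
# plus all required infrastructure, deployed as one unit to any target.
from dataclasses import dataclass, field

@dataclass
class ApplicationBlueprint:
    name: str
    components: list = field(default_factory=list)  # app + virtual appliances

    def deploy(self, target):
        # "One click": every component comes up together on the chosen target,
        # whether that is a private cloud or an MSP.
        return [f"{c} running on {target}" for c in self.components]

shop = ApplicationBlueprint("webshop",
        ["app-server", "load-balancer", "firewall", "nas-gateway", "monitor"])
for line in shop.deploy("private-cloud-eu"):
    print(line)
```

The same `shop` object could just as easily be deployed to a service provider’s data center; the model, not the operator, carries the knowledge of what the service needs.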

For many enterprises, governments and service provider organizations, the mission for IT today is no longer just about keeping the infrastructure running. It’s about the critical need to quickly create new services and revenue streams and improve the competitive position of their organization.
Some parts of your organization may not have time to evolve into a private cloud. For them, taking the revolutionary (or green-field) approach may be best, while for other, existing revenue streams an evolutionary approach, ensuring investment protection, may be best. In the end, customers will be able to choose the approach that best fits the task at hand, finding the right mix of both evolutionary and revolutionary to meet their individual needs.

More Stories By Gregor Petri

Gregor Petri is a regular expert or keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, contributing author to the Dutch “Over Cloud Computing” book, member of the Computable expert panel and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
