
With a private cloud strategy and dynamic data center you can quickly respond to rapid business fluctuations

But how do you get there?

This post was originally published as a Thanksgiving weekend special at virtual-strategy.com.
In the article I discussed some approaches for building a dynamic data center that not only addresses complexity and reduces cost, but also accelerates business response time, ensuring the organization realizes the true promise of cloud computing: business agility and customer responsiveness.

Cloud computing presents an appealing model for offering and managing IT services through shared and often virtualized infrastructure. It’s great for new business start-ups that don’t want the risk of a large on-premises technology investment, or for organizations that can’t easily predict future demand for their services. But for most of us with existing infrastructure and resources, the picture is very different. We want to capitalize on the benefits of the cloud ― on-demand, low-risk, affordable computing ― but we’ve spent years investing in rooms stacked high with hardware and software to run our daily mission-critical jobs and services.

So how do organizations in this situation make the shift from straightforward server consolidation to a dynamic, self-service virtualized data center? How do they reach the peak of standardized IT service delivery and agility that is in step with the needs of the business? Many virtualization deployments stall as organizations stop to deal with challenges like added complexity, staffing requirements, SLA management, or departmental politics. This “VM stall” tends to coincide with different stages in the virtualization maturity lifecycle, such as the transition from tier 2/3 server consolidation to mission-critical tier 1 applications, and from basic provisioning automation to a private/hybrid cloud approach.

The virtualization maturity lifecycle
The simple answer is to take it step-by-step, learning as you go, building maturity at every step. This will earn you the skills, knowledge, and experience needed to progress from an entry-level virtualization project to a mature dynamic data center and private cloud strategy.

It’s called the virtualization maturity lifecycle, and it builds in four steps. Just like pilots start their training on small planes (going full cycle from take-off to landing) before they move onto large commercial jets, it is advisable for organizations to implement these virtualization maturity steps iteratively. For example, start a full maturity cycle on test and development servers before moving to mission critical servers and applications.
Start easy by consolidating servers to increase utilization and reduce your current carbon footprint. To ensure deep insight and continuity in support of the migration from physical to virtual, you might want to leverage image backup and physical-to-virtual restore tools that allow you to move your physical IBM, Dell and HP images directly to ready-to-run VM images for VMware, Sun, Citrix and Microsoft.

The next step involves optimizing the infrastructure. Apart from maintaining consistency, efficiency, and compliance across the virtual resources (which is fast proving to be even more complex in virtual than in physical environments), we analyze, monitor, (re-)distribute and tune our applications and services.

While optimizing, we also discover and document the rules we will automate in the next phase: rules about which applications best fit together, which areas are suitable for self-service, and which types of services are most important. As you can imagine, the answers to this last question will be very different for a nuclear plant (safety first) than for an online video rental service (customers first), which is why this is such an important step. If you skip this stage and go straight into automation, you’ll likely end up in the same situation you’re in today, just automated.
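Discovered rules like these can be captured as plain data long before any automation engine is chosen. A minimal sketch in Python ― every rule, service name, and threshold below is a hypothetical illustration, not a real product API:

```python
# Hypothetical placement and self-service rules, codified as data so they can
# later be fed into an automation phase. Names are illustrative only.

AFFINITY_RULES = {
    # applications that best fit together on the same host
    ("web-frontend", "cache"): "co-locate",
    # applications that should never share a host
    ("database", "batch-analytics"): "separate",
}

SELF_SERVICE_CATALOG = {
    # service type -> (eligible for self-service, business priority)
    "dev-test-vm": (True, "low"),
    "reporting-vm": (True, "medium"),
    "safety-control-vm": (False, "critical"),  # a nuclear plant puts safety first
}

def can_self_provision(service_type: str) -> bool:
    """Return True when a request may bypass IT and be self-served."""
    eligible, _priority = SELF_SERVICE_CATALOG.get(service_type, (False, "unknown"))
    return eligible
```

Writing the rules down as data, rather than burying them in scripts, is what lets the later automation phase stay reviewable and auditable.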

A successful cloud strategy is all about agility and flexibility, and the next step in the virtualization maturity lifecycle helps take care of automation and the orchestration of your (now) virtual services. You can empower users to help themselves ― industrialize processes ― without calling IT for every service request. Automation has many advantages here. It is the catalyst to standardize your virtual infrastructure, integrate and orchestrate processes across IT silos, and accelerate the provisioning of virtual cloud services. Once the industrialized provisioning process is live, automation technologies can then also be used to monitor demand volumes, utilization levels and application response times and to assist root-cause analytics to help isolate and remediate virtual environment issues.

The final stage is the centerpiece of a cloud strategy, a position which allows you to manage the definition, demand, and deployment of IT services: the dynamic data center. Your now agile infrastructure, delivered from a secure, highly available data center, enables you to quickly respond to rapid business fluctuations. To reach a dynamic data center, you need to automate the entire process of service delivery from request to fulfilment. This includes centralized service requests, automating the approval process so that department heads can quickly approve or reject requests, a standard and repeatable provisioning process, and standard configurations.
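The request-to-fulfilment flow described above can be sketched as a small state machine; the states and transitions below are illustrative, not taken from any particular product:

```python
# Illustrative sketch of an automated request-to-fulfilment process:
# centralized request, department-head approval, then a standard,
# repeatable provisioning step.

VALID_TRANSITIONS = {
    "requested": {"approved", "rejected"},  # department head decides quickly
    "approved": {"provisioning"},           # standard provisioning process starts
    "provisioning": {"fulfilled"},          # standard configuration applied
}

class ServiceRequest:
    def __init__(self, requester: str, service: str):
        self.requester = requester
        self.service = service
        self.state = "requested"

    def advance(self, new_state: str) -> None:
        """Move the request forward; illegal jumps are rejected."""
        allowed = VALID_TRANSITIONS.get(self.state, set())
        if new_state not in allowed:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

req = ServiceRequest("alice", "dev-test-vm")
req.advance("approved")
req.advance("provisioning")
req.advance("fulfilled")
```

The point of the explicit transition table is that every request follows the same approved path ― exactly the standardization the dynamic data center depends on.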

This goes much further than the traditional dream of a “lights out” data center, which basically was a static conveyor belt-like factory where all labor was automated away. The dynamic data center is like a modern car factory, where robots perform almost all tasks, but in ever changing sequences and configurations, guided by supply-chain-lead orchestration.

The new normal
As we all know, technology changes fast. This advancement in technology is creating a “new normal” where relationships with customers are increasingly in a digital form and technology is no longer an enabler or accelerator of the business ― it has become the business.

This is a theme picked up by Peter Hinssen, one of Europe's thought leaders on the impact of technology on our society. He evangelizes this new normal, arguing that in a digital world there will be new rules that define what is acceptable for IT, including zero tolerance for digital failure, an era of “good enough” functionality (60% functionality in six weeks rather than 90% in six months), and the need to move your architectures―including your new cloud architecture―from “built to last” to “designed to change”.
The lifecycle approach described earlier may be just what you need to help align your IT organization to what Hinssen calls the new normal. First you determine where opportunities exist for consolidation and rationalization across your physical and virtual environments ― assessing what you have in your data center and establishing a baseline for making decisions that take you to the next stage. Next, to achieve agility, you have to automate the provisioning and de-provisioning of virtualized resources, including essential elements such as identities and other management policies such as access rights.
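Pairing provisioning with de-provisioning, so identities and access rights are created and revoked together with the resource, can be sketched as follows (all function and resource names are hypothetical):

```python
# Hedged sketch: provisioning attaches an identity and access rights to the
# resource up front; de-provisioning revokes them in the same step, so no
# orphaned accounts or stale permissions survive the resource.

def provision(vm_name: str, owner: str, registry: dict) -> None:
    registry[vm_name] = {
        "owner_identity": owner,            # identity created with the resource
        "access_rights": {owner: "admin"},  # management policy attached up front
    }

def deprovision(vm_name: str, registry: dict) -> None:
    # removing the VM also revokes its identity and access rights
    registry.pop(vm_name, None)

fleet = {}
provision("test-01", "alice", fleet)
assert fleet["test-01"]["owner_identity"] == "alice"
deprovision("test-01", fleet)
```

Treating the identity and its rights as part of the resource's lifecycle, rather than a separate ticket, is what makes de-provisioning safe to automate.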

The next step in delivering an on-time, risk-free (zero failure) cloud computing strategy is service assurance. You need to manage IT service quality and delivery based on business impact and priority — top-to-bottom and end-to-end. That includes, for example, delivering a superior online end-user experience with low-overhead application performance management, and end-to-end visibility into traffic flows and device performance. The new normal also needs to be secure. IT security management technologies must be applied in line with current regulations and end-user needs, enabling the virtual layer to be more secure.

All these factors combined ultimately lead to agile IT service delivery. With agility, you can build and optimize scalable, reliable resources and entire applications quickly. By embarking on the virtualization maturity roadmap, you can move closer to a dynamic data center and successful cloud strategy.

Any shortcuts?
This evolutionary approach may sound very procedural (and safe). You may also be thinking: is this the only way? What if I need it now? Is there no revolutionary approach to help me get to a private cloud much more quickly? Just like developing countries that have skipped the wired POTS phone system and moved directly to a 100% wireless infrastructure, a revolutionary approach does exist. The secret lies in the fact that ― in addition to the application itself ― the infrastructure required to deploy an application can be virtualized: load balancers, firewalls, NAS gateways, monitoring tools, etc. This entire entity ― the application and the infrastructure it needs to be successfully deployed ― can then be managed as a single object. Want to deploy a copy of the application? Simply load the object and all of the associated virtual appliances are automatically loaded, networked, secured and made ready. This is called an application-centric cloud.
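The “single object” idea can be illustrated with a toy blueprint model; the class, image, and appliance names below are assumptions for illustration, not any vendor’s API:

```python
# Sketch of an application-centric blueprint: the application plus the
# virtual appliances it needs, modeled and deployed as one object.

from dataclasses import dataclass, field

@dataclass
class ApplicationBlueprint:
    name: str
    app_image: str
    appliances: list = field(default_factory=list)  # load balancers, firewalls, ...

    def deploy(self, target: str) -> list:
        """Deploying the one object brings up every component it contains."""
        return [f"{target}: started {self.app_image}"] + [
            f"{target}: started {appliance}" for appliance in self.appliances
        ]

shop = ApplicationBlueprint(
    name="webshop",
    app_image="webshop-app-vm",
    appliances=["load-balancer-vm", "firewall-vm", "monitoring-vm"],
)
```

Deploying a copy to another data center is then one call against the same object, which is the contrast with the infrastructure-centric model where each appliance would be configured separately.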

With traditional virtualization, the servers are the parts that are virtualized, but afterward these virtual servers, networks, routers, load balancers and more still need to be managed and configured to work with the other parts of the data center, a task as complex and daunting as before. This is the infrastructure-centric cloud. With a fully application-centric cloud, the whole business service (with all its components) is virtualized, becoming a virtual service (instead of a bunch of virtual servers), which significantly reduces the complexity of managing these services.

As a result, application-centric clouds can now model, configure, deploy and manage complex, composite applications as if they were a single object. This enables operators to use a visual model of an application and the required infrastructure, and to store that model in the integrated repository. Users or customers can then pull that model out of the repository, reuse it and deploy it to any data center around the world with the click of a button. Interestingly, users deploy these services to a private cloud, or to an MSP, depending on who happens to offer the best conditions at that moment. Sound too futuristic? Far from it. Several innovative service providers, like DNS Europe, Radix Technologies, and ScaleUp, are already doing exactly this on a daily basis.

For many enterprises, governments and service provider organizations, the mission for IT today is no longer just about keeping the infrastructure running. It’s about the critical need to quickly create new services and revenue streams and improve the competitive position of their organization.
Some parts of your organization may not have time to evolve into a private cloud. For them, taking the revolutionary (or greenfield) approach may be best, while for other, existing revenue streams an evolutionary approach, ensuring investment protection, may be best. In the end, customers will be able to choose the approach that best fits the task at hand, finding the right mix of both evolutionary and revolutionary to meet their individual needs.


More Stories By Gregor Petri

Gregor Petri is a regular expert or keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, contributing author to the Dutch “Over Cloud Computing” book, member of the Computable expert panel and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
