Containers Expo Blog: Article

Visible Ops' Success Leads to a Vblock

Why the key to a successful Visible Ops framework is a Vblock

Love it or hate it, ITIL and Change Management will always be an integral part of any IT setup, with regulations such as Basel II, FISMA, SOX (Sarbanes-Oxley) and HIPAA constantly breathing down the necks of organization leaders. Having once had a "purple badge"-wearing ITIL guru for a manager, it always fascinated me how he'd advocate the framework as the solution to all our IT problems. While he'd harp on about defining repeatable and verifiable IT processes, it always ended up being theoretical rather than practical, a point often underlined by his own IT competency: "Err, Archie, how do I save this Word document, and what on Earth is that SAN thing you keep going on about?"

That distinction between theory and practice was never more apparent than in the almost pointless CAB (Change Advisory Board) meetings that took place on a weekly basis. While the Change Management processes themselves were painfully bureaucratic and often a diversion from actual operational work, the CAB meetings were a surreal experience. With barely anyone in attendance, the CAB would ask for a justification for each change and respond with "approved" or "rejected" when it was clear they had little or no understanding of the technical explanation or its implications.

Then there was the Security/Risk Compliance chap who'd lock himself in his room, glued to his Tripwire dashboard, carefully spying on any unapproved changes. Such was his fascination with Tripwire that he too barely attended the CAB meetings, indirectly emphasizing his lack of trust in the Change Management system and its relevance. So imagine his amazement when I introduced him to a new product we had implemented for our WINTEL environment called VMware and its VMotion feature. The fact that I had been seamlessly migrating VMs across physical servers without raising a change, and without him being able to pick it up on Tripwire, sent him into a perplexed frenzy. Somewhat amused by his constant head shaking, I decided to disclose that I had also been seamlessly migrating LUNs across different RAID Groups with HDS' Cruise Control to get more spindles working, upon which, like Batman, he rushed back to his cave to check whether "Big Brother" Tripwire had picked it up. Was I really supposed to raise a change for every VMotion or LUN migration?

Several years later, after moving from being a customer to being a technical consultant, my impression of the effectiveness of the CAB failed to improve. Midweek and late in the day in the customer's data center with their SAN Architect, I pointed out that they had cabled up the wrong ports in their SAN switches and that fixing this would require a change to be raised. "No need for that," replied the SAN Architect, "I'm one of the CAB members." Then, to my shock and in true Del Boy fashion, he duly proceeded to pull out and swap the FC cables to his production hosts with a big grin on his face. Several minutes later his phone rang, to which he replied, "It's okay, I've resolved it. There was a power failure on some servers." Then with a cheeky grin, a swing of the head and a wink of an eye, he turned to me and said, "There you go, sorted, lubbly jubbly!"

While my initial skepticism about ITIL's practicality was centered on my personal experiences, it was only reinforced by the number of long-white-bearded external auditors who would supposedly check whether proper controls existed within the many firefighting and cowboy organizational procedures I witnessed. Like a classroom of kids hearing the teacher coming up the corridor and scurrying back to their desks to present a fabricated impression of discipline and order, I never ceased to be astounded by the last-minute changes and running around of our compliance folk to ensure we successfully passed our audits. Despite having more daily Priority 1s than the canteen was serving decent hot meals, we still inexplicably passed every audit with flying colours, which in turn emboldened the rogue "under the radar" operational practices that served to keep the lights on.

So it was with a tarnished experience of ITIL, and with great curiosity and interest, that I came to look closer at the movement and initiative of ITPI's Visible Ops. While still mapping its ideas to ITIL terminology, the focus of Visible Ops is on increasing service levels, decreasing costs, and increasing security and auditability. In simplest terms, Visible Ops is a fast-track exercise towards an efficient operating model that replicates the researched processes of high-performing organizations in just four steps.

To summarize, the first of these four steps is Phase 1, or "Stabilize the Patient". With the understanding that almost 80% of outages are self-inflicted, any change outside of scheduled maintenance windows is quickly frozen. It then becomes mandatory for problem managers to have all change-related information at hand, so that when that 80% of "unplanned work" occurs, a full understanding of the root cause is quickly established. This phase starts with the systems and business processes responsible for the greatest amount of firefighting, with the aim that once they are stabilized they free up work cycles to establish a more secure and measured route for change.
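
The maintenance-window check at the heart of a Phase 1 change freeze can be surprisingly simple. A minimal sketch, assuming hypothetical window definitions (real windows would come from the change-management system, not hard-coded constants):

```python
from datetime import datetime

# Hypothetical scheduled maintenance windows: (weekday, start_hour, end_hour).
# Monday is 0; these example slots are invented for illustration.
MAINTENANCE_WINDOWS = [
    (5, 22, 24),  # Saturday 22:00 - midnight
    (6, 1, 5),    # Sunday 01:00 - 05:00
]

def change_allowed(when: datetime) -> bool:
    """Phase 1 "Stabilize the Patient": any change falling outside a
    scheduled maintenance window is frozen."""
    return any(
        when.weekday() == day and start <= when.hour < end
        for day, start, end in MAINTENANCE_WINDOWS
    )

print(change_allowed(datetime(2024, 6, 9, 2, 30)))   # Sunday 02:30  -> True
print(change_allowed(datetime(2024, 6, 11, 14, 0)))  # Tuesday 14:00 -> False
```

Anything that returns `False` here becomes unplanned work by definition, which is exactly the category Phase 1 sets out to shrink.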

Phase 2, termed "Catch & Release" and "Find Fragile Artifacts", relates to the infrastructure itself, with the understanding that it cannot be repeatedly replicated. With an emphasis on gaining an accurate inventory of assets, configurations and services, the objective is to identify the "artifacts" with the lowest change success rates, highest MTTR (Mean Time To Repair) and highest business downtime costs. By capturing all these assets, what they're running, the services that depend upon them and those responsible for them, an organization ends up in a far more secure position prior to a Priority 1 firefighting session.
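
Ranking artifacts by those three signals is straightforward once the inventory exists. A hedged sketch with an invented scoring rule and made-up CMDB data (any real weighting would be tuned to the organization):

```python
# Hypothetical asset inventory; in practice this would come from a CMDB.
# Fields: change success rate (0-1), MTTR in hours, downtime cost per hour.
assets = [
    {"name": "erp-db",   "success_rate": 0.62, "mttr_hours": 9.0, "cost_per_hour": 40_000},
    {"name": "web-farm", "success_rate": 0.91, "mttr_hours": 1.5, "cost_per_hour": 12_000},
    {"name": "mail",     "success_rate": 0.75, "mttr_hours": 4.0, "cost_per_hour": 3_000},
]

def fragility(asset: dict) -> float:
    """Crude fragility score: expected downtime cost of a failed change.
    Higher means a more fragile artifact."""
    failure_rate = 1.0 - asset["success_rate"]
    return failure_rate * asset["mttr_hours"] * asset["cost_per_hour"]

# Phase 2: surface the "fragile artifacts" first.
for asset in sorted(assets, key=fragility, reverse=True):
    print(f"{asset['name']}: {fragility(asset):,.0f}")
```

The point is not the formula but the ordering: the inventory tells you where the next Priority 1 is most likely to come from, before it arrives.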

Phase 3, or "Establish Repeatable Build Library", is focused on implementing an effective release management process. Using the previous phases as a stepping stone, this phase documents repeatable builds of the most critical assets and services, making it more cost-effective to rebuild them than to repair them. In a process that leads to an efficient mass production of standardized builds, senior IT operations staff can transform from a reactive to a proactive release management delivery model. This is achieved by operating early in the IT operations lifecycle, consistently working on software and integration releases prior to their deployment into production environments. At the same time, a reduction in unique production configurations is pushed for, increasing configuration lifespans prior to replacement or change, which in turn improves manageability and reduces complexity. Eventually the output of these repeatable builds is a set of "golden" images that have been tried, tested, planned and approved prior to production. When new applications, patches and upgrades are released for integration, these golden builds or images merely need updating.

The fourth and last phase, "Enable Continuous Improvement", is fairly self-explanatory in that it deals with building a closed loop between the release, control and resolution processes. With the previous three phases complete, the focus turns to metrics for the three key process areas (release, controls and resolution), specifically those that can facilitate quick decision making and provide accurate indicators of the work and its success in relation to the operational process. Drawing on ITIL's resolution process metrics of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), this phase looks at Release by measuring how efficiently and effectively infrastructure is provisioned. Controls are measured by how effectively change decisions keep production infrastructure available, predictable and secure, while Resolution is quantified by how effectively issues are identified and resolved.
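
Both resolution metrics fall out of an incident log directly. A minimal sketch, assuming invented outage records (real figures would come from incident management tooling):

```python
# Hypothetical outage log: (failure_start_hour, repair_done_hour), measured in
# hours since the start of the observation period.
outages = [(100.0, 102.0), (400.0, 401.0), (700.0, 706.0)]
observation_hours = 1000.0

repair_time = sum(end - start for start, end in outages)  # total hours down
uptime = observation_hours - repair_time                  # total hours up

mttr = repair_time / len(outages)  # Mean Time To Repair
mtbf = uptime / len(outages)       # Mean Time Between Failures

print(f"MTTR: {mttr:.1f} h, MTBF: {mtbf:.1f} h")
```

Tracking these two numbers release over release is the "closed loop" Phase 4 asks for: a change process that improves MTBF without inflating MTTR is demonstrably working.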

So while these four concise phases look great on paper, what really differentiates them from being just another theoretical process that fails to deliver in practical reality? If the manner in which IT is procured, designed, configured, validated and implemented remains the same, there is little if any chance for Visible Ops to succeed much further than the Purple Badge lovers of ITIL did. But what if the approach to IT, and more specifically its infrastructure, were to change from the traditional buy-your-own, bolt-it-together-and-pray-it-works method to a more sustainable and predictable model? What if the approach to infrastructure were a greenfield deployment of, or a seamless migration to, a pre-tested, pre-validated, pre-integrated, pre-built and pre-configured product, i.e., a true Converged Infrastructure? What impact could that have on the success of Visible Ops and the aforementioned four phases?

If we look at Phase 1 and "stabilizing the patient", this can be immediately achieved with a Vblock, where an organisation no longer has to spend time investigating and worrying about the risk and impact of change. With a standardized, product-based approach, as opposed to a bunch of components bundled together, thousands of hours of QA testing and analysis can be performed by VCE for each new patch, firmware upgrade or update on a like-for-like product owned by the customer. With this acting as the premise of a semi-annual release certification matrix that updates all components of the Converged Infrastructure as a comprehensive whole, the risks typically associated with the change process are eliminated. Furthermore, because changes are dictated by this pre-tested and pre-validated process and must adhere to the release certification matrix to remain within support, it helps eradicate rogue changes and comprehensively informs problem managers of the necessary changes and updates. Ultimately, Phase 1's objective of stabilization is immediately achieved via the risk mitigation that comes with a pre-engineered, pre-defined and pre-tested upgrade path.
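
Operationally, a release certification matrix reduces compliance to a version comparison. A hedged sketch, with entirely invented component names and version numbers (the real matrix is published by the vendor):

```python
# Hypothetical release certification matrix: component -> certified version.
# These entries are invented for illustration only.
CERTIFIED = {"compute_firmware": "4.2", "fabric_os": "8.1", "array_code": "7.6"}

def drift_from_matrix(installed: dict) -> list:
    """Return the components whose installed versions drift from the
    certified release matrix (and would fall out of support)."""
    return [
        name for name, version in installed.items()
        if CERTIFIED.get(name) != version
    ]

drift = drift_from_matrix(
    {"compute_firmware": "4.2", "fabric_os": "8.0", "array_code": "7.6"}
)
print(drift)  # ['fabric_os'] -> flag before it becomes a rogue change
```

Anything that shows up in the drift list is, by definition, a change made outside the pre-validated process, which is exactly what Phase 1 is trying to stamp out.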

The challenge of Phase 2, which in essence equates to an eventual full inventory of the infrastructure, is a painful process at the best of times, especially as new kit from various vendors is constantly being purchased and bolted on to existing kit. Moving to a Vblock simplifies this challenge as it's a single product, and hence a single SKU at procurement. Akin to an Apple MacBook, which is made up of many components (a hard drive, processor, CD-ROM drive, etc.), the Converged Infrastructure's components are formulated as a whole to provide the customer with a product. The parts of the product and all of their details are known to the manufacturer, i.e., VCE, and can easily be transferred to the customer as a single bill of materials, complete with serial numbers, ensuring an up-to-date and accurate inventory and consequently a simplified asset management process. When patches, upgrades and additions of new parts and components are required, they are automatically added to the inventory list of the single product, keeping asset management up to date.

The Release Management requirement of Phase 3 presents a challenge that is not only fraught with risk but also consumes a significant amount of staff and management time in keeping technology and infrastructure up to date. It entails the rigmarole of downloading, testing and resolving interoperability issues for component patches and releases, and relies heavily on information sharing between silos as well as the success of regression tests. The unique approach of a Vblock meets this challenge immediately by making pre-tested, validated software and firmware upgrades available to the end user, enabling them to locate the releases applicable to their Converged Infrastructure system. With regard to the rebuild-rather-than-repair approach stipulated in Phase 3, because a Vblock can be deployed and up and running in only 30 days, obtaining a like-for-like standardized infrastructure for new and upcoming projects is far easier than with the usual build-it-yourself infrastructure model. On a more granular level, with a management and orchestration stack and a self-service portal, golden-image VMs can be immediately deployed with a billing and chargeback model as well as integration with a CMDB. The result is quick and successful attainment of Phase 3 of the Visible Ops model via a unified release and configuration management methodology that is highly predictable and enhances availability by reducing interoperability issues.

Measuring the success of metrics such as MTTR and MTBF, as detailed in Phase 4, is ultimately linked to the success of the monitoring and support model in place for your infrastructure. With a product-based approach to infrastructure, the support model is also better equipped to ensure continuous improvement. Having an escalation response process based on a product, regardless of whether resolving a problem requires consultation with multiple experts or component teams, ultimately means a seamless, single point of contact for all issues. This end-to-end accountability for an infrastructure's support, maintenance and warranty makes the tracking of issue resolution and availability a much simpler model to measure and monitor. Furthermore, with open APIs that enable integration with comprehensive monitoring and management software platforms, the Converged Infrastructure can be monitored for utilization, performance and capacity, with potential issues flagged proactively to support.

As IT operational efficiency becomes more of an imperative for businesses across the globe, the theoretical practices that have failed to deliver are being assessed, questioned or, in some cases, simply continued with. What is often overlooked is that one of the key and inherent problems is the traditional approach to building and managing IT infrastructure. Even a radical and well-researched framework such as Visible Ops will eventually suffer, and at worst fail, if the IT infrastructure it is applied to was built with the same mode of thinking that created the problems. Fundamentally, whether or not the Visible Ops model is a serious consideration for your environment, adopting the framework with a Vblock makes the ability to stabilize, standardize and optimise your IT infrastructure and its delivery of services to the business a lot more practical and consequently a lot less theoretical.

More Stories By Archie Hendryx

SAN, NAS, Back Up / Recovery & Virtualisation Specialist.
