Visible Ops' Success Leads to a Vblock

Why the key to a successful Visible Ops framework is a Vblock

Love it or hate it, ITIL and Change Management will always be an integral part of any IT setup, with regulations such as Basel II, FISMA, SOX (Sarbanes-Oxley) and HIPAA constantly breathing down the necks and consciences of organization leaders. Having once had a "purple badge"-wearing ITIL guru for a manager, it always fascinated me how he'd advocate the framework as the solution to all our IT problems. While he'd harp on about defining repeatable and verifiable IT processes, it always ended up being theoretical as opposed to practical, a point often underlined by his own IT competency: "Err, Archie, how do I save this Word document, and what on Earth is that SAN thing you keep going on about?"

That distinction between theory and practice was never more apparent than in the almost pointless CAB (Change Advisory Board) meetings that took place on a weekly basis. While the Change Management processes themselves were painfully bureaucratic and often a diversion from doing actual operational work, the CAB meetings were a surreal experience. With barely anyone in attendance, the CAB would ask for a justification for each change and respond with "approved" or "rejected" when it was clear they had little or no grasp of the technical explanations or implications given to them.

Then there was the Security/Risk Compliance chap who'd lock himself in his room, glued to his Tripwire dashboard, carefully spying on any unapproved changes. Such was his fascination with Tripwire that he too barely attended the CAB meetings, indirectly underlining both his distrust of the Change Management system and its lack of relevance. So imagine his amazement when I introduced him to a new product we had implemented for our WINTEL environment called VMware and its VMotion feature. The fact that I had been seamlessly migrating VMs across physical servers without raising a change, and without him being able to pick it up on Tripwire, sent him into a perplexed frenzy. Somewhat amused by his constant head shaking, I decided to disclose that I had also been seamlessly migrating LUNs across different RAID Groups with HDS' Cruise Control to get more spindles working, upon which, like Batman, he rushed back to his cave to check whether "Big Brother" Tripwire had picked it up. Was I really supposed to raise a change for every VMotion or LUN migration?

Several years later, after moving from being a customer to being a technical consultant, my impression of the CAB's effectiveness failed to improve. Midweek and late in the day in a customer's data center with their SAN Architect, I pointed out that they had cabled up the wrong ports in their SAN switches and that this would require a change to be raised. "No need for that," replied the SAN Architect, "I'm one of the CAB members." Then, to my shock and in true Del Boy fashion, he duly proceeded to pull out and swap the FC cables to his production hosts with a big grin on his face. Several minutes later his phone rang, to which he replied, "It's okay, I've resolved it. There was a power failure on some servers." Then with a cheeky grin, a swing of the head and a wink of an eye, he turned to me and said, "There you go, sorted, lubbly jubbly!"

While my initial skepticism about ITIL's practicality was centered on my personal experiences, it was only reinforced by the number of long-white-bearded external auditors who would supposedly check whether proper controls existed within the many firefighting and cowboy organizational procedures I witnessed. Like a classroom of kids hearing the teacher coming up the corridor and scurrying to their desks to present a fabricated impression of discipline and order, I never ceased to be astounded by the last-minute changes and running around of our compliance folk to ensure we successfully passed our audits. Despite having more daily Priority 1s than the canteen was serving decent hot meals, we still inexplicably passed every audit with flying colours, which in turn emboldened the rogue "under the radar" operational practices that served to keep the lights on.

So it was with such a tarnished experience of ITIL, and with great curiosity and interest, that I took a closer look at ITPI's Visible Ops movement and initiative. While still mapping its ideas to ITIL terminology, the focus of Visible Ops is on increasing service levels, decreasing costs, and improving security and auditability. In simplest terms, Visible Ops is a fast-track / jumpstart exercise towards an efficient operating model that replicates the researched processes of high-performing organizations in just four steps.

To summarize, the first of these four steps is what is termed Phase 1, or "Stabilize the Patient." With the understanding that almost 80% of outages are self-inflicted, any change outside of scheduled maintenance windows is quickly frozen. It then becomes mandatory for problem managers to have all change-related information at hand so that when that 80% of "unplanned work" occurs, a full understanding of the root cause is quickly established. This phase starts with the systems and business processes responsible for the greatest amount of firefighting, with the aim that, once they are stabilized, work cycles are freed up to establish a more secure and measured route for change.
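To make the change-freeze idea concrete, here is a minimal sketch in Python, assuming an invented maintenance schedule and no ties to any real change-management tool: a proposed change is only allowed if it falls inside an approved window, and everything else stays frozen.

```python
from datetime import datetime, timezone

# Hypothetical maintenance windows: (start, end) pairs in UTC.
MAINTENANCE_WINDOWS = [
    (datetime(2024, 6, 1, 22, 0, tzinfo=timezone.utc),
     datetime(2024, 6, 2, 2, 0, tzinfo=timezone.utc)),
]

def change_allowed(proposed_start: datetime) -> bool:
    """Return True only if the proposed change starts inside an
    approved maintenance window; everything else is frozen."""
    return any(start <= proposed_start <= end
               for start, end in MAINTENANCE_WINDOWS)

# Example: a change proposed mid-afternoon is rejected by the freeze.
print(change_allowed(datetime(2024, 6, 1, 15, 0, tzinfo=timezone.utc)))  # False
```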

Phase 2, termed "Catch & Release" and "Find Fragile Artifacts," is concerned with the infrastructure itself, with the understanding that much of it cannot be repeatedly replicated. With an emphasis on gaining an accurate inventory of assets, configurations and services, the objective is to identify the "artifacts" with the lowest change success rates, highest MTTR and highest business downtime costs. By capturing all these assets, what they're running, the services that depend upon them and the people responsible for them, an organization ends up in a far more secure position before the next Priority 1 firefighting session.
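The ranking of fragile artifacts can be illustrated with a small sketch. The data, the risk-score formula and the asset names below are invented for illustration; the point is simply that combining change success rate, MTTR and downtime cost surfaces the most fragile artifacts first.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    changes_attempted: int
    changes_succeeded: int
    mttr_hours: float            # mean time to repair
    downtime_cost_per_hour: float

    @property
    def change_success_rate(self) -> float:
        return self.changes_succeeded / self.changes_attempted

    @property
    def risk_score(self) -> float:
        # Higher score = more fragile: low success rate, slow repairs,
        # expensive downtime.
        return (1 - self.change_success_rate) * self.mttr_hours * self.downtime_cost_per_hour

inventory = [
    Artifact("billing-db", 40, 28, 6.0, 12_000),
    Artifact("intranet-web", 25, 24, 1.5, 500),
]

# The most fragile artifacts surface at the top of the list.
for a in sorted(inventory, key=lambda x: x.risk_score, reverse=True):
    print(f"{a.name}: success={a.change_success_rate:.0%}, risk={a.risk_score:,.0f}")
```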

Phase 3, or "Establish Repeatable Build Library," is focused on implementing an effective release management process. Using the previous phases as a stepping stone, this phase documents repeatable builds of the most critical assets and services, making it more cost effective to rebuild them than to repair them. In a process that leads to an efficient mass production of standardized builds, senior IT operations staff can transform from a reactive to a proactive release management delivery model by operating early in the IT operations lifecycle and consistently working on software and integration releases before they are deployed into production environments. At the same time, a reduction in unique production configurations is pushed for, increasing configuration lifespans before replacement or change, which in turn improves manageability and reduces complexity. Eventually the output of these repeatable builds is a set of "golden" images that have been tried, tested, planned and approved prior to production. When new applications, patches and upgrades are released for integration, these golden builds or images merely need updating.
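As a rough sketch of the "rebuild rather than repair" idea, the snippet below keeps a golden-build manifest as data and turns a rebuild request into a work order. The build names, package versions and manifest fields are hypothetical and not taken from any particular build tool.

```python
# Hypothetical "golden build" manifest: the repeatable definition lives in
# version control, so re-provisioning a host is cheaper than hand-repairing it.
GOLDEN_BUILDS = {
    "web-frontend-v3": {
        "base_image": "rhel-7.9-hardened",
        "packages": ["httpd-2.4", "php-7.4"],
        "patch_level": "2024-06",
        "approved_by": "release-board",
    },
}

def rebuild(host: str, build_id: str) -> dict:
    """Return the work order for re-provisioning a host from a golden build
    instead of repairing its unique, hand-crafted configuration."""
    build = GOLDEN_BUILDS[build_id]
    return {"host": host, "action": "reprovision", **build}

print(rebuild("web-04", "web-frontend-v3"))
```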

The fourth and last phase, entitled "Enable Continuous Improvement," is fairly self-explanatory in that it deals with building a closed loop between the release, control and resolution processes. With the previous three phases complete, the focus turns to metrics for the three key process areas (release, controls and resolution), specifically those that can facilitate quick decision making and provide accurate indicators of the work and its success in relation to the operational process. Drawing on ITIL's resolution process metrics of Mean Time Between Failures (MTBF) and Mean Time to Repair (MTTR), this phase looks at Release by measuring how efficiently and effectively infrastructure is provisioned. Controls are measured by how effectively the change decisions that are made keep production infrastructure available, predictable and secure, while Resolution is quantified by how effectively issues are identified and resolved.
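For readers who haven't worked with these metrics, here is a tiny worked example using made-up incident figures: MTBF is the average uptime between failures, MTTR the average repair time, and the two together give a simple availability figure.

```python
# Worked example of the Phase 4 metrics from a simple (invented) incident log.
# Each tuple is (hours_in_service_before_failure, hours_to_repair).
incidents = [(300, 4), (450, 2), (150, 6)]

mtbf = sum(up for up, _ in incidents) / len(incidents)    # Mean Time Between Failures
mttr = sum(fix for _, fix in incidents) / len(incidents)  # Mean Time To Repair

availability = mtbf / (mtbf + mttr)
print(f"MTBF={mtbf:.0f}h, MTTR={mttr:.1f}h, availability={availability:.2%}")
```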

So while these four concise and particular phases look great on paper, what really differentiates them from being just another theoretical process that fails to deliver in practice? If the manner in which IT is procured, designed, configured, validated and implemented remains the same, there is little if any chance for Visible Ops to succeed any further than the Purple Badge lovers of ITIL. But what if the approach to IT, and more specifically its infrastructure, was to change from the traditional buy-your-own, bolt-it-together-and-pray-that-it-works method to a more sustainable and predictable model? What if the approach to infrastructure was a greenfield deployment of, or a seamless migration to, a pre-tested, pre-validated, pre-integrated, pre-built and pre-configured product, i.e., a true Converged Infrastructure? What impact could that have on the success of Visible Ops and the aforementioned four phases?

If we look at Phase 1 and "stabilizing the patient," this can be immediately achieved with a Vblock, where an organisation no longer has to spend time investigating and worrying about the risk and impact of change. By having a standardized, product-based approach as opposed to a bunch of components bundled together, thousands of hours of QA testing and analysis can be performed by VCE for each new patch, firmware upgrade or update on a like-for-like product that is owned by the customer. With this acting as the basis of a semi-annual release certification matrix that updates all of the components of the Converged Infrastructure as a comprehensive whole, the risks typically associated with the change process are eliminated. Furthermore, as changes are dictated by this pre-tested and pre-validated process and need to adhere to the release certification matrix to remain within support, it helps eradicate rogue changes as well as comprehensively inform problem managers of the necessary changes and updates. Ultimately, Phase 1's objective of stabilization is immediately achieved via the risk mitigation that comes with implementing a pre-engineered, pre-defined and pre-tested upgrade path.
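A release certification matrix can be thought of as a simple lookup that every proposed change is validated against. The sketch below is only an illustration of that idea; the component names and version strings are invented and do not represent an actual VCE release matrix.

```python
# Hypothetical release certification matrix: the component versions that
# have been tested together as one certified release.
CERTIFIED_RELEASE = {
    "compute_firmware": "4.2(1)",
    "storage_os": "6.1.2",
    "hypervisor": "ESXi 6.7 U3",
}

def validate_change(current: dict, proposed: dict) -> list:
    """Return the components whose resulting version would fall outside
    the certified matrix, i.e. changes that would break support."""
    merged = {**current, **proposed}
    return [c for c, v in merged.items() if CERTIFIED_RELEASE.get(c) != v]

drift = validate_change(
    current={"compute_firmware": "4.2(1)", "storage_os": "6.1.2"},
    proposed={"hypervisor": "ESXi 7.0"},   # not in the certified matrix
)
print("Out-of-matrix components:", drift)
```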

The challenge of Phase 2, which in essence equates to an eventual full inventory of the infrastructure, is a painful process at the best of times, especially as new kit from various vendors is constantly being purchased and bolted on to existing kit. Moving to a Vblock simplifies this challenge as it's a single product and hence a single SKU at procurement. Akin to purchasing an Apple MacBook that is made up of many components, e.g., a hard drive, processor, CD-ROM etc., the Converged Infrastructure's components are formulated as a whole to provide the customer a product. The parts of the product and all of their details are known to the manufacturer, i.e., VCE, and can easily be transferred to the customer as a single bill of materials with serial numbers etc., ensuring an up-to-date and accurate inventory and consequently a simplified asset management process. When patches, upgrades and additions of new parts and components are required, they are automatically added to the inventory list of the single product, keeping asset management current.
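The single-SKU idea boils down to one product-level bill of materials that can be flattened into per-component asset records. The sketch below assumes an invented BOM structure and serial numbers purely for illustration.

```python
# Hypothetical single-SKU bill of materials expanded into per-component
# asset records for the asset register / CMDB.
product_bom = {
    "sku": "VBLOCK-340",
    "serial": "VB340-000123",
    "components": [
        {"type": "blade", "model": "B200 M4", "serial": "FCH1234"},
        {"type": "array", "model": "VNX5600", "serial": "CKM5678"},
    ],
}

def to_asset_records(bom: dict) -> list:
    """Flatten the product-level BOM into individual asset entries that
    stay tied back to the parent product."""
    return [
        {"parent_sku": bom["sku"], "parent_serial": bom["serial"], **c}
        for c in bom["components"]
    ]

for record in to_asset_records(product_bom):
    print(record)
```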

The Release Management requirement of Phase 3 presents a challenge that is not only fraught with risk but also takes up a significant amount of staff and management time to ensure that technology and infrastructure remain up to date. This entails the rigmarole of downloading, testing and resolving interoperability issues of component patches and releases, and relies heavily on information sharing between silos as well as the success of regression tests. The unique approach of a Vblock meets this challenge immediately by making pre-tested, validated software and firmware upgrades available to the end user, enabling them to locate the releases applicable to their Converged Infrastructure system. With regards to the rebuild-rather-than-repair approach stipulated in Phase 3, because a Vblock can be deployed and up and running in only 30 days, having a like-for-like standardized infrastructure for new and upcoming projects is a far easier process than the usual build-it-yourself infrastructure model. On a more granular level, by having a management and orchestration stack with a self-service portal, golden image VMs can be immediately deployed with a billing and chargeback model as well as integration with a CMDB. The result is a quick and successful attainment of Phase 3 of the Visible Ops model via a unified release and configuration management methodology that is highly predictable and enhances availability by reducing interoperability issues.
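To show how the self-service flow ties together, here is a rough sketch, not tied to any particular orchestration product: deploying from a golden image also registers the VM in a CMDB and opens a chargeback line. Every function, identifier and rate below is hypothetical.

```python
from datetime import date
from uuid import uuid4

# Hypothetical self-service deployment flow: clone a golden image, register
# the new VM in the CMDB, and open a chargeback line for the requesting team.
CMDB: list = []
CHARGEBACK: list = []

def deploy_from_golden_image(image: str, team: str, monthly_rate: float) -> str:
    vm_id = f"vm-{uuid4().hex[:8]}"
    CMDB.append({"ci": vm_id, "image": image, "owner": team})
    CHARGEBACK.append({"team": team, "ci": vm_id,
                       "rate": monthly_rate, "start": date.today().isoformat()})
    return vm_id

vm = deploy_from_golden_image("win2019-golden-2024-06", "finance", 180.0)
print(vm, CMDB[-1], CHARGEBACK[-1])
```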

Measuring the success of metrics such as MTTR and MTBF, as detailed in Phase 4, is ultimately linked to the success of the monitoring and support model that's in place for your infrastructure. With a product-based approach to infrastructure, the support model will also be better equipped to ensure continuous improvement. Having an escalation response process that is based on a product, regardless of whether resolving a problem requires consultation with multiple experts or component teams, ultimately means a seamless, single point of contact for all issues. This end-to-end accountability for the infrastructure's support, maintenance and warranty makes tracking issue resolution and availability a much simpler model to measure and monitor. Furthermore, with open APIs that enable integration with comprehensive monitoring and management software platforms, the Converged Infrastructure can be monitored for utilization, performance and capacity management, with potential issues flagged proactively to support.
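Proactive flagging is essentially threshold checking on utilization and capacity metrics. The sketch below assumes metrics have already been pulled from some monitoring API; the metric names and thresholds are invented for illustration only.

```python
# Hypothetical proactive check: compare utilization metrics (already pulled
# from a monitoring platform) against thresholds and raise a support flag
# before an outage occurs.
THRESHOLDS = {"cpu_pct": 85, "storage_pct": 80, "fabric_errors_per_hr": 10}

def flag_issues(metrics: dict) -> list:
    """Return the metrics that have crossed their proactive thresholds."""
    return [name for name, value in metrics.items()
            if value >= THRESHOLDS.get(name, float("inf"))]

sample = {"cpu_pct": 62, "storage_pct": 91, "fabric_errors_per_hr": 3}
for issue in flag_issues(sample):
    print(f"Proactive alert: {issue} at {sample[issue]} (threshold {THRESHOLDS[issue]})")
```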

As IT operational efficiency becomes more of an imperative for businesses across the globe, the theoretical practices that have failed to deliver are being assessed, questioned or, in some cases, simply continued with. What is often overlooked is that one of the key and inherent problems is the traditional approach to building and managing IT infrastructure. Even a radical and well-researched approach and framework such as Visible Ops will eventually suffer, and at worst fail, if the IT infrastructure that the framework is applied to was built by the same mode of thinking that created the problems. Fundamentally, whether the Visible Ops model is a serious consideration for your environment or not, by adopting the framework with a Vblock, the ability to stabilize, standardize and optimise your IT infrastructure and its delivery of services to the business becomes a lot more practical and consequently a lot less theoretical.

More Stories By Archie Hendryx

SAN, NAS, Back Up / Recovery & Virtualisation Specialist.
