You Can't Have IT as a Service Until IT Has Infrastructure as a Service

The underlying premise of delivering IT “as a service” is that the services exist to be delivered in the first place

Oh, it’s on now. IT has been served with a declaration of intent: eliminate IT and the bottlenecks that are apparently at the heart of a long application deployment lifecycle. Ignoring reality for the moment, the concept of IT as a Service is in many ways well-suited to solving both sets of issues (real and perceived) on the business and the IT sides of the house. By making the acquisition and deployment of server infrastructure a self-service process, IT relinquishes responsibility for deploying applications. It means that if a project is late, business stakeholders can no longer point to the easy scapegoat of IT and must accept accountability.

Or does it?

WHAT does “IT as a SERVICE” MEAN ANYWAY?

We can’t really answer that question until we understand what all this “IT as a Service” hype is about, can we?

The way tech journalists report on “IT as a Service” and its underlying business drivers, you’d think the concept is essentially centered on the elimination of IT in general. That’s not really the case. On the surface it appears to be, but appearances are only skin deep. What businesses want is a faster provisioning cycle across IT: access, server infrastructure, applications, data. They want “push-button IT”; they want IT as a vending machine, with a cornucopia of services available for their use and as little human interaction as possible.

What that all boils down to is this: the business stakeholders want efficiency of process.

Eric Knorr of InfoWorld summed it up well when he recently wrote in “What the ‘private cloud’ really means”:

Nonetheless, the model of providing commodity services on top of pooled, well-managed virtual resources has legs, because it has the potential to take a big chunk of cost and menial labor out of the IT equation. The lights in the data center will never go out. The drive for greater efficiency, though, has had a dozen names in the history of IT, and the private cloud just happens to be the latest one.

In reality, all we’re talking about with “IT as a Service” is private cloud, but “IT as a Service” appears to have a strong(er) set of legs upon which to stand, primarily because purists and pundits prefer to distinguish between the two. So be it; what you call it is not nearly as important as what it does – or is intended to do.

Interestingly enough, VMware made a huge push for “IT as a Service” at VMworld last month and, in conjunction with that, released a number of product offerings to support the vision of “IT as a Service.” But while there was much brouhaha regarding the flexibility of a virtualized infrastructure, there was very little to support the longer vision of “IT as a Service.” It was more “IT as a Bunch of Virtual Machines and Provisioning Services” than anything else. Now, that’s not to say that virtualization of infrastructure doesn’t enable “IT as a Service”; it does, in the sense that it makes one piece of the overall concept possible: self-service provisioning. But it does not in any way, shape or form assist in the much larger and more complex task of actually enabling the services that need to be provisioned. Even the integration across the application delivery network with the core server infrastructure provisioning services – so necessary to accomplish something like live cloudbursting on demand – relies upon virtualization only for the server/application resources. The integration that instructs and coordinates the application delivery network is accomplished via services, via an API, and whether the underlying form factor is virtual or iron is completely irrelevant.
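
To make that last point concrete, here is a minimal sketch of what “integration via an API” can look like: a newly provisioned application instance is registered with the application delivery tier through a management call. The endpoint, credentials, pool name, and resource paths below are invented for illustration (this is not any particular product’s API), and the call is the same whether the controller underneath is a virtual appliance or hardware.

```python
# Hypothetical sketch: after a new application instance is provisioned
# (virtual or physical), the application delivery controller is told about
# it through its management API. The form factor underneath is irrelevant.
# Assumes the third-party 'requests' library is installed; the endpoint,
# credentials, and resource paths are made up for illustration.
import requests

ADC_API = "https://adc.example.com/api"  # hypothetical management endpoint
AUTH = ("ops_user", "ops_password")      # placeholder credentials

def add_member_to_pool(pool_name: str, member_ip: str, member_port: int) -> None:
    """Register a freshly provisioned instance with a load-balancing pool."""
    resp = requests.post(
        f"{ADC_API}/pools/{pool_name}/members",
        json={"address": member_ip, "port": member_port},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()

# A cloudburst workflow might provision a VM and then call, for example:
# add_member_to_pool("web_pool", "10.0.4.17", 8080)
```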

IT as a SERVICE REALLY MEANS a SERVICE-ORIENTED OPERATIONAL INFRASTRUCTURE

I’m going to go out on a limb and say that what we’re really doing here is applying the principles of SOA to IT and departmental function. Yeah, I said that. Again.

Take a close look at this diagram and tell me what you see. No, never mind, I’ll tell you what I see: a set of services across the entire IT infrastructure landscape that, when integrated together, form a holistic application deployment and delivery architecture. A service-oriented architecture. And what this whole “IT as a Service” thing is really about is offering up operational processes as a service to business folks. Just as SOA was meant to encapsulate business functions as services, when it’s IT being pushed into the service-oriented mold you get operational functions as services: provisioning, metering, migration, scalability.

It’s Infrastructure as a Service, when you get down to it.
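
As a rough illustration of that idea, and nothing more than a sketch with invented class and method names, each operational function can be modeled as a service behind a uniform contract, so that provisioning, metering, and scaling are invoked the same way no matter what implements them underneath.

```python
# A minimal sketch of "operational functions as services": each IT function
# sits behind a uniform interface so it can be composed into a larger,
# service-oriented operational architecture. Names are illustrative only.
from abc import ABC, abstractmethod

class OperationalService(ABC):
    """Common contract every operational service exposes."""

    @abstractmethod
    def invoke(self, **params) -> dict:
        ...

class ProvisioningService(OperationalService):
    def invoke(self, **params) -> dict:
        # e.g. spin up N instances of an application tier
        return {"status": "provisioned", "instances": params.get("count", 1)}

class MeteringService(OperationalService):
    def invoke(self, **params) -> dict:
        # e.g. report resource consumption for chargeback
        return {"status": "metered", "resource": params.get("resource")}

class ScalabilityService(OperationalService):
    def invoke(self, **params) -> dict:
        # e.g. attach an auto-scaling policy to an application
        return {"status": "scaling-policy-applied", "app": params.get("app")}

# Composed together, these services are the "Infrastructure as a Service"
# layer the business actually interacts with.
catalog = {
    "provision": ProvisioningService(),
    "meter": MeteringService(),
    "scale": ScalabilityService(),
}
```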

In order for IT to be a push-button, self-service, never-talk-to-the-geeks-in-the-basement-again organization (which, admittedly, looks just as good from the other side as a never-talk-to-the-overly-demanding-business-folks-again organization), IT not only has to enable provisioning of compute resources suitable for automated billing and management, but it also has to make available the rest of the infrastructure as services that can be (1) provisioned just as easily, (2) managed uniformly, and (3) integrated with some sort of human-readable/configurable policy creation tool that can translate business language, à la “make it faster”, into something that can actually be implemented in the infrastructure, à la “apply an application acceleration policy”. I’m simplifying, of course, but ultimately we really do want it that simple, don’t we? We’ll never get there if we don’t have the services available in the first place.
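
That translation step is the piece most often glossed over, so here is a deliberately over-simplified sketch of it. The business phrases and policy names are hypothetical; the point is only that a plain-language request has to resolve to a concrete, deployable infrastructure policy.

```python
# Hedged sketch of the "translation" step: a human-readable business request
# is mapped onto a concrete infrastructure policy. Policy names are invented.
BUSINESS_TO_POLICY = {
    "make it faster": "apply-application-acceleration-policy",
    "make it more secure": "apply-web-application-firewall-policy",
    "make it always available": "apply-high-availability-policy",
}

def translate(business_request: str) -> str:
    """Turn a plain-language business request into a deployable policy name."""
    try:
        return BUSINESS_TO_POLICY[business_request.lower().strip()]
    except KeyError:
        raise ValueError(f"No infrastructure policy mapped to: {business_request!r}")

print(translate("Make it faster"))  # -> apply-application-acceleration-policy
```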

Sound familiar, SOA architects? Business analysts? It should. A complete application deployment consists of a set of dynamic services that can be provisioned in conjunction with application resources to provide a unified “application” that is fast, reliable, and secure. A composite “application” that is essentially a mash-up of infrastructure services enforcing application-specific policies to meet the requirements of the business stakeholder.
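
Described as data, such a composite might look something like the sketch below. Every key and value here is made up for illustration; what matters is that the application resources and the infrastructure services that must accompany them are declared, and provisioned, together.

```python
# Illustrative only: a composite "application" described as a mash-up of
# application resources plus the infrastructure services and policies that
# must be provisioned alongside them. Keys and values are hypothetical.
composite_application = {
    "name": "order-entry",
    "application_resources": ["order-entry-web:3", "order-entry-db:1"],
    "infrastructure_services": [
        {"service": "load-balancing", "policy": "round-robin"},
        {"service": "acceleration", "policy": "compress-and-cache"},
        {"service": "security", "policy": "waf-baseline"},
    ],
}

def deploy(descriptor: dict) -> None:
    """Walk the descriptor and provision each piece through its service."""
    for resource in descriptor["application_resources"]:
        print(f"provisioning application resource: {resource}")
    for svc in descriptor["infrastructure_services"]:
        print(f"enabling {svc['service']} with policy {svc['policy']}")

deploy(composite_application)
```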

It’s SOA. Pure and simple. And while rapid provisioning is made easier by virtualization of those components, it’s not a necessity. What’s necessary is that the components provide for, and are managed through, services. Remember, there’s no magical fairy dust that goes along with the hardware/software-to-virtual-network-appliance transformation that suddenly springs forth a set of services like a freaking giant beanstalk out of a magic bean. The components must be enabled to both be and provide services, and from those services the components can be deployed, configured, integrated, and managed as a service.

WITHOUT the SERVICES IT IS IT as USUAL

If the only services available in this new “IT as a Service” paradigm are provisioning and metering and possibly migration, then IT as a Service does not actually exist. What happens then is that the angst of the business regarding the lengthy acquisition cycles for compute resources simply becomes focused on the next phase of the deployment – the network, or the security, or the application delivery components. And as each set of components is servified and made available on the Burger King IT Infrastructure menu, the business will turn its baleful eye on the next set of components…and the next…and the next.

Until there exists an end-to-end set of services for deploying, managing, and migrating applications, “IT as a Service” will not truly exist and the role of IT will remain largely the same – manual configuration, integration, and management of a large and varied set of infrastructure components.



Read the original blog entry...

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
