Cloud Computing: Rethinking Control of IT

Executives are still dead set on building Private Clouds. The true reason for this stubbornness is the battle over control.

In my role as a globetrotting Cloud consultant, I continue to be amazed at how many executives, both in IT and in the lines of business, still favor Private Clouds over Public. These managers are perfectly happy to pour money into newfangled data centers (sorry, “Private Clouds”), even though Amazon Web Services (AWS) and its brethren are reinventing the entire world of IT. Their reason? Sometimes they believe Private Clouds will save them money over the Public Cloud option. No such luck: Private Clouds are dreadfully expensive to build, staff, and manage, while Public Cloud services continue to fall in price. Others point to security as the problem. No again. OK, maybe Private Clouds will give us sufficient elasticity? Probably not. Go through all the arguments, however, and they’re still dead set on building that Private Cloud. What gives? The true reason for this stubbornness, of course, is the battle over control.

Thinking Like a Control Freak
IT executives in particular have always been control freaks. Our IT environments have been filled with fragile, flaky gear for so long that we figure the only way to run the IT shop is to control everything, grudgingly doling out bits of functionality and information to business users, but only when they ask nicely.

But this old mainframe reality has been fading for years now. The moves to client/server, to n-tier, to the Internet, and now to the Cloud have all been exercises in increasingly distributed computing, with special emphasis on the distributed. As in distributed control.

The technology powers that be in the enterprise have been fighting this trend kicking and screaming, of course. But they’ve been fighting a losing battle. We saw the tide turn in the first-generation SOA days of the 2000s, when the IT establishment tried to implement SOA by buying ESBs, centralized pieces of middleware that purported to run the organization. But too many enterprises ended up with multiple ESBs and other pieces of middleware, since of course every manager in every department silo needs their own, because they all crave control. So the doomed SOA effort became a futile exercise in middleware-for-your-middleware, as the desired agility benefit sank beneath waves of rat’s-nest complexity.

What’s really going on here? Why do executives crave control so badly? Two reasons: risk mitigation and differentiation. If that piece of technology is outside your control, then perhaps bad things will happen: security breaches, regulatory compliance violations, or performance issues, to name the scariest. The problem is, maintaining control doesn’t necessarily reduce such risks. But if you’re responsible for managing the risks, then the natural reaction is to crave control.

Managers also believe that whatever it is they’re doing in their silo is special and different in some way. So there’s no way they can leverage that shared piece of middleware or shared SOA-based Services or multitenant Cloud. If they did, they wouldn’t be special any more. Having a differentiated offering is essential to any viable market strategy, after all. So clearly my technology has to be different from your technology!

Chaos vs. Control
The Cloud, as you might expect, shakes up both these considerations, because the Cloud separates responsibility from control in ways that we’ve never seen before. Every manager knows that these two priorities often go hand in hand, and under normal circumstances, we prefer them to go together, because the last thing we want is responsibility without control: the recipe for becoming the scapegoat, after all. With the Cloud, however, we can maintain control while delegating responsibility to the Cloud Service Provider (CSP). The CSP is responsible for ensuring the operational environment is working properly, including the automated management and user-driven provisioning and configuration that differentiate Cloud Computing from virtualized hosting. However, the CSP has delegated control over each customer environment to that customer.

By turning around this control vs. responsibility equation, we’ve placed the CSP into the scapegoat position. As long as we have an iron-clad Service-Level Agreement (SLA) with our CSP, we can trust them to take responsibility for our operational environments, and if anything goes wrong, we can hold them responsible. But the control over those environments remains with us, the customer. Once enterprise executives realize this new world order, they will run as fast as they can away from building Private Clouds. After all, if you can maintain control while delegating responsibility, why would you ever want responsibility? Responsibility is what gets people fired.
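The practical teeth of such an SLA are its availability targets. As a rough illustration (the percentages below are hypothetical examples, not any particular CSP’s terms), a few lines of Python show how a promised uptime percentage translates into the downtime budget a provider must stay within before owing remedies:

```python
def allowed_downtime_minutes(uptime_pct: float, period_hours: float = 30 * 24) -> float:
    """Downtime budget implied by an SLA uptime percentage.

    uptime_pct   -- promised availability, e.g. 99.9
    period_hours -- length of the measurement period (default: a 30-day month)
    """
    return (1 - uptime_pct / 100) * period_hours * 60

# "Three nines" over a 30-day month leaves about 43 minutes of downtime;
# "four nines" leaves under five.
print(round(allowed_downtime_minutes(99.9), 1))   # 43.2
print(round(allowed_downtime_minutes(99.99), 2))  # 4.32
```

The point for negotiators: each additional nine shrinks the provider’s margin for error by a factor of ten, which is exactly the kind of quantifiable responsibility you want sitting on the CSP’s side of the table.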

Shifting responsibility to the CSP also helps to resolve the regulatory compliance roadblock that so many executives point to as the reason to select Private over Public Cloud. A properly responsible CSP combined with a sufficiently detailed SLA can go a long way toward indemnifying organizations against compliance breach risks. Remember, regulations rarely if ever specify how you must comply, only that you must. It’s up to you (and your lawyers) to decide on the how. As long as you’re diligent, conscientious, and follow established best practice, you’ve mitigated the bulk of your noncompliance risk. The CSPs are champing at the bit to take on this responsibility, so the smart risk mitigation strategy is shifting toward the Public Cloud.

The Price of Differentiation
The second threat to centralized control of IT is the business driver toward differentiation. Whatever our department or business is doing is special and different, and thus our infrastructure as well as our application environment must be unique as well. This principle is always true up to a point, which is why executives love to cling to it like a floating log in a vast sea of change. But just where that point falls continues to shift, and has shifted further than many people realize.

No enterprise would dream of calling a computer chip company and asking them to fabricate a custom processor for general business needs. What about a server? Unlikely, but perhaps. What about your core business applications, like finance, human resources, or customer relationship management (CRM)? Somewhat more likely. How about applications that provide capabilities that differentiate you in the marketplace? OK, now we’re talking.

In other words, virtually no enterprise has any rational motivation to specify custom infrastructure. Today’s Infrastructure-as-a-Service (IaaS) will do, especially considering how many configuration choices are available today: processor speed, operating system (as long as you want Windows or a flavor of Linux), memory, storage, and network are all user configurable and provisionable. Furthermore, there’s no reason to customize your dev, test, or deployment environments, so you might as well use a Platform-as-a-Service (PaaS) offering.
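That configuration surface fits in a handful of fields. The sketch below is illustrative only — the field names are hypothetical, not any real IaaS provider’s API — but it shows how the processor, OS, memory, storage, and network choices above reduce to a self-service request rather than a custom infrastructure build:

```python
from dataclasses import dataclass, asdict

@dataclass
class InstanceRequest:
    """Hypothetical self-service IaaS provisioning request."""
    vcpus: int
    memory_gb: int
    os_image: str    # e.g. Windows or a flavor of Linux
    storage_gb: int
    network: str     # virtual network to attach the instance to

def provision(req: InstanceRequest) -> dict:
    # A real CSP API call would go here; returning the payload is enough
    # to make the point that "custom infrastructure" is just configuration.
    return {"action": "run_instance", **asdict(req)}

payload = provision(InstanceRequest(4, 16, "ubuntu", 100, "vpc-default"))
print(payload["os_image"])  # ubuntu
```

Everything differentiating about the request lives in a few parameters — which is precisely why fabricating your own data center to get those parameters makes no economic sense.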

But what about the applications? For non-strategic apps like CRM, you might as well use Software-as-a-Service (SaaS) like Salesforce. No executive in their right mind would say that their customer relationship needs are so unique that they should code their own CRM system. So, what about those strategic apps, the ones that offer our differentiated capabilities or information to our customers? If an existing SaaS app won’t do, well, that’s what PaaS and IaaS are for: building and hosting our custom apps for us, respectively.

Still not convinced? Consider the competitive risk: the risk of spending too much money on unnecessary capabilities. While your competition is leveraging the Cloud, focusing their efforts on their true strategic differentiation in the market and saving buckets of dough everywhere else, you’re busy pouring cash into building yet another widget that might as well be the same widget you can get much more cheaply in the Cloud. If doing something unique and different doesn’t help the bottom line, then you’re simply wasting money. The asteroid is almost here. Which would you rather be, a dinosaur or a mammal?

The ZapThink Take
Outsourcing commodity capabilities to the low-cost provider while focusing your strategic value-add on customized offerings is an oft-repeated pattern in the world of business, but it hadn’t really taken hold in the world of IT until the rise of Cloud Computing. The reason it’s taken so long for the techies is that we’ve never been able to separate control and responsibility in the past as well as we can today. Before the Cloud, if we wanted to outsource one, then the other went along for the ride. Any enterprise that outsourced their entire IT operation went down this road. Sure, your technology becomes somebody else’s responsibility, but you end up giving up control as well.

Perhaps the greatest challenge with maintaining such control with the Cloud is that it raises the stakes on governance, leading to what we call next-generation governance in our ZapThink 2020 Poster as well as my new book, The Agile Architecture Revolution. The Cloud’s automated self-service represents powerful tools in the hands of people across our organization. Without a proactive, automated approach to governance, we risk running off the rails. Such issues are endemic in today’s technology environments: from Bring-Your-Own-Device (BYOD) challenges to SOA governance to rogue Clouds, we must learn how to maintain control while maintaining the agility benefit such powerful technology dangles in front of us. But until we learn to delegate responsibility for the underlying technology to Public Cloud Providers, we’ll never be able to maintain control cost-effectively while maintaining our competitiveness.
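The proactive, automated governance described above can start as small as machine-checkable policy rules. The sketch below is a minimal illustration (the rule names and resource fields are hypothetical, and real governance tooling is far richer), showing how policies run automatically against every cloud resource instead of depending on manual review:

```python
# Minimal policy-as-code sketch: each rule inspects a resource
# description and returns a violation message, or None if compliant.
def require_encryption(resource: dict):
    if not resource.get("encrypted", False):
        return f"{resource['id']}: storage is not encrypted"

def forbid_public_access(resource: dict):
    if resource.get("public", False):
        return f"{resource['id']}: resource is publicly accessible"

RULES = [require_encryption, forbid_public_access]

def audit(resources):
    """Run every rule against every resource; collect all violations."""
    return [msg for r in resources for rule in RULES if (msg := rule(r))]

violations = audit([
    {"id": "vol-1", "encrypted": True, "public": False},
    {"id": "bucket-2", "encrypted": False, "public": True},  # a rogue resource
])
print(violations)
```

The design point is that governance becomes a loop that runs on every self-service provisioning action, so control is maintained without reintroducing the manual gatekeeping the Cloud was supposed to eliminate.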

Image source: Diego David Garcia

More Stories By Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, Forbes contributor, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists.

As founder and president of Agile Digital Transformation analyst firm Intellyx, he advises, writes, and speaks on a diverse set of topics, including digital transformation, artificial intelligence, cloud computing, devops, big data/analytics, cybersecurity, blockchain/bitcoin/cryptocurrency, no-code/low-code platforms and tools, organizational transformation, internet of things, enterprise architecture, SD-WAN/SDX, mainframes, hybrid IT, and legacy transformation, among other topics.

Mr. Bloomberg’s articles in Forbes are often viewed by more than 100,000 readers. During his career, he has published over 1,200 articles (over 200 for Forbes alone), spoken at over 400 conferences and webinars, and he has been quoted in the press and blogosphere over 2,000 times.

Mr. Bloomberg is the author or coauthor of four books: The Agile Architecture Revolution (Wiley, 2013), Service Orient or Be Doomed! How Service Orientation Will Change Your Business (Wiley, 2006), XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996). His next book, Agile Digital Transformation, is due within the next year.

At SOA-focused industry analyst firm ZapThink from 2001 to 2013, Mr. Bloomberg created and delivered the Licensed ZapThink Architect (LZA) Service-Oriented Architecture (SOA) course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, which was acquired by Dovel Technologies in 2011.

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting), and several software and web development positions.
