Why SOA Needs Cloud Computing - Part 1

IT has become the single most visible point of latency when a business needs to change

SOA in the Cloud

It's Thursday morning, you're the CEO of a large, publicly traded company, and you just called your executives into the conference room for the exciting news. The board of directors has approved the acquisition of a key competitor, and you're looking for a call-to-action to get everyone planning for the next steps.

You talk to the sales executives about integrating both sales forces in three months' time, and they are excited about the new prospects. You talk to the HR director, who is ready to make the necessary changes in two months. You speak to the buildings and maintenance director, who can relocate everyone who needs to move in three months. Your heart is filled with pride.

However, when you ask the CIO about changing the core business processes to drive the combined companies, the response is much less enthusiastic. "It will take at least 18 months to change our IT architecture to accommodate this, and I'm not even sure that's possible," says the CIO. "We simply don't have the ability to integrate these systems, nor the capacity. We'll need new systems, a bigger data center..." You get the idea.

As the CEO you can't believe it. While the other departments are able to accommodate the business opportunity in less than five months, IT needs almost two years?

In essence, IT has become the single most visible point of latency when a business needs to change; the business's ability to change is limited by IT. In this case, the merger is not economically feasible, and the executive team is left scratching their heads. Indeed, they thought IT was about finding new ways to automate the business, and had no idea how slow it is to react to change.

However, it does not have to be this way. The survival of many businesses will depend upon a fundamental change in the way we think about and create our IT infrastructure going forward. That is, if you're willing to admit where you are and are willing to change.

How Things Got Off-Track
The issues with information technology are best understood by reviewing the history of IT over the last 30 years; that history explains why things are the way they are. It's almost like speaking at a 12-step program: you admit you have a problem and are willing to look at how you got there.

It's also important that you check your ego at the door. IT folk typically don't like to talk about mistakes made in the past. Indeed, many will defend to the day they die all IT-related decisions that have been made. That's really not the point. It's not about placing blame; it's about opening your eyes to what you're currently dealing with, and opening your mind to the ways it can be fixed.

If there is one issue that comes to mind every time we look at the mistakes IT made in the past, it's managing-by-magazine, a term I've used many times. In essence, those charged with building and managing IT systems often did not look at what was best for the business, but at what was most popular at the time, or at what the popular computer journals were promoting as the technology 'required' to solve all of their problems.

We also have issues with managing-by-inertia: the refusal to adopt anything simply because it's new and unknown. This is the opposite of managing-by-magazine; instead of doing something just because it's popular, we simply sit on our existing IT architecture. Typically this lack of action is rooted in fear of change and the risks associated with it.

We had the structured computing revolution, which became the object-oriented computing revolution, which became distributed objects, which became component development, which became ERP, which became CRM, which became service orientation...you get the idea. Of course, I'm missing a bunch of other technologies that we 'had to have,' including data warehousing, business intelligence, business process management, and the list goes on and on.

Not that these technologies were bad things; most were not. However, they distracted those in IT from the core problems of the business, focusing them more on the productized technology than on the needs and requirements of the business. Analyzing and documenting business requirements was not as much fun, and not a resume-enhancing experience. We all want to be relevant.

This focus on the solution rather than the problem caused a layering effect within enterprise architectures. In essence, the architectures grew more complex and cumbersome as the popular products of the day were dragged into the data center, each becoming another layer of complexity that increased costs and made the enterprise architecture fragile, tightly coupled, and difficult to change.
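To make the coupling point concrete, here is a minimal sketch (in Python, with hypothetical class and method names, not drawn from any real product) contrasting a business process hard-wired to one billing product with one that depends only on an abstract service contract. In the loosely coupled version, absorbing a merged company's system means supplying a new implementation, not rewriting every process that touches billing.

```python
from abc import ABC, abstractmethod

class BillingService(ABC):
    """Abstract service contract: processes depend on this, not on a product."""
    @abstractmethod
    def invoice(self, order_id: str, amount: float) -> str: ...

class LegacyBillingSystem(BillingService):
    # Stand-in for an entrenched, product-specific layer.
    def invoice(self, order_id: str, amount: float) -> str:
        return f"LEGACY-{order_id}:{amount:.2f}"

class AcquiredBillingSystem(BillingService):
    # Stand-in for the acquired company's system after a merger.
    def invoice(self, order_id: str, amount: float) -> str:
        return f"ACQ-{order_id}:{amount:.2f}"

class OrderProcess:
    # A tightly coupled version would construct LegacyBillingSystem()
    # directly inside this class; here the dependency is injected,
    # so swapping systems requires no change to the process logic.
    def __init__(self, billing: BillingService):
        self.billing = billing

    def complete(self, order_id: str, amount: float) -> str:
        return self.billing.invoice(order_id, amount)

# Swapping implementations touches one line, not every business process:
print(OrderProcess(LegacyBillingSystem()).complete("42", 99.5))    # LEGACY-42:99.50
print(OrderProcess(AcquiredBillingSystem()).complete("42", 99.5))  # ACQ-42:99.50
```

This is the essential idea behind service orientation that the rest of this series builds on: the contract, not the product, is the unit the architecture depends upon.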

Today we have IT infrastructures and enterprise architectures that are just too costly to maintain, and difficult-to-impossible to change. As business needs change, including upturns and downturns in the economy, IT is having a harder and harder time adjusting to meet the needs of business. Indeed, as in our example at the beginning of this article, CEOs are finding that IT is typically the latency within the business that causes delays and cost overruns, and IT does not add value to the enterprise as it once did. Remember when IT was the solution and not the problem?

Indeed, IT departments were more productive when they were coding applications in COBOL on mainframes, because those constraints required them to be lean and cautious with resources. Today, we have almost too much technology and too many options. We gave IT enough rope to hang itself, or at least to get its architectures into a state that makes them much less valuable to the business.

•   •   •

This article is excerpted from "Cloud Computing and SOA Convergence in your Enterprise...a Step-by-Step Approach" by David S. Linthicum.

More Stories By David Linthicum

David Linthicum is the Chief Cloud Strategy Officer at Deloitte Consulting, and was just named the #1 cloud influencer via a recent major report by Apollo Research. He is a cloud computing thought leader, executive, consultant, author, and speaker. He has been a CTO five times for both public and private companies, and a CEO two times in the last 25 years.

Few individuals are true giants of cloud computing, but David's achievements, reputation, and stellar leadership have earned him a lofty position within the industry. It's not just that he is a top thought leader in the cloud computing universe; he is often the visionary that the wider media invites to offer their readers, listeners, and viewers a peek inside the technology that is reshaping businesses every day.

With more than 13 books on computing, more than 5,000 published articles, more than 500 conference presentations and numerous appearances on radio and TV programs, he has spent the last 20 years leading, showing, and teaching businesses how to use resources more productively and innovate constantly. He has expanded the vision of both startups and established corporations as to what is possible and achievable.

David is a Gigaom research analyst and writes prolifically for InfoWorld as a cloud computing blogger. He is also a contributor to "IEEE Cloud Computing" and TechTarget's SearchCloud and SearchAWS, and is quoted in major business publications including Forbes, BusinessWeek, The Wall Street Journal, and the LA Times. David has appeared on NPR several times as a computing industry commentator, and does a weekly podcast on cloud computing.
