
Finding New Life For SOA in the Cloud

SOA Announces Comeback Tour

We’ve been having quite a few discussions with analysts over the past few months on the subject of “cloud”. The interesting thing about these discussions is the vast array of points of view from which those analysts are viewing “cloud”. Some are focused on the network aspects, others on pricing/differentiation, and some are even very focused on what “cloud” means to applications – and the organizations that will, allegedly, take advantage of the cloud as a means of application deployment.

One such analyst is Daryl Plummer of Gartner. Daryl has always been very application focused so it’s always a pleasure to speak with him and, of late, read what he has to say via his blog. (Daryl is also a cartoonist, and has turned his interests in that area on the cloud, resulting in “G-Men”. If you haven’t yet, take a gander. He’s quite talented.)

The last time we spoke to Daryl he asked “What can you do to help an organization move a monolithic application into the cloud?” That’s a fairly straightforward question for F5 to answer, unless you specify that the organization wants to move workload into the cloud, not necessarily the entire application.

SOA IS BACK IN BUSINESS

See, the problem here is that workload is not the same thing as an application. A workload is more equivalent to, say, a single activity in a business process orchestration than to the entire process; the entire process equates more closely to the application.

Workload is a discrete block of application logic that is self-contained and can be executed on its own. In structured languages we might codify this as a function, in an object-oriented language we’d likely go the route of a method, and in the land of SOA (Service Oriented Architecture) we’d call this a web service.
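
To make that concrete, here is a minimal sketch (not from the original post) of the same “workload” written first as a plain function and then exposed as a trivial web service using only the Python standard library. The names (risk_score, /score, port 8080) are illustrative assumptions, not anything the author prescribes.

```python
# Hypothetical example: one self-contained block of logic -- the "workload" --
# as a plain function, then exposed as a web service so it can live anywhere.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def risk_score(values):
    """The compute-intensive, self-contained unit of application logic."""
    return sum(v * v for v in values) / max(len(values), 1)

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        values = json.loads(body)["values"]
        result = json.dumps({"score": risk_score(values)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    # Once wrapped as a service, the workload is independently deployable.
    HTTPServer(("0.0.0.0", 8080), ScoreHandler).serve_forever()
```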

That’s right, folks, SOA has risen from the dead and is about to embark on a comeback tour.

Invariably, applications seem to have one or two “functions” that are fairly compute-intensive; these are the chunks of application logic that require more processing than others, usually because they’re mathematically complex, require a lot of analysis, or just involve churning through huge data sets. Whatever the reason, these “workloads” are expensive to run.

The belief is that these workloads are the ones that can be more effectively offloaded to the cloud. Often these workloads run nightly or weekly; they aren’t running all the time, but when they are, nothing else can run because they’re chewing up resources faster than housing values are dropping.

But you can’t “pull them out” of a monolithic application. The cloud wasn’t designed to assist in decomposition of monolithic applications into composite processes. It was designed, for the most part, to run applications; the two are not the same.

In order to move a “workload” into the cloud you have to decouple it from the application; you have to use the basic principles associated with SOA and decompose the application into its composite processes such that you can distribute those processes in a way that most effectively utilizes the processing power at hand – whether that’s locally or in the cloud. You can’t simply move a monolithic application into the cloud and expect the cloud provider to be able to dig into it and optimize the execution of specific processes. It just isn’t that smart.
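
What decoupling buys you, in practice, is that the rest of the application no longer cares where the workload runs. The sketch below (again hypothetical, continuing the risk_score example above) shows the calling side once the workload sits behind a service interface; the endpoint URL and field names are assumptions, not a real service.

```python
# Hypothetical sketch: once the workload is decoupled behind a service contract,
# the monolith calls it the same way whether it runs locally or in the cloud.
import json
import urllib.request

# Assumed cloud-hosted copy of the decoupled service (placeholder URL).
WORKLOAD_ENDPOINT = "https://cloud.example.com/score"

def score_remotely(values):
    payload = json.dumps({"values": values}).encode()
    req = urllib.request.Request(
        WORKLOAD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["score"]

# The rest of the application keeps its orchestration; only this activity moved.
nightly_batch = [1.5, 2.0, 3.25]
print(score_remotely(nightly_batch))
```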

BUT WHAT ABOUT GRID?

The concept of grid has always revolved around parallelization of processes: executing lengthy or computationally expensive tasks in parallel to reduce the time required to complete them. But grid requires that you separate out (decouple) the processes to be parallelized from the application. Grid isn’t necessarily smart enough, either, to distribute a specific function or operation across multiple machines in order to increase the speed of execution. At least not yet.
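
As a rough illustration of what “separating out the processes to be parallelized” means, here is a hedged Python sketch that uses the standard library’s process pool as a stand-in for grid nodes. The function and data are made up; the point is that the developer, not the runtime, has to carve the expensive step into independent units of work.

```python
# Hypothetical sketch: grid-style parallelism only works once the expensive step
# has been pulled out as an independent unit of work; the runtime won't do that for you.
from concurrent.futures import ProcessPoolExecutor

def expensive_chunk(rows):
    """One independently executable slice of the overall job."""
    return sum(r * r for r in rows)

def run_in_parallel(dataset, workers=4):
    # Split the data into independent chunks and fan them out across processes
    # (stand-ins for grid nodes); the decomposition itself is the developer's job.
    chunk = max(len(dataset) // workers, 1)
    slices = [dataset[i:i + chunk] for i in range(0, len(dataset), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(expensive_chunk, slices))

if __name__ == "__main__":
    print(run_in_parallel(list(range(1_000_000))))
```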

The problem appears to be that we’re attributing to cloud and grid capabilities that are more akin to CPU scheduling than to what they are really able to do. Yes, the use of CPU cycles is an integral part of the concept of cloud and grid, but scheduling individual pieces of logic across CPUs is not something the cloud or grid is capable of doing – unless the developer uses the tools and methodologies available to tell it to do so.

Which is the point of SOA, isn’t it? SOA decomposes applications (or is supposed to, anyway) into discrete services so they can be distributed intelligently. If one service is reused by multiple business processes it can be replicated or moved into the cloud so that it scales appropriately to meet the demands placed upon it by other applications.
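
As a loose illustration (not from the original post), once a service is shared by several business processes it can be replicated in the cloud and the callers spread across the copies. The hostnames below are placeholders; a real deployment would rely on a load balancer or service registry rather than naive client-side round-robin.

```python
# Hypothetical sketch: multiple business processes reuse one service contract,
# and the service itself is replicated so it can scale to meet their demand.
import itertools

REPLICAS = itertools.cycle([
    "https://score-1.cloud.example.com",   # placeholder replica hostnames
    "https://score-2.cloud.example.com",
    "https://score-3.cloud.example.com",
])

def next_endpoint():
    # Naive round-robin; in practice a load balancer or registry does this.
    return next(REPLICAS)

# Two different business processes consume the same service, via different copies.
billing_endpoint = next_endpoint()
fraud_check_endpoint = next_endpoint()
print(billing_endpoint, fraud_check_endpoint)
```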

The problem, of course, is that decomposing monolithic applications requires resources and time. But there really is no other way to solve the problem – at least not yet. The cloud is not a huge bank of CPUs across which discrete functions can be distributed. It’s not. The cloud is a huge bank of servers and while it’s more than capable of distributing applications across those servers, it isn’t necessarily about optimizing the execution of applications across CPUs. That’s more grid, and taking advantage of grid is going to require some changes to the application, too.

Basically, if you’ve got a monolithic application you’re either (a) moving it en masse to the cloud or (b) ripping it apart into services or grid-enabled processes. Those are your options right now, take it or leave it. If you want to move “workload” into the cloud, you’re going to have to enable your applications to do so. And that means SOA or proprietary grid-enablement.

Or you can wait and see what happens next. But it’s likely that by the time grid meets cloud and actually creates a system capable of distributing both applications and workload – automatically – across servers and CPUs, it’ll be somebody else’s problem.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
