Microservices Expo: Article

Autonomic SOA Web Services - Achieving Fully "Business-Conscious" IT Systems

Service-oriented architectures (SOA) and autonomic computing are among the hottest topics in IT today

Service-oriented architectures (SOA) and autonomic computing are among the hottest topics in IT today. SOA simplifies integration and facilitates the componentization of enterprise-wide systems, thereby enabling optimal business agility. Autonomic computing allows these systems to operate without human intervention - through self-configuring, self-healing, and self-managing capabilities. By combining autonomic computing with SOA, enterprises can achieve a new IT ideal that we call "Autonomic SOA." This level of autonomic computing goes beyond reacting to traditional IT issues, such as speeding up the diagnosis, repair, and prevention of failures. Autonomic SOA also responds dynamically as business conditions and processes change, empowering enterprises to execute reliably day after day.

A Healing Touch: Autonomic Computing
A useful analogy for understanding autonomic computing is the human nervous system. Autonomic controls in the human nervous system send indirect messages that regulate temperature, breathing, heart rate, and digestion without conscious thought. If your core body temperature rises too high, your autonomic systems cause your sweat glands to secrete and cool you down automatically and involuntarily. By analogy, an autonomic computing system keeps core IT systems functioning at a basic level. If the CPU of one system appears to be overburdened, or another runs out of disk space, the autonomic system adapts by adjusting and reallocating available resources to maintain the system's equilibrium.
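The control loop behind that kind of self-regulation can be sketched in a few lines. The metric names, thresholds, and action labels below are hypothetical placeholders for illustration, not a real management API:

```python
# A minimal monitor-analyze-plan sketch of an autonomic control loop.
# Thresholds and action names are illustrative assumptions only.
CPU_LIMIT = 0.85   # utilization above which load should be rebalanced
DISK_LIMIT = 0.90  # disk usage above which space should be reclaimed

def plan_actions(metrics):
    """Given sensed metrics per node, plan corrective actions."""
    actions = []
    for node, m in metrics.items():
        if m["cpu"] > CPU_LIMIT:
            actions.append((node, "migrate-load"))   # shed work to peers
        if m["disk"] > DISK_LIMIT:
            actions.append((node, "reclaim-space"))  # purge caches, logs
    return actions  # an executor would then apply these without human help

print(plan_actions({"app1": {"cpu": 0.97, "disk": 0.40},
                    "db1":  {"cpu": 0.30, "disk": 0.95}}))
# → [('app1', 'migrate-load'), ('db1', 'reclaim-space')]
```

In a full autonomic system this loop runs continuously, and the "execute" step feeds back into the "monitor" step, just as sweating feeds back into body temperature.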

An autonomic IT infrastructure is made up of a network of organized, "smart" computing components that give us what we need, when we need it, without a conscious mental or physical effort. It gives us systems that look alive and seem to think for themselves. However, in order to fully leverage these interactions, you must be able to access the intelligence behind the autonomic components - you must go beyond the window dressing, the outer layer. Only then will you have a truly functional IT system.

So while we can now see how powerful autonomic IT systems are and how much easier they can make the life of the enterprise IT organization, we can't stop there; if we did, business would never progress, never grow, and never achieve optimal execution. Why not strive for full business consciousness? Where did the business failure occur and why? How can I improve my business execution? Is the answer in the IT infrastructure or is it in the business processes and the humans who interact with them?

Danger Zone: The World of SOA
SOA is the de facto standard for enterprise integration today. Adopting SOA gives businesses many benefits. From an IT perspective these benefits include lowered cost and complexity of integration as well as platform and technology independence. Standardization and reuse enable the business to be more agile in flexibly connecting its IT resource silos into composite applications, which can improve business process efficiency.

However, SOA itself presents significant challenges that need to be addressed by an autonomic approach. SOA is defined by loose coupling, where the messages between service nodes - and the exact format of those messages - can be specified at run time. SOA implies extensive distribution and scale because it links systems across the enterprise and runs processes that reach every department in the organization. Over the years, corporations have developed fiefdoms, islands, stovepipes - whatever you want to call them, they all represent communication and control challenges. SOA operates in the virtual world, a world even riskier and more threatening than the physical one. It's a world based on high-performance computing devices, so things happen fast (including bad things), and that can leave little time for analysis and thought. All of this can, in the end, harm the very business users who are supposedly served by these leading-edge systems.

Real-World Business SOA
In SOA, business information flows between service nodes. Because interfaces are published and the flows are XML-based, the business information itself is now accessible and actionable. SOA gives us the pathways and connections that move critical business messages between the systems and applications that do the real work of business processes. Recalling our nervous system analogy, this is like the message from our brain telling our arm to move the steering wheel to the right. Hand-eye coordination and feedback pathways allow us to drive a car, and the combination of monitoring (our eyes), full visibility of everything that is happening (all of our other senses), and analysis (understanding and interpretation) makes sure that we stay on the road.
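Because those flows are XML, the business content of a message can be inspected directly while it is in flight. A minimal sketch, using a hypothetical order message (the element names and the $100,000 threshold are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical in-flight SOA message; the schema is invented for this sketch.
message = """
<order>
  <customer>ACME Corp</customer>
  <amount currency="USD">125000</amount>
  <status>pending</status>
</order>
"""

root = ET.fromstring(message)
amount = float(root.findtext("amount"))
status = root.findtext("status")

# Turn an IT-level message into a business-level observation.
if status == "pending" and amount > 100_000:
    print("Large order awaiting approval:", root.findtext("customer"))
# → Large order awaiting approval: ACME Corp
```

The point is that nothing had to be retrofitted into the applications at either end: the observation came from the message itself as it moved between service nodes.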

What's Inside: Autonomic SOA Characteristics
SOA is the bridge between IT and business. Transparency applied to SOA means being able to tell the difference between "is alive" in the context of the business and merely "looks alive" in the IT context. Furthermore, combining autonomic IT and SOA allows us to determine whether it is a business problem or an IT problem that needs to be fixed. To know, from a business-effectiveness viewpoint, that all is well and executing optimally, and to be able to make conscious, effective business decisions - that is, to keep the business in homeostasis - businesses need to bring together SOA with an underlying autonomic IT infrastructure. We call this combination autonomic SOA.

An autonomic SOA must have the following core characteristics: flexibility, accessibility, and transparency.

Flexibility - Autonomic systems must be able to sift through data in a platform- and device-agnostic manner. In SOA composite applications, data is in the form of messages that are passing from one service node to another. In those messages resides critical data that is driving the execution of the processes that control the business. If you can understand what is in those messages, then you've unlocked a more in-depth understanding of your business. When you act on that business information, you're empowered to handle risk quickly and proactively. You'll even have the power to aggressively seize business opportunities and in the process achieve optimal business execution.
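One concrete reading of "platform- and device-agnostic" is reducing messages to a single business view regardless of wire format. A sketch under deliberately narrow assumptions (only two formats, flat payloads, naive format detection):

```python
import json
import xml.etree.ElementTree as ET

def normalize(payload):
    """Flatten a message into a dict of business fields, whether it
    arrived as JSON or XML. Format detection here is deliberately naive."""
    text = payload.strip()
    if text.startswith("{"):
        return json.loads(text)
    root = ET.fromstring(text)
    return {child.tag: child.text for child in root}

# The same business fact, carried by two different platforms:
assert normalize('{"amount": "125000"}') == \
       normalize("<order><amount>125000</amount></order>")
```

Once messages are normalized this way, the same analysis and alerting logic can run over traffic from every silo, which is what makes acting on the business information practical.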

Accessibility - Quite simply this means that the systems are always on. That has to mean more than that they're just powered up, and more than that electricity is running through the internal transistors and running the operating system. It should mean always on from a full business-execution perspective. Are the correct messages being received and sent? Are they following the correct syntax and semantics? Are they supporting the business process they are supposed to support? Are those business processes themselves always on from end to end, so a business interaction with customers always works as expected to their full satisfaction?
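Those questions suggest health checks that go beyond "is the process up" to "is the message traffic correct." A sketch of such a check, where a set of required fields stands in for a real service contract (the contract and field names are hypothetical):

```python
import xml.etree.ElementTree as ET

REQUIRED_FIELDS = {"customer", "amount", "status"}  # hypothetical contract

def message_health(xml_text):
    """Grade a service message: well-formed (syntax) and carrying the
    fields the business process needs (a crude stand-in for semantics)."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return "malformed"
    missing = REQUIRED_FIELDS - {child.tag for child in root}
    return "ok" if not missing else "missing: " + ", ".join(sorted(missing))

print(message_health("<order><customer/><amount/><status/></order>"))  # → ok
print(message_health("<order><customer/></order>"))  # → missing: amount, status
```

A system that is "always on" in the business sense would run checks like this continuously on live traffic, not just ping the servers.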

Transparency - Usually this means that the system interacts with users and performs its functions completely while shielding users from (and not burdening them with) the intricacies of how the system does its job. This should be true whether you examine the system at the IT level or at the business level where the composite applications operate. In the realm of SOA, transparency takes on even more significance: transparency is our eyes and ears to what is happening in the business right as it happens, with enough intelligence applied to our observations that we know what to do with what we see.
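As a small illustration of observation with intelligence applied, consider watching a stream of transaction outcomes and raising a business alert when the recent failure rate drifts too high. The window size and threshold below are arbitrary choices, not recommendations:

```python
from collections import deque

class BusinessMonitor:
    """Observe transaction outcomes as they happen and flag trouble."""
    def __init__(self, window=20, max_fail_rate=0.2):
        self.events = deque(maxlen=window)    # sliding observation window
        self.max_fail_rate = max_fail_rate    # tolerated failure rate

    def observe(self, succeeded):
        """Record one outcome; return True when an alert should be raised."""
        self.events.append(succeeded)
        rate = self.events.count(False) / len(self.events)
        return rate > self.max_fail_rate

monitor = BusinessMonitor()
outcomes = [True] * 10 + [False] * 4
alerts = [monitor.observe(ok) for ok in outcomes]
# Alerts fire only once failures exceed 20% of the recent window.
```

The single failure in position eleven is absorbed silently; only when failures accumulate past the threshold does the monitor speak up, which is the difference between raw visibility and interpreted observation.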

More Stories By Jothy Rosenberg

Dr. Jothy Rosenberg is a strategic advisor and cofounder of Service Integrity. He is currently vice president of software at Ambric, a fabless semiconductor company. Dr. Rosenberg also cofounded GeoTrust, the world's second largest certificate authority and a major innovator in enterprise managed security solutions. He is also the author of How Debuggers Work and Understanding Web Services Security (2003; Addison-Wesley). Jothy holds patents on watchpoint debugging mechanisms, content certification and site identity assurance, as well as a pending security compliance monitoring patent.

More Stories By Arthur Mateos

Dr. Arthur Mateos is the vice president of products at Boston-based Service Integrity (www.serviceintegrity.com), a provider of real-time Business Intelligence (BI) software for service-oriented architectures (SOA). Before cofounding Service Integrity, Dr. Mateos served as senior product manager at Inktomi, where he was responsible for the Content Distribution (CDS) and Wireless Messaging product lines. Under his direction, the CDS product line was instrumental in driving more than $60M in revenue. He received a BA in Physics from Princeton University and a PhD in Nuclear Physics from MIT.
