
SOA Feature Story: Real-Time SOA Starts with the Messaging Bus!

The mediator of all component interactions

Service Oriented Architectures are increasingly being used to implement high-performance and real-time systems. Traditional systems operate in "human real-time," where human patience is the limit. Increasingly, however, systems operate in "computer real-time," where the only limits are imposed by the operational speed of the computers and networks.

For example, next-generation Air Traffic Management systems are being developed to accommodate the huge increase in air traffic and link the operational capabilities of agencies such as the Federal Aviation Administration (FAA), the Department of Defense (DOD), and the Department of Homeland Security (DHS). These systems require higher information bandwidth (to track more aircraft or more complex "free-flight" trajectories) as well as much lower latencies on the information (to detect flight abnormalities quickly). Similar demands are being made in healthcare, SCADA, network monitoring, energy distribution, transportation, and other critical infrastructure systems.

Best-of-Breed SOA Components
Demanding real-time applications require best-of-breed service-oriented foundational components. There are three kinds of foundational components in a SOA system: a messaging fabric/bus, information transformation/processing engines, and persistence/storage services (see Figure 1). Often these components are integrated into an Enterprise Service Bus (ESB) and hosted in a J2EE Application Server.

Of these foundational components, the Messaging Fabric/Bus is the most critical, since it mediates all interactions between components.

Low-performance SOA systems may use HTTP as the "messaging fabric/bus" to exchange messages between components. This approach is suitable only for undemanding applications: HTTP isn't reliable, has limited bandwidth, introduces very high latencies, and can't buffer and queue messages for delivery to systems that are temporarily unavailable or join at a later time.
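To make that last point concrete, here is a toy store-and-forward sketch (plain Python; the DurableTopic class and all names are hypothetical, invented for illustration) of the behavior a messaging bus provides and bare HTTP request/response does not: messages published before a subscriber exists are retained and replayed when it joins.

from typing import Callable, List

class DurableTopic:
    """Toy store-and-forward topic: retains published messages and
    replays them to subscribers that join later."""

    def __init__(self) -> None:
        self._history: List[str] = []
        self._subscribers: List[Callable[[str], None]] = []

    def publish(self, message: str) -> None:
        self._history.append(message)          # buffer for late joiners
        for deliver in self._subscribers:      # push to current subscribers
            deliver(message)

    def subscribe(self, deliver: Callable[[str], None]) -> None:
        for message in self._history:          # replay what was missed
            deliver(message)
        self._subscribers.append(deliver)

topic = DurableTopic()
topic.publish("track update #1")               # no subscriber exists yet
topic.subscribe(print)                         # late joiner still receives it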

The solution is to deploy a high-performance messaging middleware such as RTI Data-Distribution Service, IBM WebSphere MQ, TIBCO, or SonicMQ. These middleware platforms have been developed with scalability and performance in mind. However, they each employ a different architecture optimized for different application scenarios.

Why Does Messaging Performance Matter?
The requirements and expectations of computer-speed real-time far exceed those of traditional human-speed real-time. Whereas in systems with a human in the loop "real-time" meant that information was available within anywhere from a fraction of a second to a few seconds, in the computer-to-computer world real-time means decisions must be made in milliseconds or even microseconds.

Computer real-time puts more stringent requirements on the messaging infrastructure: each processing and storage component must handle hundreds of thousands of messages/events per second with microsecond, or at worst millisecond, latencies. This means that the messaging middleware must be able to deliver millions of messages a second system-wide.
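As a back-of-envelope check (the component count and per-component rate below are assumed figures for illustration, not from any particular deployment), the system-wide requirement follows directly from the per-component numbers:

# Assumed figures: 10 components, each consuming 200,000 messages/second
# (within the "hundreds of thousands" cited above).
components = 10
per_component_rate = 200_000                              # messages/second

system_rate = components * per_component_rate
print(f"System-wide load: {system_rate:,} messages/s")    # 2,000,000

# If one serialized path had to carry all of it, the per-message
# budget for middleware overhead would be:
print(f"Per-message budget: {1e9 / system_rate:.0f} ns")  # 500 ns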

The capacity of the messaging fabric must also scale with the underlying hardware, imposing no limits beyond those of the hardware infrastructure itself (CPU speed, number of cores, network speed and bandwidth). As CPU and network speeds increase, the systems able to take advantage of what the hardware provides will deliver a competitive advantage. In an automated trading system, for instance, the critical metric is not the absolute time it takes to make a decision, but whether a decision is made and the trade executed before competing trades occur. The same is true in a combat management system.

One final aspect of computer real-time SOA systems is their "inverted performance-load utility curve": the ability to respond in a timely manner becomes more important when the system is under high load. In a normal utility curve, as in human real-time systems, degraded performance is acceptable under increased load because human expectations and patience adjust to the circumstances (e.g., callers understand that during a peak holiday period they may endure longer hold times when making a flight reservation). Computer-speed real-time systems often have the opposite demands: it is precisely at moments of high load that the most critical action is taking place, and precisely then that top performance must be delivered (e.g., it is when market action is heavy that trading decisions must be made quickly).

The differences between human-speed real-time systems and computer-speed real-time systems are summarized in Table 1.

Selecting Messaging Middleware in SOA Systems
Messaging middleware is the key enabler of real-time SOA. However, there are many options. How can you choose the best messaging middleware for a particular real-time SOA system? Five areas distinguish messaging middleware: architecture, quality of service (QoS) control and filters, performance-boosting technologies, real-time determinism, and metrics.

The four basic architectures employed by messaging middleware are centralized (hub-and-spoke), clustered, federated, and peer-to-peer (see Figure 2).

A centralized (hub-and-spoke) architecture routes every message through a single server that implements the message "service," contains all the message queues, and brokers every message.

A clustered architecture uses a collection of servers and assigns each responsibility for some of the messages (such as ownership of some of the message queues or topics). Each message is relayed by a server, but not all messages use the same server.

A federated architecture also uses a collection of servers, but it uses them as a "resource pool" where queues may appear in multiple servers, and messages may be brokered by one or more servers.

A peer-to-peer architecture doesn't employ any brokers in the critical path. Messages are routed directly from the sender to the receiver.

Each has strengths and weaknesses. Centralized is the easiest to administer and can provide stronger transactional semantics, but it suffers from poor performance and reduced fault tolerance, and it doesn't scale. Clustered is more scalable than centralized, but it also has reduced fault tolerance and offers good performance only in a grid environment with all the clients co-located close to the grid. Federated is more scalable still, but suffers from higher latency and jitter because each message is brokered by at least two servers. P2P offers the best scalability and performance, the lowest jitter, and the highest resilience, but it is difficult for vendors to implement and offers limited transactional support.

As demands become more real-time, the need for performance, predictability, and balance tips the scale towards P2P architecture. That's why, for example, demanding networks like Voice over IP and Video over IP (like Skype) use peer-to-peer designs.
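To illustrate what "no broker in the critical path" means, here is a minimal sketch using standard UDP multicast from the Python standard library (the group address and port are hypothetical). This is not how any particular product is implemented; real middleware layers discovery, reliability, and QoS on top. But the essential property is visible: the publisher's datagram travels directly to every subscriber that joined the group, with no intermediary server.

import socket
import struct

GROUP, PORT = "239.255.0.1", 5000   # hypothetical multicast group/port

def make_publisher() -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep datagrams on the local network
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock

def make_subscriber() -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the group so the OS delivers matching datagrams
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Publisher side: the datagram goes straight onto the wire; no broker
# sits between sender and receivers.
pub = make_publisher()
pub.sendto(b"aircraft track update", (GROUP, PORT))

# Subscriber side (typically a separate process):
# sub = make_subscriber()
# data, addr = sub.recvfrom(1500)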

Quality of Service Control & Filters
QoS control is critical to delivering timely data with low latency and high throughput. CPU, memory, and network bandwidth must be shared among all the traffic, yet not all traffic requires the same bandwidth or has the same urgency or level of criticality. Without QoS control, the application has no way to differentiate traffic classes and their corresponding constraints. As a consequence, the middleware can't make intelligent decisions, prioritize traffic, or ultimately meet the application's requirements.
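As a minimal sketch of what class-based prioritization looks like (the traffic classes and the QosSendQueue name are hypothetical, invented for illustration), a send queue can drain urgent traffic ahead of bulk traffic:

import heapq
import itertools

# Hypothetical traffic classes: lower value drains first.
CRITICAL, NORMAL, BULK = 0, 1, 2

class QosSendQueue:
    """Toy send queue that always drains the most urgent class first,
    FIFO within a class."""

    def __init__(self) -> None:
        self._heap: list = []
        self._seq = itertools.count()   # tie-breaker preserving FIFO order

    def enqueue(self, traffic_class: int, message: str) -> None:
        heapq.heappush(self._heap, (traffic_class, next(self._seq), message))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = QosSendQueue()
q.enqueue(BULK, "log archive chunk")
q.enqueue(CRITICAL, "collision alert")
q.enqueue(NORMAL, "status heartbeat")
print(q.dequeue())   # -> "collision alert", despite being enqueued last

A real fabric would apply policies like this per topic or per data flow, together with filtering and bandwidth controls, rather than in a single process-wide queue.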

More Stories By Gerardo Pardo-Castellote

Gerardo Pardo-Castellote, PhD, is chief technology officer of Real-Time Innovations Inc.


Most Recent Comments
Gerardo Pardo-Castellote 07/20/08 01:57:08 AM EDT

Regarding the previous comment about "TCP not lining up a message on one connection after a file transfer on another connection." and the "information in the article not being correct."

This is true, but in order for this to occur you would need to open a new TCP connection for every message. This is extremely inefficient, requires a handshake involving a round-trip message, and allocates a lot of system resources. This is certainly something you do not want to do in a real-time system.

So in practice anybody developing a real-time system would have to hold the TCP connection open and send successive messages over it (of course one can keep more than one connection open and round-robin among them, but that does not change the fundamental problem if the application is writing quickly). Therefore the information in the article IS correct.

Casual Visitor 06/12/08 03:04:45 PM EDT

TCP does not line up a message on one connection after a file transfer on another connection. Each TCP connection forms its own in-order transfer. If you want to convince people to buy your product, you should avoid putting incorrect information in the article. It is much better to have a good analysis with accurate claims so that people will believe that your product might overcome real problems rather than phantom ones like "messages wait behind file transfers".

Derek Pavatte 01/25/08 02:03:32 AM EST

If everything is automated, I suppose we will have more time to do more pleasant things than work as much. These technological advancements sound very progressive. Let us all work towards a competent and ethical work environment.
