SOA Made Easy with Open Source Apache Camel

The XML/REST/Web Services/SOA revolution has driven engineers and software firms to create an abundance of protocols

Today, many readers have completed such integration exercises many times over. There is a wealth of experience and thousands of successful projects out there, and together they have led to the definition of many infrastructure design patterns that help developers cut to the chase when it comes to integration. One set of design patterns that has gained traction in the industry is Hohpe and Woolf's Enterprise Integration Patterns. These patterns provide a technology-agnostic vocabulary for describing large-scale integration solutions. Rather than focusing on low-level programming, they take a top-down approach to developing an asynchronous, message-based architecture.

A consistent vocabulary is nice, but an easy-to-use framework for actually building the infrastructure would be even better.
That was exactly the thinking behind the open source Camel project at Apache. Now that a tried-and-true set of patterns is available, the obvious next step is to create an engine that can implement the patterns in the simplest way possible.

Camel is a code-first tool that allows developers to perform sophisticated large-scale integration without having to learn vendor-specific or complex underlying technologies. Camel is a POJO-based implementation of the Enterprise Integration Patterns that uses a declarative Java domain-specific language (DSL) to connect to messaging systems and configure routing and mediation rules. The result is a framework that lets Java developers design and build a Service Oriented Architecture (SOA) without having to read pages and pages of specifications for technologies like JMS or JBI, or deal with the lower-level details of Spring.
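To give a flavor of the DSL, here is a minimal sketch of a single routing rule written inside a RouteBuilder; the endpoint names and the header are hypothetical, not taken from a real system:

// a Message Filter from the EIP vocabulary: only gold orders are forwarded
from("jms:orders")
    .filter(header("type").isEqualTo("gold"))
    .to("jms:gold-orders");

Each pattern from the book maps onto a method in the DSL like this, so a route reads as a sentence rather than as plumbing code.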

Apache Camel grew organically from code and ideas generated in other Apache projects, particularly Apache ActiveMQ and Apache ServiceMix. Project members found that people wanted to create and use the patterns from the Enterprise Integration Patterns book in many different scenarios, so the Camel team set about building a framework for exactly this purpose.

Camel Overview
The first step in building Camel was to decouple the implementation of the patterns from the underlying plumbing. Some people want to use the patterns inside an enterprise service bus (ESB), some want to use them inside a message broker, and others want to use them inside an application itself or to talk between messaging providers. Still others want to use them inside a Web Services framework or some other communication platform. Rather than tie the routing code to a particular message broker or ESB, Camel extracts it into a standalone framework that can be used in any project. Camel has a small footprint and can be reused anywhere, whether in a servlet, in the Web Services stack, inside a full ESB, or in a messaging application.

The primary advantage of Camel is that the development team doesn't have to work with containers just to connect systems. Many might consider working with containers a rite of passage or a test of one's mettle, but to a growing number of teams these hurdles are an unnecessary barrier to entry. With Apache Camel, developers can get the job done with a minimum of extraneous tasks. Camel can still be deployed within a JBI container if other requirements warrant it, but doing so isn't necessary.

To simplify the programming, Camel supports a domain-specific language, in both Java and XML, for the Enterprise Integration Patterns that can be used in any Java IDE or from within Spring XML (see Figure 1). This higher level of abstraction makes problem solving more efficient.

Camel reuses many Spring 2 features, such as declarative transactions, inversion-of-control configuration, and various utility classes for working with JMS, JDBC, and the Java Persistence API (JPA). This raises the abstraction level and reduces the amount of XML one has to write, while still exposing wire-level access for anyone who needs to roll up their sleeves and get down and dirty.

Camel Examples
We're going to explain different ways of configuring Apache Camel, first using the Java DSL (Domain Specific Language) and then using Spring XML configuration.

Java DSL Configuration
This example demonstrates a use case in which you want to archive messages from a JMS Queue into files in a directory structure. The first thing to do is to create a CamelContext object:

CamelContext context = new DefaultCamelContext();

There's more than one way of adding a Component to the CamelContext. You can add components implicitly - when we set up the routing - as we do here for the FileComponent:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

context.addRoutes(new RouteBuilder() {

    public void configure() {
        // archive messages from the JMS queue into files under the test directory
        from("test-jms:queue:test.queue").to("file://test");
        // set up a listener on the file component
        from("file://test").process(new Processor() {

            public void process(Exchange e) {
                System.out.println("Received exchange: " + e.getIn());
            }
        });
    }
});

or explicitly - as we do here when we add the JMS Component:

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.component.jms.JmsComponent;
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
// note we can explicitly name the component
context.addComponent("test-jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));

Next you must start the Camel context. If you're using Spring to configure the Camel context, this is done automatically for you; if you're using the pure Java approach, you just need to call the start() method once the components and routes have been added:

context.start();

This will start all of the configured routing rules.
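
To exercise the route from the same program, one option (a sketch, not part of the original listing, assuming a Camel version that provides CamelContext.createProducerTemplate()) is to send a test message through a ProducerTemplate:

// assumes: import org.apache.camel.ProducerTemplate;
ProducerTemplate template = context.createProducerTemplate();
template.sendBody("test-jms:queue:test.queue", "Hello, Camel!");

The message should travel from the JMS queue into a file under the test directory, and the file listener should then print the received exchange.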

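Spring XML Configuration
The same example can be wired up in Spring XML instead of Java code. What follows is a minimal sketch rather than a definitive listing: it assumes a Camel release that uses the http://camel.apache.org/schema/spring namespace (older releases used a different URI). The JMS component is registered as a bean whose id matches the "test-jms" scheme used in the route URIs, and the routes are declared inside the camelContext element:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

  <!-- the bean id matches the 'test-jms' scheme used in the route URIs -->
  <bean id="test-jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory">
      <bean class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="vm://localhost?broker.persistent=false"/>
      </bean>
    </property>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="test-jms:queue:test.queue"/>
      <to uri="file://test"/>
    </route>
  </camelContext>
</beans>

When Spring loads this application context, it creates and starts the CamelContext automatically, so no explicit start() call is needed.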

More Stories By Robert Davies

Rob Davies is chief technology officer at FuseSource. One of the original members of the team, he co-founded LogicBlaze, which was purchased by IONA and is now FuseSource. Prior to working for LogicBlaze, he was a founder and the CTO of SpiritSoft, which was purchased by Sun Microsystems. Rob has over 20 years' experience developing high-performance distributed enterprise systems and products for telcos and finance, and is best known for his work at the Apache Software Foundation, where he co-founded the ServiceMix, ActiveMQ, and Camel projects. He is now the PMC chair of ServiceMix and continues to be an active committer on all three projects. You can read his blog, On Open Source Integration, or follow him on Twitter.

More Stories By James Strachan

James Strachan, technical director at IONA, is responsible for helping the company provide open source offerings for organizations requiring secure, high-performance distributed systems and integration solutions. He is heavily involved in the open source community and has co-founded several Apache projects, including ActiveMQ, Camel, Geronimo, and ServiceMix. He also created the Groovy scripting language and additional open source projects such as dom4j, jaxen, and Jelly. Prior to joining IONA, James spent more than 20 years in enterprise software development. Previously, he co-founded LogicBlaze, Inc., an enterprise open source company acquired by IONA. Prior to that, he founded SpiritSoft, Inc., a company providing enterprise Java middleware services.
