
Getting the maximum performance out of your Java processes

therore-concurrent provides self-tuning thread-pools that help you make the most of your system.

Recently, I have been working on the optimization of an OLTP system. The software has a SEDA architecture (Staged Event-Driven Architecture), with lots of threads each doing small pieces of work. I had to fight the hard task of adjusting around a hundred parameters, each of which affected several others, and so on.

For example, if the number of concurrent database connections is set too low, it causes contention when acquiring connections. Conversely, if that number is set too high, it can cause lock contention in the database when threads access shared resources (indexes, rows, blocks, etc.).

Moreover, processing an event does not always require the same type of resources. A sudden change in the type of events being handled can turn an optimal configuration into a suboptimal one.

One of the most significant parameters is the number of threads assigned to each component. It is difficult to choose a good value if you don’t know how heavily the threads use each type of resource and how tightly they are coupled to one another.

Usually certain tasks have a higher priority and should be processed as soon as possible, which further complicates the choice of configuration. Enforcing priorities and maximizing throughput are opposing goals, so it is necessary to define the scope of each.

In my experience, too much configurability can work against you. In a medium or large SOA system, with lots of inter-service communication and complex workload profiles that even change over time, it is almost impossible to find the optimal value for each of those parameters. Because of that, I found it interesting to develop a library able to adapt quickly at runtime in order to make the most of the system.

Self-tuning thread-pool

Nowadays creating threads manually is not very common; thread-pools are used instead. A thread-pool manages the creation and allocation of threads. The JDK comes with some interesting and useful classes for managing threads. I list two of the most important below (a short sketch of both follows the list):

  • ThreadPoolExecutor is a very flexible and configurable thread-pool that supports customization of queue size, minimum and maximum pool size, keep-alive time, etc.
  • Executors is a convenience class that creates thread-pools for the most common cases.
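
As a quick refresher, here is a minimal sketch of both approaches using only standard JDK classes; the pool sizes and queue capacity are arbitrary values chosen for illustration.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class JdkPoolsExample {
        public static void main(String[] args) {
            // ThreadPoolExecutor: full control over core/max size, keep-alive time and queue
            ThreadPoolExecutor customPool = new ThreadPoolExecutor(
                    2,                                       // core pool size
                    10,                                      // maximum pool size
                    60L, TimeUnit.SECONDS,                   // keep-alive time for idle threads
                    new LinkedBlockingQueue<Runnable>(100)); // bounded work queue

            // Executors: convenient factory methods for the most common cases
            ExecutorService fixedPool = Executors.newFixedThreadPool(4);

            customPool.execute(() -> System.out.println("running on the custom pool"));
            fixedPool.execute(() -> System.out.println("running on the fixed pool"));

            customPool.shutdown();
            fixedPool.shutdown();
        }
    }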

I have developed the library therore-concurrent, which takes advantage of those classes and extends their functionality. The library contains classes analogous to the ones above:

  • SelfTuningExecutorService is a thread-pool that implements a mechanism for finding a good pool size. The algorithm tries to maximize throughput while respecting the thread-pool priorities.
  • SelfTuningExecutors acts as the factory for SelfTuningExecutorService instances. It is recommended to use it as a singleton.

The following charts show how quickly SelfTuningExecutorService finds the optimal value.

[Chart: pool size and number of executions over time during self-tuning]

Using SelfTuningExecutors directly

  • Add the dependency to the pom
  • <dependency>
        <groupId>net.therore</groupId>
        <artifactId>therore-concurrent</artifactId>
        <version>1.1.0</version>
    </dependency>
    

  • The following snippet shows how it can be used.
  • SelfTuningExecutors executors = SelfTuningExecutors.defaultSelfTuningExecutors();
    ExecutorService service = executors.newSelfTuningExecutor("executor-for-test", corePoolSize, initPoolSize
           , maximumPoolSize, priority, queueSize);
    service.execute(task);
    

The only new parameters are initPoolSize and priority; a small sketch using two pools with different priorities follows this list.

  • initPoolSize is the initial number of threads assigned to the pool.
  • priority is a positive number that SelfTuningExecutorService uses to balance the number of threads of this pool relative to the other pools.
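
To illustrate how priority is intended to balance several pools against each other, here is a small sketch that creates two pools from the same SelfTuningExecutors singleton. The class and method names come from the snippet above; the package name, the concrete parameter values and the tasks are assumptions made for the example, and I am assuming here that a larger priority entitles a pool to relatively more threads.

    import java.util.concurrent.ExecutorService;

    import net.therore.concurrent.SelfTuningExecutors; // assumed package name

    public class PriorityExample {
        public static void main(String[] args) {
            // The factory is recommended to be used as a singleton
            SelfTuningExecutors executors = SelfTuningExecutors.defaultSelfTuningExecutors();

            // Parameters: name, corePoolSize, initPoolSize, maximumPoolSize, priority, queueSize
            // High-priority pool for latency-sensitive tasks (illustrative values)
            ExecutorService urgent = executors.newSelfTuningExecutor("urgent", 1, 2, 50, 10, 10);
            // Low-priority pool for background work
            ExecutorService background = executors.newSelfTuningExecutor("background", 1, 1, 50, 1, 100);

            urgent.execute(() -> System.out.println("urgent task"));
            background.execute(() -> System.out.println("background task"));

            urgent.shutdown();
            background.shutdown();
        }
    }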

Integrating SelfTuningExecutors with Quartz Scheduler

Quartz Scheduler has its own thread-pool interface, unsurprisingly named “ThreadPool”. The class SelfTuningThreadPool, included in the artifact therore-concurrent-quartz, implements that interface. Integrating it is very easy; follow these steps (a minimal scheduler bootstrap sketch follows them):

  • Add the dependency to the pom
  • <dependency>
        <groupId>net.therore</groupId>
        <artifactId>therore-concurrent-quartz</artifactId>
        <version>1.1.0</version>
    </dependency>
    

  • Change the Quartz configuration properties
  • # org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
    # org.quartz.threadPool.threadCount = 1
    # org.quartz.threadPool.threadPriority = 5
    org.quartz.threadPool.class = net.therore.concurrent.quartz.SelfTuningThreadPool
    org.quartz.threadPool.corePoolSize = 1
    org.quartz.threadPool.initPoolSize = 1
    org.quartz.threadPool.maximumPoolSize = 100
    org.quartz.threadPool.priority = 5
    org.quartz.threadPool.queueSize = 2
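
With these properties in quartz.properties on the classpath, the scheduler is bootstrapped exactly as usual and will pick up SelfTuningThreadPool automatically. As a reminder, a minimal standard Quartz bootstrap looks like the sketch below; the job and trigger are placeholders invented for the example.

    import org.quartz.Job;
    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.Scheduler;
    import org.quartz.SimpleScheduleBuilder;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;
    import org.quartz.impl.StdSchedulerFactory;

    public class QuartzBootstrap {

        // Placeholder job used only for the example
        public static class MyJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                System.out.println("job executed");
            }
        }

        public static void main(String[] args) throws Exception {
            // StdSchedulerFactory reads quartz.properties from the classpath,
            // so it uses the SelfTuningThreadPool configured above
            Scheduler scheduler = new StdSchedulerFactory().getScheduler();

            JobDetail job = JobBuilder.newJob(MyJob.class).withIdentity("myJob").build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .startNow()
                    .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever())
                    .build();

            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }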
    

Integrating SelfTuningExecutors with Apache Camel

I love Apache Camel. It offers a lot of components for integrating with different technologies, and if none of them fits your needs, it’s pretty easy to write your own component.

The Camel team has thought the threading model through very well. They use the concept (and interface) of ThreadPoolProfile, which is a kind of thread-pool template that you can use to instantiate several pools with the same configuration. If that is not enough, you can write your own implementation of ExecutorServiceManager, Camel’s thread-pool provider. Simplifying a bit, think of it as the JDK’s Executors class.

I’ve done just that: SelfTunigExecutorServiceManager is the name of my own implementation of ExecutorServiceManager. It is located in another Maven module, therore-concurrent-camel. I’ll explain how to use it.

  • Add the dependency to the pom
  • <dependency>
        <groupId>net.therore</groupId>
        <artifactId>therore-concurrent-camel</artifactId>
        <version>1.1.0</version>
    </dependency>
    

  • The following snippet contains two connected routes using the SEDA component and SelfTunigExecutorServiceManager:
  • SelfTunigExecutorServiceManager executorManager = new SelfTunigExecutorServiceManager(context);
    context.setExecutorServiceManager(executorManager);
    ThreadPoolProfile profile = new ThreadPoolProfile();
    profile.setId("self-tuning-profile");
    profile.setMaxPoolSize(100);
    profile.setMaxQueueSize(100);
    profile.setDefaultProfile(true);
    executorManager.setDefaultThreadPoolProfile(profile);        
    
    final String sedaEndpointUri = "seda:myseda?blockWhenFull=true&size=1";
    context.addRoutes(new RouteBuilder() {
       @Override
       public void configure() throws Exception {
           from("direct:in")
           .to(sedaEndpointUri);
       }
    });
    context.addRoutes(new RouteBuilder() {
       @Override
       public void configure() throws Exception {
           from(sedaEndpointUri)
           .threads(1, 100)
           .to("bean:mybean");
       }
    });
    
    ProducerTemplate template = context.createProducerTemplate();
    context.start();
    for (int i=0; i<ITERATIONS; i++) {
       template.sendBody("direct:in", "dummy string");
    }
    

Summary

I have realized that there are many elements that could become self-tuning. I chose the thread-pool because, from my point of view, it is one of the most important, most widely used, and easiest to test.

Moreover, most modern libraries and frameworks offer different ways to extend their factories, providers and templates. All of that makes it possible to develop general-purpose classes and integrate them with many frameworks.


More Stories By Alfredo Diaz

Alfredo Diaz is a Java EE Architect with over 10 years of experience. He is an expert in SOA, real-time processing, scalability and HA. He is an Agile enthusiast.