It’s been almost two months since my activities forked away from this blog; now it’s time to join back. In a previous post, I compared three different ways to transform the elements of a list: the classical for-each loop, sequential stream processing, and parallel stream processing. It would be nice to see how the fork/join framework competes against parallel streams, but until then, let’s look at the code.

The basic idea of the fork/join framework is to split the work into smaller pieces until they are small enough to run sequentially, run the pieces concurrently, and wait for the results.

Let’s match the explanation above with the code:

  • split the work into smaller pieces until they are small enough to run sequentially

  • run them concurrently

  • wait for results
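The three steps above could be sketched as a RecursiveTask along these lines. The class name ForkJoinConverter comes from the post; the element type, the doubling transform, and the THRESHOLD value are my placeholders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RecursiveTask;

public class ForkJoinConverter extends RecursiveTask<List<Integer>> {

    // assumed cut-off below which the work runs sequentially
    private static final int THRESHOLD = 1_000;

    private final List<Integer> values;

    public ForkJoinConverter(List<Integer> values) {
        this.values = values;
    }

    @Override
    protected List<Integer> compute() {
        // 1. split the work until it is small enough to run sequentially
        if (values.size() <= THRESHOLD) {
            return computeSequentially();
        }
        int middle = values.size() / 2;
        ForkJoinConverter leftConverter =
                new ForkJoinConverter(values.subList(0, middle));
        ForkJoinConverter rightConverter =
                new ForkJoinConverter(values.subList(middle, values.size()));

        // 2. run both halves concurrently
        leftConverter.fork();
        rightConverter.fork();

        // 3. wait for the results and combine them
        List<Integer> results = new ArrayList<>(leftConverter.join());
        results.addAll(rightConverter.join());
        return results;
    }

    private List<Integer> computeSequentially() {
        List<Integer> results = new ArrayList<>(values.size());
        for (Integer value : values) {
            results.add(value * 2); // placeholder transformation
        }
        return results;
    }
}
```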

In order to run this code, we need to create a shared ForkJoinPool instance and call its invoke method with an instance of ForkJoinConverter.
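The wiring could look like the following sketch. The nested Converter class is a trimmed-down stand-in for the ForkJoinConverter described in the post (here it just doubles each element sequentially), so that the pool setup stays in focus:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.stream.Collectors;

public class ForkJoinDemo {

    // Stand-in for the real ForkJoinConverter; it skips the recursive
    // splitting and simply doubles each element.
    static class Converter extends RecursiveTask<List<Integer>> {
        private final List<Integer> values;

        Converter(List<Integer> values) {
            this.values = values;
        }

        @Override
        protected List<Integer> compute() {
            return values.stream().map(v -> v * 2).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        // one shared pool for the whole application
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        List<Integer> results = forkJoinPool.invoke(new Converter(Arrays.asList(1, 2, 3)));
        System.out.println(results); // prints [2, 4, 6]
    }
}
```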

Each ForkJoinConverter instance uses a thread from the shared ForkJoinPool to execute its code. One small optimization that reduces the number of threads used is to fork only one of the two ForkJoinConverter instances and call the compute method directly on the other.

Make sure that rightConverter.compute() is called before leftConverter.join(), so that the left and right converters actually execute concurrently.

I do not yet know when writing our own fork/join algorithm would be better than using parallel streams; for the moment I would choose streams, simply because they are easier to use.

Your thoughts are welcome.
