Parallel stream vs sequential stream vs for-each loop processing in Java 8

I had a discussion with another Java programmer about processing collections with Java 8 streams, and she mentioned that she prefers classical loops because streams are slower. This is the perfect start for a hot debate about which one is better, and the best way to settle it is with facts. In a previous post I parsed a pretty large text file into a list, so I decided to extend the foods parsing code and process that list in three ways: the old-fashioned for-each loop, a sequential stream, and a parallel stream. The resulting list contains almost 9000 elements, and the processing consists of computing the number of calories in a common household measure, like the tablespoon, starting from the number of calories in 100 g.

Each line from the original file is saved in an instance of a source data class, the result of the transformation is kept in an instance of a result data class, and the transformation itself is done by a small mapping function.
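The original listings are missing from this copy. As an illustrative sketch in Java (the class and field names here are my assumptions; the post's actual data classes were written in Kotlin):

```java
// Illustrative stand-ins for the post's data classes (names are assumptions).
// FoodItem holds one parsed line: calories per 100 g plus one common
// household measure and its weight in grams.
class FoodItem {
    final String description;
    final double caloriesPer100g;
    final String measure;       // e.g. "1 tbsp"
    final double measureGrams;  // weight of that measure in grams

    FoodItem(String description, double caloriesPer100g,
             String measure, double measureGrams) {
        this.description = description;
        this.caloriesPer100g = caloriesPer100g;
        this.measure = measure;
        this.measureGrams = measureGrams;
    }
}

// Result of the transformation: the calories in the household measure.
class MeasureCalories {
    final String description;
    final String measure;
    final double calories;

    MeasureCalories(String description, String measure, double calories) {
        this.description = description;
        this.measure = measure;
        this.calories = calories;
    }
}

class Transform {
    // Scale calories per 100 g to the weight of the household measure.
    static MeasureCalories toMeasureCalories(FoodItem item) {
        return new MeasureCalories(item.description, item.measure,
                item.caloriesPer100g * item.measureGrams / 100.0);
    }
}
```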

I used Kotlin to implement the needed data classes, because in Kotlin it is very easy to define them and it is also very easy to mix Kotlin with Java. I used JMH to create the benchmark, and the code is below:
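The benchmark listing did not survive here either. Below is a minimal sketch of the three variants being compared; the JMH `@Benchmark` and `@State` annotations are omitted so the snippet stands alone, and the element type is simplified to a plain `double` of calories per 100 g:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class StreamBenchmarkSketch {
    // In the real JMH benchmark each method below would carry @Benchmark,
    // and the input list would come from a @State setup method.

    static List<Double> forEachLoop(List<Double> caloriesPer100g, double grams) {
        List<Double> result = new ArrayList<>(caloriesPer100g.size());
        for (double c : caloriesPer100g) {
            result.add(c * grams / 100.0);
        }
        return result;
    }

    static List<Double> sequentialStream(List<Double> caloriesPer100g, double grams) {
        return caloriesPer100g.stream()
                .map(c -> c * grams / 100.0)
                .collect(Collectors.toList());
    }

    static List<Double> parallelStream(List<Double> caloriesPer100g, double grams) {
        // Same pipeline; collect(toList()) preserves encounter order
        // even when the stream runs in parallel.
        return caloriesPer100g.parallelStream()
                .map(c -> c * grams / 100.0)
                .collect(Collectors.toList());
    }
}
```

All three methods compute the same result, so the benchmark isolates the cost of the iteration strategy itself.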

I was ready to see impressive results for parallel processing, but I was disappointed, because the results looked like this:

After a few minutes of thought I realised that the actual processing is too simple and runs fast even sequentially. I artificially “improved” it to use BigDecimal:
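That modified listing is also missing; here is a plausible sketch of how the per-element arithmetic can be made artificially heavier by routing it through BigDecimal (the scale of 10 and HALF_UP rounding mode are my assumptions):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class HeavyTransform {
    // Same computation as before, but through BigDecimal so each element
    // costs enough CPU time for parallelism to pay off.
    static BigDecimal caloriesForMeasure(double caloriesPer100g, double measureGrams) {
        return BigDecimal.valueOf(caloriesPer100g)
                .multiply(BigDecimal.valueOf(measureGrams))
                .divide(BigDecimal.valueOf(100), 10, RoundingMode.HALF_UP);
    }
}
```

Note that `divide` needs an explicit scale and rounding mode here: without them, a non-terminating decimal expansion would throw an `ArithmeticException`.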

and the results are:

In this case the parallel stream implementation is ~3 times faster than the sequential implementations. Also, there is no significant difference between for-each loop and sequential stream processing.

My conclusions after this test are to prefer the cleaner code that is easier to understand, and to always measure when in doubt.