JavaScript's iterator helpers are fast


Some people responded to my claim about the potential memory efficiency of iterator helpers by saying that iterator helpers are not worth it: JavaScript arrays are heavily optimized for performance and garbage collection, so iterator helpers will usually be much slower without offering any significant memory savings. So I decided to run some benchmarks with arrays of different sizes and transformation chains of different depths. Here I'll focus only on execution speed, since measuring memory usage is more complicated.

The benchmark

I ran the benchmarks with jsbenchmark.com on Chrome 137.

The core idea of the benchmark

The idea is to create an array of a given size filled with random numbers and benchmark the same transformations done two ways: with iterator helpers and with regular array methods.
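
As a minimal sketch, a single transformation done both ways might look like this (the doubling step is just an illustrative placeholder; the actual benchmarked operations may differ):

const numbers = Array.from({ length: 2000 }, () => Math.random())

// Regular array method: map() eagerly allocates the result array
const viaArray = numbers.map((x) => x * 2)

// Iterator helpers: values() returns an iterator, its map() is lazy,
// and toArray() materializes the result only at the end
const viaIterator = numbers.values().map((x) => x * 2).toArray()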

Since testing every array size against every transformation chain depth would require too many benchmarks (3 sizes * 5 depths = 15), I'll benchmark only 1 and 5 transformations for each array size, which takes only 3 * 2 = 6 benchmarks. I think six benchmarks are enough to reveal the patterns without being overwhelming.

The setup

In the benchmark setup we'll generate an array filled with random numbers:

// Fill an array of length N with random numbers in [0, 1)
return Array.from({ length: N }, () => Math.random())

N is the size of the array: 2000, 20000, or 200000.

The results

Benchmark 1 results
2000 items and 1 transformation

For the smallest array and only one transformation, regular array transformations are much faster. This isn't surprising, though, and I was suggesting iterator helpers for huge arrays, not small ones.

Benchmark 2 results
20000 items and 1 transformation

As the size increases, the gap shrinks substantially. This seems to support my theory that iterator helpers have an advantage for huge arrays.

Benchmark 3 results
200000 items and 1 transformation

The gap shrinks further, but only slightly this time; with a single transformation, iterator helpers will probably never catch up to regular array transformations. Still, it looks like temporary array allocations do carry a real performance cost.

Now let's see what happens with much deeper transformation chains. I predict much better relative results for the iterator helpers, because a deeper chain means more temporary array allocations for the regular array transformations.
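
As a sketch, a 5-step chain done both ways might look like this, reusing the numbers array from above (the specific operations are placeholders I picked for illustration; the point is the chain depth):

// Regular array methods: every step allocates a full temporary array,
// so a 5-step chain creates 4 intermediate arrays plus the final result
const viaArrays = numbers
  .map((x) => x * 2)
  .map((x) => x + 1)
  .map((x) => x * x)
  .map((x) => x - 3)
  .map((x) => x / 2)

// Iterator helpers: the steps form one lazy pipeline over the source,
// and only toArray() at the end allocates an array
const viaIterators = numbers.values()
  .map((x) => x * 2)
  .map((x) => x + 1)
  .map((x) => x * x)
  .map((x) => x - 3)
  .map((x) => x / 2)
  .toArray()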

Benchmark 4 results
2000 items and 5 transformations

As in the 2K single-transformation case, the iterator helpers perform worse, but the gap is not as dramatic. It looks like the deeper chain causes more temporary array allocations for the regular array transformations, and they start to suffer for it.

Benchmark 5 results
20000 items and 5 transformations

The result is similar to the 20K single-transformation case: both approaches are slower in absolute terms, but the relative ratio between them stays very similar.

Benchmark 6 results
200000 items and 5 transformations

And finally, the iterator helpers slightly outperform the regular array transformations. Honestly, my main point was about reducing memory usage rather than execution speed, but I'm pleasantly surprised that iterator helper transformations can actually be faster than regular array transformations for huge arrays and deep transformation chains. I also tested with perf.link and got a similar result.

Conclusion

On Chromium-based browsers, iterator helper transformations are not only potentially more memory-efficient than regular array transformations (although measurements are needed), but can also be faster, which is a bit surprising. Of course, the results may differ somewhat on other machines, since CPUs, operating systems, browser engine optimizations, etc. all affect execution speed, but the overall pattern holds: iterator helper transformations catch up to or even beat regular array transformations as the array gets huge and the transformation chain gets deeper.




