beginner question: why only one tone mapper?

This is a question of semantics. The dynamic range of the (scene-referred) capture is always the same because it is rooted in physics. The output dynamic range of the display-referred part of the equation is, of course, subject to all modules applied to the image.

Correct. There is the dynamic range of the scene in the physical sense (the dynamic range of the physical light available), and the dynamic range of the digital image as it moves through the pixelpipe, which is what I am talking about.

We can change the dynamic range of an image whether we are in a scene-referred or a display-referred pipeline. The difference is that compressing and then uncompressing the dynamic range will cause data loss in a display-referred workflow.

Well, no. That order, using float values (as darktable does internally), should not cause data loss. Of course, if you are working with int values (8 or 10 bits per channel), you will get data loss, but that's due to quantization, not to the display-referred workflow.
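To make the quantization point concrete, here is a minimal sketch (the pixel values and the round-to-8-bit step are illustrative assumptions, not darktable's actual pipeline): a compress-then-expand round trip is lossless in float, but loses information once values are quantized to 8-bit integers in between.

```python
import numpy as np

# Hypothetical display pixel values in [0, 1] (illustrative, not from darktable):
values = np.array([0.1, 0.33, 0.7, 0.99], dtype=np.float32)

# Halve the dynamic range, then double it again, staying in float:
# multiplying by 0.5 and then 2.0 is exact in IEEE floating point.
float_roundtrip = (values * 0.5) * 2.0

# Same round trip, but quantizing to 8-bit integers between the two steps:
compressed_8bit = np.round(values * 0.5 * 255).astype(np.uint8)
int_roundtrip = compressed_8bit.astype(np.float32) / 255 * 2.0

print(np.max(np.abs(float_roundtrip - values)))  # exactly 0: no loss in float
print(np.max(np.abs(int_roundtrip - values)))    # small but nonzero quantization error
```

The float path recovers the input bit-for-bit; the 8-bit path does not, which is the "int values cause data loss" part of the argument.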

If you first expand and then recompress data in a pure display-referred workflow, you can lose any values pushed outside the range 0…1 by the expansion (can, not will, even with float values; it depends on whether the module clips).
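A minimal sketch of that failure mode, assuming a hypothetical display-referred module that clips its output to [0, 1] (darktable's actual modules vary, which is the "can, not will" caveat):

```python
import numpy as np

# Hypothetical display-referred pixel values in [0, 1]:
values = np.array([0.2, 0.5, 0.8, 0.95], dtype=np.float32)

# Expand by one stop (multiply by 2); this assumed module clips to [0, 1]:
expanded = np.clip(values * 2.0, 0.0, 1.0)

# Recompress by one stop: anything that was clipped cannot be recovered.
recompressed = expanded / 2.0

print(recompressed)  # 0.8 and 0.95 both come back as 0.5
```

With a module that keeps unbounded float values instead of clipping, the round trip would be lossless, exactly as argued above.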

Yes, that makes sense. In my head, I wasn't accounting for float values. You are right that expanding and then compressing is the more probable way of losing information.

Note that by the above definition, the "dynamic range" of every image that contains a single black pixel is ∞ :wink:
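The quip follows directly from the usual ratio definition of dynamic range (brightest value over darkest value, here expressed in stops; the function name is mine, not any library's):

```python
import math

def dynamic_range_stops(pixels):
    """Naive dynamic range: log2 of brightest / darkest pixel value."""
    return math.log2(max(pixels) / min(pixels))

print(dynamic_range_stops([0.01, 0.5, 1.0]))  # ~6.6 stops

# With a single true-black pixel the ratio divides by zero,
# i.e. the "dynamic range" blows up to infinity:
# dynamic_range_stops([0.0, 0.5, 1.0])  -> ZeroDivisionError
```

Which is why practical definitions use the noise floor, not literal zero, as the darkest value.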

(To be fair, dynamic range is probably the most abused concept in photography/imaging)