scene-referred modules for working with HDR images

Following advice in various topics here, I figured out a convenient workflow for creating HDR images from RAW files (apply just lens & CA corrections, export to 32-bit TIFF using linear Rec2020, then run Hugin’s `align_image_stack -l *.tiff -o output.hdr`, and import the result).

But I am still learning how to work with these images in the scene-referred workflow — most HDR tutorials I found address the display-referred one.

I realize that a lot of choices are artistic and subjective, but suggestions about what others find useful would be appreciated.

What I currently use in Darktable 3.6:

  • filmic for some initial adjustments, but only a modicum of dynamic range compression
  • local contrast: it has a local tone mapping mode which is a great starting point for HDR
  • tone equalizer
  • color balance rgb, especially perceptual brilliance (add to shadows, take from highlights)

You’re on the right track. In general, the scene-referred workflow in darktable has been designed to address high dynamic range.

To balance the brightness of certain regions, I also find several exposure module instances with drawn masks to be helpful. Tone equalizer also works pretty nicely in many cases.

And yes, color balance rgb is great for fine-tuning after the coarser adjustments.


The whole point of scene-referred is to remove the difference between HDR and SDR. Both should work the same, using the same modules, it’s only in filmic that your input DR (scene white and black relative exposures) will be larger than usual. Everything else should behave just the same.
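To make the “larger than usual” part concrete, here is a rough Python sketch (my own illustration, not darktable code) of how the scene white/black relative exposures are just EV distances of the scene extremes from middle grey, assuming darktable’s default 18.45 % middle grey:

```python
import math

def relative_exposures(scene_min, scene_max, grey=0.1845):
    """Illustration only: white/black relative exposures expressed as
    EV offsets of the scene extremes from middle grey (18.45 %)."""
    white_ev = math.log2(scene_max / grey)
    black_ev = math.log2(scene_min / grey)
    return white_ev, black_ev

# A typical SDR-ish scene spanning roughly 7.6 EV:
print(relative_exposures(0.005, 1.0))
# An HDR merge with bright highlights: same formula, wider range:
print(relative_exposures(0.0005, 16.0))
```

The HDR case just produces bigger numbers to type into filmic; nothing else about the pipeline changes.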

The art is in the settings; the tools themselves are far from artistic and subjective.

Is it possible to have a larger EV range in tone equalizer (for sliders and the curve)? -8 to 0 EV does not always cover the HDR range I have in my images.

I am aware of this, so let me rephrase the question: I am asking about tools in the scene-referred workflow to compress the luminance range of the image to something that works well on SDR or current HDR display technology.

Exactly — I would like to learn which tools people use for this purpose, and how. There are a lot of tutorials using the display-referred workflow out there, but I could not find anything recent.

That 8 EV range is the range of the mask, not of the underlying image. The masking parameters help you adjust the mask to the EV range of your image (basically a mapping of image tonal values to mask values).

Isn’t that the job of the filmic module?


Maybe the question was not clear: I am missing the controls for the whole range. Talking about this:


As said above, I find that filmic is great for the initial adjustment, but using it for the whole compression results in images which are “flat” and/or have lost detail.

Did you see the description in the manual for the tone equalizer module? The controls in the masking tab are there to select how much of your image’s tonal range is covered by the mask…

Then there is the local contrast module to correct that.

Keep in mind that any global compression of your tonal range (which is needed to get an HDR image to display on an SDR medium) will reduce the local contrast as well; this is unavoidable. So I find that I usually have to apply local contrast to correct for that — it’s simply a consequence of globally compressing the tonal range. (Or you use “tricks” like tone equalizer.)
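A toy numeric example (a simple power curve standing in for any global tone curve, not filmic specifically) shows why this happens:

```python
import math

# Two neighbouring pixel values 1 EV apart, somewhere in the highlights.
a, b = 4.0, 8.0

def compress(x, power=0.5):
    # Stand-in for a global tone curve: squeezes the whole range.
    return x ** power

# Local contrast, expressed in EV, before and after global compression:
before = math.log2(b / a)                     # 1.0 EV
after = math.log2(compress(b) / compress(a))  # ~0.5 EV
```

The 1 EV step between the two pixels shrinks to about 0.5 EV after compression — exactly the flatness that local contrast then has to restore.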

My problem is that the mask contrast compensation slider is not showing up for me at all:


Darktable 3.6.0. Should I enable it somewhere? How?

OK, reading the source clarifies that it is only enabled for certain detail preservation settings.

You can also use more than one instance. Use exposure to shift the mask, with one instance for highlights and one for shadows. Then you can tweak the mask to give you the most resolution in each tonal range rather than trying to do it all with one instance. This also lets you keep the curve adjustments more subtle, and I find this gives a nicer effect.


You need to conceptually split the tone mapping in your head.

There is the scene → display remapping of code values. The scene may be HDR and the display may be HDR or SDR (for now, only SDR is supported by the Linux video/graphics stack). Filmic does that.

There is also dodging and burning, which was already done in the analog darkroom, and consists of balancing the luminance of picture areas relative to each other. This is not directly related to the display, but to the overall tone balance in the image, and uses local masks. It can be achieved with masked instances of the exposure module, or with the tone equalizer, which is sort of a shortcut for several stacked instances of the exposure module.
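The mechanics of one masked exposure instance can be sketched in plain Python (hypothetical names, illustration only, not darktable’s code):

```python
def dodge_burn(pixels, mask, ev):
    """Toy sketch of one masked exposure instance: scale linear values
    by 2**ev, blended in by a mask in [0, 1] (1 = fully affected)."""
    gain = 2.0 ** ev
    return [p * ((1.0 - m) + gain * m) for p, m in zip(pixels, mask)]

pixels = [0.05, 0.2, 0.8]      # linear scene-referred values
shadow_mask = [1.0, 0.3, 0.0]  # drawn/parametric mask over the shadows
lifted = dodge_burn(pixels, shadow_mask, ev=1.0)  # dodge shadows by +1 EV
```

The tone equalizer effectively stacks several such gains, one per tonal zone, with the mask derived from the image luminance instead of being drawn by hand.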

The EV values in tone EQ are nothing but an arbitrary splitting of the image luminance into 9 zones. They don’t refer directly to the scene luminance values. You distribute the scene luminance range as you wish between these 9 zones using the mask contrast and exposure compensations.


Probably I am confused because filmic comes before tone equalizer in the pipeline. So does the latter work with the output of the former, or does mapping to display happen somewhere else?

I reread the manual and this is described there correctly, but I think that using EV as a unit here is somewhat confusing, since the scale is no longer \log_2.

Filmic does not come before tone equalizer in the pipeline. Tone equalizer comes immediately after exposure in the pipeline (so really early on).

Dumb question: so the ordering of the pipe is from bottom to top (as shown in the UI)?

What about module groups, is it right to left or left to right?

The ordering of the pipe is from bottom to top. The groups are unordered (they are just grouped by type of module - and you can create your own custom groups containing whatever modules you like). However, within each group the (bottom to top) module order is maintained.

Thanks. I realize that I can see the order of active modules in the relevant tab, but if I wanted to find out the ordering of all modules, how would I go about it?

Click the tab a second time, or look at the user manual.


The scale is \log_2(luminance), aka EV, because human perception is roughly logarithmic (Weber–Fechner law) and it is more manageable to control the image in log space. Also, the zones refer to the Ansel Adams Zone System, which is likewise defined in EV. But notice that only the control is done in log; the signal is still linear. We use a model-view-controller paradigm here, so don’t assume that what you see in the GUI is 1:1 what is being processed in the filter.
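That model/view split can be sketched as follows (my own illustration, not darktable’s actual code): the GUI talks in EV, while the filter multiplies linear values.

```python
import math

GREY = 0.1845  # darktable's default middle grey (18.45 %)

def to_ev(luminance):
    # View side: linear luminance shown as EV relative to middle grey.
    return math.log2(luminance / GREY)

def apply_gain(luminance, ev_gain):
    # Model side: the filter stays linear; an EV offset chosen in the
    # GUI becomes a plain multiplication of the signal.
    return luminance * 2.0 ** ev_gain

deep_shadow = GREY / 4                 # shown as -2 EV in the GUI
lifted = apply_gain(deep_shadow, 1.0)  # +1 EV just doubles the value
```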