Highlight reconstruction module bug?

In darktable 3.4, I have noticed that the color reconstruction option in the highlight reconstruction (HL) module produces some geometric weirdness in the sky. The screenshot below shows a 100% zoom of a photo I was editing: on the left “reconstruct in LCh”, on the right “reconstruct in color”:

I don’t know whether this also happened in 3.2; I never noticed it there, anyway.

This is the entire photo:

To recreate the situation, here’s the raw and xmp file:

20201031_FUJ5611.raf (26.6 MB)
20201031_FUJ5611.raf.xmp (10.6 KB)

Let me know if this is a known bug, or whether it’s something that should be reported on GitHub (I had a look at the recent issues but couldn’t find anything related).

It has never worked.

Thanks @age. I probably went too fast in my github search… apologies for this. I will refrain from using the reconstruct in color option in the future…

Keep in mind that now we also have a reconstruction tab in filmic.

As an aside, why would you want to reconstruct clouds in colour in a daylight scene?

This is a technical limitation of the “reconstruct color” algorithm. Personally, I avoid such artefacts by setting highlight reconstruction to “reconstruct LCh”, using color reconstruction early in the pipeline (immediately above input color profile), and moving the exposure module just above color reconstruction. That last step makes the color reconstruction independent of exposure adjustments.

Screenshot from 2021-01-04 10-46-23

Of course you can do this, but it doesn’t make sense as a general solution. The highlight reconstruction module is intended to work on the raw data, dealing with the raw information contained in the pixels. It doesn’t even deal with colors, since it runs before demosaicing.

I think you didn’t read it carefully: I do not change the location of the highlight reconstruction module in the pipeline, so it’s still handling “mosaic” data. It’s not even possible to move it above the demosaic module!

Yes, and in fact I use it as well. However, I have read various posts on the subject that leave some room for interpretation; it’s not as clear-cut as “stop using the HL module because it’s deprecated”. As I understand it, the HL module and the filmic reconstruction can work together, depending on the situation.

On that point you’re of course right, and that’s probably why I hadn’t noticed it before… I was just playing with the modules…

By placing colour reconstruction below exposure (actually, at any part of the scene-referred portion of the pipe), do you not risk applying it to pixels which are not overexposed, but have components > 1 because of raw value + white balance + input colour profile? By reducing exposure, you’d get the real colours. By using filmic, even without reducing exposure, you’d also get the real colours. By using colour reconstruction, you handle them as overexposed, propagating (guessing) colours from other parts of the photo.
This module assumes display-referred, low-dynamic-range input and provides such output, so I think you’re losing data if you put it in the scene-referred part of the pipe.
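A toy numeric sketch of that risk (all values are hypothetical, assuming a simple multiplicative white-balance model; this is not darktable code):

```python
# A sensor value below the clipping point can still exceed 1.0 after
# white balance -- reducing exposure brings its real colour back.
# A value that clipped at the sensor cannot be undone the same way.
raw_red = 0.70            # below the sensor's clipping point (1.0)
wb_red_coeff = 1.8        # made-up daylight white-balance multiplier

wb_red = raw_red * wb_red_coeff        # 1.26 > 1.0, but NOT clipped
restored = wb_red * 2 ** -1.0          # -1 EV exposure: 0.63, real colour

clipped_red = 1.0 * wb_red_coeff       # sensor clipped at 1.0: info is gone
darkened = clipped_red * 2 ** -1.0     # darker, but still the wrong colour
print(restored, darkened)
```

So a pixel above 1.0 in the scene-referred pipe is not necessarily overexposed, which is why treating it as clipped and painting in guessed colours can replace real information.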

I don’t really use it. Have you ever been able to recover details? So far, I have only used it to soften (blur) the hard boundaries of blown areas. Can either of you post an example (raw + xmp)?

I also use it for minimal reconstruction around blown areas, usually for daylight photos where I wasn’t careful enough at the moment of capture. I will look later if I can find a good example.

You can’t reconstruct missing information; you can only mitigate the effect of clipping (fading it out).

At best, you can recover some (grayscale) texture where only one channel is clipped. For the rest, you can’t recover what isn’t there, so correct colour is out, and if more than one channel is clipped, texture becomes difficult as well.
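The single-clipped-channel case can be sketched as a channel-ratio guess (a common trick in raw processing generally, not darktable’s actual implementation; the function and values here are made up):

```python
def reconstruct_clipped_channel(clipped_px, neighbour_px, clip=1.0):
    """Guess a clipped red value from the red/green ratio of an
    unclipped neighbour. Only fires when red is clipped and green
    still carries usable (unclipped) texture."""
    r, g, b = clipped_px
    nr, ng, nb = neighbour_px
    if r >= clip and g < clip and ng > 0:
        r = g * (nr / ng)   # propagate the neighbour's red/green ratio
    return (r, g, b)

# Red clipped at 1.0, green still carries texture:
# guessed red = 0.8 * (0.9 / 0.6) = 1.2, i.e. brighter than the clip
# point -- which is why tone compression later is still needed.
print(reconstruct_clipped_channel((1.0, 0.8, 0.7), (0.9, 0.6, 0.5)))
```

Once two or three channels are clipped there is no ratio left to propagate, which matches the point above: colour is gone, and texture soon after.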

The filmic version does make it easier to control the transitions, and I have managed to get some texture back in some cases. The masking settings can be finicky though. And, those areas are usually highlights, so you also have the tonal compression, which tends to hide finer details anyway.

Sure, I know that. Let me phrase it differently: I have been able to use it to improve the situation, but only when setting the ‘bloom/reconstruct’ slider towards ‘bloom’. https://darktable-org.github.io/dtdocs/module-reference/processing-modules/filmic-rgb promises that if only one channel is blown, some detail can be recovered:

structure/texture
Use this to control whether the reconstruction algorithm should favor painting in a smooth color gradient (structure), or trying to reconstruct the texture using sharp details extracted from unclipped pixel data.

bloom/reconstruct
Use this to control whether the algorithm tries to reconstruct sharp detail in the clipped areas, or whether it should apply a blur.

First of all, I do know this is not an ideal solution, but it has given me the best outcome in most cases, compared to all other approaches/tricks currently possible in darktable.

Yes, although it’s not a big deal (it might even help achieve a smooth transition), this could happen to a limited number of pixels. But you can control/avoid it by tweaking the threshold slider of color reconstruction. You can also use parametric masks to exclude those areas/pixels.
After all, since the tones of these very bright areas will be compressed a lot by filmic/the base curve, there is no need to be precise at this stage (before applying those tone-mapping modules).
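The threshold idea can be sketched as a feathered mask (illustrative only; the function name and parameters are made up, not darktable’s actual controls): reconstruction is applied fully above the threshold, not at all below it, with a short ramp in between.

```python
def reconstruction_mask(luminance, threshold=0.9, feather=0.05):
    """0 below the threshold, 1 above threshold + feather, and a
    linear ramp in between. Pixels under the threshold keep their
    original (real) colours untouched."""
    if luminance <= threshold:
        return 0.0
    if luminance >= threshold + feather:
        return 1.0
    return (luminance - threshold) / feather

for lum in (0.5, 0.92, 1.0):
    print(lum, reconstruction_mask(lum))
```

Raising the threshold shrinks the region where guessed colours can overwrite real ones, which is exactly the control being described above.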

As far as I know, you cannot get the real colors back by applying negative exposure values, because at least one color component is clipped in overexposed regions. Even if I’m wrong about that, you still have the threshold/parametric-mask options mentioned above.

This is correct, because it works in Lab space. However, here you just want to reconstruct colors (not tones, which are already recovered as far as possible by the “reconstruct LCh” mode of highlight reconstruction). Again, you can apply the color reconstruction module in either the “color” or “Lab color” blend mode to make sure it doesn’t touch the tone values.

I’m not disputing that, I was just surprised. I’ll keep the trick in mind, and will try it the next time I run into clipping.

True, but I meant that colour reconstruction would propagate guessed colours even to pixels that were not really clipped, whereas reducing exposure would restore the real colours.

You are right about the possibility to exclude areas using the tool’s sliders and masks, though.

Thanks for sharing!


I found a non-daylight example where the reconstruction tab in filmic is better than the HL module. We’re talking about bokeh polygons here, and I know there’s another discussion around here where people are discussing something similar.

Anyway, this is a crop of the interesting area with the HL module set to “reconstruct LCh”:

and this is with the module deactivated and only filmic working its magic, with the reconstruction-tab options set (approximately) to threshold = -1 EV, transition = +1.9 EV, and the balance sliders set to +30 (structure/texture), -70 (bloom/reconstruct) and +100 (grey/colorful).
