Guiding laplacians to restore clipped highlights

More results. The first image is the raw with WB and exposure compensation applied to magnify the highlights, the second is the same plus highlights reconstruction with guided laplacians, and the third is the same plus filmic highlights reconstruction on top.

TL;DR :

  1. In situations where the 3 channels are all clipped over large regions, it does nothing, which is a graceful way of failing. It doesn’t seem to create artifacts otherwise.
  2. Large regions will need more steps of filmic HL reconstruction to propagate color from further away, which is expected. In any case, color is not defined at this point in the pipeline, before full CAT and profiling, so it needs to be handled later.
  3. Having at least one non-clipped channel is a huge help.
  4. The diffuse and sharpen module could be easily modified to introduce an iterative texture synthesis mode.

Side-note: the scene-referred framework is of the essence here. In display-referred, reconstruction methods try to darken the blown areas to make them fit the display range. This voids the cognitive relationship of relative luminance: blown areas are expected to be much brighter than their surroundings. Dealing with dynamic range later in the pipeline lets us enforce this cognitive relationship by brightening blown areas as they should be.


third is too dark imho…


They are all too dark; the point is to display the highlights without tone mapping kicking in. If tone mapping gets mixed in, details will be compressed and people will not see what I want to show.


I have an image that has a large clipped cloudy sky (as 6 years ago I didn’t know how to expose properly), and I’d like to test the module.

  1. I’m currently on master. A simple git question: how do I change the branch, and do I need to delete my build directory and rebuild dt from scratch?
  2. Where do I find the new module, i.e. what is it called?

My third question is more complicated, so I decided to give it its own post.

  3. The sky is part of a panorama. I have currently stitched and developed the image according to @anon41087856’s instructions (first generate linear TIFFs with practically no processing, stitch, and then apply color balancing, exposure, etc.). Does the same apply with the new module, or do I need to do the highlight recovery before stitching? I think I saw on github that the module was applied on a non-mosaiced image. Do I need to then also apply WB? And if so, should I then fix the exposure so that it compensates for the highest multiplier in WB? (In my case, for an old Canon 600D, it has red 2.5, so applying exposure -2.5 EV?)
  1. Download master (not needed in your case, but if anyone else wants to try):
git clone --recurse-submodules --depth 1 https://github.com/darktable-org/darktable.git
cd darktable
  2. Make sure Git is on master:
git checkout master
  3. Refresh or update master (not needed if you just did step 1, but probably needed in your case):
git pull master

or

git pull origin master

depending on how the remote repository is declared.
  4. Merge my experimental branch on top of master:
git pull https://github.com/aurelienpierre/darktable.git highlights-reconstruction-guided-laplacian
  5. Build dt in a testing environment:
./build.sh --prefix /opt/darktable-test --build-type Release --install --sudo
  6. Start dt within the testing environment:
/opt/darktable-test/bin/darktable --configdir "$HOME/.config/darktable-test"
  7. Be sure to NOT write redundant sidecar XMP files from your testing version.

Preferably, yes. Rebuild from scratch:

cd darktable
rm -R build
rm -R /opt/darktable-test
./build.sh --prefix /opt/darktable-test --build-type Release --install --sudo

It’s the same old highlights reconstruction, not a new module. The new mode is called “guided laplacians”.


This reconstruction method should be (more or less) immune to WB, but I have not tested how it behaves on stitched panos, so I can’t answer.

Thanks for a quick answer. I will report back.

And if I understand correctly, I should apply it to the RAWs, not to the stitched TIFF?

Yes, apply it on the RAW indeed.

I’m wishing with all my strength for a first version to be included in a beta so I can start trying it… It finally seems like a GREAT solution for highlights in dt, and a big step for our beloved software.
THANKS!

Finally managed to finish the first round of testing your nice and interesting work @anon41087856!
So nice to see some action toward a proper scene-referred highlights reconstruction!

But first a question regarding this statement:

Why do you assume the color to be the color of a fully clipped sensor? Why not just assume that we know nothing and simply propagate/blur in whatever color we have around the area? That seems like the least bad option to me. A better option would be to add the possibility to place “color seeds” in areas of unknown color that could then propagate out in that area. That would of course require more work and could probably be left for a later version, but I figured I could throw the idea out anyway. (Gradients could, btw, also be transferred from unclipped areas to clipped areas with retouch-like tools.)

My observations

  1. The reconstruction method alters unclipped channels. I find that a bit peculiar as we know those values to be correct. If this happens because of diffusing the color (RGB ratio) then I think it would be better to instead push the clipped channel further “up”.
  2. All channels in reconstructed areas have more noise than the input. I actually couldn’t see this in the actual image so I could live with this side effect if necessary for clipped channels. But I find it a bit strange.
  3. I have multiple times seen that the second clipped channel (the middle channel) is reconstructed by lowering its values below its clipped limit. We know that clipped channels are larger than the clipping threshold and I would expect this knowledge to be used. Especially in a scene-referred reconstruction context where brighter is the expected proper result. (Circled in the graph)
  4. It seems like you are overall too careful in the reconstruction and don’t push the clipped channel(s) enough. I think this is why we see the magenta cast in pretty much all your test images earlier in this thread. Magenta is the opponent color of green and green has been the first to clip in all those images. I painted how I would eye-ball guess the curvature in this particular case in the provided graph.

All of these things are present in this image:
IMG_7282.CR2 (23.3 MB)
This file is licensed Creative Commons, By-Attribution.

How I made this graph:

  1. Set input profile, working profile, and output profile to Rec2020 (pass-through) and disable the white balance so that we can plot the actual camera pixel values.
  2. Export the image as .exr both with and without highlight reconstruction.
  3. Open both with a quick python script that plots the RGB channels from before (darker colors) and after (brighter colors); see the sketch just below. :slight_smile:
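
A rough sketch of what such a script can look like (the file names, the scanline index, and the use of OpenCV to read the EXRs are assumptions for illustration, not the exact script used here):

# Plot one scanline of the RGB channels from the two EXR exports,
# before and after highlight reconstruction.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import matplotlib.pyplot as plt

ROW = 1500  # scanline crossing the clipped region, picked by hand

before = cv2.imread("no_reconstruction.exr", cv2.IMREAD_UNCHANGED)   # float32, BGR order
after = cv2.imread("with_reconstruction.exr", cv2.IMREAD_UNCHANGED)  # float32, BGR order

# darker colors = before, brighter colors = after (cv2 stores channels as B, G, R)
for idx, color, name in [(2, "darkred", "R"), (1, "darkgreen", "G"), (0, "darkblue", "B")]:
    plt.plot(before[ROW, :, idx], color=color, label=f"{name} before")
for idx, color, name in [(2, "red", "R"), (1, "lime", "G"), (0, "blue", "B")]:
    plt.plot(after[ROW, :, idx], color=color, label=f"{name} after")

plt.xlabel("column (px)")
plt.ylabel("linear pixel value")
plt.legend()
plt.show()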

Next up, do the same for the other method!


We don’t assume anything, it just turns out that when large missing regions are found, output = input because the diffusion can’t reach far enough to propagate.

Did you try tuning the clipping threshold? The clipped pixels are masked, and then the mask is blurred by a 5×5 box blur, otherwise edge artifacts may be created.
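
For illustration, a minimal numpy/scipy sketch of that masking step (the function name and threshold are assumptions, not darktable’s actual code):

import numpy as np
from scipy.ndimage import uniform_filter

def feathered_clipping_mask(rgb: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """rgb: (H, W, 3) float image; returns a mask in [0, 1]."""
    # 1 wherever any channel reaches the clipping threshold, 0 elsewhere
    hard = np.any(rgb >= threshold, axis=-1).astype(np.float32)
    # the 5×5 box blur feathers the mask so the reconstruction blends in without hard edges
    return uniform_filter(hard, size=5)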

That’s by design; you will find a noise level slider to adjust the amount of noise. It’s not strange if you consider that images have noise anyway: if your highlights reconstruction is completely clean while the rest is noisy, that’s when it looks odd.
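
A toy illustration of that idea (an assumed formula, not the module’s code): inject a little Gaussian noise into the reconstructed pixels, scaled by the user-facing noise level, so the smooth diffused area matches the grain of the rest of the frame.

import numpy as np

def regrain(reconstructed: np.ndarray, mask: np.ndarray, noise_level: float) -> np.ndarray:
    """mask: 1 inside reconstructed areas, 0 elsewhere (broadcastable to the image)."""
    rng = np.random.default_rng()
    noise = rng.standard_normal(reconstructed.shape)
    return reconstructed + mask * noise_level * noise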

That might be a by-product of diffusing chroma…

I can’t go harder than adding 1 × Laplacian without overshooting. As with any diffusion process, to diffuse further you need more iterations…
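
A toy version of that constraint (explicit single-scale diffusion in numpy, not darktable’s multi-scale solver): the step size of an explicit scheme is capped for stability, so reaching further into a clipped area takes more iterations rather than a bigger step.

import numpy as np
from scipy.ndimage import laplace

def inpaint_by_diffusion(image: np.ndarray, mask: np.ndarray, iterations: int) -> np.ndarray:
    """image: 2D float array; mask: 1 inside the clipped region, 0 elsewhere."""
    out = image.copy()
    step = 0.25  # stability limit of this explicit 2D scheme; larger steps overshoot and oscillate
    for _ in range(iterations):
        # only masked pixels are updated; values diffuse in from the unmasked boundary
        out += step * mask * laplace(out)
    return out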


OpenCL kernels available today.

I may have fixed this in https://github.com/darktable-org/darktable/pull/10711/commits/659782c84bff33ff5b1af7a315351332093445dd, where I choose the best guiding candidate channel with variance analysis rather than min magnitude of LF channels before white balance.
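
A possible reading of “variance analysis”, sketched in numpy (the helper and its criterion are assumptions, not the actual implementation): among the channels that still have unclipped samples around the hole, pick the one with the largest local variance as the guide, since it carries the most usable detail.

import numpy as np

def pick_guide_channel(patch: np.ndarray, clip_threshold: float = 1.0) -> int:
    """patch: (H, W, 3) float neighbourhood around the clipped area; returns a channel index."""
    variances = []
    for c in range(3):
        unclipped = patch[..., c][patch[..., c] < clip_threshold]
        # a fully clipped channel cannot guide anything
        variances.append(unclipped.var() if unclipped.size > 1 else -np.inf)
    return int(np.argmax(variances))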


Awesome! Will try to take a look soon :slight_smile:
And thanks for the answer to the points I made, I have some follow-up questions that I want to explore later.

And as promised, the same test but for @Iain and @hannoschwalm’s method!

  1. There is strong enforcement of color bleeding (uniform per patch, even?) that sometimes works and sometimes causes artifacts. The artifact is visible in the same image as I used in my previous post, but I graphed a line slightly lower down, as the artifact is more visible there. This causes the green channel to be too high for large portions of this image, resulting in the green tint.
  2. The gradient propagation seems pretty good otherwise, kind of what I had expected. (Best tested with bracketed exposures and the same kind of graph, or a per-pixel difference heat map.)
  3. Unclipped channels are changed by this method as well, especially at the border of clipping for one of the channels.
  4. I think border handling is probably the weakest point of this method right now. That is where I can easily spot artifacts that should be avoidable. I think that enforcing, at least, first-order smoothness could go a long way toward solving that issue!

It’s interesting to compare them, I have a harder time figuring out what happens with this one and the artifacts are different.


This is a particularly difficult image for our method.

Firstly, because the colour channels are not well correlated in the sky. We assume the clipped area is one colour, and the relationships between the channels are not changing. Changes in colour over the clipped area mean it works on one side of the clipped region but not the other. This is why you can see the boundary quite clearly in some places.

Secondly, we assume most detail is in the minimum of RGB, which it isn’t for this image. The clouds are just about invisible in the blue channel.
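
A toy sketch of what those two assumptions amount to (the masks and helper names are assumptions, not the actual code): estimate per-channel ratios against the minimum of RGB from a ring of valid pixels around the hole, then rebuild the clipped channels as ratio × min(RGB).

import numpy as np

def rebuild_from_min(rgb: np.ndarray, clipped: np.ndarray, ring: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3); clipped/ring: boolean masks of the clipped area and a
    surrounding band of unclipped pixels."""
    out = rgb.copy()
    min_rgb = rgb.min(axis=-1)  # assumed to stay unclipped and to hold most of the detail
    for c in range(3):
        # channel / min ratio assumed constant over the whole clipped patch
        ratio = np.median(rgb[ring, c] / np.maximum(min_rgb[ring], 1e-6))
        out[clipped, c] = ratio * min_rgb[clipped]
    return out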


This is a particularly interesting detail :slight_smile: For sure we don’t replace unclipped Bayer data, but we do always increase the data for a photosite. The different demosaicers take this data and propagate it into the surrounding pixels too; this depends on the chosen demosaicer, of course. Yes, we have to improve this part.

I don’t know if I understand fully. (By “borders” you probably mean the border of a clipped area.) I am not sure if smoothing is the right choice here.

Forgot about the demosaicer yesterday! Interesting how one channel is able to make another overshoot a lot!

I only meant that the function should be continuous at the border of the clipped regions. No smoothing required :slight_smile: That probably means that the uniform color assumption has to be relaxed a little bit for a case like this. For example, we could assume the color to be uniform only locally; the size/strength of that patch could be a user variable if there is no logical constant.

Why did you pick the minimum? The largest non-clipped channel will have a higher SNR, and my guess is that it might correlate better with the clipped channel just by “proximity”. Another problem with doing a hard pick on which gradient to use is that they might swap places (as in the graph for the Guided Laplacian method). Question: are there any downsides to using a weighted average or similar to pick the gradient that gets used? The middle channel could still be used but smoothly disregarded as it approaches clipping levels.
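
A quick sketch of that weighting idea (hypothetical, not code from either method): blend the candidate gradients with a weight that fades a channel out smoothly as it approaches the clipping level, instead of making a hard pick.

import numpy as np

def channel_weight(value: np.ndarray, clip: float = 1.0, soft: float = 0.2) -> np.ndarray:
    """1 well below the clip level, falling smoothly to 0 at the clip level."""
    return np.clip((clip - value) / (soft * clip), 0.0, 1.0)

def blended_gradient(grads: np.ndarray, values: np.ndarray) -> np.ndarray:
    """grads, values: (3, H, W) per-channel gradients and pixel values."""
    w = channel_weight(values)
    return (w * grads).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-6)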

Initially because I was extrapolating data into regions where all channels are clipped.
This is currently not in the Darktable implementation.

Using the largest non-clipped channel creates problems at the boundary between areas with only one clipped channel and areas with two clipped channels.

Regarding alternatives like a weighted average: I have tried one, but could not get it to work. I can’t remember why.

One thing I have done previously is to sort the R, G and B values per pixel, creating images for the minimum, median and maximum.

Then I recover the clipped regions of each of those. This allows me to average all RGB channels without worrying about including clipped pixels.
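
A small sketch of that sorting step (assumed array layout, not the original code): per pixel, reorder R, G, B into minimum / median / maximum planes, which can then be reconstructed independently before averaging.

import numpy as np

def sorted_planes(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) -> (H, W, 3), last axis sorted per pixel as (minimum, median, maximum)."""
    return np.sort(rgb, axis=-1)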



here’s the vkdt render. seems the white clipping threshold is set very “conservatively”, i needed to lower it a fair bit (see slider) to get rid of blown magenta. highlight reconstruction 1.287ms on 2080Ti.


That’s weird, the actual clipping level when I read it out is 1.021 +/- a tiny bit of noise for both the red and the green channel… Maybe you get the magenta cast for the same reason as in Guided Laplacians, not pushing the clipped channel up enough. Are you also transferring linear gradients 1:1?