Guiding laplacians to restore clipped highlights

Neither Aurelien’s nor Hanno’s work on clipped highlights has been merged into darktable’s master branch yet.

If you want to test or play with them, you need to integrate them into master yourself (Hanno: #10716 and Aurelien: #10711). The linked PRs contain the URLs of their personal repos and the specific branches.

3 Likes

ART with color propagation

@hanatos, @anon41087856 : it looks to me like you two are attacking different problems? @hanatos and @Iain seem to be interested in handling large areas with one or more clipped channels (reflective areas, like walls in sunlight), whereas @anon41087856 is mostly interested in correcting fully clipped small-area highlights (mostly emissive).

If my interpretation is right, wouldn’t there be a place for both methods, each with its proper application?

1 Like

you probably mean @hannoschwalm, i’m kinda out of the competition because i have no plans for a dt pull request. not sure i see the difference between small and large clipped areas though.

Btw thanks for showing how you solved the inter-channel approximation. Will give it a try …

3 Likes

In a diffusion process, the size of the clipped areas determines the largest wavelet scale to be used and the number of iterations. It also increases the probability of unwanted color bleeding, so it changes the performance/quality trade-off.
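
For intuition, here is a toy sketch of that scaling, assuming a hypothetical scales_needed() helper (this is not darktable’s actual code): each wavelet scale roughly doubles the reach of the diffusion, so the scale count grows with the logarithm of the clipped-region diameter.

import math

# Toy model: each wavelet scale doubles the reach of the diffusion, so
# covering a clipped region of diameter d takes about log2(d) scales.
def scales_needed(region_diameter_px: int) -> int:
    return max(1, math.ceil(math.log2(region_diameter_px)))

print(scales_needed(8))    # small specular highlight: 3 scales
print(scales_needed(512))  # large blown sky: 9 scales, much more costly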

The code is no longer WIP and is considered finished. A couple of changes have been made to the method proposed above, regarding the sequence of the guiding steps, although the core stays the same.

9 Likes

…but how fast does it run? 🙂

re: the perf trade-off for more scales, this is the breakdown in vkdt on the slow laptop for this 36 MP image:

[perf] hilite   half    :	   3.797 ms
[perf] hilite   reduce  :	   1.442 ms
[perf] hilite   reduce  :	   0.385 ms
[perf] hilite   reduce  :	   0.115 ms
[perf] hilite   reduce  :	   0.049 ms
[perf] hilite   reduce  :	   0.015 ms
[perf] hilite   reduce  :	   0.009 ms
[perf] hilite   reduce  :	   0.009 ms
[perf] hilite   reduce  :	   0.011 ms
[perf] hilite   assemble:	   0.008 ms
[perf] hilite   assemble:	   0.004 ms
[perf] hilite   assemble:	   0.007 ms
[perf] hilite   assemble:	   0.016 ms
[perf] hilite   assemble:	   0.063 ms
[perf] hilite   assemble:	   0.186 ms
[perf] hilite   assemble:	   0.679 ms
[perf] hilite   assemble:	   2.629 ms
[perf] hilite   doub    :	   4.553 ms

i run the iterations until the resolution drops below 1 px… and i think it’s maybe capped at some max iteration count that is usually not reached. the lower resolutions are not free, but way < 1 ms.
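
For illustration, a rough sketch of the loop structure the log above suggests, with made-up names and an assumed cap (this is a guess, not vkdt’s actual implementation):

# Count the "reduce" passes: halve the resolution until it drops below
# 1 px, with an optional cap on the iteration count.
def pyramid_levels(width: int, height: int, max_levels: int = 16) -> int:
    levels = 0
    while min(width, height) > 1 and levels < max_levels:
        width, height = (width + 1) // 2, (height + 1) // 2
        levels += 1
    return levels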

2 Likes

More results. The first image is the raw with WB and exposure compensation applied to magnify the highlights, the second is the same plus highlight reconstruction in guided laplacians, and the third is the same plus filmic highlight reconstruction on top.

TL;DR:

  1. In situations where all 3 channels are clipped over large regions, it does nothing, which is a graceful way of failing. It doesn’t seem to create artifacts otherwise.
  2. Large regions will need more steps of filmic highlight reconstruction to propagate color from further away, which is expected. In any case, color is not defined at this point in the pipeline, before the full CAT and profiling, so it needs to be handled later.
  3. Having at least one non-clipped channel is a huge help.
  4. The diffuse and sharpen module could easily be modified to introduce an iterative texture synthesis mode.

Side-note: the scene-referred framework is essential here. In display-referred workflows, reconstruction methods try to darken the blown areas to make them fit the display range. This breaks the cognitive relationship of relative luminance: blown areas are expected to be much brighter than their surroundings. Dealing with dynamic range later in the pipeline allows this cognitive relationship to be enforced by brightening blown areas as they should be.

22 Likes

third is too dark imho…

1 Like

They are all too dark; the point is to display the highlights without tone mapping kicking in. If tone mapping gets mixed in there, details will be compressed and people will not see what I want to show.

3 Likes

I have an image with a large clipped cloudy sky (as 6 years ago I didn’t know how to expose), and I’d like to test the module.

  1. I’m currently on master. A simple git question: how do I change the branch, and do I need to delete my build directory and rebuild dt from scratch?
  2. Where do I find the new module, i.e. what is it called?

My third question is more complicated, so I decided to give it its own post.

  3. The sky is part of a panorama. I have currently stitched and developed the image according to @anon41087856 's instructions (first generate linear TIFFs with practically no processing, stitch, and then apply color balancing, exposure, etc.). Does the same apply with the new module, or do I need to do the highlight recovery before stitching? I think I saw on GitHub that the module was applied to a demosaiced image. Do I then also need to apply WB? And if so, should I then fix the exposure so that it compensates for the highest WB multiplier? (In my case, an old Canon 600D with a red multiplier of 2.5, that would mean an exposure of −log2(2.5) ≈ −1.3 EV.)

  1. Download master (not needed in your case, but if anyone else wants to try):
git clone --recurse-submodules --depth 1 https://github.com/darktable-org/darktable.git
cd darktable
  2. Make sure Git is on the master branch:
git checkout master
  3. Refresh or update master (not needed if you did step 1, but probably needed in your case):
git pull

or

git pull origin master

depending on how the remote repository is declared.
  4. Merge my experimental branch on top of master:

git pull https://github.com/aurelienpierre/darktable.git highlights-reconstruction-guided-laplacian
  5. Build dt in a testing environment:
./build.sh --prefix /opt/darktable-test --build-type Release --install --sudo
  6. Start dt within the testing environment:
/opt/darktable-test/bin/darktable --configdir "$HOME/.config/darktable-test"
  7. Be sure NOT to write redundant sidecar XMP files from your testing version.

Preferably, yes. Rebuild from scratch:

cd darktable
rm -R build
rm -R /opt/darktable-test
./build.sh --prefix /opt/darktable-test --build-type Release --install --sudo

It’s the same old highlights reconstruction module, not a new one. The new mode is called “guided laplacians”.

3 Likes

This reconstruction method should be (more or less) immune to WB, but I have not tested how it behaves on stitched panos, so I can’t answer.

Thanks for the quick answer. I will report back.

And if I understand correctly, I should apply it to the RAWs, not to the stitched TIFF?

Yes, apply it on the RAW indeed.

I’m really longing for a first version to be included in a beta so I can start trying it… It finally seems like a GREAT solution for highlights in dt, and a big step for our beloved software.
THANKS!

Finally managed to finish the first round of testing of your nice and interesting work, @anon41087856!
So nice to see some action towards a proper scene-referred highlight reconstruction!

But first a question regarding this statement:

Why do you assume the color to be the color of a fully clipped sensor? Why not just assume that we know nothing and simply propagate/blur whatever color we have around the area? That seems like the less bad option to me. The good option would be to add the possibility of placing “color seeds” in areas of unknown color, which could then propagate out into that area. That would of course require more work and could probably be postponed to a later version, but I figured I could throw the idea out anyway. (Gradients could btw also be transferred from unclipped areas to clipped areas with retouch-like tools.)

My observations

  1. The reconstruction method alters unclipped channels. I find that a bit peculiar, as we know those values to be correct. If this happens because of diffusing the color (RGB ratios), then I think it would be better to instead push the clipped channel further “up”.
  2. All channels in reconstructed areas have more noise than the input. I actually couldn’t see this in the actual image, so I could live with this side effect if it is necessary for clipped channels. But I find it a bit strange.
  3. I have multiple times seen the second clipped channel (the middle channel) being reconstructed by lowering its values below the clipping limit. We know that clipped channels are larger than the clipping threshold, and I would expect this knowledge to be used (see the sketch after this list), especially in a scene-referred reconstruction context where brighter is the expected proper result. (Circled in the graph.)
  4. It seems like you are overall too careful in the reconstruction and don’t push the clipped channel(s) enough. I think this is why we see the magenta cast in pretty much all your test images earlier in this thread. Magenta is the opponent color of green, and green has been the first channel to clip in all those images. I painted how I would eyeball the curvature in this particular case in the provided graph.
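
A minimal sketch of the floor constraint suggested in point 3, with hypothetical names (not darktable code): since a clipped channel’s true value is known to be at least the clipping threshold, the reconstructed value could be clamped from below.

import numpy as np

# Floor reconstructed values of clipped pixels at the clipping threshold:
# the true value of a clipped channel is known to be >= that threshold.
def enforce_clipping_floor(reconstructed: np.ndarray,
                           clipped_mask: np.ndarray,
                           clip_threshold: float = 1.0) -> np.ndarray:
    return np.where(clipped_mask,
                    np.maximum(reconstructed, clip_threshold),
                    reconstructed)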

All of these things are present in this image:
IMG_7282.CR2 (23.3 MB)
This file is licensed Creative Commons, By-Attribution.

How I made this graph:

  1. Set the input profile, working profile, and output profile to Rec2020 (pass-through) and disable the white balance, so that we can plot the actual camera pixel values.
  2. Export the image as .exr both with and without highlight reconstruction.
  3. Open them with a quick python script that plots the RGB channels from before (darker color) and after (brighter color). 🙂
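
For reference, a minimal sketch of such a plotting script; the file names, the scanline choice, and an EXR-capable imageio backend are all assumptions:

import imageio.v3 as iio
import matplotlib.pyplot as plt

# Hypothetical file names for the two exports described above.
before = iio.imread("no_reconstruction.exr")
after = iio.imread("with_reconstruction.exr")

row = before.shape[0] // 2  # pick a scanline crossing the clipped area
for c, color in enumerate("rgb"):
    plt.plot(before[row, :, c], color=color, alpha=0.4, label=color + " before")
    plt.plot(after[row, :, c], color=color, alpha=1.0, label=color + " after")
plt.xlabel("x (pixels)")
plt.ylabel("linear camera value")
plt.legend()
plt.show()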

Next up, do the same for the other method!

6 Likes

We don’t assume anything; it just turns out that when large missing regions are found, output = input, because the diffusion can’t reach far enough to propagate.

Did you try tuning the clipping threshold? The clipped pixels are masked, and then the mask is blurred with a 5×5 box blur, otherwise edge artifacts may be created.
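
A minimal numpy/scipy sketch of that masking step, with assumed names (not the module’s actual code):

import numpy as np
from scipy.ndimage import uniform_filter

# Flag clipped pixels, then feather the binary mask with a 5×5 box blur
# so the reconstruction blends in without hard edge artifacts.
def clipping_mask(raw: np.ndarray, clip_threshold: float = 1.0) -> np.ndarray:
    mask = (raw >= clip_threshold).astype(np.float32)
    return uniform_filter(mask, size=5)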

That’s by design; you will find a noise level slider to adjust the amount of noise. It’s not strange if you consider that images have noise anyway: if your highlight reconstruction is completely clean while the rest of the image is noisy, that’s when it looks odd.

That might be a by-product of diffusing chroma…

I can’t go harder than adding 1 × the Laplacian without overshooting. As with any diffusion process, to diffuse further you need more iterations…
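
To see why, here is a toy explicit diffusion sketch (illustrative only, not the module’s multiscale scheme): the update u += dt · ∇²u overshoots once dt exceeds the stability limit of the explicit scheme, so reaching further only works through more iterations.

import numpy as np
from scipy.ndimage import laplace

# Toy explicit diffusion restricted to masked (clipped) pixels. For the
# 5-point Laplacian, this scheme is only stable for small steps
# (dt <= 0.25 in 2D); larger steps overshoot instead of diffusing further.
def diffuse(u: np.ndarray, mask: np.ndarray, iterations: int,
            dt: float = 0.2) -> np.ndarray:
    for _ in range(iterations):
        u = u + dt * mask * laplace(u)
    return u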

1 Like