Guiding laplacians to restore clipped highlights

I have merged the multi-scale logic of the wavelets with the guided filter to guide laplacians in order to reconstruct clipped highlights. It’s kind of a turducken of science.

With little effort, reusing code from diffuse or sharpen, the logic of filmic’s reconstruction, and the exposure-invariant guided filter, I got something working today to improve the noticeably shitty highlights reconstruction in darktable.

The results are far from perfect as a stand-alone reconstruction, but they help filmic’s reconstruction a great deal. It’s a WIP: weird fringes can still get produced, and there is no OpenCL version yet.

Code:

Results:

(“before” uses filmic reconstruction at the end of the pipeline plus the new chromatic aberrations module; “after” is the same with raw highlights reconstruction added):

Before, no raw highlights reconstruction:

After, raw highlights reconstruction in “clip” mode:

After, raw highlights reconstruction in “reconstruct in Lch” mode:

After, raw highlights reconstruction in “reconstruct color” mode:

After, raw highlights reconstruction in “guided laplacians” mode:


Before, no raw highlights reconstruction:

After, raw highlights reconstruction in “reconstruct in Lch” mode:

After, raw highlights reconstruction in “clip” mode:

After, raw highlights reconstruction in “reconstruct color” mode:

After, raw highlights reconstruction in “guided laplacians” mode:


Before, no raw highlights reconstruction:

After, raw highlights reconstruction in “clip” mode:

After, raw highlights reconstruction in “reconstruct in Lch” mode:

After, raw highlights reconstruction in “reconstruct color” mode:

After, raw highlights reconstruction in “guided laplacians” mode:


Looks promising!

Would you care to elaborate on how you model the inpainting? Is it assuming only an exposure change compared to its neighbors or something else?

And did you try it on images where only one channel is clipped? Like when only the green channel is clipped in a sky, or only the red channel is clipped in an image illuminated by a candle or fire. Both of these examples involve pretty aggressive clipping.

Would this be another alternative to this or are they intended to be used in different ways?

Improved highlight reconstruction would really be a great boost for darktable…

Isn’t @hannoschwalm also working on this?

My method specifically tries to identify what the colour should be in clipped areas. @anon41087856 says that his version does not do that.


Yes! I just opened a PR: [WIP] Segmentation based highlights recovery for bayer sensors by jenshannoschwalm · Pull Request #10716 · darktable-org/darktable · GitHub


Thanks for the clarification @Iain.
Exciting news @hannoschwalm!

@anon41087856 any chance you could share the raws?

which, I believe, is a fool’s errand since you will take WB discrepancies in the face. Dealing with pixel-wise intensities is wrong; the only salvation can come from higher-order methods, that is, dealing with images as a collection of oscillations around a local average (aka laplacians). That is the only thing you can hope to transfer between channels without taking the intensity scalings of the bullshit RGB channels in the face.

It’s also the starting point of the MLRI (minimized-laplacian residual interpolation) demosaicing method, which scores 38 dB PSNR where DLMMSE struggles to reach 35 dB.
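To make that concrete, here is a minimal single-scale sketch of the principle (my illustration, not the actual darktable code, which works multi-scale with a guided filter): rebuild a clipped channel from its own local average plus the laplacian, i.e. the detail residual, of an unclipped guide channel. The 3×3 box blur stands in for any low-pass filter, and all names are hypothetical.

```c
#include <stddef.h>

// Naive 3x3 box blur used as the "local average"; any low-pass filter works.
static void box_blur_3x3(const float *in, float *out, size_t w, size_t h)
{
  for(size_t y = 0; y < h; y++)
    for(size_t x = 0; x < w; x++)
    {
      float sum = 0.f;
      int n = 0;
      for(int dy = -1; dy <= 1; dy++)
        for(int dx = -1; dx <= 1; dx++)
        {
          const long yy = (long)y + dy;
          const long xx = (long)x + dx;
          if(yy >= 0 && yy < (long)h && xx >= 0 && xx < (long)w)
          {
            sum += in[(size_t)yy * w + (size_t)xx];
            n++;
          }
        }
      out[y * w + x] = sum / (float)n;
    }
}

// Transfer the guide's oscillations (laplacian) into the clipped channel:
// inside the clipped mask, output = low frequency of the clipped channel
// + high frequency of the guide; outside, keep the original value.
static void transfer_laplacian(const float *clipped, const float *guide,
                               const float *mask, float *out,
                               float *low_clipped, float *low_guide,
                               size_t w, size_t h)
{
  box_blur_3x3(clipped, low_clipped, w, h);
  box_blur_3x3(guide, low_guide, w, h);

  for(size_t k = 0; k < w * h; k++)
  {
    const float detail = guide[k] - low_guide[k];
    out[k] = (mask[k] > 0.f) ? low_clipped[k] + detail : clipped[k];
  }
}
```

Transferring only the zero-mean detail avoids carrying the guide’s absolute intensity into the target channel; in practice the detail would still need rescaling (e.g. by a guided filter’s gain) to match the target’s amplitude.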

I did a quick test on a difficult image, as a stand-alone reconstruction (so maybe not the best setting), also to compare with what I can get from @hannoschwalm’s PR.
I raised filmic’s white relative exposure on purpose to better see the colors produced by the different reconstruction methods.

without reconstruction:

with guiding laplacians, default threshold:

with guiding laplacians, threshold lowered to 0.99 (note that even when playing with filmic’s highlight reconstruction, which is not enabled here, I can’t get rid of all the green color):

for reference, with reconstruct color (not that good either; it creates some maze artefacts near the eye):

for reference, with reconstruct in LCh:

reconstruct in LCh with a better white relative exposure in filmic, to better see the amount of detail recovered:

As a comparison, with @hannoschwalm’s PR, after lowering the threshold to 0.9 (the default value left too many magenta areas) and pushing the reconstruct color slider all the way to the right, I get this result:

With a better white relative exposure in filmic, to better see the amount of detail recovered:

This is the first method that correctly recovers the red of the puffin’s beak.

If that can help, the raw is here:


Thanks for testing. I found out about the green overshoot this morning. Guiding the chroma with the lowest channel instead of the norm seems to be helping; stay tuned for a fix this afternoon.
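For illustration, a minimal sketch of that change under my own assumptions (the helper name is hypothetical, not the actual module code): the chroma guide becomes the minimum of the three channels, the one least likely to be clipped, instead of a norm like the euclidean one.

```c
#include <math.h>

// Guide for the chroma reconstruction: take the lowest of the RGB channels
// (least likely to be clipped) rather than a norm such as sqrt(R²+G²+B²).
static inline float chroma_guide(const float rgb[3])
{
  return fminf(rgb[0], fminf(rgb[1], rgb[2]));
}
```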


Fixed in the latest commit.

By the way, the clipping coeff for “reconstruct color” gets multiplied internally by 0.987, so an input param of 1 is not really 1.
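In code terms, that remark amounts to something like the following (illustrative only, not the actual darktable source):

```c
// "reconstruct color" scales the user-facing clipping parameter internally,
// so a slider value of 1.0 actually clips at 0.987 of the white level.
static inline float effective_clip_threshold(float user_param)
{
  return 0.987f * user_param;
}
```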

Details later. Examples now:






Much better, thanks!

Let me begin by saying that I’m genuinely happy that this is being worked on. What I’ve seen of @Iain’s and @hannoschwalm’s work is very impressive. Before dismissing their work in favor of laplacians (I hope that’s not the intention), I’d really like to see more comparisons between their method and laplacians. And, when it comes to highlight reconstruction, having a handful of methods to choose from is also a very good thing. What works for one image may not work as well for the next one.

Personally, I prefer propagation in most cases. With darktable being an editor known for leaving control to the user, my dream would be for these two new methods to peacefully co-exist in darktable 4. :slight_smile:

Also, I think samples matter. Clipped areas that should have been close to pure white (the puffin sample) are usually less complicated, and for these I think just creating a nice roll-off in filmic works quite well.

I have a stress-test image from a small-sensor camera (DMC-LX7) that clips in horrible ways, which I like to use to test reconstruction. Here it is:

reconstruction_sample_01.dng (9.9 MB). This file is licensed CC BY-NC-SA 4.0.

Below are crops from the DNG file attached above, underexposed in the editor to better show the transitions.

RawTherapee’s “Color Propagation” method with “Highlight compression” cranked up does quite well. It propagates false color (radiating from the windows), but I still prefer it over all darktable methods.

None of the current (3.8) methods in darktable do well. This screenshot uses the “reconstruct color” method.


This method is not suited for your picture. Laplacians encode texture, like a bump map. They will perform well for spot lights radiating light around, like light bulbs, the sun’s disc, etc., because we have a high-energy colored light around those spots. So this method needs texture or small spots to work.

Here, you have a large, flat area of reflective material. It’s almost impossible to reconstruct properly, because you would need some algorithm to segment the image into surfaces and identify from which surfaces it is relevant to sample the color to inpaint. Your best chance is to desaturate to white (which RT’s highlight compression is most certainly doing), or even paint with solid color in Gimp/Krita.
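A minimal sketch of that desaturate-to-white fallback, assuming a simple per-pixel rule (the weighting is my assumption, not RawTherapee’s actual highlight-compression code):

```c
// Blend clipped pixels toward their achromatic value; the more channels
// are clipped, the closer the pixel moves to neutral (white at high
// intensities). All names here are illustrative.
static void desaturate_clipped(float rgb[3], float clip_threshold)
{
  const float lum = (rgb[0] + rgb[1] + rgb[2]) / 3.f;
  int n_clipped = 0;
  for(int c = 0; c < 3; c++) n_clipped += (rgb[c] >= clip_threshold);
  const float t = (float)n_clipped / 3.f; // 0 = keep color, 1 = fully neutral
  for(int c = 0; c < 3; c++) rgb[c] = (1.f - t) * rgb[c] + t * lum;
}
```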

On another note, clipped emissive lights can’t be avoided because they are emissive, that is, much higher energy than anything around them. But there is no excuse for clipped reflective materials; that is a mistake at capture time, in addition to being impossible to recover in post-production.


This is what @hannoschwalm and I are doing.

I am really looking forward to seeing how your method performs. It sounds like it will do better on these types of images.


I am impressed with the preliminary results. I would like to leave you with these two test images, one outdoor and one indoor scene, each with different characteristics and with highlights in both.

These files are licensed Creative Commons, By-Attribution, Share-Alike.

_DSF0451.RAF (48.2 MB)

_DSF1814.RAF (48.2 MB)

On a cursory reading, your steepest-gradient descent may lead you to color bleeding across edges. I have anisotropic diffusion following isophotes for that purpose.
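For readers unfamiliar with the term: diffusing along isophotes means diffusing along lines of constant intensity, perpendicular to the gradient, so nothing gets pushed across an edge. A hypothetical weight for that (my sketch, not the module’s code) could look like:

```c
#include <math.h>

// Anisotropic diffusion weight that follows isophotes: 1 along lines of
// constant intensity (perpendicular to the gradient), 0 straight across
// an edge, which limits color bleeding.
// (gx, gy): local image gradient; (dx, dy): candidate diffusion direction.
static inline float isophote_weight(float gx, float gy, float dx, float dy)
{
  const float gnorm = sqrtf(gx * gx + gy * gy);
  const float dnorm = sqrtf(dx * dx + dy * dy);
  if(gnorm < 1e-6f || dnorm < 1e-6f) return 1.f; // flat area: diffuse freely
  const float cosine = fabsf(gx * dx + gy * dy) / (gnorm * dnorm);
  return 1.f - cosine;
}
```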

Oh, stop being such a sesquipedalian, Aurelien (a person who tries to prove himself using big and complicated words), and be a bit more scientific instead. I thought you knew that there usually isn’t a single do-it-all method that always works; different approaches work well in different scenarios.

I for one would not neglect the work done by @Iain and @hannoschwalm, as that method has so far proven very effective, especially for regions with at least one non-clipped channel. I understood Iain’s first work as a pretty clever approximation of Poisson reconstruction, with propagation of gradients to the clipped channels.

On the topic of being scientific I see two things:

  1. Explain the method in a way the audience can understand.
  2. Show in an objective fashion that the proposed method works as intended.

Number two is the most interesting one! The most rigorous way to do this is to have a ground truth to compare against, either by bracketing at capture time or by applying a simulated clipping level a couple of stops lower than the actual clipping of the sensor (something like a clipping threshold of 0.1 in the module) to an image with minimal or no clipping.
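A sketch of the simulated-clipping half of that protocol (the 0.1 threshold is the value suggested above; the helper itself is hypothetical):

```c
#include <stddef.h>

// Clip a well-exposed image at an artificial threshold a couple of stops
// below the sensor's real white level; the untouched original then serves
// as ground truth for scoring the reconstruction.
static void simulate_clipping(const float *in, float *out, size_t n,
                              float threshold) // e.g. 0.1f
{
  for(size_t k = 0; k < n; k++)
    out[k] = (in[k] > threshold) ? threshold : in[k];
}
```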

Results can then be visualized either as is, using false color for delta E (or some other distance metric), or even as 1D plots of a slice through a reconstructed area. I can help out with the last one :slight_smile:
