Guiding laplacians to restore clipped highlights

I did a quick test on a difficult image, as a stand-alone reconstruction (so maybe not the best setting), also to compare what I can get with @hannoschwalm's PR.
I raised filmic's white relative exposure on purpose to better see the colors produced by the different reconstruction methods.

without reconstruction:

with guiding laplacians, default threshold:

with guiding laplacians, threshold lowered to 0.99 (note that even when playing with filmic's highlight reconstruction, which is not used here, I can't get rid of all the green color):

for reference, with reconstruct colors (not that good either, creates some maze artefacts near the eye):

for reference, with reconstruct in LCh:

reconstruct in LCh with a better white relative exposure in filmic to better see the amount of details recovered:

As a comparison, with @hannoschwalm's PR, after lowering the threshold to 0.9 (the default value left too many magenta areas) and pushing the reconstruct color slider all the way to the right, I get this result:

With a better white relative exposure in filmic to better see the amount of details recovered:

This is the first method that correctly recovers the red of the puffin's beak.

If that can help, the raw is here:


Thanks for testing. I found out about the green overshoot this morning. Guiding the chroma with the lowest channel instead of the norm seems to help; stay tuned for a fix this afternoon.
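Roughly, the idea looks like this (a minimal sketch of the principle only, not the actual darktable code; the buffer layout and helper names are made up for illustration):

```c
/* Sketch: build the per-pixel guide for chroma reconstruction from the
 * lowest RGB channel instead of a norm. A norm is dragged upward by the
 * clipped channels, which is what pushes the inpainted chroma toward
 * green/magenta overshoots; the lowest channel is the least likely to
 * be clipped. */
#include <math.h>
#include <stddef.h>

static inline float min3f(float r, float g, float b)
{
  return fminf(r, fminf(g, b));
}

/* in: interleaved RGBA float buffer, guide: single-channel output */
void build_chroma_guide(const float *in, float *guide, size_t npixels)
{
  for(size_t k = 0; k < npixels; k++)
  {
    const float r = in[4 * k + 0];
    const float g = in[4 * k + 1];
    const float b = in[4 * k + 2];
    /* earlier idea: guide[k] = sqrtf(r*r + g*g + b*b);  // norm overshoots near clipping */
    guide[k] = min3f(r, g, b); /* lowest channel as the guide */
  }
}
```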


Fixed in the latest commit.

By the way, the clipping coefficient for "reconstruct color" gets multiplied internally by 0.987, so an input parameter of 1 is not really 1.
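In other words (illustrative only, with the factor quoted from the remark above rather than checked against current sources):

```c
/* Sketch: the slider value is not the threshold the algorithm actually
 * tests against, because of the internal scaling mentioned above. */
#include <stdio.h>

int main(void)
{
  const float user_clip = 1.0f;                    /* slider value */
  const float effective_clip = 0.987f * user_clip; /* value used for clipping detection */
  printf("effective clip threshold: %f\n", effective_clip);
  return 0;
}
```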

Details later. Examples now:






Much better, thanks!

Let me begin by saying that I’m genuinely happy that this is being worked on. What I’ve seen of @Iain’s and @hannoschwalm’s work is very impressive. Before dismissing their work in favor of laplacians (I hope that’s not the intention), I’d really like to see more comparisons between their method and laplacians. And, when it comes to highlight reconstruction, having a handful of methods to choose from is also a very good thing. What works for one image may not work as well for the next one.

Personally I prefer propagation in most cases. With darktable being an editor known for leaving the control to the user, my dream would be for these two new methods to peacefully co-exist in darktable 4. :slight_smile:

Also, I think samples matter. Clipped areas that should have been close to pure white (the puffin sample) are usually less complicated, and for those I think just creating a nice roll-off in filmic works quite well.

I have a stress-test image from a small-sensor camera (DMC-LX7) that clips in horrible ways, which I like to use to test reconstruction. Here it is:

reconstruction_sample_01.dng (9.9 MB). This file is licensed CC BY-NC-SA 4.0.

Below are crops from the DNG file attached above, underexposed in the editor to better show the transitions.

RawTherapee’s “Color Propagation” method with “Highlight compression” cranked up does quite well. It propagates false color (radiating from the windows), but I still prefer it over all darktable methods.

None of the current (3.8) methods in darktable do well. This screenshot uses the “reconstruct color” method.


This method is not suited for your picture. Laplacians encode texture, like a bump map. They will perform well for spot lights radiating light around, like light bulbs, the sun disc, etc., because we have a high-energy colored light around those spots. So this method needs texture or small spots to work.

Here, you have a large, flat area of reflective material. It’s almost impossible to reconstruct properly, because you would need some algorithm to segment the image into surfaces and identify which surfaces are relevant to sample the color from for inpainting. Your best chance is to desaturate to white (which RT’s highlight compression is most certainly doing), or even to paint with a solid color in Gimp/Krita.
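For that kind of area, the fallback amounts to something like this (a minimal sketch of the desaturate-to-white idea, not RT's actual highlight compression code; the clip parameter and buffer layout are assumptions for illustration):

```c
/* Sketch: wherever a channel exceeds the clip threshold, fade the pixel
 * toward a neutral value at the level of its brightest channel, so the
 * clipped area rolls off to white instead of showing a false color. */
#include <math.h>
#include <stddef.h>

void desaturate_clipped(float *rgb, size_t npixels, float clip)
{
  for(size_t k = 0; k < npixels; k++)
  {
    float *p = rgb + 3 * k;
    const float mx = fmaxf(p[0], fmaxf(p[1], p[2]));
    if(mx <= clip) continue; /* pixel is not clipped, leave it alone */

    /* blend factor grows with how far above the threshold the pixel sits */
    const float t = fminf(1.0f, (mx - clip) / fmaxf(clip, 1e-6f));
    for(int c = 0; c < 3; c++)
      p[c] = (1.0f - t) * p[c] + t * mx; /* all channels converge to mx -> neutral */
  }
}
```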

On another note, clipped emissive lights can’t be avoided because they are emissive, that is, much higher in energy than anything around them. There is no excuse for clipped reflective material, though; that one is a mistake at capture time, in addition to being impossible to recover in post-production.


This is what @hannoschwalm and I are doing.

I am really looking forward to seeing how your method performs. It sounds like it will do better on these types of images.


I am impressed with the preliminary results. I would like to leave you with these two test images, one outdoor and one indoor scene, each with different characteristics, and with clipped highlights in both.

These files are licensed Creative Commons, By-Attribution, Share-Alike.

_DSF0451.RAF (48.2 MB)

_DSF1814.RAF (48.2 MB)

On a cursory reading, your steepest gradient descent may lead you to color bleeding across edges. I have anisotropic diffusion following isophotes for that purpose.

Oh, stop being such a sesquipedalian, Aurelien (a person who tries to prove himself using big and complicated words), and be a bit more scientific instead. I thought that you knew that there usually isn’t a single do-it-all method that always works, but that different approaches work well in different scenarios.

I for one would not neglect the work done by @Iain and @hannoschwalm, as that method has so far proven very effective, especially for regions with at least one non-clipped channel. I understood @Iain’s first work as a pretty clever approximation of Poisson reconstruction, with propagation of gradients to the clipped channels.
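To make that concrete, here is a rough 1D sketch of that principle as I understand it (my own illustration, not the actual PR code, which is 2D and segmentation-based): inside a clipped run of one channel, the channel is re-grown by integrating the gradient of a channel that is still below the clip level.

```c
/* Sketch: propagate the gradient of an unclipped "guide" channel into a
 * clipped run of another channel by simple forward integration from the
 * left boundary. Assumes start >= 1 so clipped[start - 1] is valid data. */
#include <stddef.h>

void rebuild_clipped_run(float *clipped,           /* channel to repair, length n */
                         const float *guide,       /* unclipped channel, length n */
                         size_t start, size_t end) /* clipped run: [start, end) */
{
  for(size_t i = start; i < end; i++)
  {
    const float grad = guide[i] - guide[i - 1]; /* gradient of the guide channel */
    clipped[i] = clipped[i - 1] + grad;         /* propagate it into the clipped channel */
  }
}
```

A proper Poisson reconstruction would solve for the whole clipped region with both boundaries as constraints; the one-sided integration above is just the smallest example of the idea.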

On the topic of being scientific, I see two things:

  1. Explain the method in a way the audience can understand.
  2. Show in an objective fashion that the proposed method works as intended.

Number two is the most interesting one! The most rigorous way to do this is to have a ground truth to compare against, either by bracketing at capture time or by applying a simulated clipping level a couple of stops lower than the actual clipping of the sensor (something like a clipping threshold of 0.1 in the module) to an image with minimal or no clipping.

Results can then be visualized either as-is, using false color for delta E (or some other distance metric), or even as 1D plots of a slice through a reconstructed area. I can help out with the last one :slight_smile:
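A minimal sketch of what I mean by point 2, with plain RMSE standing in for delta E and a hypothetical reconstruct() as the method under test:

```c
/* Sketch of the evaluation protocol: clip a well-exposed image at an
 * artificial threshold, run the reconstruction, then measure the
 * per-pixel error against the untouched original. */
#include <math.h>
#include <stddef.h>

/* placeholder for the highlight reconstruction being evaluated */
extern void reconstruct(float *rgb, size_t npixels, float clip);

/* truth and work are interleaved RGB buffers of 3 * npixels floats */
float evaluate_reconstruction(const float *truth, float *work, size_t npixels, float clip)
{
  /* simulate clipping a couple of stops below the real sensor clipping */
  for(size_t k = 0; k < 3 * npixels; k++)
    work[k] = fminf(truth[k], clip);

  reconstruct(work, npixels, clip);

  /* RMSE against ground truth over all channels */
  double sum = 0.0;
  for(size_t k = 0; k < 3 * npixels; k++)
  {
    const double d = (double)work[k] - (double)truth[k];
    sum += d * d;
  }
  return (float)sqrt(sum / (double)(3 * npixels));
}
```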


I’m wondering if the changes made to the darktable implementation are incorporated in your g’mic version? It would be nice to use/test there too.

@jandren small note on terminology: much as I often need plain language explanations, I find many of the image processing terms easier to follow because they’re concise. Personally I’d like that to continue if possible… perhaps you mean something else though.


Yes, that is missing, I agree. Also, everyone would have to know/accept that the algo often works very well and sometimes fails completely. There are two reasons for this in the vast majority of cases.

  1. The segmentation algorithm uses an exact clipping threshold, so minimal changes to the threshold - especially with “noisy” images (high ISO, or 10-bit sensor data) - might lead to completely different segments (see the sketch after this list).
  2. And a different segmentation leads to different candidates…
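Here is a tiny sketch of why point 1 is so sensitive (illustration only, not the segmentation code itself): the clipping mask is a hard comparison against the threshold, so noisy pixels sitting right at the threshold flip in or out of the mask when the threshold moves slightly, and the segments built from that mask change with them.

```c
/* Sketch: count how many pixels change their clipped/unclipped state
 * between two nearly identical thresholds. Each flipped pixel can merge,
 * split, or remove a segment in the downstream segmentation. */
#include <stddef.h>

size_t count_flipped_pixels(const float *chan, size_t npixels,
                            float clip_a, float clip_b)
{
  size_t flipped = 0;
  for(size_t k = 0; k < npixels; k++)
  {
    const int in_a = chan[k] >= clip_a; /* mask with threshold A */
    const int in_b = chan[k] >= clip_b; /* mask with threshold B */
    if(in_a != in_b) flipped++;         /* borderline pixel changed state */
  }
  return flipped;
}
```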

@Iain’s work :slight_smile: We discussed the algo heavily and he did a lot of prototype testing; I’m not sure everything I did in C translates well into G’MIC scripts. (I don’t speak or use G’MIC and he doesn’t speak C :slight_smile:) Imagine the way we discussed things…


Thanks, time permitting I might get to reading the C code if the differences are significant enough. The biggest problem in g’mic is often to know which built-in commands can save a lot of duplicated effort :slight_smile:

as another point of reference, here’s the vkdt render:

5.2ms full image on a laptop/1650 GTX.

i have a few measures in place to suppress overly saturated colours when blurring them into the overexposed regions. on the other hand the rt render goes for maximum preservation of the original data, i’m starting to fade in the reconstruction a bit earlier for a smoother transition.

this type of image is really the most challenging i think… but all the texture is lost anyways unless you go full blown texture synthesis here. so in practice… i don’t think any technique that runs at acceptable speeds would allow me to salvage this image. pushing a curve and accepting the overexposure would though.


I think Filmulator also does a rather good job (I don’t actually use it, but experiment with it every now and then, so much better renditions are probably possible). @CarVac


And the puffin

Filmulator uses RawTherapee’s algorithm unchanged; it’ll occasionally perform worse, though, when there’s a white-point edge case I haven’t covered yet (like this one I’m currently working on handling).

That said, Filmulator does a good job on this puffin image, getting a result almost identical to the last one; there are only traces of magenta on the face.

Thanks for the clarification.
I thought it did something more/different, as there was colour bleeding in the RawTherapee version from Guiding laplacians to restore clipped highlights - #15 by mikae1