Highlight recovery teaser

Working on a RAW highlight recovery/synthesis idea. It seems to be working.
I have included some of RawTherapee's highlight recovery options for comparison.


When all three channels are clipped, highlights are rebuilt by continuing the gradient from the surrounding area.


Do you have any comparisons against an image of the same subject with no clipping?


Here is a comparison with a properly exposed shot of the same wall.


Hi,

Very interesting. I was at some point thinking about doing something like this. I briefly tried and failed though :-)
As far as I understand, RT's colour propagation method also uses some kind of gradient, but the code is a bit too convoluted for me to properly understand.
Is your implementation available somewhere?

It is currently a very messy G'MIC script. Someone with better maths can probably do much better.

It's something like this:

Estimate gradient from left to right

abcdXYZhijk

In this row of pixels, X, Y and Z are clipped.

X=d+(d-c)
Y=X+(X-d)

etc.

If an estimated value is less than the clipping threshold, use the clipping threshold instead.

Estimate the values from right to left

Z=h+(h-i)
Y=Z+(Z-h)

For each pixel, take the smaller of the two estimates.
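The two passes above could be sketched in Python like this (NumPy; the clipping threshold of 1.0 and the function names are illustrative assumptions, not my actual G'MIC code):

```python
import numpy as np

CLIP = 1.0  # assumed clipping threshold (normalised white point)

def extrapolate_row(row, clipped):
    """Continue the gradient into a clipped run, left to right."""
    est = row.copy()
    for i in range(len(row)):
        if clipped[i] and i >= 2:
            # X = d + (d - c), clamped so it never drops below the threshold
            est[i] = max(est[i - 1] + (est[i - 1] - est[i - 2]), CLIP)
    return est

def reconstruct_row(row, clipped):
    """Forward and backward estimates; keep the smaller of the two."""
    fwd = extrapolate_row(row, clipped)
    bwd = extrapolate_row(row[::-1], clipped[::-1])[::-1]
    return np.where(clipped, np.minimum(fwd, bwd), row)
```

Taking the minimum of the two directions keeps each clipped pixel bounded by whichever side's ramp is nearer, so the rebuilt highlight peaks in the middle of the run.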

Blur the estimated values a bit.

Repeat this process vertically and at plus and minus 45 degrees.

You should now have four estimates in different directions.

Average the two highest estimates and throw out the other two.
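Picking the two highest of the four directional estimates can be done per pixel with a sort (again just a sketch, not the actual script):

```python
import numpy as np

def combine_estimates(est_h, est_v, est_d1, est_d2):
    """Per pixel, average the two largest of the four directional estimates."""
    stacked = np.sort(np.stack([est_h, est_v, est_d1, est_d2]), axis=0)
    return stacked[-2:].mean(axis=0)  # the two highest after the sort
```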

Blur the reconstructed pixels with a radius proportional to the distance from the nearest non-clipped pixels.

That means the centres of the reconstructed areas get blurred more than the edges.

That's the basis for reconstructing the highlights of one channel, but you can't do that to each channel independently and get good results.

… coming soon how to reconstruct highlights for different channels …


Now to do all the channels.

Split the Bayer pattern into four half-size images (red, green, green, blue).

Stack and sort these images so you have one image containing the brightest pixels of all the channels, one with the darkest, and two in the middle.
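The split and the per-pixel sort could look like this (an RGGB layout is assumed here; other mosaics just shuffle the slices):

```python
import numpy as np

def split_bayer(raw):
    """Split a mosaic into four half-size planes (RGGB layout assumed)."""
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G1
                     raw[1::2, 0::2],   # G2
                     raw[1::2, 1::2]])  # B

def sort_planes(planes):
    """Per-pixel sort: result[0] holds the darkest value at each site,
    result[3] the brightest, regardless of which channel it came from."""
    return np.sort(planes, axis=0)
```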

Reconstruct the clipped values for the image of minimum values. This is your reference image.

However, it is possible that you are missing some details that are contained in the other channels, so we need to make a better reference image.

We are going to use the minimum image as a reference to reconstruct the clipped values in the next brightest image.

Find the difference between the reference image and the reconstruction candidate. On this difference image, inpaint the pixels that were clipped in the candidate image, using the closest unclipped values.

Add the difference image back to the reference image. We have now reconstructed the highlights on another image in the stack. Average the reference image and the new reconstructed image together to create a new reference image.

Repeat this process for the remaining images in the stack. The last image will be the brightest one with the most clipped values.

You now have one greyscale reference image with the most detail possible.
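The difference-inpaint-add step might be sketched like this in 1-D (linear interpolation stands in for the "closest unclipped values" inpainting; none of this is the actual script):

```python
import numpy as np

def transfer_detail(reference, candidate, clipped):
    """Rebuild the clipped pixels of `candidate` from `reference`:
    inpaint the difference image over the clipped region, add it back,
    then average with the old reference to form the new reference."""
    diff = candidate - reference
    idx = np.arange(len(diff))
    good = ~clipped
    # linear interpolation across the gap approximates inpainting
    # from the closest unclipped values
    diff_filled = np.interp(idx, idx[good], diff[good])
    rebuilt = np.where(clipped, reference + diff_filled, candidate)
    new_reference = 0.5 * (reference + rebuilt)
    return rebuilt, new_reference
```

Because only the smooth difference is inpainted, any detail the reference already holds inside the clipped region survives into the rebuilt image.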

… will we ever reconstruct the highlights on the colour channels, what will become of the reference image, find out next time…


Now that you have the best reference image you can have, with the information of all the colour channels, you can use that to reconstruct the highlights of each colour channel.

You must do your white balance before the next step.

It is the same process as before, but the candidate image is one of the colour channels. The clipped pixels are inpainted on the difference image between the reference and the colour channel, and then the difference image is added to the reference.
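Per channel, the step could be sketched like this (white balance applied first, as noted above; `wb_gain`, the 1-D shape and the interpolation-as-inpainting are illustrative assumptions):

```python
import numpy as np

def reconstruct_channel(reference, channel, clipped, wb_gain):
    """Rebuild one colour channel's clipped pixels from the grey reference,
    after applying the white-balance gain to the channel."""
    ch = channel * wb_gain
    diff = ch - reference
    idx = np.arange(len(diff))
    good = ~clipped
    # linear interpolation stands in for inpainting from unclipped values
    diff_filled = np.interp(idx, idx[good], diff[good])
    return np.where(clipped, reference + diff_filled, ch)
```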

And that’s about it.

Easy peasy.


Minor correction. I added some damping so the values don’t get out of hand.

X=d+((d-c)*.95)
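As a one-liner, the damped step from the correction (factor 0.95, as above):

```python
DAMPING = 0.95  # the factor from the correction above

def damped_step(prev, prev2):
    """One extrapolation step with damping: X = d + (d - c) * 0.95."""
    return prev + (prev - prev2) * DAMPING
```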

Exactly what I thought! ;-)
Looks very interesting, Iain. Would love to see this implemented somewhere (ART, I hope).