Guiding Laplacians to restore clipped highlights

There are also fundamental differences in the way images are processed by humans and by computers: a computer treats pixels one by one, while physiological photosites work in parallel (very simplified).
Image recognition by computers exists for specific domains; what is asked for here is a universal recognition algorithm, coupled to a database of recognised shapes (flowers again).

But to improve the processing a bit: would it be possible to make the highlight-recovery module aware of the applied WB coefficients (i.e. multiply the raw white point by the WB coefficients for red and blue)?
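
For illustration, here is a minimal sketch of that idea in Python/numpy. The names (`white_point`, `wb_coeffs`) and the values are placeholders, not darktable's actual internals:

```python
import numpy as np

# Hypothetical values: raw white point and R/G/B white-balance multipliers.
white_point = 16383
wb_coeffs = np.array([2.1, 1.0, 1.6])

# After white balance, each channel effectively clips at its own scaled
# threshold rather than at the common raw white point.
clip_thresholds = white_point * wb_coeffs

def clipped_mask(rgb):
    """Boolean mask of pixels where any channel reaches its WB-scaled threshold."""
    return np.any(rgb >= clip_thresholds, axis=-1)
```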

Good to know…

Mods (@afre, @paperdigits), it is clear that @nosle just wants to stir the pot and get reactions without giving any positive feedback to the community. Other members replying to him only encourages the behavior further. I vote for more intervention than just hoping he is ignored.

I understand.

Could be efficient for graph-based processing, but that is slow to begin with. :stuck_out_tongue: Especially with the running gag (and truism) that my laptop can’t handle anything from dt.

This comment is for all forum members. If you feel that moderation is necessary, please use the appropriate channels, such as flagging and PM reports. (Just don’t flag-bomb people you don’t like.) That way, mods/staff can look into the matter. We made an exception lately, but yeah, it is not the best way to do things. Thanks.

This is exactly what we mean: it is unfair and annoying that you hijack threads with ill-thought-out ideas and then change the subject.

Stop it. Is that clear enough?

This discussion is somewhat above my pay grade but I’ll chip in my 2p worth…

Firstly, sensor clipping is surely when a pixel value is >= the raw white point, as shown in the raw black/white point module. Is that correct?

How difficult is this for the math wizards? If I understand your comment, @afre, it’s like we’re shown a physical model of a mountain, but someone has horizontally sliced off the top. The challenge is to surface-fit something to replace what’s been sliced off, a bit like curve fitting in 2D. Are there not algorithms for this? I would have thought there were.
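
For what it’s worth, a minimal sketch of that kind of fit in Python/numpy: a 2nd-order polynomial surface is fitted by least squares to the unclipped pixels and then evaluated inside the clipped region. Purely illustrative, and not how any darktable module actually works:

```python
import numpy as np

def fill_clipped(channel, clipped):
    """Replace clipped pixels with a 2nd-order least-squares surface.

    channel : 2-D float array (one colour channel)
    clipped : boolean mask, True where the channel is clipped
    """
    def design(x, y):
        # Terms of a 2nd-order surface: 1, x, y, x^2, xy, y^2
        return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

    ys, xs = np.nonzero(~clipped)     # valid pixels to fit on
    cy, cx = np.nonzero(clipped)      # pixels to fill in
    coeffs, *_ = np.linalg.lstsq(design(xs, ys), channel[ys, xs], rcond=None)
    out = channel.copy()
    out[cy, cx] = design(cx, cy) @ coeffs
    return out
```

A real implementation would presumably fit a neighbourhood around each clipped blob rather than the whole frame, but the principle is the same.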

Lastly, I tried the white-chested puffin pic to see if I could use GLHL. I accept it’s early days and that, e.g., there’s no finished tutorial yet as far as I’m aware. However, I found it tricky to adjust things when going between GLHL and filmic, but I did finally succeed.

I have experimented with this and get mixed results. One problem is that the exact colour is determined by the white balance, so it depends on your camera and the lighting conditions. Another issue is how to determine how much correction for the magenta cast is required. Or, put another way, how do you measure the amount of colour cast in a pixel?

My approach was to use the amount by which the red and blue channels are pushed above clipping by the white balance as a mask. I work out what colour I need to add to make fully clipped areas white, and then use the mask when adding it to the original image.

It works well to disguise the magenta highlights in daylight images. The colour is probably more accurate for highlights than a magenta cast, but it tended to desaturate reds near highlights, giving them an ‘old photo’ look.

The technique failed for me under tungsten lights. I create the mask from the unclipped channels being pushed into clipping during white balance, and this did not happen as often in my tungsten test images.
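
Roughly, in numpy (a simplified sketch of the idea; the exact formula here is an approximation of what I described, and it assumes the R and B white-balance coefficients are greater than 1):

```python
import numpy as np

def unmagenta(rgb, raw_clip, wb_coeffs):
    """rgb: H x W x 3 white-balanced linear image; raw_clip: raw white point."""
    # Level a just-clipped raw value reaches in each channel after WB.
    wb_clip = raw_clip * np.asarray(wb_coeffs, dtype=float)
    # How far R and B have been pushed past the raw clip level by WB;
    # only meaningful for channels whose WB coefficient is > 1.
    pushed = np.clip(rgb[..., [0, 2]] - raw_clip, 0.0, None)
    mask = (pushed / (wb_clip[[0, 2]] - raw_clip)).max(axis=-1)
    mask = np.clip(mask, 0.0, 1.0)[..., None]
    # Colour that turns a fully clipped (magenta) pixel white: raise green
    # to the level of the higher of the two clipped channels.
    fill = np.array([0.0, wb_clip[[0, 2]].max() - wb_clip[1], 0.0])
    return rgb + mask * fill
```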

An example of the method I just described.

Input image:

Result:

Added greenish colour to get result (ignore debug text):

Along those lines… this gives a not-bad result as well, using spot matching and sampling one of the skulls…

There are algorithms for curve fitting (2D and 3D). But those work on simple curves (2nd-, 3rd-, perhaps 4th-order polynomials, or other well-defined simple curves); colors in real-life images don’t follow simple curves. Higher orders take up a lot more processing time, and can end up being little better than guesses with some scientific sauce on them.

To stick with your mountain example, I’ve yet to see a mountain top that is a nice parabola. Mostly it’s a kind of irregular triangle, so you need high-order functions to model it…

And curve fitting assumes continuity between the clipped area and its surroundings. If your overexposed area is a sheet of paper, curve fitting to fill in the area is possibly the worst solution (the area is a flat sheet, whose texture has no relation to the surrounding area).

All of those “nice” solutions to fill in missing data might work in particular cases, but are completely unwanted in others… Keep in mind that “restoring clipped highlights” is “filling in missing data”.

Even with only one channel clipped, you have to make assumptions about the missing data. It just happens that in that case, most of the time, the assumptions are correct or at least close enough.
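
As a concrete (and hedged) example of such an assumption, in numpy: when only red clips, one common guess is that the red-to-green ratio seen just below the clip level still holds inside the clipped area. The threshold and function name here are illustrative, not any particular implementation:

```python
import numpy as np

def reconstruct_red(rgb, clip):
    """Guess clipped red values from the red/green ratio of nearly-clipped pixels."""
    out = rgb.copy()
    clipped = rgb[..., 0] >= clip
    # Pixels where red is still valid but close to clipping: sample R/G there.
    near = ~clipped & (rgb[..., 0] > 0.9 * clip) & (rgb[..., 1] > 0)
    ratio = np.median(rgb[near, 0] / rgb[near, 1]) if near.any() else 1.0
    out[clipped, 0] = ratio * rgb[clipped, 1]
    return np.maximum(out, rgb)   # never go below the recorded values
```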

You bring up good points.

I have been experimenting, though, and have found that it is possible to get quite good results. Here is an animated GIF of my highlight synthesis in fully clipped areas.

One thing I have found is that anything that is reasonably plausible is an improvement, provided it is less distracting than the clipping.

Exactly! And though I didn’t make it clear, I was coming from the angle that there seems to be frustration about the prevalence of magenta, and I feel that myself. I appreciate a curve-fitted mountain top is a guess, but it’s likely to be better than vivid magenta, and if it doesn’t work in a particular picture, you can try something else.

I have a dream…! A GUI with a wire-frame chopped-off mountain, in 3D, where you can add points and drag them around until you get the top you want, rotate the mountain to view it from behind, add a valley… OK, maybe not.

There are, of course, three mountains. They are similar but different. And they have been sliced at different levels.

Some really nice results.

But I still wonder if even the most primitive machine learning approach wouldn’t yield better results since that is what ML is really good at: inventing stuff that is not there.

And given the huge database of raw images that exists, it would not be that complicated to use those images with faked overexposure to create a massive training dataset.
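
A minimal sketch of how such training pairs could be faked, assuming linear raw data; the function name and gain value are arbitrary illustrations:

```python
import numpy as np

def make_training_pair(raw, white_point, gain=4.0):
    """Return (clipped_input, ground_truth) from one well-exposed raw frame."""
    boosted = raw.astype(np.float32) * gain       # pretend it was overexposed
    clipped = np.minimum(boosted, white_point)    # the sensor would saturate here
    return clipped, boosted                       # model learns clipped -> boosted
```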

Sadly, my current time constraints between work and university do not allow me to pursue such endeavours. I’ll keep it on my list of possible bachelor/master thesis topics unless someone tackles the idea in the meantime…

I came across this recently… I don’t have a clue about the math, so I’m not sure if this is clever or brute force or both… :slight_smile: Inpainted Image Reconstruction Using an Extended Hopfield Neural Network Based Machine Learning System - PMC

Don’t forget that dt would also need access to that “huge database”. So a link might be handy (there’s also the need to get permission to access the data)…

If you mean that there is a huge corpus of raw files out there, yes, that’s true. Same problem: dt needs permission to access that corpus. And of course, when the raw files are all over the place, it gets a bit more complicated.

Shouldn’t it only need access to the trained models? Those are much smaller and more concise.

Well, yes. But that trained model has to be created.

I’m not sure that it’s as simple as @grubernd made it sound; in particular, access to a large enough collection of raw files might not be all that easy to come by.

For starters, let me point you to this cool website called pixls.us :upside_down_face:
https://raw.pixls.us/#repo

I’m sure that if it were really needed, the community could dig up a lot of raw files they don’t need or have rejected, which could be donated. I have a couple of thousand of them, especially from burst shooting that didn’t turn out quite right.