Guiding laplacians to restore clipped highlights

How does this help? Seeing something doesn’t mean you can edit it. It seems like we’ve drifted from the subject.

Maybe this should be a new post, because it really isn’t about the guided laplacian. The real subject to me (and I think to s7habo too) is how to treat the magenta highlights in darktable. A short answer could be: don’t overexpose. But some of my shots are of my kids, and they move, changing the lighting.

True, we already have a good idea of where and what the clipping is. The question is how to mitigate it. Ideally, we would want to estimate the full-well signal, but that is outside our scope. @s7habo’s last attempt looks good, if a bit forced.

Well, it’s really very easy… these issues in the extreme highlights are strongly impacted by the filmic norms. Just turn them off for this image (so, the default workflow), then with filmic v6 just drop the extreme luminance saturation and bam: 217 217 217… no magenta or other cast in the whites.

[image]

So this:

[image]

vs this:

[image]

Except, at least in @s7habo’s image, the wall isn’t supposed to be white. I have additional thoughts, but we are supposed to be talking about GL.

Then we could shift to when and when not to use them, i.e. GLHR: for example, when is this the best approach over some alternative? You could say we are still on topic, since we are interspersing and comparing methods here. I think the way the filmic norms affect the channels in the extreme highlights will impact all methods of highlight recovery. Or, since HR comes first, the interaction of the two in the image displayed on screen is one thing to consider.

Okay, we can split hairs about the exact colour of the wall; without being there, it’s a guess for anyone. I just made it white rather than bluish or magenta-ish or whatever-ish. I was simply trying to make a point: the focus is often on HLR, and what gets processed and displayed is being impacted at the level of the filmic norm handling. I think in the last iteration @s7habo strongly desaturates the highlights in filmic’s reconstruction, so that is a deliberate feature of his attempt to process the highlights in the image. I just did it a different way, which likely has its own set of caveats…

I agree. It seems like we are arguing for the sake of it. Let’s look at it this way:

Do the methods shown by @s7habo and @priort produce better results than the other HL reconstruction methods? It looks like they do.

Is it possible to manage/get rid of the infamous magenta cast? It looks like it is.

Do we all need more familiarity with this new feature, and its interplay with the filmic module before assuming it doesn’t work well? Likely, since it’s not officially out yet, and there is no documentation for it.

I vote for the collective chill pill


I didn’t realize we were arguing. I simply wanted to understand our discussion. Personally, I would have gone @priort’s route. It isn’t a competition, but the difference with @s7habo’s approach is that he is trying out this new module to see where it could fit into his/our workflow. Does that make sense?

The only point of disagreement is the snide remarks. I know they come from bad feelings/vibes from previous exchanges. As a mod, I feel I should call that out. If folks disagree with @nosle’s comments, fine: either ignore him or make a better contribution to the thread.


OK, but sometimes sarcasm is just a very simple and valid way to comment and show slight disagreement (btw, I didn’t do that often on this forum).

It is very difficult, in fact. You have to make assumptions about why some photosites are fully saturated and some are not. For the majority of blown-out areas under common lighting conditions, the result will be magenta, right. Existing algorithms in dt don’t treat the “magenta cast” in a specific way.
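To make that concrete, a toy numpy illustration with made-up numbers:

```python
import numpy as np

# Toy illustration: a fully clipped pixel has all three raw channels
# stuck at the white point. Daylight white balance multiplies red and
# blue by coefficients > 1 while green stays near 1, so the clipped
# "white" comes out with R and B above G, i.e. magenta.
white_point = 16383                    # hypothetical 14-bit raw
wb = np.array([2.0, 1.0, 1.6])         # hypothetical R, G, B multipliers
clipped = np.array([white_point] * 3)  # R = G = B at the sensor
print(clipped * wb)                    # [32766. 16383. 26212.8] -> magenta
```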

That would likely involve image segmentation and further steps. I once implemented “Efficient Graph-Based Image Segmentation” (Pedro F. Felzenszwalb and Daniel P. Huttenlocher, International Journal of Computer Vision, 59(2), September 2004) for dt; despite being efficient (as the title says), it is very hungry for CPU power. Maybe I’ll rethink this; we could use it as a mask.
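For reference, scikit-image ships an implementation of the same Felzenszwalb-Huttenlocher algorithm, so a mask along those lines could be sketched like this (function name and parameters are illustrative, not dt code):

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def clip_mask_from_segments(rgb, clipped, scale=100.0, sigma=0.8, min_size=50):
    """Grow clipped areas out to segment boundaries.

    `rgb` is an (H, W, 3) float image, `clipped` an (H, W) boolean mask.
    Segment the image, then keep every segment that contains at least
    one clipped pixel; the union of those segments is the mask.
    """
    labels = felzenszwalb(rgb, scale=scale, sigma=sigma, min_size=min_size)
    hit = np.unique(labels[clipped])
    return np.isin(labels, hit)
```

The `scale` and `min_size` knobs trade segment size against runtime, which echoes the CPU-cost concern above.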


There are also fundamental differences in the way images are processed by humans and by computers. Pixels are treated one by one; physiological photosites work in parallel (very simplified).
Image recognition by computers exists for specific domains. What is asked for here is a universal recognition algorithm, coupled to a database of recognised shapes (flowers again).

But to improve the treatment a bit, would it be possible to make the highlight recovery module aware of the applied WB coefficients (i.e. multiply the raw white point by the WB coefficient for red and blue)?
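Something like this, as a sketch (all names are illustrative, not dt’s API):

```python
import numpy as np

def clipped_after_wb(wb_img, raw_white, wb_coeffs):
    # Sketch of the suggestion above: a channel that clipped on the
    # sensor sits at raw_white * coeff after white balance, so each
    # channel is tested against its own scaled clip level rather than
    # a single global threshold.
    thresholds = raw_white * np.asarray(wb_coeffs)  # per-channel clip level
    return wb_img >= thresholds                     # (H, W, 3) boolean mask
```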

Good to know…

Mods (@afre, @paperdigits), it is clear that @nosle just wants to stir the pot and get reactions without contributing anything positive to the community. Other members replying to him only encourages the behavior. I vote for more intervention than just hoping he is ignored.

I understand.

It could be efficient for a graph-based method, but those are slow to begin with. :stuck_out_tongue: Especially with the running gag (and truism) that my laptop can’t handle anything from dt.

This comment is for all forum members. If you feel that moderation is necessary, please use the appropriate channels, such as flagging and PM reports. (Just don’t flag-bomb people you don’t like.) That way, mods/staff can look into the matter. We have made exceptions lately, but yeah, it is not the best way to do things. Thanks.

This is exactly what we mean. It is unfair and annoying: you hijack threads with ideas that are ill thought out, and then you change the subject.

Stop it. Is that clear enough?

This discussion is somewhat above my pay grade but I’ll chip in my 2p worth…

Firstly, sensor clipping is surely when a pixel value is >= the raw white point, as shown in the raw black/white point module. Is that correct?

How difficult is this for the maths wizards? If I understand your comment, @afre, it’s like we’re shown a physical model of a mountain, but someone has horizontally sliced off the top. The challenge is to surface-fit something to replace what’s been sliced off, a bit like curve fitting in 2D. Are there not algorithms for this? I would have thought there were.
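As a rough sketch of that surface-fit idea (hypothetical, not an existing dt algorithm; it assumes the ring of unclipped pixels is non-empty):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def refit_mountain_top(lum, clipped):
    # Toy version of "rebuild the sliced-off mountain top": fit a
    # 2nd-order polynomial surface to a ring of unclipped pixels around
    # the clipped blob by least squares, then evaluate it inside.
    ring = binary_dilation(clipped, iterations=4) & ~clipped
    ys, xs = np.nonzero(ring)
    A = np.stack([np.ones_like(xs), xs, ys, xs*xs, xs*ys, ys*ys], axis=1)
    coef, *_ = np.linalg.lstsq(A.astype(float), lum[ys, xs], rcond=None)
    yc, xc = np.nonzero(clipped)
    Ac = np.stack([np.ones_like(xc), xc, yc, xc*xc, xc*yc, yc*yc], axis=1)
    out = lum.copy()
    out[yc, xc] = Ac.astype(float) @ coef  # extrapolate inside the blob
    return out
```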

Lastly, I tried the white-chested puffin pic to see if I could use GLHR. I accept it’s early days and, e.g., there’s no finished tutorial yet as far as I’m aware. However, I found it tricky to adjust things going back and forth between GLHR and filmic, but I did finally succeed.

I have experimented with this and got mixed results. One problem is that the exact colour is determined by the white balance, so it depends on your camera and the lighting conditions. Another issue is determining how much correction for the magenta cast is required; or, put another way, how do you measure the amount of colour cast in a pixel?

My approach was to use the amounts by which the red and blue channels are pushed above clipping by the white balance as a mask. I work out what colour I need to add to make fully clipped areas white, and then use the mask when adding it to the original image.

It works well to disguise the magenta highlights in daylight images. The colour is probably more accurate for highlights than a magenta cast, but it tended to desaturate reds near highlights, giving an ‘old photo’ look.

The technique failed for me under tungsten light. I create the mask from the unclipped channels being pushed into clipping during white balance, and this did not happen as often in my tungsten test images.
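Roughly, in numpy terms (the clip level and WB coefficients are assumed inputs; the names are illustrative):

```python
import numpy as np

def neutralize_clipped_whites(wb_img, clip, wb_coeffs):
    # After white balance, a fully clipped white sits at
    # clip * wb_coeffs per channel; adding the complement of that
    # colour (relative to its maximum) pushes it back to neutral.
    # The mask is how far any channel was shoved past the clip level
    # by the WB multipliers (0 = untouched, 1 = far past).
    cast = clip * np.asarray(wb_coeffs)     # colour of clipped whites
    correction = cast.max() - cast          # the greenish additive term
    push = np.clip((wb_img - clip) / clip, 0.0, 1.0).max(axis=-1)
    return wb_img + push[..., None] * correction
```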


An example of the method I just described.

Input image:

Result:

Added greenish colour to get result (ignore debug text):


Along those lines… this gives not a bad result as well, using spot matching and sampling one of the skulls:

[image]

There are algorithms for curve fitting (2D and 3D), but those work on simple curves (2nd-, 3rd-, perhaps 4th-order polynomials, or other well-defined simple curves); colours in real-life images don’t follow simple curves. Higher orders take a lot more processing time and can end up being little better than guesses with some scientific sauce on them.

To stick with your mountain example, I’ve yet to see a mountain top that is a nice parabola. Mostly it’s a kind of irregular triangle, so you need high-order functions to model them…

And curve fitting assumes continuity between the clipped area and its surroundings. If your overexposed area is a sheet of paper, curve fitting to fill in the area is possibly the worst solution (the area is a flat sheet whose texture has no relation to the surrounding area).

All of those “nice” solutions to fill in missing data might work in particular cases, but are completely unwanted in others… Keep in mind that “restoring clipped highlights” is “filling in missing data”.

Even with only one channel clipped, you have to make assumptions about the missing data. It just happens that in that case, most of the time, the assumptions are correct or at least close enough.
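To illustrate that single-channel case, here is a toy numpy sketch of the usual assumption (local chromaticity stays roughly constant), rebuilding one clipped channel from its unclipped neighbours. This is not dt’s actual code, and the clip level is an assumed input:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def rebuild_one_channel(img, clip):
    # Where exactly one channel clipped, estimate it from the ratio of
    # that channel to the other two in a ring of unclipped neighbours.
    # Real implementations are considerably more careful than this.
    out = img.copy()
    clipped = img >= clip
    one = clipped.sum(axis=-1) == 1
    for c in range(3):
        bad = clipped[..., c] & one
        ring = binary_dilation(bad, iterations=3) & ~clipped.any(axis=-1)
        if not bad.any() or not ring.any():
            continue
        others = [k for k in range(3) if k != c]
        ref = img[..., others].mean(axis=-1)   # mean of the unclipped pair
        ratio = (img[..., c][ring] / np.maximum(ref[ring], 1e-6)).mean()
        out[..., c][bad] = ratio * ref[bad]
    return out
```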

You bring up good points.

I have been experimenting, though, and have found that it is possible to get quite good results. Here is an animated GIF of my highlight synthesis in fully clipped areas.

One thing I have found is that anything that is reasonably plausible is an improvement, provided it is less distracting than the clipping.


Exactly! And though I didn’t make it clear, I was coming at this from the angle that there seems to be frustration about the prevalence of magenta, and I feel that myself. I appreciate that a curve-fitted mountain top is a guess, but it’s likely to be better than vivid magenta, and if it doesn’t work in a particular picture, you can try something else.

I have a dream…! A GUI with a wire-frame chopped-off mountain, in 3D, where you can add points and drag them around until you get the top you want, rotate the mountain to view it from behind, and add a valley… OK, maybe not.
