Guiding laplacians to restore clipped highlights

Nice idea :frowning:

Well, there’s another problem with that module: it uses the raw white point after the white balance module. In most cases the red channel in particular can have a WB coefficient > 2. So with a 14-bit raw file, you consider a red value that needs 15 bits as clipped, when it isn’t clipped in the raw data before WB. The same can happen with the blue channel.
(Have a look at some clipped areas with WB module on and off, quite revealing…)
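To illustrate the point above, here is a toy sketch (the multipliers and raw value are made up, not from any specific camera): a 14-bit raw value well below sensor saturation can exceed the raw white point once a white-balance multiplier > 1 is applied, so a clip test done after WB flags it as clipped even though the sensor never clipped.

```python
# Hypothetical numbers for illustration only -- not darktable code.
RAW_WHITE_POINT = 2**14 - 1                   # 16383 for a 14-bit sensor
wb_coeffs = {"R": 2.1, "G": 1.0, "B": 1.6}    # assumed daylight multipliers

raw_red = 9000                                # well below clipping before WB
after_wb = raw_red * wb_coeffs["R"]           # 18900, past the white point

def looks_clipped(value, white=RAW_WHITE_POINT):
    """Naive clip test applied *after* white balance."""
    return value >= white

print(looks_clipped(raw_red))    # False: the sensor never clipped
print(looks_clipped(after_wb))   # True: WB pushed it past the white point
```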

A lot of flowers have colours in the yellow-red-violet/magenta range. I really wouldn’t like automatic “correction” of magenta hues for my wild orchids.


Sometimes, we deal with chromatic aberration selectively, but I would caution the generalization. As you say, there are exceptions like flowers.

Not helpful, until you substantiate the sarcastic opinion.

Now I have a great idea. On setup you get asked “do you photograph flowers?” Yes/no. If no, magenta will never feature in your images :smile:

Joking aside, as a user it’s so blatantly obvious that magenta doesn’t belong in the frame; it’s fascinating that it’s hard for a program to detect this. Perhaps it’s guided by the ambition that it should never fail and mistakenly kill magenta. The thing is, it fails the other way by producing magenta all the time, presumably orders of magnitude more often.

I mean, there is object recognition, but I don’t think FLOSS raw processors or their devs are ready to do or accept that. Anyway, general-purpose tools are much more useful than a tool that only works sometimes.

It is so obviously not a well thought out idea that nosle himself finds fault in it literally two sentences later.

Indeed but sarcastic remarks don’t contribute to the discussion either. I prefer @rvietor’s reply.


There are a growing number of us who are tired of the diatribe. How would you like us to express this fatigue and annoyance?


I think there has been a request for true sensor clipping data… armed with that, users should be able to make an informed evaluation and correction as necessary?


You mean a hint to the degree of clipping? Please elaborate.

This, just 10 hours ago, is likely the follow-up spoken about…

How can this help? Seeing something doesn’t allow you to edit it. It seems like we diverted from the subject.

Maybe this should be a new post, because it really isn’t about the guided laplacians. The real subject to me (and I think to s7habo too) is how to treat magenta highlights in darktable. A short answer could be: don’t overexpose. But some of my shots are of my kids, and they move, changing the lighting.

True, we have a good idea where/what the clipping is already. The question is how to mitigate it. Ideally, we would want to estimate the full-well signal, but that is outside of our scope. @s7habo’s last attempt looks good, if only a bit forced.

Well, it’s really very easy… these issues in the extreme highlights are heavily impacted by the filmic norms. Just turn them off for this image, so the default workflow; then with filmic v6 just drop the extreme luminance saturation and bam: 217 217 217… no magenta or other cast in the whites.

[image]

So this:

[image]

vs this:

[image]

Except that, at least in @s7habo’s image, the wall isn’t supposed to be white. I have additional thoughts, but we are supposed to be talking about GL.

Then we could shift first to when and when not to use them, i.e. GLHR: for example, when is this the best approach over some optional or alternative approach? You could say we are still on topic, since we are interspersing and comparing methods here. I think the way the filmic norms affect the channels in the extreme highlights will impact all methods of highlight recovery. And since HR comes first, the interaction of the two in the image displayed on screen is one thing to consider.

Okay, we can split hairs about the exact color of the wall; without being there, it’s a guess for anyone. I just made it white and not bluish or magenta-ish or whatever-ish. I was simply trying to make a point: the focus is often on HLR, but the result is also affected at the level of what gets processed and displayed by the filmic norm handling. I think in the last iteration @s7habo is showing the highlights highly desaturated in filmic reconstruction, so that is a feature of his attempt to process the highlights in the image. I just did it a different way, which likely has its own set of caveats…

I agree. It seems like we are arguing for the sake of it. Let’s look at it this way:

- Do the methods shown by @s7habo and @priort produce better results than the other HL reconstruction methods? It looks like they do.
- Is it possible to manage/get rid of the infamous magenta cast? It looks like it is.
- Do we all need more familiarity with this new feature, and its interplay with the filmic module, before assuming it doesn’t work well? Likely, since it’s not officially out yet and there is no documentation for it.

I vote for the collective chill pill


I didn’t realize we were arguing. I simply wanted to understand our discussion. Personally, I would have gone @priort’s route. It isn’t a competition but the difference with @s7habo’s is that he is trying out this new module to see where it could fit into his/our workflow. Does that make sense?

The only point of disagreement is making snide remarks. I know it comes from bad feelings/vibes from previous exchanges. As a mod, I feel that I should call that out. If folks disagree with @nosle’s comments, fine. Either ignore him or make a better contribution to the thread.


OK, but sometimes sarcasm is just a very simple and valid way to comment and show slight disagreement (btw, I didn’t do that often on this forum).

It is very difficult, in fact. You have to make “assumptions” about why some photosites are fully saturated and some are not. For the majority of blown-out areas under common lighting situations it will be magenta, right. Existing algorithms in dt don’t treat the “magenta cast” in a specific way.
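As a simplified sketch of why blown areas tend toward magenta (illustrative multipliers, not a real camera profile): green usually needs the smallest WB multiplier, so for a bright neutral patch the green photosites saturate first while red and blue still record valid data; after white balance, red and blue then overshoot green, which reads as magenta.

```python
# Toy model, assumed numbers -- not darktable code.
RAW_MAX = 1.0
wb = {"R": 2.0, "G": 1.0, "B": 1.5}   # assumed daylight multipliers

def sensor_record(scene_level):
    """Raw values for a neutral patch of the given brightness;
    each channel saturates (clips) at RAW_MAX."""
    return {ch: min(scene_level / wb[ch], RAW_MAX) for ch in wb}

def apply_wb(raw):
    return {ch: raw[ch] * wb[ch] for ch in raw}

raw = sensor_record(1.3)   # green clips at RAW_MAX, red and blue do not
out = apply_wb(raw)
# out["R"] and out["B"] end up above out["G"]: the patch reads as magenta
```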

That would likely involve image segmentation and further steps. I once implemented “Efficient Graph-Based Image Segmentation” (Pedro F. Felzenszwalb and Daniel P. Huttenlocher, International Journal of Computer Vision, 59(2), September 2004) for dt; despite being efficient (as the title says), it is very hungry for CPU power. Maybe I’ll rethink this; we could use it as a mask.
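For readers unfamiliar with that paper, here is a minimal pure-Python sketch in its spirit (a union-find merge over edges sorted by weight, with the k/|C| merge threshold from the paper). It runs on a toy 1-D signal; a real implementation works on 2-D images with 4- or 8-connected edges, and the function and parameter names here are illustrative, not dt code.

```python
def segment(values, k=2.0):
    """Graph-based segmentation of a 1-D intensity signal
    (Felzenszwalb/Huttenlocher-style merge criterion)."""
    n = len(values)
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n          # max internal edge weight per component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # edges between neighbours, weighted by intensity difference
    edges = sorted(
        (abs(values[i] - values[i + 1]), i, i + 1) for i in range(n - 1)
    )
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # merge only if the edge is weak relative to both components
        if w <= min(internal[ru] + k / size[ru],
                    internal[rv] + k / size[rv]):
            parent[rv] = ru
            size[ru] += size[rv]
            internal[ru] = w
    return [find(i) for i in range(n)]

signal = [10, 11, 10, 12, 50, 51, 52, 50]
labels = segment(signal, k=5.0)
# the flat dark run and the bright run end up as two separate components
```

The expensive parts in 2-D are the edge sort and the many union-find operations per pixel, which is where the CPU hunger mentioned above comes from.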
