Should/could darktable warn when invalid pixels pass the pipeline?

Modern builds have a nicer reconstruction method set as the default.

So, those invalid pixels are actually repaired and then fed into the rest of the pipeline.

Note that reconstruction is also done by default (without a way to change it or turn it off) in programs like Lightroom, DxO PhotoLab, and many others.

Older darktable (4.0.1, I believe) had the default set to clip those pixels, preventing the magenta but also destroying any detail that might still be recoverable. The 4.1/4.2 defaults give more out of the box.

But there is still a problem with some camera makers/models where darktable does NOT know which pixels are invalid. The raw-clipping indicator will not show them, and thus they will not be clipped or repaired out of the box.

This is a problem/bug where darktable doesn’t know about the invalid pixels, as you call them, so it also can’t do anything sensible with them.

So, to your question: darktable doesn’t need to prevent anything with those pixels, since they get a good attempt at automatic repair in 4.1/4.2. And if darktable doesn’t know they are invalid, your idea will not work either (and neither will the repair).

I can see how some might desire that behaviour, and it’s quite possible to make a Gaussian blur which treats some regions as “masked” (or as a boundary) and won’t do any such spreading. But here’s the deal: many algorithms don’t play as nicely, and in general you probably don’t want your masked region to have effects outside of it.
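Roughly, that is the idea behind normalized convolution: blur both the data (with invalid pixels zeroed out) and a validity mask, then divide, so masked pixels neither contribute to nor contaminate their surroundings. A minimal numpy sketch of that idea, assuming a boolean `valid` mask (this is just an illustration, not how darktable implements its blurs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_gaussian_blur(img, valid, sigma=2.0):
    """Gaussian blur that ignores invalid pixels (normalized convolution).

    img   : 2-D float array with the pixel values
    valid : 2-D bool array, True where the value can be trusted
    """
    w = valid.astype(np.float64)
    blurred = gaussian_filter(img * w, sigma)  # invalid pixels contribute nothing
    support = gaussian_filter(w, sigma)        # how much valid data reached each output pixel
    support = np.maximum(support, 1e-6)        # avoid division by zero in fully masked areas
    return blurred / support
```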

If you enable highlight reconstruction (which is an automated treatment), the lost highlights will be treated. If you don’t, they won’t be (there is no other automated way to get sensible values into those pixels), and you’ll have to treat them somehow (e.g. by clipping to white, desaturating, or whatever).

The only module that cares about what is clipped and what is not is the module having the explicit responsibility to do so: highlight reconstruction.

It’d be pretty much impossible to specify what each module should do if it encounters a blown pixel. What should lens correction or rotate and perspective do, for example? Or any of the sharpening/blurring filters (including those for noise reduction)? They don’t map pixels 1:1 (one input pixel → one output pixel).

Plus, a blown raw pixel (let’s say a green one) will affect all the pixels surrounding it (via demosaicing). You could try to estimate the green component from the remaining unclipped green pixels, but what if some red or blue raw pixel has no unclipped green neighbours? These are exactly the problems highlight reconstruction deals with.
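As a toy illustration of that corner case (nothing to do with darktable’s actual demosaicers), here is a check on an RGGB Bayer mosaic for whether a red or blue site still has any unclipped green neighbour left to estimate from:

```python
import numpy as np

CLIP = 1.0  # hypothetical normalised white level

def unclipped_green_neighbours(bayer, y, x):
    """Values of the unclipped green neighbours of a red/blue site (y, x)
    in an RGGB mosaic; greens sit directly above/below/left/right of R and B."""
    h, w = bayer.shape
    greens = []
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and bayer[ny, nx] < CLIP:
            greens.append(bayer[ny, nx])
    return greens  # an empty list is exactly the "no unclipped green neighbours" case
```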

Yes, indeed, that’s what I meant when I wrote “I fear that just converting there and back to RGB again (without any other modification) would spread the infs, which is probably not good”.

But there are other possibilities. One could add something akin to an alpha channel to each color channel separately. That value could describe the degree of validity of the associated color value, with 0.0 meaning invalid and 1.0 meaning valid. This would double the amount of data to be processed, so I do not really mean this seriously. It’s only to show that there are different possibilities.
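To make that idea a bit more concrete, here is a purely hypothetical sketch (nothing of this sort exists in darktable) of carrying a per-channel validity weight next to each colour value and letting it decide how pixels mix:

```python
from dataclasses import dataclass

@dataclass
class WeightedValue:
    value: float     # one colour channel's value
    validity: float  # 0.0 = invalid (e.g. clipped), 1.0 = fully trustworthy

def mix(a: WeightedValue, b: WeightedValue) -> WeightedValue:
    """Average two samples, weighting each by how valid it is."""
    total = a.validity + b.validity
    if total == 0.0:
        return WeightedValue(0.0, 0.0)        # nothing trustworthy to propagate
    value = (a.value * a.validity + b.value * b.validity) / total
    return WeightedValue(value, total / 2.0)  # the result's validity is the average
```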

Kofa, you raise excellent points, and I agree that what you describe seems to totally make sense. However, in my experience it’s not how darktable behaves.

With current darktable (4.0.0), my experience has been that I can only treat blown highlights properly in filmic rgb, either by using that module’s highlight reconstruction functionality or by lowering the white point. And I have spent many hours on this topic, read a lot, and watched several videos, including the recent 1.5-hour lecture by the author of filmic rgb.

And I remember reading or hearing somewhere that highlight reconstruction cannot reconstruct highlights properly because, at that point in the pipeline, it doesn’t even really know what white is.

highlight reconstruction with the new modes (guided laplacians in v4, and even more so the new inpaint opposed and segmentation based methods in 4.1) gives good enough results, especially with the later desaturation performed by filmic or sigmoid.
And filmic has no clear idea of ‘white’, at least not v6. It turned an extremely bright, but also very saturated (false colour, due to clipping) Sun disk into dark purple.
See Magenta highlights vs raw clipping indicator vs filmic white level - #3 by kofa, and the responses below it.

" Properly" is entirely too subjective. The highlight reconstruction module in the current stable has the guided lapsians method which works well. Next stable around the holidays will have two new ones that also work quite well.

This was Aurelien, and it was some time back… he was talking about the older methods and the conflict with using the color calibration module to white balance, or more accurately to manage the illuminant… at the time all he had as an option was the GLHR… it was very slow, worked best on things like blown lighting, and was shown by him being used to restore gradients in the highlights and then, in concert with filmic, make the correction… The two new modes, which I think you have not tried, correct what you have experienced with the instant cyan and magenta… the exacerbation of which came from the default application of the simpler clip highlights method and the new approach to managing them in filmic v6… I think that if you use a more up-to-date version you may find yourself less concerned…

It also makes me wonder: with many sensors at 6000 or more pixels wide and many screens still at less than 1920, there are already artifacts due to scaling for display… managing how the inf (or invalid, or whatever) values are processed as you scale in and out at different zoom levels might require something more complex.
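As a toy example of why that is tricky (this is not darktable’s actual scaler): a plain box-filter downscale lets a single inf or NaN poison its whole output pixel, so a preview at one zoom level can show artifacts that the 1:1 view does not:

```python
import numpy as np

def box_downscale_2x(img):
    """Naive 2x2 box-filter downscale; any inf/NaN in a block taints that output pixel."""
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

tile = np.ones((4, 4))
tile[1, 2] = np.inf            # one invalid sensor value
print(box_downscale_2x(tile))  # the whole 2x2 block containing it becomes inf
```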

@kofa, @paperdigits, thanks, I’ll check out the development version (but then, darktable is not exactly new software, so one could expect not to need the bleeding edge version to treat a common problem satisfactorily).

@kofa, I read that thread to which you refer, and I’m also puzzled by filmic’s v6 behavior, but I also like the results that I’m able to achieve with it.

What made me start this thread was me editing an image that contains a setting sun. Guided laplacians did not get rid of all the weird colors. It was only in combination with using filmic’s reconstruction tab that I obtained good results. Overall, I had the impression that there are wrong colors in the pipeline, and it’s my responsibility to adjust filmic’s parameters in order to make them disappear (i.e. saturate them away). I was wondering whether this process could not be made more robust.

I do very much recommend the new highlight reconstruction options - I very rarely find any need for the filmic reconstruction now. I’ve been using the weekly and/or nightly Windows builds for a while now, and it’s good. With all respect to the developer(s), I find guided laplacians far too slow on my system, and also ineffective in most cases where I really need it.

That is exactly how AP demonstrates those tools in his video… v6 is far harder to manage sunsets with, IMO… v5, using “no” for color preservation and then using the latitude with the necessary shift and a tweak of the midtone saturation, is again IMO far easier to handle, and the new sigmoid module often works really well on these images as well, with little to no input… I think you will be pleasantly surprised… also, max RGB can be nice for some images, but other times it can sort of blur things and distort colors, and the default will be moving to power norm… this also seems to help with fighting sunsets, reds, yellows, oranges…

This was part of what happened with v6, right… many complained about v5 and desaturation, so it was remodelled, and, quite significantly, a new perceptual color space is now being used…

If you want to see the potential difference that you might have seen in the past, or indeed in some other software… add some reasonably strong corrections in color balance rgb, then go to the mask tab and change from UCS to JzAzBz, and see the difference when colors are more compressed in certain areas… it can give the impression of contrast, but you can lose detail in the shadows… I guess my point is that there is a lot going on under the hood; some changes introduced certain behaviors that some have found tricky to manage, and efforts and strides have been made, with quite some success I would say, to improve things in 4.2.

I don’t know what to tell you about your expectations except that they’re wrong.

You should be happy that the software is consistently improving.


A quick clarification after the discussion has evolved so much: the initial post was about the conflation of clipped and unclipped data and how image processing in general deals (or rather doesn’t deal) with the distinction. Your post brings an example of how a simple algorithm is not prepared for that distinction. I agree with you, but I thought that the OP’s intent was to go down that rabbit hole and talk about how algorithms could make the distinction.
Obviously that is not going to code itself, so implementation is the big problem.

Not sure if I went Alice in Wonderland on that? If I did, I want to apologize!

Not myself yet; I’ve just seen posts about the new methods, inpaint opposed and segmentation based. My thoughts? I’m looking forward to them! I favor the approach of having some kind of highlight reconstruction on by default, in conjunction with raw-clipping warnings to show where reconstruction happens.
I personally don’t see, apart from deliberate creative intent, a use case where having no highlight reconstruction (and the artefacts that causes) looks better than an attempted reconstruction.


Ah, now I see, my misunderstanding then. That’s what was talked about further down after all, so you made a good assumption!


Well, I am. :slight_smile: But Aurelien’s recent video on highlight reconstruction made me believe that the way things are in 4.0 is the way they are supposed to be. The consensus in this thread seems to differ, which is good news to me.


The video is “old”, dt 4.2 will have new algos.

The main valid point for 4.0 is: all highlight reconstruction algos would love to have perfect coefficients but we don’t have them at this stage of the pipeline.

There are more problems not discussed in the video at all but that’s not the point here.


I think the upshot is that the semantics of “Invalid” are not valid for upper-end data… :laughing:

I checked out 4.2 and the new highlight reconstruction method works really well. Congratulations @hannoschwalm (et al.) for what seems to even involve original research! Treating invalid pixels early in the pipeline is of course preferable to looping them through.

Now if “someone” one day would find the time and inspiration to improve Darktable’s noise reduction, this software would become almost perfect from my point of view.

Post some examples of where it fails. I have ON1 Photo RAW, which touts its fancy NoNoise AI, and usually the results with DT are as good, if not better. Maybe you are simply using profiled denoise in DT and the profile for your hardware could be better?

I suspect you have watched @rawfiner’s videos, so as to get the most out of the 4 or 5 ways to denoise in DT?

In any case an example of “poor” performance would be useful on a path to better…


Perhaps move further discussion of noise reduction to its own subject?