Should/could darktable warn when invalid pixels pass through the pipeline?

Highlight reconstruction with the new modes (guided laplacians in v4, and even more so the new inpaint opposed and segmentation based modes in 4.1) gives good enough results, especially with the later desaturation performed by filmic or sigmoid.
And filmic has no clear idea of ‘white’, at least not in v6. It turned an extremely bright, but also very saturated (false colour, due to clipping), Sun disk into dark purple.
Magenta highlights vs raw clipping indicator vs filmic white level - #3 by kofa, and the responses below it.

" Properly" is entirely too subjective. The highlight reconstruction module in the current stable has the guided lapsians method which works well. Next stable around the holidays will have two new ones that also work quite well.

This was Aurelien, and it was some time back. He was talking about the older methods and their conflict with using the color calibration module for white balance (or, more accurately, for managing the illuminant). At the time, the only option he had was guided laplacians (GLHR): it was very slow, worked best on things like blown lighting, and he showed it being used to restore gradients in the highlights and then, in concert with filmic, make the correction. The two new modes, which I think you have not tried, correct what you have experienced with the instant cyan and magenta; the exacerbation of that came from the default application of the simpler clip highlights method together with the new approach to managing them in filmic v6. I think that if you use a more up-to-date version, you may find yourself less concerned.

It also makes me wonder: with many sensors at 6000 or more pixels across and many screens still at less than 1920, there are already artifacts due to scaling for display. Managing how inf, invalid or whatever values are processed as you scale in and out at different zoom levels might require something more complex.
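To make that concrete, here is a toy example — not darktable’s actual scaling code, just an illustration of the hazard — of how a single non-finite sample poisons a box-averaged preview pixel:

```python
import numpy as np

# Toy 2x2 box-average downscale step: one non-finite sample
# contaminates the whole averaged output pixel.
tile = np.array([[0.8, 0.9],
                 [np.inf, 0.7]], dtype=np.float32)

print(tile.mean())  # inf -- the invalid value dominates the average

# One possible guard: average only the finite samples.
finite = tile[np.isfinite(tile)]
print(finite.mean() if finite.size else 0.0)  # ~0.8
```

Any real scaler would also have to decide what to do when *all* samples in a region are invalid, which is where it stops being simple.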

@kofa, @paperdigits, thanks, I’ll check out the development version (but then, darktable is not exactly new software, so one could expect not to need the bleeding-edge version to treat a common problem satisfactorily).

@kofa, I read that thread to which you refer, and I’m also puzzled by filmic’s v6 behavior, but I also like the results that I’m able to achieve with it.

What made me start this thread was me editing an image that contains a setting sun. Guided laplacians did not get rid of all the weird colors. It was only in combination with filmic’s reconstruction tab that I obtained good results. Overall, I had the impression that there are wrong colors in the pipeline, and it’s my responsibility to adjust filmic’s parameters in order to make them disappear (i.e. saturate them away). I was wondering whether this process could not be made more robust.

I do very much recommend the new highlight reconstruction options — I very rarely find any need for filmic’s reconstruction now. I’ve been using the weekly and/or nightly Windows builds for a while now, and it’s good. With all respect to the developer(s), I find guided laplacians far too slow on my system, and also ineffective in most cases where I really need it.

That is exactly how AP demonstrates those tools in his video. v6 is far harder to manage for sunsets, IMO; v5, using ‘no’ for color preservation and then using the latitude with the necessary shift and a tweak of the midtone saturation, is (again, IMO) far easier to handle, and the new sigmoid module often works really well on these images too, with little to no input. I think you will be pleasantly surprised. Also, max RGB can be nice for some images, but at other times it can sort of blur things and distort colors, and the default will be moving to power norm; this also seems to help when fighting sunsets, reds, yellows and oranges.
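To illustrate why the norm choice matters, here is a rough sketch. Filmic reduces each pixel to a single scalar ‘norm’, tone-maps that, and rescales RGB to keep the ratios, so the norm decides how bright a saturated color is considered to be. The Σc³/Σc² form I use for the power norm below is my reading of the idea, not a quote of filmic’s actual code:

```python
def norm_max(rgb):
    # max RGB: driven entirely by the largest channel
    return max(rgb)

def norm_power(rgb):
    # power norm -- assumed form sum(c^3) / sum(c^2): a weighted mean
    # that leans towards the brightest channels without ignoring the rest
    num = sum(c ** 3 for c in rgb)
    den = sum(c ** 2 for c in rgb)
    return num / den if den > 0 else 0.0

sunset = (0.9, 0.4, 0.1)    # a saturated sunset orange
print(norm_max(sunset))     # 0.9  -> treated as very bright
print(norm_power(sunset))   # ~0.81 -> a softer brightness estimate
```

For a saturated orange, max RGB reports the pixel as brighter than the power norm does, so the shoulder compresses and desaturates it harder — which matches the blurring/distortion complaints about max RGB on sunsets.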

This was part of what happened with v6, right? Many complained about v5 and its desaturation, so it was remodelled; also (and this is fairly significant) a new perceptual color space is being used.

If you want to see the potential difference that you might have seen in the past, or indeed in some other software: add some reasonably strong corrections in color balance rgb, then go to the mask tab and change from UCS to JzAzBz, and see the difference when colors are more compressed in certain areas. It can give the impression of contrast, but you can lose detail in the shadows. I guess my point is that there is a lot going on under the hood; some changes introduced behaviors that some have found tricky to manage, and efforts and strides have been made — with quite some success, I would say — to improve things in 4.2.

I don’t know what to tell you about your expectations except that they’re wrong.

You should be happy that the software is consistently improving.


A quick clarification, after the discussion has evolved so much: the initial post was about the conflation of clipped and unclipped data, and how image processing in general deals (or rather doesn’t deal) with the distinction. Your post brings an example of how a simple algorithm is not prepared for that distinction. I agree with you, but I thought that the OP’s intent was to go down that rabbit hole and talk about how algorithms could make the distinction.
Obviously that is not going to code itself, so implementation is the big problem.
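For what it’s worth, a minimal sketch of what ‘making the distinction’ could look like — hypothetical, not any existing module’s API — is to carry a clipped-pixel mask alongside the data, so later stages can tell measured values from guessed ones:

```python
import numpy as np

raw = np.array([0.20, 0.95, 1.00, 1.00, 0.60], dtype=np.float32)
white_level = 1.0              # sensor saturation point, normalised

clipped = raw >= white_level   # which samples are saturated
print(clipped)                 # [False False  True  True False]

# A downstream algorithm could then, e.g., weight clipped samples to
# zero when fitting gradients, or refuse to sharpen across them,
# instead of trusting reconstructed values as if they were measured.
trusted_mean = raw[~clipped].mean()
print(trusted_mean)            # mean of measured data only: ~0.58
```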

Not sure if I went Alice in Wonderland on that? If I did, I want to apologize!

I haven’t tried them myself yet; I’ve just seen posts about the new methods, inpaint opposed and segmentation based. My thoughts? I’m looking forward to them! I favor the approach of having some kind of highlight reconstruction on by default, in conjunction with raw-clipping warnings to show where reconstruction happens.
I personally don’t see, apart from deliberate creative intent, a use case where having no highlight reconstruction (and the artefacts that causes) looks better than an attempted reconstruction.


Ah, now I see, my misunderstanding then. That’s what was talked about further down after all, so you made a good assumption!


Well, I am. 🙂 But Aurelien’s recent video on highlight reconstruction made me believe that the way things are in 4.0 is the way they are supposed to be. The consensus in this thread seems to differ, which is good news to me.


The video is “old”; dt 4.2 will have new algos.

The main valid point for 4.0 is: all highlight reconstruction algos would love to have perfect coefficients, but we don’t have them at this stage of the pipeline.
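A worked example of that coefficient problem, with made-up numbers and assuming ‘coefficients’ here means the white-balance multipliers: applying them to already-saturated data is exactly where the magenta comes from.

```python
clipped_raw = (1.0, 1.0, 1.0)  # all three channels at the sensor ceiling
wb = (2.0, 1.0, 1.6)           # illustrative daylight-ish multipliers (R, G, B)

balanced = tuple(c * m for c, m in zip(clipped_raw, wb))
print(balanced)  # (2.0, 1.0, 1.6): red and blue dominate green -> magenta
```

A reconstruction algorithm running before (or around) this step has to guess what the clipped channels *would* have read, without knowing the final multipliers exactly.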

There are more problems not discussed in the video at all but that’s not the point here.


I think the upshot is that the semantics of “invalid” are not valid for upper-end data… 😆

I checked out 4.2 and the new highlight reconstruction method works really well. Congratulations @hannoschwalm (et al.) for what seems to even involve original research! Treating invalid pixels early in the pipeline is of course preferable to dragging them through the rest of it.

Now if “someone” were one day to find the time and inspiration to improve darktable’s noise reduction, this software would become almost perfect from my point of view.

Post some examples of where it fails. I have ON1 Photo, which touts its fancy NoNoise AI, and usually the results with DT are as good, if not better. Maybe you are simply using profiled denoise in DT, and the profile for your hardware could be better?

I suspect you have watched @rawfiner’s videos, so as to get the most out of the 4 or 5 ways to denoise in DT?

In any case, an example of “poor” performance would be useful on the path to doing better…


Perhaps move further discussion of noise reduction to its own topic?