Should/could darktable warn when invalid pixels pass the pipeline?

In that case, if feasible, a checkbox to specify that clipping should be measured just before the conversion would probably not be a bad idea. That way you can be sure you're feeding the tone-mapper the best possible data.

I know it often has artifacts, and I suspect the new inpainting method is considered superior… I think, as with all things, if you have an old edit that uses it, you can create a style from that.

Is this, though, not sort of a display-referred mentality? For filmic, isn't the goal for the most part to feed it a nicely exposed middle gray and then let it do its thing? You adjust the parameters to manage how the data is mapped around that anchor… Depending on the dynamic range of the image you might mess with clipping and then have an 80% dark image that you are now trying to elevate… but maybe I misunderstand your thought process…

EDIT

My way of analysing the data is to use the waveform… this really lays out what is going on across the image, much better than a histogram, and I don't apply filmic when I open the image. I look at it first to form my first impression… Doing this, it's pretty easy to see what you are working with and where any trouble spots are…


But, correct me if I'm wrong @alpinist !, this is exactly what this discussion was intended to be about. If your whole pixelpipe just pretends that clipped pixels must be propagated through the whole pipe and treated like all other pixels, because the implemented algorithms don't know any better, you pretty much describe what was written in the first post.

I agree that I can't imagine a rewrite of every single module in dt to accommodate another 'clipped pixels' layer that propagates through the pipe. The conceptual problem with that is: how do you decide what each module's behaviour should be, taking the clipped values into account?

The simple answer to the problem is: without a reasonable method to paint in the clipped raw areas (aka highlight reconstruction in the broadest sense), the conflation of clipped and unclipped values remains.

On top of that comes the problem of determining ‘true clipping’.


Sorry, I probably expressed myself poorly.
The data are "clipped" after the conversion, not before it. If you feed the pixelpipe valid data (in the mathematical sense), you have to work really hard to produce clipping in the scene-referred part of the chain (the maximum float value is something like 10^38; we are usually working with values below 10 000).
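For reference, assuming the pipeline works in 32-bit floats (which the ~10^38 figure suggests; this is my assumption, not something checked in the darktable source), the headroom is easy to verify:

import numpy as np

print(np.finfo(np.float32).max)  # ~3.4e38, the largest representable 32-bit float
print(np.float32(10_000.0))      # typical scene-referred values fit with enormous headroom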

It probably is. I just don’t have such in my hack software. I should probably start a ‘rawproc’ tag, but I’d probably be just talking to myself with it… :crazy_face:

In rawproc, I use the RawTherapee highlight reconstruction routine as packaged in their librtprocess library. I don't even parameterize it; it pulls the relevant white point and other data from the internal metadata and does its thing. It only affects things > 1.0, so I've occasionally considered adding it to the default toolchain and inflicting it on every image, but I don't, because I just like to deal with it…

Maybe I misread what you're saying, but I think that's going a bit Alice in Wonderland on interpretation! I'm talking about image processing in general. This seems like a general curiosity/learning post, so I guess my point is that implementation is a big problem for something like that. After all, even if it's possible, who's going to write the code?

Have you evaluated the new methods?? Thoughts??

I look at it this way.

If you have a shitty color profile, then all your pixels might be "invalid". I think a better choice is to stick to calling them clipped: because they are clipped, they are missing data, and because they are missing data, they form an incomplete data set. Labelling some of this data as invalid provokes deviations in the discussion around what "invalid" means, and I don't think that is the best way to look at it, IMO. So the question then becomes: is DT presenting the most complete data set for editing, given what has been captured by the sensor and recorded to the file, and if not, how might that be improved… I think we now have some new methods of highlight recovery that attempt to fill in and complete the data…

So you can improve the starting point, and then hopefully each module will do as little "harm" to that data set as possible… I think the modules are trying to do that, especially the newer ones, but if they are failing, the deficits should be identified and targeted for discussion and improvement where necessary.

If you could have a toggle that would compress the current history stack to match the active module list I guess you could leave the current monitors on and walk your way up the processing chain to follow what happens to the data at each step…

Didn’t think of that, but of course.

By definition scene-referred data can’t clip, so once you are past the demosaic and inside the scene-referred part of the pixelpipe, there’s no way of knowing if a pixel is clipped. Which of course is why HLR is before demosaicing. Only when you are back in display-referred does it make sense to talk of clipping.

The only way to do what @alpinist talks about, would be to “tag” the pixels before entering scene-referred space. Keeping track of that would likely cause a significant performance hit. And every single module would need to be updated to make sure the tags are passed on correctly - don’t even want to contemplate what that would entail for modules that change image geometry…
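As a purely illustrative sketch (numpy/scipy, not darktable code) of why geometry modules make tag-tracking awkward: the tag layer would have to be warped with exactly the same transform as the image, and resampling already blurs the notion of "clipped or not" at the edges of the tagged region.

import numpy as np
from scipy.ndimage import rotate

img = np.random.rand(100, 100).astype(np.float32)
clipped = np.zeros((100, 100), dtype=bool)
clipped[10:20, 40:50] = True   # hypothetical clipped region

# a geometry module would have to apply the very same transform to the tag layer
img_rot = rotate(img, angle=5.0, reshape=False, order=1)

# nearest-neighbour keeps the tags binary, but resampling at the region edges
# already makes "is this output pixel clipped?" a judgement call rather than a fact
clipped_rot = rotate(clipped.astype(np.float32), angle=5.0, reshape=False, order=0) > 0.5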

My motivation for this thread is mostly curiosity. I like to understand why things are how they are, and how they could be improved, even if that’s only theoretical.

But this doesn’t prevent us from a thought experiment:

I imagine that values inside the pixel pipeline are represented as IEEE 754 floating point numbers. This standard has some "special numbers" that propagate transparently through arithmetic operations. One of them is +infinity, and it seems like this could be one possible representation of a blown channel. After all, the true value of a saturated channel is something between the saturation value and infinity, so infinity can be considered the worst case.

Without knowing any internals of Darktable’s pipeline, I could imagine that adopting infinity would do the right thing:

  • New values that source an invalid value would become invalid themselves, either positively or negatively. For example, a blur would spread the marking to the neighbors of an invalid pixel.

  • An infinity can be overwritten. So if a highlight reconstruction module overpaints pixels, they would be considered normal further downstream.
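A tiny numpy illustration of both points (my own sketch, not anything from darktable):

import numpy as np

x = np.array([0.5, np.inf, 0.2], dtype=np.float32)

# arithmetic sourcing the invalid value stays invalid...
print(x * 2.0 + 1.0)   # [2.  inf 1.4]
print(x.mean())        # inf: an average over a neighbourhood is "poisoned"

# ...but plain assignment (e.g. a reconstruction module overpainting the pixel) clears it
x[np.isinf(x)] = 1.0
print(x)               # [0.5 1.  0.2]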

I think that this approach would be compatible with most algorithms that work within an RGB space. If the image is converted to some other color space, then I fear that just converting there and back to RGB again (without any other modification) would spread the infs, which is probably not good.

Here is a small Python program that demonstrates how infs are transparent to a Gaussian blur filter:

import numpy as np
from scipy.ndimage import gaussian_filter

a = np.zeros((10, 10))
a[2, 2] = 100       # an ordinary bright pixel
a[-2, -2] = np.inf  # a "blown" pixel marked with +infinity

# the inf spreads to every output pixel whose blur kernel touches it
print(np.round(gaussian_filter(a, sigma=1)))

The output is

[[ 0.  1.  2.  1.  0.  0.  0.  0.  0.  0.]
 [ 1.  6. 10.  6.  1.  0.  0.  0.  0.  0.]
 [ 2. 10. 16. 10.  2.  0.  0.  0.  0.  0.]
 [ 1.  6. 10.  6.  1.  0.  0.  0.  0.  0.]
 [ 0.  1.  2.  1. inf inf inf inf inf inf]
 [ 0.  0.  0.  0. inf inf inf inf inf inf]
 [ 0.  0.  0.  0. inf inf inf inf inf inf]
 [ 0.  0.  0.  0. inf inf inf inf inf inf]
 [ 0.  0.  0.  0. inf inf inf inf inf inf]
 [ 0.  0.  0.  0. inf inf inf inf inf inf]]

I could code the same filter in C or Fortran, or any other language, and the result would be the same.


The predominant convention for floating-point image values is to put black and white at 0.0 and 1.0, respectively (RawTherapee uses 0.0-65535.0, if the librtprocess library is indicative, and I have to convert to use their demosaic routines…)

Thing is, 1.0 isn't important until a rendition has to be made; until then, the data can go as high as it needs to. That is what usually happens when the white balance multipliers are applied; red and blue values at or near the white point "go right" a ways past white. That's okay; some highlight reconstruction tools use that 'over-the-top' data to postulate definition in the highlights just below the white point.
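A rough numeric sketch with made-up white-balance multipliers (not real camera values):

import numpy as np

wb = np.array([2.0, 1.0, 1.5])         # hypothetical R, G, B multipliers
pixel = np.array([0.95, 0.98, 0.90])   # a pixel near the white point, normalized to 1.0

print(pixel * wb)                      # [1.9  0.98 1.35] -> red and blue now sit past "white"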

When the rendition is made, either for display or export, that’s when the clamp-to-white has to happen. If your processing is smart (both in the algos themselves as well as your choice of order-of-operations), it’s already done that with some deliberation. But there seems to be software out there that does white clamping in intermediate operations for various reasons. Bear-of-little-brain here doesn’t like that prospect…

IMHO, what you're jonesing for (clamping values) should only be done at the low end: ops that make negative values should just truncate them to zero. But doing that at the high end just bolloxes scene-referred processing and the need to let stuff go past white, for a time…

Aha, good example indeed; you just demonstrated the problem exactly: now you have propagated inf values from inside the "invalid" region into what was previously valid. So, the question is, how do you modify this nice Gaussian library routine to not do that?

I don't think that would work in practice.
As it is now, you can have pixels where only one of the channels is clipped, while the other two values are within the valid range for the sensor involved. That doesn't cause any problem for most of the routines; they just use the input they get.

Now, setting the value for the clipped channel to +Inf is not going to play nice with any change of colour space: such a change of colour space usually uses (linear?) combinations of the three colour channels to calculate the values for the new space. So when you start with a pixel with one channel at +Inf, you end up with a pixel with +Inf in all three channels. Not good if you want to do any kind of reconstruction based on the valid channels…
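To make that concrete, here is a quick sketch using the standard linear sRGB to XYZ matrix (illustration only, not darktable's actual conversion code):

import numpy as np

# linear sRGB -> XYZ (D65), standard coefficients
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

rgb = np.array([np.inf, 0.4, 0.3])   # only the red channel is "clipped"
print(M @ rgb)                       # [inf inf inf] -> all three output channels are now invalid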

There may also be issues with demosaicing diffusing the +inf values over neighbouring pixels…


As far as I’m concerned it’s working as it should. It’s true that the sole inf pixel of the input spreads quite a lot. But this is simply the area potentially affected by it. If the inf pixel was a false magenta instead, that magenta would spread (in bigger and smaller amounts) to all these pixels. It might spread just a tiny bit - but this is the kind of insidious arbitrary image alteration that I’d like to avoid.

But perhaps we want to crop away this area of the image anyway. Then we do not have to treat the infinity. Otherwise, the result serves as a reminder that feeding invalid values into a Gaussian blur produces even more invalid values. Some sort of effective highlight reconstruction seems essential before a blur (as indeed seems physically reasonable!).

Besides, many image modules are much more local than a Gaussian blur, so that the infinity would not spread as spectacularly.

Modern builds have a nicer reconstruction method set as the default.

So, those invalid pixels are actually repaired and then fed into the rest of the pipeline.

Note that reconstruction is also done by default (without a method to change it or turn it off) in programs like Lightroom and DxO Photolab, and many more.

So older darktable (still 4.0.1, I believe) had the defaults set to clip those pixels, preventing the magenta but also destroying any detail that might still be recoverable. The 4.1/4.2 defaults give more out of the box.

But there is still a problem with some camera makers/models where Darktable does NOT know which pixels are invalid. The raw-clipping indicator will not show them, and thus will not clip or repair them out of the box.

This is a problem/bug where Darktable doesn't know the invalid pixels, as you call them, so it also can't do anything sensible with them.

So, to your question: Darktable doesn't need to prevent anything with those pixels, since they get a good attempt at automatic repair in 4.1/4.2. And if Darktable doesn't know they are invalid, your idea will also not work (and neither will the repair).

I can see how some might desire that behaviour, but it’s quite possible to make a gaussian blur which treats some regions as “masked” (or boundary) and won’t do any such spreading. That’s the deal: many algorithms don’t play as nicely, and in general you probably don’t want your masked region to have effects outside of it.
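For example, a normalized convolution simply ignores the masked pixels; a minimal numpy sketch (not darktable code), reusing the earlier example:

import numpy as np
from scipy.ndimage import gaussian_filter

a = np.zeros((10, 10))
a[2, 2] = 100
a[-2, -2] = np.inf

valid = np.isfinite(a)

# blur the valid data and the validity mask separately, then renormalize:
# masked pixels neither receive nor contribute anything
num = gaussian_filter(np.where(valid, a, 0.0), sigma=1)
den = gaussian_filter(valid.astype(float), sigma=1)
blurred = np.where(valid, num / np.maximum(den, 1e-6), np.inf)

print(np.round(blurred))   # the inf stays confined to its own pixel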

If you enable highlight reconstruction (which is an automated treatment), the lost highlights will be treated. If you don’t, they won’t be (not by some automated way to get sensible values into those pixels), and you’ll have to treat them somehow (e.g., by clipping to white, desaturating or whatever).

The only module that cares about what is clipped and what is not is the module having the explicit responsibility to do so: highlight reconstruction.

It’d be pretty much impossible to specify what each module should do if it encounters a blown pixel. What should lens correction or rotate and perspective do, for example? Or any of the sharpening/blurring filters (including those for noise reduction). They don’t map pixels 1:1 (1 input → 1 output).

Plus, a blown raw pixel (let’s say, a green one) will affect all the pixels surrounding it (via demosaicing). You could say you try to estimate the green component from the remaining unclipped green pixels, but what if some red or blue raw pixel has no unclipped green neighbours? These are exactly the problems highlight reconstruction deals with.

Yes, indeed, that’s what I meant when I wrote “I fear that just converting there and back to RGB again (without any other modification) would spread the infs, which is probably not good”.

But there are other possibilities. One could add something akin to an alpha channel to each color channel separately. That value could describe the degree of validity of the associated color value, with 0.0 meaning invalid and 1.0 meaning valid. This would double the amount of data to be processed, so I do not really mean this seriously. It’s only to show that there are different possibilities.
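As a sketch of that idea (one channel shown, and purely my own illustration, not a proposal for darktable's internals), a blur could carry the validity value along and renormalize by it:

import numpy as np
from scipy.ndimage import gaussian_filter

value = np.zeros((10, 10))
value[2, 2] = 100
validity = np.ones((10, 10))   # 1.0 = fully valid
validity[-2, -2] = 0.0         # one clipped / invalid sample

# blur value and validity together; the blurred validity says how much of
# each output pixel came from trustworthy input
blurred_value = gaussian_filter(value * validity, sigma=1)
blurred_validity = gaussian_filter(validity, sigma=1)

out = blurred_value / np.maximum(blurred_validity, 1e-6)
print(np.round(blurred_validity, 2))   # fractional validity instead of hard inf propagation
print(np.round(out))                   # blurred values weighted by trustworthy input only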

Kofa, you raise excellent points, and I agree that what you describe seems to totally make sense. However, in my experience it’s not how darktable behaves.

With current darktable (4.0.0), my experience is that I can only treat blown highlights properly in filmic rgb, either by using the highlight reconstruction functionality of that module or by lowering the white point. And I have spent many hours on this topic, read a lot, and watched several videos, including the recent 1.5 h lecture by the author of filmic rgb.

And I remember reading or hearing somewhere that the highlight reconstruction cannot reconstruct highlights properly because it doesn’t even know properly what is white (at that point in the pipeline).