Port the lmmse gamma step to the other demosaicers

More users to break your toys?

Thank you for your insightful retort. Anyone else have a more constructive answer?

6 Likes

There are many problems you don't see until you really push the algos. The designer of the tool is rarely as qualified as a monkey at breaking it.

3 Likes

I think I need to study a bit deeper why demosaic algos need proper whites… I know they often compare gradients between the channels to do the upsampling, assuming green as the luminance estimate.
But it's still weird to me why we would be forced to apply white balance before. Having a white-balance-invariant demosaic step would leave proper color handling to be done afterwards, and would avoid introducing weird scaling that has to be taken into account when denoising later. Heck, the demosaic should take the camera noise profile into account so that it doesn't propagate noise across channels!
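The "weird scaling" part can be made concrete: a per-channel WB gain rescales that channel's noise model too, so any denoiser running after demosaic has to account for it. A minimal sketch, assuming a simple Poissonian-Gaussian model var(x) = a·x + b; the gain values, model parameters and names are made up for illustration and don't refer to any actual darktable API:

```c
/* Sketch: how a per-channel white balance gain g changes a simple
 * Poissonian-Gaussian noise model  var(x) = a*x + b  measured on the
 * un-scaled raw channel.  With y = g*x we get
 *   var(y) = g^2 * var(x) = g*a*y + g^2*b,
 * i.e. the shot-noise term scales by g and the read-noise term by g^2. */
#include <stdio.h>

typedef struct { double a, b; } noise_params_t; /* var(x) = a*x + b */

static noise_params_t rescale_noise(noise_params_t p, double gain)
{
  noise_params_t q;
  q.a = gain * p.a;          /* Poissonian part: linear in the gain   */
  q.b = gain * gain * p.b;   /* Gaussian part: quadratic in the gain  */
  return q;
}

int main(void)
{
  /* made-up raw-domain profile and daylight-ish WB gains */
  noise_params_t raw = { .a = 1.2e-4, .b = 3.0e-8 };
  double wb[3] = { 2.1, 1.0, 1.6 };   /* R, G, B multipliers (example) */

  for (int c = 0; c < 3; c++)
  {
    noise_params_t s = rescale_noise(raw, wb[c]);
    printf("channel %d: a=%g b=%g\n", c, s.a, s.b);
  }
  return 0;
}
```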

1 Like

i think this is most prominently due to the old ahd implementation dt had copied from ufraw. that one did some fake lab conversion on the pixels and computed gradients only on luminance, not on colour. it does show severe mazing artifacts if no white balance is applied before. i think many others do not have this property. in vkdt, i'm jointly estimating the overall direction of edges by looking at features in the half-size image (including all sensor readings).
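just to illustrate what "gradients only on luminance" means here: this is not the copied ufraw/ahd code, only a sketch of the idea, with made-up luminance weights:

```c
/* Illustration only: pick the interpolation direction whose candidate
 * looks "smoother" in a luminance built with fixed weights.  If the
 * raw channels are not white balanced, the fixed weighting no longer
 * reflects the sensor's channel scaling, so the test can latch onto
 * channel imbalance instead of real edges -- one way mazing shows up. */
#include <math.h>

typedef struct { float r, g, b; } rgb_t;

static float lum(rgb_t p)              /* fake "L": fixed channel weights */
{
  return 0.25f * p.r + 0.5f * p.g + 0.25f * p.b;
}

/* hcand/vcand: horizontally / vertically interpolated candidates for the
 * current pixel and its left/right (h) resp. top/bottom (v) neighbours */
static int prefer_vertical(const rgb_t hcand[3], const rgb_t vcand[3])
{
  const float dh = fabsf(lum(hcand[1]) - lum(hcand[0]))
                 + fabsf(lum(hcand[1]) - lum(hcand[2]));
  const float dv = fabsf(lum(vcand[1]) - lum(vcand[0]))
                 + fabsf(lum(vcand[1]) - lum(vcand[2]));
  return dv < dh;  /* choose the direction with less luminance variation */
}
```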

before demosaicing, while still working on the single sensor reading per pixel, you can't really apply any sophisticated chromatic adaptation transform. all you can do for a single pixel in isolation is a per-channel multiplication in camera raw. that has some restrictions (blows values out of all gamuts, doesn't really match a physical reference so well, etc.). but you'd probably apply a matrix later, maybe even an interpolated one, for a specific scene illuminant.
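roughly the order of operations i mean, as a sketch only. the function names (cfa_color, wb_mosaic, apply_matrix) are invented for the example, not dt/vkdt api:

```c
/* step 1: white balance the mosaic in place; cfa_color(x,y) returns
 * 0/1/2 for R/G/B at that photosite (bayer or x-trans, doesn't matter).
 * a per-channel multiply is all that is possible on one sample per pixel. */
#include <stddef.h>

static void wb_mosaic(float *mosaic, int w, int h,
                      const float wb[3], int (*cfa_color)(int x, int y))
{
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      mosaic[y*w + x] *= wb[cfa_color(x, y)];
}

/* step 2 (only possible after demosaic, once full RGB per pixel exists):
 * camera RGB -> working space via a 3x3 matrix, e.g. one interpolated
 * for the estimated scene illuminant */
static void apply_matrix(float *rgb /* w*h*3 */, size_t npix, const float M[3][3])
{
  for (size_t i = 0; i < npix; i++)
  {
    float *p = rgb + 3*i;
    const float r = p[0], g = p[1], b = p[2];
    p[0] = M[0][0]*r + M[0][1]*g + M[0][2]*b;
    p[1] = M[1][0]*r + M[1][1]*g + M[1][2]*b;
    p[2] = M[2][0]*r + M[2][1]*g + M[2][2]*b;
  }
}
```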

i very much dislike this camera-raw multiplier. once you do that, your colour coordinates are off / irrecoverably out of gamut. the dng matrices come with an illuminant anyway, and you can replace them with a more accurate lut/spectral input transform.

3 Likes

Thanks, makes sense to bear-of-little-brain here. I may do some experimentation with the librtprocess algorithms and see if this holds up in current implementations.

From the start, it always seemed to me a rather egregious thing to do to the data. Some years ago, I tried making a ColorChecker matrix profile from an un-white-balanced target shot, and the results were quite satisfying in terms of preserving color “richness”, for lack of a more precise term. Here’s the post I did on it:

i would second your suspicion that maybe the reason was the quality of the first profile. it seems to me that camera raw wb (a multiplication by a diagonal matrix) followed by a matrix multiplication (the one fitted for the profile) is still a linear operation, i.e. can be expressed by a single matrix multiplication just fine. so given the same input and a capable optimisation algorithm i would expect the same result here.
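spelled out, the composed transform is just another 3x3 matrix (a sketch, nothing tool-specific):

```c
/* white balance = multiplication by diag(w), the profile = 3x3 matrix M;
 * applying M after diag(w) is the single matrix M*diag(w), whose columns
 * are M's columns scaled by w.  so a capable fitter given the same target
 * data should be able to reach the same overall transform either way. */
static void compose_wb_and_matrix(const float M[3][3], const float w[3],
                                  float out[3][3])
{
  for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
      out[i][j] = M[i][j] * w[j];   /* (M * diag(w))[i][j] = M[i][j]*w[j] */
}
```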

Besides the demosaicing methods that explicitly try to demodulate luminance from chrominance (whatever that means before having applied a color profile…), and therefore need some proper relative weighting of R, G, and B, most methods read the green channel to extract its gradients and bend those gradients over R and B in a way that keeps them correlated (meaning no chromatic aberration). As soon as you start messing with gradients, you need their magnitudes to be similarly scaled if you want to swap them between channels without overshooting.
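To make the overshooting point concrete, here is a generic sketch of that gradient transplant, not a quote from any particular library:

```c
/* Reconstruct a missing R value horizontally by taking the average of
 * the neighbouring R samples and adding green's local gradient.  The
 * transplanted gradient only has a sensible magnitude if R and G are on
 * a comparable scale -- hence the dependence on white balance (or at
 * least equalized channel gains) before demosaic.
 *
 * rl, rr: left/right R neighbours; g: green at the target pixel;
 * gl, gr: green at those same neighbour positions. */
static float interp_red_h(float rl, float rr, float g, float gl, float gr)
{
  const float r_avg = 0.5f * (rl + rr);
  const float g_avg = 0.5f * (gl + gr);
  /* transplant the green gradient: overshoots if R and G scales differ a lot */
  return r_avg + (g - g_avg);
}
```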

Also, behold the X-Trans CFA and its bullshit non-uniform pattern that makes pretty much any uniform discretization impossible. This one has to decorrelate luminance (actually “green”) from chrominance because of that weird sampling. I will take the fact that Fuji medium-format cameras use Bayer as a sign of remorse.

4 Likes

i don't agree. i'm using the same code for both bayer and xtrans now, and i'm going to claim that it doesn't depend on white balancing beforehand. the whole difference is that bayer uses 2x2 blocks and xtrans uses 3x3 blocks.
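to illustrate the block-size point only (this is a sketch, not the actual vkdt kernel; cfa_color is an assumed helper):

```c
/* build the reduced-size RGB image by averaging each CFA block --
 * 2x2 for bayer, 3x3 for x-trans -- with per-block channel counts
 * taken from the CFA pattern itself.  nothing here needs to know
 * about white balance. */
static void block_averaged_rgb(const float *mosaic, int w, int h, int block,
                               int (*cfa_color)(int x, int y),
                               float *out /* (w/block)*(h/block)*3 */)
{
  const int ow = w / block, oh = h / block;
  for (int by = 0; by < oh; by++)
    for (int bx = 0; bx < ow; bx++)
    {
      float sum[3] = { 0, 0, 0 };
      int   cnt[3] = { 0, 0, 0 };
      for (int j = 0; j < block; j++)
        for (int i = 0; i < block; i++)
        {
          const int x = bx*block + i, y = by*block + j;
          const int c = cfa_color(x, y);   /* 0/1/2 = R/G/B */
          sum[c] += mosaic[y*w + x];
          cnt[c]++;
        }
      for (int c = 0; c < 3; c++)
        out[3*(by*ow + bx) + c] = cnt[c] ? sum[c] / cnt[c] : 0.0f;
    }
}
```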

The Google variance-based method?

yis, that's what i started out with. by now i have so many extra regularisation measures in place that i'm not sure i'd call it that. seemed necessary to suppress a few artifacts and to improve sharpness.

Keeps you regular. (Sorry, couldn't resist - happy new year, folks!)