Port the LMMSE gamma step to the other demosaicers

The first step in LMMSE demosaicing is applying a gamma curve (like the sRGB gamma?) to the raw values. While the median steps are really only suitable for noisy high-ISO images, this first step could benefit low-ISO images too.

Demosaicing is not so different from upscaling, and upscaling in linear gamma can produce stronger artifacts such as black dots and more ringing/aliasing.

[image: lmmse linear gamma]

[image: lmmse gamma]

For the linear-gamma purist, a sigmoidal resize like ImageMagick’s could be an option too, and it is very close to a gamma resize.

ImageMagick sigmoidal resize:
“In many ways this is similar to resizing images in the default non-linear sRGB colorspace”
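For reference, here is a minimal numpy sketch of such a sigmoidal transfer and its inverse, in the normalized-logistic form; the contrast and midpoint values are purely illustrative, not ImageMagick’s defaults:

```python
import numpy as np

def sigmoidal(u, contrast=6.0, midpoint=0.5):
    """Normalized logistic ("sigmoidal contrast") curve on [0, 1].
    Similar in spirit to ImageMagick's -sigmoidal-contrast; the
    constants here are illustrative, not IM's defaults."""
    def sig(x):
        return 1.0 / (1.0 + np.exp(contrast * (midpoint - x)))
    lo, hi = sig(0.0), sig(1.0)
    return (sig(u) - lo) / (hi - lo)

def sigmoidal_inverse(v, contrast=6.0, midpoint=0.5):
    """Inverse curve, to return to linear after resizing."""
    lo = 1.0 / (1.0 + np.exp(contrast * midpoint))
    hi = 1.0 / (1.0 + np.exp(contrast * (midpoint - 1.0)))
    s = v * (hi - lo) + lo              # undo the normalization
    return midpoint - np.log(1.0 / s - 1.0) / contrast

# round trip: encode, (resizing would happen here), decode
u = np.linspace(0.0, 1.0, 5)
assert np.allclose(sigmoidal_inverse(sigmoidal(u)), u)
```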

[image: linear (not from raw)]

[image: sigmoidal (not from raw)]

Test using the latest rawproc version and this script, zoomed 2×:

colorspace:camera,assign subtract:camera blackwhitepoint:camera whitebalance:camera tone:gamma,2.50 demosaic:xtrans_markesteijn,3 tone:gamma,0.40
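If I read the script right, tone:gamma,2.50 encodes with exponent 1/2.5 and tone:gamma,0.40 undoes it (1/0.4 = 2.5). A rough Python sketch of the same wrapper, where `demosaic` stands in for any mosaic-to-RGB routine:

```python
import numpy as np

def demosaic_with_gamma(raw, demosaic, gamma=2.5):
    """Encode -> demosaic -> decode, mirroring the rawproc chain above.
    `raw` is a [0, 1] mosaic after black/white point and white balance;
    `demosaic` is a hypothetical stand-in for e.g. markesteijn."""
    encoded = np.power(np.clip(raw, 0.0, 1.0), 1.0 / gamma)  # tone:gamma,2.50
    rgb = demosaic(encoded)
    return np.power(np.clip(rgb, 0.0, 1.0), gamma)           # tone:gamma,0.40
```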

[image: linear gamma markesteijn 3-pass]

[image: gamma corrected markesteijn 3-pass]

From this PlayRaw


@age, help me out here; I’m trying to discern the difference. Also, is there a paper on this somewhere?

And, and, rawproc! Useful for this very sort of exercise… :smiley:

Yes, asking for a paper is a good idea.

I have read many papers about demosaicing over the last year, but I didn’t come across one.

Sure. Look at the artifacts in the blue lights.

I see. I think this is so evident here because of the sharp black-to-“white” transition.
Maybe some colour channels are clipped too. Can you observe equally good results for perfectly exposed images?

I would be very interested in the raw file, especially if it comes from a Bayer sensor.

I would do some experiments and might implement such a correction for RCD.

@heckflosse any opinion on this?

I’m ambivalent about this atm. But if you need a Bayer raw to see the differences between LMMSE without/with gamma, you can find one here

Nope, it’s just an idea with a very limited number of tests, but I haven’t seen any downsides.

Another thing that’s not so difficult to test is noise reduction before demosaicing, using a different strategy that I haven’t read about elsewhere.

Basically: the first step is demosaicing with simple bilinear interpolation; then do the noise reduction, possibly in the Y0U0V0 color space; then reverse the demosaicing (after bilinear interpolation this operation mostly amounts to dropping the interpolated values); finally, demosaic again with a more advanced gradient-based method.

Hopefully that makes sense; at least it looks better to me than using the green channel as luminance or denoising the raw values directly.
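A minimal sketch of that pipeline, assuming an RGGB Bayer layout; bilinear_demosaic, denoise_yuv and final_demosaic are all hypothetical stand-ins:

```python
import numpy as np

def denoise_then_demosaic(raw, bilinear_demosaic, denoise_yuv, final_demosaic):
    """Sketch of the strategy above (all three callables are stand-ins):
    1. cheap bilinear demosaic to get a full RGB estimate,
    2. denoise it (ideally in a YUV-like space),
    3. reverse the demosaic by sampling each pixel's own CFA color back
       out of the denoised RGB (for bilinear this discards exactly the
       interpolated values),
    4. run the real gradient-based demosaicer on the cleaned mosaic."""
    rgb = denoise_yuv(bilinear_demosaic(raw))     # steps 1 and 2
    h, w = raw.shape
    # step 3: per-pixel CFA channel index, assuming an RGGB Bayer layout
    # (0 = R, 1 = G, 2 = B)
    cfa = np.empty((h, w), dtype=int)
    cfa[0::2, 0::2] = 0   # R
    cfa[0::2, 1::2] = 1   # G
    cfa[1::2, 0::2] = 1   # G
    cfa[1::2, 1::2] = 2   # B
    ys, xs = np.indices((h, w))
    cleaned_raw = rgb[ys, xs, cfa]
    return final_demosaic(cleaned_raw)            # step 4
```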

Sounds like another insane hack deduced from empiricism that will work until it doesn’t.

A power transfer function (not a gamma, please) changes the variance in the picture. Depending on how your interpolation problem is formulated, that can get in the way of its assumptions.
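A quick numeric illustration of the variance point: the same small fluctuation changes variance by a very different factor after a power transfer, depending on where it sits in the tonal range (the 1/2.4 exponent and noise level are just examples):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(100_000)

for mean in (0.05, 0.5, 0.9):            # shadows, midtones, highlights
    u = np.clip(mean + noise, 1e-6, 1.0)
    v = u ** (1.0 / 2.4)                 # power transfer ("gamma" encode)
    print(f"mean={mean}: var ratio = {v.var() / u.var():.2f}")
# The ratio is far from constant across the tonal range, so any
# interpolation model calibrated on linear variance no longer holds.
```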

If the problem you are trying to solve is non-continuous reconstruction, that should be dealt with using spatial correction, and most likely a chroma low-pass filter. I don’t see how correcting intensity is going to change that, and it’s not excluded that you simply got lucky on your example, with side effects conveniently hiding the problem.

If we define the picture gradients as \nabla u(x, y) = \dfrac{\partial u}{\partial x} \vec{i} + \dfrac{\partial u}{\partial y} \vec{j}, then the gradients over each R, G, B plate should be linked by a linear relationship such as a \nabla u_R = b \nabla u_G = c \nabla u_B, with a, b, c real factors. So the interpolation problem can be reduced to propagating gradients between channels to reconstruct missing data, with some normalization factor to take relative intensity into account.

But re-expressing \nabla u^\gamma(x, y) = \dfrac{\partial u^\gamma}{\partial x} \vec{i} + \dfrac{\partial u^\gamma}{\partial y} \vec{j}, you completely void the a \nabla u_R = b \nabla u_G = c \nabla u_B model, so you are messing up the inter-channel correlation hypothesis, which is kinda important given that the available gradients from sensor readings are actually spatially shifted on the sensor plane.

And the most successful demosaicing methods right now (albeit slow) use a mix of Laplacians and guided filtering (ARI, MLRI), which intrinsically use inter-channel covariance and intra-channel variance to express the R and B plates as a linear fit of the green plate, like R = \dfrac{\mathrm{cov}(R, G)}{\mathrm{var}(G) + \epsilon} G + \left(\mathrm{mean}(R) - \dfrac{\mathrm{cov}(R, G)}{\mathrm{var}(G) + \epsilon} \mathrm{mean}(G)\right). So it’s pretty clear that tinkering with pixel intensity will mess that up by changing the variance, which does not seem necessary either to get high PSNR.
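A minimal sketch of that per-window linear fit, the core step of guided filtering; the window radius and epsilon are illustrative, and real MLRI/ARI implementations work on residuals/Laplacians rather than the plain channels:

```python
import numpy as np

def box(x, r):
    """Separable moving average of radius r (zero-padded edges,
    which is fine for a sketch)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    x = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, x)

def guided_fit(G, R, r=2, eps=1e-4):
    """Express R as a local linear function of G (R ~ a*G + b per window),
    following the cov/var formula quoted above. r and eps are illustrative."""
    mean_G, mean_R = box(G, r), box(R, r)
    cov_GR = box(G * R, r) - mean_G * mean_R
    var_G = box(G * G, r) - mean_G * mean_G
    a = cov_GR / (var_G + eps)
    b = mean_R - a * mean_G
    # smooth the per-window coefficients, then evaluate the fit everywhere
    return box(a, r) * G + box(b, r)
```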


Also, I don’t see a gamma in the LMMSE method (Minimum mean square error - Wikipedia) nor in the DLMMSE paper (https://www4.comp.polyu.edu.hk/~cslzhang/paper/LMMSEdemosaicing.pdf). There might be a square root as part of the least-squares minimization scheme, but that’s not a gamma.


No one is denying this. At least, the IM discussion is not, but people can blow it out of proportion. I fear the situation where it becomes a canonical feature.

PS: The way I see it, the right way to implement this is in a limited, selective manner with if-thens. The issue with that is obvious. Is it worth implementing? If so, it would be inelegant at best.

It doesn’t change that; it minimizes some ugly artifacts of doing the operations in linear gamma.

That’s for sure; I’ve never seen a perfect algorithm in image processing. As such, it should ideally be implemented as an option among the existing demosaicing choices.

I don’t have enough hands-on experience with demosaicing to know if you could call it ‘not so different from upscaling’. It doesn’t sound correct, though.

But for scaling algorithms (well, at least all the different variations of windowed resizers like bicubic/Lanczos), isn’t it commonly said that it’s better to work in linear space, to prevent artifacts?
Or is that only for downscaling?

I remember that image of a world map scaled in perceptual space versus linear space, where ‘linear clearly wins’.

(And I know of a popular method using an ImageMagick script where the scaling is basically done twice: once in linear, once with a gamma of 3 applied, and the results are merged to find a middle ground.)
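If I understand that method correctly, a sketch could look like this; `resize` stands in for any resampler, and the 50/50 blend is my assumption (the real script may weight or mask the merge):

```python
import numpy as np

def dual_space_resize(img_linear, resize, gamma=3.0, mix=0.5):
    """Resize once in linear and once through a gamma-3 encode, then blend."""
    lin = resize(img_linear)
    enc = resize(np.power(np.clip(img_linear, 0.0, 1.0), 1.0 / gamma))
    # signed power so ringing undershoot (negative lobes) stays defined
    gam = np.sign(enc) * np.abs(enc) ** gamma
    return mix * lin + (1.0 - mix) * gam
```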

I’m not saying the idea is wrong , because I know way too little about the subject matter.

But could it be that the artifacts you highlight come from the more ‘correct’ process, and you just don’t like the result as much?
As in, it highlights problems with the recorded scene: those high-contrast bright lights through a lens.
It could also easily be that one of the channels clipping brings out these kinds of artifacts, which would mean the solution lies in proper clipping or highlight reconstruction, not in changing the demosaicing algorithm?

What I meant is that there is no theory to back this up, and empiricism sucks big time because you never have enough test samples to actually validate anything empirical. Looking at a stopped clock at just the right time may give you the false idea that the clock is working.

So, you either start from theory or you are committing shit. Because nobody gives a flying shit about black magic that conveniently hides issues if you void the hypothesis of the interpolation method you are hacking this way.

The “gamma” you are using has no physical or perceptual meaning anyway; it’s an encoding trick for integer file formats.

That’s exactly why I asked for that raw file :slight_smile:

BTW, there is in fact a gamma step in the LMMSE demosaicers in dt, RT, and IIRC libraw too.

Though optional

Ha, not exactly explicit. Apparently, in librtprocess it’s invoked when iterations > 0…


Lanczos scaling in linear space can have ringing artifacts at the edges between bright and dark areas. Applying the scaling in log space, even if it’s not exactly ‘correct’, is one way to try to avoid these artifacts. See page 35 of the Cinematic Color white paper for an example.
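The log-space variant is easy to sketch; `resize` stands in for any Lanczos (or other windowed-sinc) resampler, and eps just keeps log() defined at black:

```python
import numpy as np

def resize_in_log(img_linear, resize, eps=1e-6):
    """Resample in log space to tame ringing at bright/dark edges.
    Not colorimetrically 'correct' -- pure artifact avoidance."""
    logged = np.log(np.maximum(img_linear, 0.0) + eps)
    return np.exp(resize(logged)) - eps
```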

After more testing, applying the sRGB gamma before demosaicing definitely helps with a lot of pictures. It is more noticeable with X-Trans sensors; generally I think it renders saturated colors in a more “organic” way.

Pretty much all local adjustments are perceptually better, with fewer artifacts, if they are performed in a gamma/log color space instead of a linear one.
Without testing, how could you say which is better?
It could just be a valid alternative, for what it’s worth.

(FWIW, this was standard procedure in 1990s 3D rendering to antialias directly visible light sources: no matter how great your reconstruction filter, if it only has, say, 4×4 support, it’ll render the whole block as solid clipped 1.0 white, because the lights are usually something like 1e8 or 1e10. To render the borders nicely antialiased nonetheless, and without 100px filter support, you first tonemap to non-linear and antialias/accumulate then. Everybody knows it’s wrong and nobody would publicly admit doing it, but if the images need to be delivered, what can you do.) /me goes back into hiding again
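A toy illustration of that trick, with a Reinhard-style curve standing in for whatever display transform a renderer would actually use:

```python
import numpy as np

def tonemap(x):
    # Reinhard-style compressive curve -- an assumed stand-in
    return x / (1.0 + x)

# a pixel on the edge of a light: one sample hits the ~1e8 source, three miss
samples = np.array([1e8, 0.02, 0.02, 0.02])
weights = np.ones(4) / 4                      # reconstruction filter weights

# "correct" order: average radiance, then tonemap -> pixel clips to white
honest = tonemap(np.sum(weights * samples))   # ~1.0
# the hack: tonemap each sample, then average -> smooth antialiased edge
hack = np.sum(weights * tonemap(samples))     # ~0.26
print(honest, hack)
```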
