What is the best high ISO bayer demosaicer?

Hi friends, I am thinking about implementing a demosaicer especially suited for high-ISO images in darktable. Currently I am evaluating IGV and LMMSE.

Is there any rationale for which one is better? Any other suggestions? Is it really worth it?


DLMMSE (http://www4.comp.polyu.edu.hk/~cslzhang/paper/LMMSEdemosaicing.pdf) is very good; I think it's the best trade-off currently, and it took second place in a NASA study for their next rover camera among the non-AI demosaicers (whose performance comes surprisingly close to that of the heavy deep-learning ones).
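The core idea behind LMMSE-style demosaicing, as I understand the Zhang–Wu paper, is to fuse directional (e.g. horizontal and vertical) estimates with weights inversely proportional to their local error variances. A toy sketch of just that fusion step, with made-up numbers (not the full algorithm):

```python
# Toy sketch of the LMMSE fusion idea (not the full Zhang-Wu algorithm):
# two directional estimates of the same pixel are combined with weights
# inversely proportional to their local error variances, which minimizes
# the variance of the fused estimate.

def lmmse_fuse(est_h, est_v, var_h, var_v):
    """Fuse horizontal/vertical estimates; each is weighted by the other's variance."""
    w_h = var_v / (var_h + var_v)   # low var_h -> high weight for est_h
    w_v = var_h / (var_h + var_v)
    return w_h * est_h + w_v * est_v

# a noisy horizontal estimate (high variance) and a clean vertical one
fused = lmmse_fuse(est_h=10.0, est_v=8.0, var_h=9.0, var_v=1.0)
print(fused)  # 8.2 -- pulled strongly toward the low-variance estimate
```

The real algorithm estimates those variances from local windows of the primary-difference signals; the fusion rule itself is this simple.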

ARI and its improvements (https://www.mdpi.com/1424-8220/17/12/2787) seem very good too (regarding PSNR), but the run times are not really practical.

I would stay away from anything that relies too heavily on the luma/chroma metaphor: those methods perform well when tested against white-balanced images (which is the only case tested in any demosaicing paper), but there is no guarantee they will work on non-white-balanced images (and they most likely won't).
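A toy illustration of why (my own example, numbers made up): chroma-based methods interpolate colour differences like G − R on the assumption that they vary slowly. On white-balanced grey pixels R == G, so the difference is zero everywhere and interpolation is exact; scale R by a camera WB factor and the "chroma" suddenly carries the full luminance edge:

```python
# Chroma-based demosaicers interpolate colour differences such as G - R,
# assuming they are smooth. On a white-balanced grey edge R == G, so the
# difference is flat. On raw (non-WB) data the R channel has a different
# gain, and the difference inherits the luminance edge.

def interp_mid(a, b):
    return (a + b) / 2.0  # linear interpolation of the colour difference

# a hard luminance edge on a grey patch: values 100 -> 200
g     = [100.0, 200.0, 200.0]
r_wb  = [100.0, 200.0, 200.0]   # white-balanced: R == G
r_raw = [50.0, 100.0, 100.0]    # raw: R channel at half gain

# interpolate the middle colour difference from its neighbours
d_wb  = interp_mid(g[0] - r_wb[0],  g[2] - r_wb[2])   # 0.0, exact
d_raw = interp_mid(g[0] - r_raw[0], g[2] - r_raw[2])  # 75.0
err_raw = abs(d_raw - (g[1] - r_raw[1]))              # true difference is 100.0
print(d_wb, err_raw)  # 0.0 25.0 -- the smoothness assumption broke
```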


DLMMSE? I only knew about LMMSE.

There is a slight disadvantage to LMMSE: its performance is not as easy to tweak as IGV's. IGV can easily be done as a tiled version with excellent performance, but, as you pointed out, LMMSE is well documented and somewhat "state of the art". LMMSE also seems trickier to do in OpenCL.

Let’s see what other people say but your argument is very strong here.

The benefit of tiling (+ padding + overhead…) on modern architectures is not what it used to be, especially with all the clever memory optimizations performed by current compilers; I wouldn't bother with that.

The benefit of tiling (+ padding + overhead…) on modern architectures is not what it used to be …

Yes, it is much better with vectorizing. But these algos all have to work on a huge amount of memory, and that can only be improved with smaller local data per thread. As an example take RCD: the tiled version is at least fourfold faster …
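For readers unfamiliar with the pattern being discussed: tiling splits the image into blocks with a padded border, runs the windowed filter per block so each thread's working set fits in cache, and stitches the interiors back together. A minimal sketch with a 3×3 box filter standing in for the demosaicer (tile size and border are hypothetical tuning knobs):

```python
def box3(img):
    """Stand-in 3x3 box filter (the 'demosaicer' in this toy)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def tiled(img, filt, tile=4, border=1):
    """Run `filt` per padded tile; border must cover the filter radius."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            # pad the tile by `border` pixels, clamped to the image
            py0, px0 = max(0, y0 - border), max(0, x0 - border)
            py1, px1 = min(h, y1 + border), min(w, x1 + border)
            patch = [row[px0:px1] for row in img[py0:py1]]
            fpatch = filt(patch)
            # keep only the tile interior, discard the padding
            for y in range(y0, y1):
                for x in range(x0, x1):
                    out[y][x] = fpatch[y - py0][x - px0]
    return out

img = [[float((x * 7 + y * 13) % 29) for x in range(10)] for y in range(8)]
assert tiled(img, box3) == box3(img)  # tiling must not change the result
```

The padding is the "+ padding + overhead" cost mentioned above: every tile recomputes its border region, which is why small tiles trade redundant work for cache locality.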


Sure, but GPUs have their own ways of managing memory, and in any case you need to hand-tune the tile size on the CPU for each architecture. I remember @heckflosse and me trying to devise the best tile size for AMaZE and getting different results on different CPUs.

Also, @hanatos demosaics RAWs with Vulkan/GPU in a matter of milliseconds; I'm not sure tiling is involved at all.

Following with interest!

When I did a reading tour a few weeks ago, I was surprised at how many methods don't work on linear raw data and/or don't care about sensor noise modeling at all.

right. i use the google way of splatting gaussians with a covariance matrix that adapts to the local surroundings. it's super general and fast: it does not care about colour, black points or white points, it works for bayer and x-trans pretty much the same way (shares most code), and can be extended to super-resolution by aligning multiple shots. (https://github.com/hanatos/vkdt/tree/master/src/pipe/modules/demosaic)
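a rough sketch of the kernel idea, as i read it: each raw sample is splatted with an anisotropic gaussian whose covariance follows the local structure tensor, so the kernel stretches along edges and shrinks across them. the names and the 2×2 maths below are illustration only, not the vkdt code:

```python
import math

def splat_weight(dx, dy, cov):
    """Gaussian splat weight exp(-0.5 * d^T cov^-1 d) for offset (dx, dy)."""
    (a, b), (_, d) = cov[0], cov[1]          # symmetric 2x2 covariance
    det = a * d - b * b
    # inverse of [[a, b], [b, d]] is [[d, -b], [-b, a]] / det
    q = (d * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det
    return math.exp(-0.5 * q)

# isotropic kernel in flat areas ...
flat = [[1.0, 0.0], [0.0, 1.0]]
# ... stretched along x on a horizontal edge (gradient is vertical)
edge = [[4.0, 0.0], [0.0, 0.25]]
print(splat_weight(1.0, 0.0, flat), splat_weight(1.0, 0.0, edge))
# along the edge the weight falls off much more slowly than across it
```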

i denoise the raw data before demosaicing, so there's no noise model involved. i also reconstruct highlights on raw data, so the demosaicing does not need to take care of interpolating clipped values.

the only place where i do tiling is @heckflosse's iterative deconvolution, because that way it can run some 20 iterations on the same data in shared memory instead of going to global memory in between.

i can provide timings for demosaicing if you want to do a perf shootout… but it's not going to be an interesting comparison between the CPU versions and the vkdt one.

sample for 24MP:

[perf] demosaic_down:	   0.124 ms
[perf] demosaic_gauss:	   0.128 ms
[perf] demosaic_splat:	   1.521 ms
[perf] demosaic_fix:	   2.010 ms

where the last step (colour smoothing) is optional. i suppose i could optimise this a little, maybe by merging the first two kernels, but it's probably more important to look at quality.