However, I recently had to deal with ISO 10,000 RAW files and wasn’t very happy with the artifacts in the denoised images. I decided to revisit the workflow and made some minor improvements (made possible by newer versions of darktable):
Because the training data consisted of properly exposed noisy/clean pairs, I had assumed that exposure + filmic/sigmoid + WB/color-calibration should be applied before feeding the image to nind-denoise. That was a mistake: feeding the demosaiced RAW as-is (even when very underexposed) produces a cleaner denoised image with almost no artifacts. So this time I kept as few operations in the first stage as possible (channelmixerrgb is still needed to maintain the correct color profile for the second stage).
EDIT: just to clarify, in very dark/night scenes where I need to crank exposure by +5 EV (e.g. fireworks, light shows, …), it’s better to shift the exposure module to the first stage.
For RAWs with clipped highlights, the highlight reconstruction module does a very good job. However, I found that only OpenEXR and 32-bit float TIFF retain the recovered highlight data when exporting from and re-importing into darktable. OpenEXR is not well supported by torch and cannot carry EXIF data, so I modified nind-denoise to output 32-bit TIFF instead (it took a while to figure out that imageio works where tifffile didn’t).
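For reference, the change boils down to something like this (a minimal sketch, not the actual patch; it assumes the denoised output is a CHW float torch tensor, and `save_float32_tiff` is just an illustrative name):

```python
# Minimal sketch (not the actual nind-denoise patch): write the denoised
# output as a 32-bit float TIFF so reconstructed highlights (values > 1.0)
# survive the round trip back into darktable.
import numpy as np
import imageio


def save_float32_tiff(denoised, path="denoised.tif"):
    """denoised: CHW float torch tensor from the model."""
    # CHW -> HWC, keep float32 and do NOT clip: the data above 1.0 is
    # exactly the recovered-highlight information we want to keep.
    arr = denoised.detach().cpu().numpy().transpose(1, 2, 0).astype(np.float32)
    imageio.imwrite(path, arr)
```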
When I split the sidecar XMP file into two stages, the second stage operates on a 32-bit TIFF rather than the original RAW file, which resulted in mismatched colors and exposure. Luckily, after experimenting and comparing against the manual process, only iop_order_version and colorin need to be adjusted for the denoised TIFF to import correctly back into darktable.
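The stage-two sidecar fix is roughly the following (a sketch only; `darktable:iop_order_version` and the `colorin` history entry are the parts of darktable’s XMP format mentioned above, while the two placeholder constants are values I obtained by diffing the sidecar of a reference TIFF against the RAW one, so fill them in for your own darktable version):

```python
# Rough sketch of patching the stage-2 sidecar so the denoised 32-bit TIFF
# imports with the same colors/exposure as the RAW would.
import re

# Placeholders: take these from the sidecar of a reference TIFF imported
# into your darktable version (they are NOT universal values).
TIFF_IOP_ORDER_VERSION = "…"
TIFF_COLORIN_PARAMS = "…"


def patch_stage2_xmp(xmp_text: str) -> str:
    # 1. Module order: a TIFF uses a different iop_order_version than a RAW.
    xmp_text = re.sub(r'darktable:iop_order_version="\d+"',
                      f'darktable:iop_order_version="{TIFF_IOP_ORDER_VERSION}"',
                      xmp_text)
    # 2. Input color profile: the colorin history entry must match the TIFF.
    xmp_text = re.sub(
        r'(darktable:operation="colorin"[^>]*?darktable:params=")[^"]*(")',
        lambda m: m.group(1) + TIFF_COLORIN_PARAMS + m.group(2),
        xmp_text)
    return xmp_text


with open("stage2.xmp") as f:
    patched = patch_stage2_xmp(f.read())
with open("denoised.tif.xmp", "w") as f:
    f.write(patched)
```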
Different demosaic algorithms produce different denoised results (perhaps it depends on which demosaic was used for the training data). For Bayer, I think AMaZE-dual and RCD-dual give more detail, but for human subjects I found PPG to be cleaner (match greens should be set to local average). For X-Trans, Markesteijn 3-pass is the best. For all demosaic algorithms, color smoothing must be disabled. Different model/demosaic combinations will also give slightly different color casts.
I’m uploading the denoised outputs of the different Bayer demosaic algorithms in darktable for comparison, using the same sample RAW file as in the previous post.
Hopefully that will result in something. The author of nind-denoise was looking into that too, but it didn’t gain enough interest from the darktable devs to move forward.
Rewriting nind-denoise in C so it could be integrated into darktable was also on the author’s to-do list, but it looks like the author has better things to focus on.
There are a few reasons that pushed me to work on this script:
- Not enough devs and interest: I’m not familiar with C/C++ and don’t know much about darktable’s internal architecture, so I can’t contribute code directly.
- I need an immediate solution that makes use of the readily available nind-denoise, hence hacking/injecting it into the workflow via darktable-cli rather than integrating it properly (see the sketch after this list).
- The darkroom needs to be responsive in real time, and it’s already slightly laggy without a GPU. Any neural-network denoiser integrated into DT will need to run on demand or at export only, otherwise the UI will be unacceptably laggy (nind-denoise takes about 8-9 seconds for a 24 MP RAF on my 8 GB RTX 3060).
- Training data for different sensors/demosaics is still the biggest hurdle for FOSS, as we don’t have a budget for samples. Community contributions to the NIND dataset haven’t taken off so far; hopefully ONNX, with the backing of big players, will fare better.
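For the curious, the injection amounts to something like this (a simplified sketch, not the script itself; the file names are hypothetical, the nind-denoise invocation is schematic, and the `--conf` key for 32-bit TIFF output is what works for me and may differ between darktable versions):

```python
# Simplified sketch of the two-stage export wrapped around darktable-cli.
import subprocess

RAW = "DSCF1234.RAF"

# Sidecars produced by the splitting step described above (hypothetical names):
stage1_xmp = "stage1.xmp"        # minimal ops: demosaic, highlights, channelmixerrgb
stage2_xmp = "denoised.tif.xmp"  # the rest of the edit, patched for the TIFF

# Stage 1: render the near-untouched demosaiced RAW as a 32-bit float TIFF.
subprocess.run(["darktable-cli", RAW, stage1_xmp, "stage1.tif",
                "--core", "--conf", "plugins/imageio/format/tiff/bpp=32"],
               check=True)

# Denoise with nind-denoise (patched to write 32-bit TIFF); the exact
# script name and arguments depend on your nind-denoise checkout.
subprocess.run(["python", "denoise_image.py",
                "--input", "stage1.tif", "--output", "denoised.tif"],
               check=True)

# Stage 2: apply the rest of the edit (exposure, filmic/sigmoid, ...) to the
# denoised TIFF and export the final image.
subprocess.run(["darktable-cli", "denoised.tif", stage2_xmp, "final.jpg"],
               check=True)
```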
This stopgap solution is all I have at the moment, while waiting for a proper long-term solution like ONNX.
ONNX integration will at least make it much easier to play around. Prompted by this thread, I did try several models with chaiNNer, but eventually gave up because it was too much of a faff and it wasn’t really obvious what kind of input data the models expected, either.
But as you say, the biggest problem is the lack of high-quality data sets. In fact, the only one I’m aware of is SIDD, and that only contains samples from a handful or so of older smartphones. I had a look at the NIND set, but I’m not sure it’s rigorous enough to produce something that can compete with DxO and Adobe.
The module can just cache the result and have a button to rerun the model.
I’ve floated the idea of a generic input/output module in DT a few times, so you could basically shell the data out to something else and then bring it back into the pipe. Obviously that makes storing the data a nightmare, but…
I think there is no ideal solution yet; even LR’s implementation is clunky (e.g. it needs an extra DNG file, though any software would have to cache the denoised image anyway). There will have to be compromises, one way or another.
For my workflow, I don’t care much about pixel-peeping the denoised image during darkroom development. I might inspect how good it is a few times while playing around with different demosaic algorithms, … but not during typical development. Before nind-denoise, I had a preset that applied profiled denoise on export only, since it would otherwise introduce some lag in the darkroom. Thus, I think any time-consuming processing for extra quality should be applied on export only by default.
I shoot events quite often, usually about 3,000-5,000 shots culled down to about 500-1,000 with Geeqie before importing into darktable. Applying NN denoise up front, even with caching, would still take 1,000 × 8 s, i.e. over 2 hours, plus a huge cache of denoised 16/32-bit images. I’d rather save that for the export phase: no cache needed.
This script is working very well for me; so far, given the known constraints, I don’t see a need to improve this workflow.