Denoising based on AI

Several applications for denoising utilize artificial intelligence (e.g. DxO PhotoLab, Topaz Denoise AI), and their performance is state of the art. Darktable cannot catch up with these applications (see this comparison, for instance: darktable noise reduction and how does it compare to Topaz Denoise AI - YouTube). Unfortunately, the applications mentioned run only on Windows or Mac. This leads to two questions: do any of you know of similar applications that also run under Linux, and will there be an AI-based denoising tool in darktable in the future?

With that said, I think the CR2 > greyscale .PGM > noise reduction > pgm2dng > .DNG pipeline that I wrote about before is the wrong way to go if one wants to implement it in darktable.

See also this post (ResNet in G'MIC): Machine Learning Library in G'MIC - #14 by David_Tschumperle


This may be just hype. It looks like machine learning, which is basically just fitting a very flexible function to a ton of data.
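To make "fitting a flexible function" concrete, here's a minimal sketch; the synthetic image and the 3x3 neighbourhood are purely illustrative, not anyone's actual training setup:

```python
# Minimal sketch of "fitting a flexible function to a ton of data":
# learn a 3x3 denoising filter by least squares from noisy/clean pairs.
# The image is synthetic; real training would use photographs.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((256, 256))                     # stand-in for a clean image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # add Gaussian noise

# Design matrix: each row is a flattened 3x3 noisy neighbourhood,
# the target is the clean centre pixel.
pad = np.pad(noisy, 1, mode="reflect")
cols = [pad[dy:dy + 256, dx:dx + 256].ravel()
        for dy in range(3) for dx in range(3)]
X = np.stack(cols, axis=1)                         # (65536, 9)
y = clean.ravel()

# "Training" here is a plain least-squares fit; a neural network does
# the same thing with millions of parameters instead of nine.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
denoised = (X @ w).reshape(256, 256)
print("error before:", abs(noisy - clean).mean(),
      "after:", abs(denoised - clean).mean())
```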

There is no inherent reason why Darktable could not have an ML-based denoising module; it "just" needs to be implemented and trained.

That said, while an ML-based option would be nice to have, since it could be trained on a lot of different kinds of textures and structures, with diffuse or sharpen in 3.7 I usually manage to recover an almost uncanny amount of detail after denoising (using the dehaze preset) when I bother.

agree about the hype. marketing departments all over the planet are happy that there's something new to sell (glad it's not the metaverse this time). still, calling a convolutional network "artificial intelligence" really hurts imho.

and no, there's no inherent problem (like manpower or machine power) that we wouldn't have in the open source community. one personal issue i have is that usually these networks are super slow even just for inference and have to ship a ton of weights. i really like the trade-off in g'mic: a few 100k of state seems well balanced. also, denoising networks don't require that much input/training data, but they degenerate to standard wavelets when you reduce the channels in the u-net too much (and we can do that much faster).

for real-time non-destructive editing, i'd really like my filters to run fast.
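To put a rough number on the "few 100k of state" point, here's a sketch of a two-level denoising u-net sized so its weight count lands near 100k. This is not G'MIC's actual network; all names and sizes are made up for illustration:

```python
# A sketch of the small-u-net trade-off: a two-level denoising u-net
# whose channel width ("base") keeps the weight count near 100k.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = block(3, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = block(base * 2, base)   # skip connection doubles channels
        self.out = nn.Conv2d(base, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # half resolution
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return x - self.out(d)             # residual: predict the noise

net = TinyUNet()
x = torch.rand(1, 3, 64, 64)
print(net(x).shape)                               # torch.Size([1, 3, 64, 64])
print(sum(p.numel() for p in net.parameters()))   # ~100k weights
```

Shrinking `base` shrinks the state quadratically, which is why very narrow u-nets end up behaving like plain wavelet shrinkage.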


When pixels vary really fast in a neighbourhood, that can be because of a legitimate edge or because of noise. The gist of all modern denoising methods is to guess whether a particular high variation is a legitimate edge or noise, and, if it is judged to be noise, patch it with a legitimate edge taken from another neighbourhood (possibly in another picture) or synthesized entirely.

Machine learning is only one of the many options available for that, not a goal in itself. Unless your goal is to slap marketing keywords onto your app.
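As a sketch of that edge-or-noise idea using one of the non-ML options (non-local means, via scikit-image; the test image and parameters are placeholders), each noisy patch is replaced by a weighted average of similar patches found elsewhere in the image:

```python
# Non-local means: patch the noise with evidence from similar
# neighbourhoods elsewhere in the image. No ML involved.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.util import random_noise

clean = img_as_float(data.camera())      # built-in grayscale test image
noisy = random_noise(clean, var=0.01)    # synthetic Gaussian noise

sigma = estimate_sigma(noisy)            # rough noise level estimate
denoised = denoise_nl_means(
    noisy,
    h=1.15 * sigma,      # filtering strength, tied to the noise level
    patch_size=5,        # size of the patches being compared
    patch_distance=6,    # how far away to search for similar patches
    fast_mode=True,
)
print("RMSE noisy:", np.sqrt(((noisy - clean) ** 2).mean()),
      "denoised:", np.sqrt(((denoised - clean) ** 2).mean()))
```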

Many thanks for your comments. I cannot say anything about the diffuse or sharpen module in 3.7, which I haven't tested yet.

The neural network for G'MIC seems very, very promising and I'll have a look at it.

I don't agree with the idea that the use of machine learning algorithms, that is, neural networks specifically, is only marketing hype. With the three denoising modules in darktable, the different demosaicing algorithms and the sharpening modules, it is just not possible to remove noise while maintaining sharpness to the same extent as with the most recent proprietary software (I highly recommend comparing the software yourself). From my viewpoint, neural networks simply perform extraordinarily well at denoising.

For me, it doesn't matter whether darktable utilizes neural networks for denoising or not, as long as the results can keep up with the state of the art. I highly doubt, however, that state-of-the-art performance can be achieved without neural networks.


It would be great if you could start a new thread and provide a RAW to play with, comparing the best result from proprietary software to what people on this forum can achieve with FOSS software.

Use my bird photo: Exporting/importing photos in full 32-bit representation (to train a neural network for image denoising) - #85 by Peter

https://discuss.pixls.us/uploads/short-url/wIMwO8k7QMy3cPJ0uTavmeprtoz.CR2
IMG_6990.CR2.xmp (14.2 KB)

IMG_6990-CR2_DxO_DeepPRIME.dng (92.5 MB)
IMG_6990-CR2_DxO_DeepPRIME.dng.xmp (12.2 KB)

Often, an interesting thing to do with image denoising algorithms is to repeat them several times, to see what kind of features they remove, what kind they preserve, and how long it takes to completely remove the strong edges in an image (and what artefacts they generate along the way).

Here is an example with a Patch-based denoising algorithm (25 iters):

Here is another one with a CNN (25 iters):
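For anyone who wants to try this experiment themselves, here's a minimal sketch with scikit-image's non-local means standing in for the patch-based algorithm; the test image and parameters are placeholders:

```python
# Repeatedly apply a denoiser and keep snapshots, to see which edges
# survive and which textures get treated as noise.
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.util import random_noise

img = random_noise(img_as_float(data.camera()), var=0.01)
snapshots = []
for i in range(25):
    img = denoise_nl_means(img, h=1.15 * estimate_sigma(img),
                           patch_size=5, patch_distance=6, fast_mode=True)
    if i in (0, 4, 24):          # keep a few iterations to compare
        snapshots.append(img.copy())
# Inspecting the snapshots shows which features the algorithm preserves
# under repeated filtering and which artefacts it accumulates.
```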


dt's denoising methods have been chosen to be fairly real-time, since dt is more than a simple denoiser, while the state-of-the-art tools are standalone denoisers that work on their own and re-inject their output into processing software later.

The circumstances are different. I believe non-local means is already pushing the limit of what can be called real-time.


Not true for all the denoisers out there. DxO integrates their NN-based denoiser into the RAW development software and, as far as I understand, couples it with the sharpener to recover details.

I have the ON1 version… it's quite slow and prone to artifacts… so while it's integrated, it's not close to real time, and I don't think Topaz or DxO are either.

Hi all,

I asked two people to denoise three raw photographs for me with DxO. You can download the raws and the denoised photographs here: dicuss.pixls - pCloud

The NEFs are the raws. Photographs ending in DNG were denoised and sharpened with DxO Pure Raw. Photographs ending in tif were denoised with DxO Elite Deep Prime. Don't ask me what the difference is.

Denoising a photograph took less than a minute.

Feel free to show me what you can achieve with darktable!

Cheers, Hanno

You don't seem to get it, right?

Nobody is claiming these products canā€™t give good - if not superior - results.

But the very limited development energy that is available in an open source project is put where the developers want to put their time.

Darktable is not made to be a competitor to Lightroom; it's not even made to be everyone's Lightroom replacement.
It's made because the developers wanted to make a piece of software that suits them.

This line from the start shows everything that is wrong with your way of thinking.

Who says darktable wants to 'catch up'? It might just be fine with what it's capable of now. It might think other things have more priority. Or, most probably, other things are just more fun to work on.

That's open source. You think something should be in a project? Contribute.

You sound like you're almost demanding where the development time of darktable should be spent.

https://twitter.com/fabpot/status/1456175998874144768?s=20


a violin can't catch up with a trumpet. But you can't play chords on a trumpet :wink:


That said, this also sounds like a fun project, and I wouldn't be surprised if a FOSS implementation surfaced in the medium run (cf. the G'MIC topic mentioned above), at which point Darktable can just incorporate it.

You are of course right that the two options are either contributing or waiting patiently for this to happen.


Also, darktable is, afaik, supposed to be used interactively (that's why the deconvolution module was abandoned).
So any procedure that takes on the order of a minute per image isn't really suitable for darktable in its current form.

Do you really want to adjust parameters interactively with such a module?


Edit the whole picture the way you like it. After that, apply noise reduction and wait 10-30 seconds or even a minute. That is fine with me. Even longer is fine if the result is as good as DxO's or Topaz's.


From my viewpoint, this discussion has made a few things clear:

  • Many darktable users find the results of denoising software based on neural networks superior to darktable's.

  • With respect to runtime, neural-network denoising does not fit well into the idea of darktable as a fairly real-time application, although nobody mentioned that denoising can also take a minute if you increase 'search radius' in 'denoise (profiled)' far enough.

  • A state-of-the-art denoising tool may not be desirable for darktable, as other challenges also call for attention.

  • First applications for denoising based on neural networks already exist for Linux, e.g. in G'MIC.

From my viewpoint, most darktable users, myself included, highly appreciate this sophisticated non-proprietary software. I'm sure that the developers and contributors of darktable know this and make sensible decisions about how they invest their time.
