Denoising based on AI

Hi all,

I asked two people to denoise three raw photographs for me with DxO. You can download the raws and the denoised photographs here: discuss.pixls - pCloud

The NEF files are the raws. Photographs ending in DNG were denoised and sharpened with DxO Pure Raw; photographs ending in tif were denoised with DxO Elite Deep Prime. Don’t ask me what the difference is.

Denoising a photograph took less than a minute.

Feel free to show me what you can achieve with darktable!

Cheers, Hanno

You don’t seem to get it, right?

Nobody is claiming these products can’t give good - if not superior - results.

But the very limited development energy available in an open-source project goes where the developers want to put their time.

Darktable is not made to be a competitor to Lightroom, it’s not even made to be everyone’s Lightroom replacement.
It’s made because the developers wanted to make a piece of software that suits them.

This line from the start shows everything that is wrong with your way of thinking.

Who says darktable wants to ‘catch up’? It might just be fine with what it’s capable of now. It might think other things have more priority. Or - most probably - other things are just more fun to be working on.

That’s open source. You think something should be in a project? Contribute.

You sound like you’re almost demanding where the development time of darktable should be spent.

https://twitter.com/fabpot/status/1456175998874144768?s=20


A violin can’t catch up with a trumpet. But you can’t play chords on a trumpet :wink:


That said, this also sounds like a fun project, and I wouldn’t be surprised if a FOSS implementation surfaced in the medium term (cf. the G’MIC topic mentioned above), at which point darktable could just incorporate it.

You are of course right that the two options are either contributing, or just waiting patiently for this to happen.


Also, darktable is, afaik, supposed to be used interactively (that’s why the deconvolution module was abandoned).
So any procedure that takes on the order of a minute per image isn’t really suitable for darktable in its current form.

Do you really want to adjust parameters interactively with such a module?


Edit the whole picture the way you like it. After that, apply noise reduction and wait 10–30 seconds or even a minute. That is fine with me, even longer if the result is as good as DxO/Topaz.


From my viewpoint, this discussion made clear:

  • that many darktable users find the results of denoising software based on neural networks superior to darktable’s;

  • that, with respect to runtime, neural-network denoising does not fit well with the idea of darktable as fairly real-time software, although nobody mentioned that denoising also takes a minute if you increase ‘search radius’ in ‘denoise (profiled)’ to a certain value;

  • that a state-of-the-art denoising tool may not be desirable for darktable, as other challenges also call for attention;

  • that first applications for denoising based on neural networks already exist for Linux, e.g. in G’MIC.

From my viewpoint, most darktable users, myself included, highly appreciate this sophisticated non-proprietary software. I’m sure that the developers and contributors of darktable know this and make sensible decisions about how they invest their time.


If it happens very early in the pipeline (like denoise profiled), I would be fine with just trying out a default and tweaking it only if necessary, even if it takes a minute. Also, fiddling with settings interactively could just produce a preview on a crop that the user is interested in.

In case you are not aware of this: most FOSS contributors just work on whatever they are interested in, regardless of popular demand. So it is pretty useless (and occasionally, counterproductive) to try to pressure them like this.

Let me remind you of my question for this thread:

It’s not my aim to push anybody in any direction…

Besides, I cannot imagine more patient people than wildlife photographers.

Cheers!


I’m not saying you shouldn’t use processes that take a long time; I just don’t think their place is in an interactive program. If there were a fast way to set up such a module, for application on export only, that would be fine (though rather different from how modules currently work).

Keep in mind that the position in the pipeline isn’t really important for working with a module. It’s the position in your workflow that is important. And as soon as you have a slow module activated, you pay the price every time you make any adjustment, even in a completely different module.

Up to a point, that can be circumvented by only activating the slow module(s) at the end of the workflow, but I like to do e.g. sharpening on something as close to the final result as possible, and denoising has an influence on the (perceived) sharpness…

Well, you posted in the “darktable section”, that introduces a certain bias.

It could also be a standalone app that converts to DNG, but that would take more time and more mouse clicks. The thread “Exporting/importing photos in full 32-bit representation (to train a neural network for image denoising)” was aimed at darktable:

“My goal has always been to integrate this into an image development software, such as darktable.”

Halfway there. NR takes 9 s for 20 Mpix on an RTX 2080.

the question of whether any of you know of similar applications that also run under Linux, and whether there will be an AI-based denoising tool in darktable in the future?

There is trougnouf/nind-denoise on GitHub (image denoising using the Natural Image Noise Dataset), but it’s really not user-friendly: clone the repository, download the models, run the command on a given mostly processed (but not sharpened) image, then sharpen the image.

Darktable integration would require a couple of operations ((transposed) convolution and ReLU) written in C/OpenCL.

Per https://discuss.pixls.us/t/exporting-importing-photos-in-full-32-bit-representation-to-train-a-neural-network-for-image-denoising , it would probably also make the most sense to retrain the model so that it applies earlier in the pipeline (probably right after demosaic, before or after exposure) rather than right before sharpening. For that, the dataset would have to be reprocessed (the images need to be aligned, then have their exposure matched between shots). I would be happy to release all of the raw files if someone is interested in doing the work, but (as is usually the core issue) I will have no time to work on this in the foreseeable future.

Ideally we should also try different, simpler architectures to minimize the runtime. (Though I am with @hyhno in that “denoising [in darktable] also takes a minute if you increase ‘search radius’ in ‘denoise (profiled)’ to a certain value.”)

… in fact i managed to download some of the raw files using the link you posted quite some time ago. it took me weeks to do that, and to tell the truth i never went back to test if it actually finished :slight_smile:

from what i’ve seen there are some subtle issues, not so much with alignment but with exposure/highlights due to aperture changes maybe? but there are also images that may be very viable to use for pre-demosaic processing.

yes. and if you ask me you don’t want to fix this in dt. i reimplemented something similar to the wavelet denoising in vk/glsl and it runs in a few milliseconds (single digit) on a 2080Ti, full raw. you cannot reach this kind of interactive performance in dt, not even when only operating on a preview/cropped/downscaled buffer. the whole pipeline setup is too clumsy and drops performance on the floor between kernel executions. finally the pixels are copied back to cpu and then carried through a couple of libraries between gtk and xorg…

in the olden days in dt we tried to keep module execution times below 40ms, then below 100ms and now i’m not sure. having processed a few videos my pain threshold is actually lowered considerably nowadays.

sounds promising! keep optimising! a factor of 100 speedup is often realistic to reach on gpu architectures when starting from a naive implementation (though i don’t know your base level here). maybe remove a few channels or levels?

My understanding is that darktable stores intermediate results. The later the module is in the pixelpipe, the more of these are invalidated and need to be regenerated after module parameters change.

The workflow I envision for this kind of denoising is for it to happen after demosaicing, with early potential user intervention before or right after setting the exposure. Not a lot of other modules would be active in a typical workflow. Conversely, whatever changes later should not invoke denoising again.

Thanks for clarifying that! That shifts the thread to another mood, in my opinion :smiley:


A decent review of denoising CNNs:

The models seem rather complicated; I imagine developing that kind of model in C would be quite the nightmare. There’s a library (cONNXr) that can run ONNX models (so development/training could be done in a different environment), but it doesn’t support GPU. There’s onnxruntime in C++, but it doesn’t seem like the lightest dependency…


Not a trumpet but close enough:


Yes, you can play one note and sing another note above it; the interference between the two notes then produces other notes, filling in a chord. I have done it on the French horn. Going off topic here!

Beyond being interested and having the time and will, there is skill too. Truth is, most devs/engineers have forgotten all their maths ten years after graduation, so forget about reading papers and doing R&D. Most image-processing dev work in FLOSS is adapting code taken from another project. Turning a research paper into actual code is a rare skill.


All I have to say is: iterative anisotropic PDE solving… XD
