Possibly a new deconvolution module for Darktable

This is amazing, and the processing time, at least as a standalone process, really isn’t that much given its capacity for rescuing images. :slight_smile:

1 Like

Bravo Aurélien, I’m waiting for it to be available in darktable.

1 Like

Looks very promising to me. Hope this will become a new module in the future!

1 Like

Hey Aurélien,

what happened to that module? :slight_smile:

Hi Andreas!

One year ago, Edgardo (the dev behind the retouch module) did a prototype of a dt module, based on my Python code, that is functional but too slow for real life use (not his fault, that’s the algo).

So I tried to adjust the maths to make it converge faster. It does converge better in some cases, but blows up in some others. I’m still working on making it more robust. I have been in touch with a German researcher to improve the convergence (another German: I don’t know what it is with you Germans and image processing :smile:).

Now, I’m limited by Python (super slow and no real multithreading), so I’m looking forward to continuing my work in full C. In the grand scheme of things, my first 2 modules now merged into master were only a way to get my hands inside darktable’s source code with easier projects. With the knowledge I have acquired, I’m ready to continue my work on the deconvolution.

17 Likes

Thank you for your dedication! :clap::clap:

2 Likes

Salut!

That is some truly amazing work. I’ve been comparing results between DT and LR for some pictures and noise reduction is one of the areas where DT sometimes falls short and produces blotchy images compared with the sharp images LR can make. (Sometimes it’s fine too of course!)

Your work seems really promising, but the performance issues seem to be a major blocker. Do I understand correctly that the DT module runs in 10 minutes? I understand it’s a huge improvement compared to the research material (which counts in hours), but it seems to me this couldn’t be used as a basic module that we would enable lightly, unless performance significantly improves.

Is there a target performance that could make this usable as a normal DT module? Say sub-minute rendering?

Thanks again for all your work!

1 Like

Yes, basically, as with many great image-processing algorithms, real-life performance is the main barrier between promising research papers and a general-use implementation.

My latest work on this has been to accelerate the numerical convergence, and this has turned into a research project rather than the simple “paper to code” project it used to be. Very time-consuming…

2 Likes

Wondering if this module would be OpenCL-friendly. If it is, a 10-minute run without GPU assistance might come down to the sub-minute range once OpenCL and CPU multithreading run in parallel, with a half-decent GPU. I’m seeing a 600% overall improvement with some modules, like profiled denoise, with an AMD RX 560.

Of course, OpenCL would do great here, although an FFT implementation in OpenCL is something I have never done.
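For context on why the FFT matters here: deconvolution repeatedly convolves full-resolution images with potentially large blur kernels, and the convolution theorem turns that O(N·K) operation into O(N log N). A tiny 1-D illustration (NumPy only, nothing OpenCL-specific):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via the convolution theorem:
    pointwise multiplication in the frequency domain."""
    n = len(a) + len(b) - 1   # zero-pad to avoid circular wrap-around
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))
```

An OpenCL port would exploit the same identity, with the FFT itself (e.g. via a library such as clFFT) being the hard part.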

@aurelienpierre I’m impressed by this research and attempt to make it a workable darktable module. The results you’re showing are very pleasant to my eye especially the portrait of yours.

I hope that in the near future you, or someone else, will succeed in this work. I have a lot of slightly blurred photos of my children, and it would be a nice feature to have.
Thank you!

I’m wondering what the timeline for inserting this into dt might be? Not a hard date but a “maybe version x” date.

I have a photo in which some leaves moved slightly, enough to give them a bit of blur; this module might help. I really like the photo, but with that particular point-and-shoot camera I couldn’t increase the shutter speed, so I just hoped the leaves wouldn’t move too much (they didn’t move much, but enough for me to wish they hadn’t moved at all).

Thanks

Read this:

2 Likes

I read through this whole thread. Very interesting. Has anything more been done with this? Anything in dt yet? While having the best implementation is of course great, if it is too heavy for dt, which has to show results on screen quickly, then even a less-than-ideal implementation that works pretty well would be welcome, with perhaps a heavier, slower option as well.

Eric Chan says that Lightroom and Photoshop in some cases use deconvolution when sharpening:

Yes Photoshop’s Smart Sharpen is based on deconvolution (but you will need to choose the “More Accurate” option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

That is from an old thread on the Luminous Landscape site. Unfortunately, the site now seems to be mostly behind a paywall, so the link I have to that thread no longer works:

http://www.luminous-landscape.com/forum/index.php?topic=45038.msg377910#msg377910

This guy:

Principal Scientist and “Mad Man” Eric Chan Discusses His Role in Improving Photoshop

https://blog.adobe.com/en/2013/07/12/principal-scientist-and-mad-man-eric-chan-discusses-his-role-in-improving-photoshop.html#gs.ifkhlv

2 Likes

The exposure-invariant guided filter was finished by @rawfiner and myself today. It is a requirement for the “image doctor” module, which implements the ideas developed here, but in a faster way.

9 Likes

Hi,

Interesting. Any reference (or pointer, if you prefer)?

You can take a look here, though I wrote this document quite a long time ago and should update it: https://github.com/rawfiner/eigf
And on the PR: https://github.com/darktable-org/darktable/pull/6444
though the text on the PR is also a bit outdated.

Basically, we do a standard guided filter, except:

  • we use gaussian blurs instead of a box blur
  • we normalize the variance by the geometric mean of the pixel value and the local average (this makes the filter exposure invariant, as this ratio does not change when the exposure changes)
  • we removed the blur on “a” and “b”, as it gave halos or big areas without smoothing near the edges, and we could not fix that blur despite trying many things (including the approach from the anisotropic guided filtering paper)

Also, the implementation uses downscaling to speed up the variance and average computation, but in a slightly different way than what was proposed for the standard guided filter: at the end we upscale the variance, covariance and averages and compute a and b at full resolution, instead of upscaling a and b (which gave artefacts, due to the fact that we removed the blur on a and b).

3 Likes

Congrats. This comports with my independent study. Looking forward to seeing what you settle on, because I am undecided on which ideas to discard or keep; I have too many. :slight_smile:

Thank you. I will look at your image doctor thread.

One very minor note: there is a paper that names its method EIGF (Enhanced Input Guided Filter), but it is something different (and not as good as our methods). Up to you whether you want to differentiate by choosing a different acronym.