Possibly a new deconvolution module for Darktable

Could you link us to this research? Thanks. Welcome to the forum, BTW, @anon41087856!

the links to the papers are in the OP's GitHub repo.

taking minutes for single-digit megapixels is certainly impractical for a darkroom mode iop. might be useful as a lighttable button, or it might be possible to speed it up substantially. after all this is still python.

Silly me, I was viewing the repo on my small-screened mobile well past bedtime. Yes, it is there all right!

Could you also show the result without all those other enhancements? From that image it’s hard to tell what improvements were actually made by your algorithm.

I agree with @houz. I have seen other methods that deconvolve well but it would be hard to compare when multiple enhancements might have been applied.


In the meantime, this is from the poster of the 2014 research (TV-PAM).


After deconvolution, with no sharpening or local contrast other than the deconvolution itself:

Original, before deconvolution, no sharpening or local contrast:

This one took 1 hour on a single CPU thread with the base quality setting (12 Mpx, 100 iterations). Keep in mind that the deconvolution doesn’t bring back contrast, so the result is subtle unless you pixel-peep or… add more local contrast.

I find it more interesting to compare the deconvolved, enhanced picture to the original image with the same Laplacian + wavelets filters but without the deconvolution:

Left/right comparison (right: deconvolution + Laplacian + wavelets, left: Laplacian + wavelets)

So the local contrast enhancement simply amplifies the deconvolution effect. I believe that’s what Photoshop does in its Smart Sharpen tool.

@anon41087856 Thanks for the examples :slight_smile:. BTW, you could just drag and drop the images into your post editor; no need to upload them elsewhere.

ok, I’m not familiar with this forum yet :wink:

TBH uploading them to your own site doesn’t harm either.


Also, the masking option lets me solve complicated motion blur where several motions happen in the same pic (subject + camera). A global approach would lead to crossed noise, with the solver trying to optimize blurs of different directions and magnitudes.

See an original here [WARNING: NSFW pic, naked girl in a non-sexual setup], and the slightly deblurred version. In this image, the girl is moving to the right and bending backwards, while I followed the movement to the right (hence lens + motion blur in the background), and the perspective has been corrected to make the floor look straight, so it’s a non-physical blur with no global solution. In addition, the image is noisy. This picture is my benchmark for the worst possible blur setup.
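To illustrate the per-region masking described above, the idea is roughly to blend the solver’s output back into the original only inside the mask, with each region handled using its own blur estimate. This is a sketch of the blending step only, with assumed inputs; it is not code from the repo:

```python
import numpy as np

def blend_masked(original, deblurred, mask):
    """Blend a deconvolved result into the original only where `mask` is set.

    Illustration only: `deblurred` is assumed to be the solver's output for
    this region's blur; this is not code from the repo.
    """
    weight = mask.astype(original.dtype)
    if original.ndim == 3:              # broadcast a 2D mask over the channels
        weight = weight[..., np.newaxis]
    return weight * deblurred + (1.0 - weight) * original
```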


Would it be possible to run the algorithm on a lower-resolution image to get a preview? That way it wouldn’t take much time when you play with the parameters. Once you have finished, you hit a button and the algorithm is run on the high-res image.

Unfortunately that will only help so much when doing it in darktable. The image also needs to be recomputed every time any setting of a module coming before this one changes. Having it as a lighttable operation that pre-processes the whole image and caches it would mean this module has to be the first in the pipe. In the end it’s probably only feasible to add this if it’s fast enough for real-time use, maybe with a less precise mode for interactive use and a better one for exporting.

Using a scaled-down picture as a preview is possible, but it’s hard to ensure that the preview is an accurate representation of the final image. The PSF (the blur kernel) is usually 3 to 15 px wide and thus very sensitive to rounding errors during the scaling, since it can’t be less than 3 px nor an even number. So a whole range of sizes may end up rounded to 3 px, adding problems that don’t exist at full resolution or hiding problems that do.
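To make the rounding issue concrete, here is a toy sketch; the helper and the quarter-size preview factor are just assumptions for illustration, not code from the repo:

```python
def scaled_psf_size(psf_size, scale):
    """Scale a PSF width for a preview, keeping it odd and at least 3 px.

    Toy helper for illustration only, not part of the actual module.
    """
    size = max(int(round(psf_size * scale)), 3)  # never below 3 px
    if size % 2 == 0:                            # keep it an odd number
        size += 1
    return size

# At a quarter-size preview, most full-resolution kernel widths collapse to 3 px:
for psf in range(3, 16, 2):
    print(psf, "->", scaled_psf_size(psf, 0.25))
# 3, 5, 7, 9, 11 and 13 px all become 3 px; only 15 px survives as 5 px,
# so the preview can hide or invent problems relative to the full-size run.
```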

The deconvolution should be performed early in the pipe, since it has been shown that filtered (denoised) images lead to less robust convergence, halos and crossed noise. On the other hand, the regularization term in the algorithm is already a denoising term. So recomputing the deconvolution in real time at every change made in the darkroom seems useless to me, regardless of the time it takes.
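For context, this class of algorithms minimizes a data-fidelity term plus a total variation penalty, which is where the built-in denoising comes from. The blind formulation in the linked papers is more involved (the PSF is estimated too), so take this only as the generic form, not the module’s exact objective:

```latex
\min_{u}\; \tfrac{1}{2}\,\lVert k \ast u - y \rVert_2^2
\;+\; \lambda \int \lVert \nabla u \rVert \, \mathrm{d}x
% y: blurred image, k: PSF (blur kernel), u: latent sharp image,
% lambda: regularization weight on the total variation (denoising) term
```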

considering the compute time I wonder if it would be better suited as a g’mic plugin?

My assumption is that it’ll be ported to a faster language than python :slight_smile:

No, you really want to access the RAW data and apply it before anything else to get the best out of this method. It’s more low-level signal processing than a filter.

I’m currently working on a Cython (hybrid Python/C) implementation, and I have already reduced the computing time by a quarter just by optimizing the total variation regularization term calculation, so I’m confident that with a tiled/multithreaded C FFT implementation we could divide the global execution time by at least 2.

The 12 Mpx image took me an hour on 1 process because running it on 3 triggered a memory error. Playing with lower-level layers of code lets me reduce both the multithreading overhead and the RAM footprint.
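For reference, the total variation term mentioned above looks roughly like this in plain NumPy; this is a sketch only, and the actual Cython code in the repo is certainly organized differently:

```python
import numpy as np

def total_variation(u, epsilon=1e-6):
    """Isotropic total variation of a 2D image `u` (plain NumPy sketch).

    Illustration of the regularization term discussed above, not the
    repo's Cython implementation.
    """
    # forward differences along each axis, padded so the shapes match `u`
    dy = np.diff(u, axis=0, append=u[-1:, :])
    dx = np.diff(u, axis=1, append=u[:, -1:])
    # sum of gradient magnitudes; epsilon keeps the sqrt differentiable at 0
    return np.sum(np.sqrt(dx**2 + dy**2 + epsilon**2))
```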


I wasn’t too worried about Python; its load time is a bit painful for certain types of things, but it can be fast if optimized correctly. It is used very often in scientific research, after all.

ah then yeah, it needs to be in the raw processor. Does darktable cache the output of a filter if you modify things further down the stack during editing?

Research is not engineering. Scientists try to make things possible, engineers try to make them usable.

Python is widely used because it’s fast to code in, but a nightmare to optimize. So it’s more of a prototyping language.

I disagree. Much of the research is multidisciplinary with partnerships, consultation and multi-talented individuals. Many scientists are themselves accomplished engineers and software devs. If the processing is this intensive, even with tiling and multi-threading, it would still be impractical. That is why researchers come up with fast approximations that build on existing research and efficient novel methods.
