Possibly a new deconvolution module for Darktable

Hi,

I’m a photographer from Montréal, Canada, and an engineering student. I have been working on and off for the past 7 months to develop a blind deconvolution algorithm in Python, based on state-of-the-art research (2014/2015), as a prototype for a Darktable module.

The prototype is 90% ready, and the first results are amazing:

Picture © David49100

(the deconvoluted version is on the right, enhanced with a Laplacian local contrast filter and a wavelet high-pass).

It gives “true” sharpness (meaning edge correction, not just local contrast enhancement), recovers motion and lens blur, and theoretically corrects chromatic aberrations (although I have not observed this effect).

The algorithm is auto-adaptive, meaning that most of the parameters are computed internally from statistical assumptions rather than required from the user, who just needs to enter the size of the blur and the expected quality/time of computation, and can optionally input the noise correction and iteration factors in case of a problem with the default settings. The user can also hard-set a priority zone to optimize: for example, if a subject is in front of a bokeh background, the blur can be computed only on a relevant portion of the image (which improves the computing time as well).
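To give a concrete picture of that interface, here is a hypothetical sketch of the parameter set in Python; the names are illustrative only, not the prototype’s actual API:

```python
# Hypothetical parameter set (illustrative names, not the prototype's API).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DeblurParams:
    blur_size: int                          # PSF width in px: the one required input
    quality: int                            # iteration budget, trading time for sharpness
    noise_damping: Optional[float] = None   # auto-computed unless overridden
    step_factor: Optional[float] = None     # iteration factor, auto unless overridden
    priority_zone: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) mask
```

Everything left at `None` falls back to the statistical defaults described above.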

All the details and the code are here : GitHub - aurelienpierre/Image-Cases-Studies: Python prototypes of image processing methods for Darktable modules

I’m working now with Edgardo (edgardoh · GitHub) to port the code into a Darktable module. The main challenge is the computation time, since we compute 2 gradients and 4 FFTs per iteration. We will see…


Salut @anon41087856 & welcome :slight_smile:
Your work sounds thrilling & I am looking forward to the module.

When you say that the main challenge is the computation time (depending on the computer power, of course) – what are you estimating? Minutes? Hours? Or…?

Have fun!
Claes in Lund, Sweden

Well, for a 3.8 Mpx picture, it takes between 2 and 8 minutes on 3 processes, depending on the target quality and the size of the blur to correct. For a 6 Mpx picture, up to 30 min. But Python is really not a good benchmark, since the GIL prevents true parallelism, so a lot could be improved in C regarding parallelization, and even GPU computation.


Very promising result.

About the runtime: how much of that has to be done every time the image is processed, and how much could be preprocessed once and stored in the parameters? Having an iop that takes minutes every time something changes isn’t going to work in darktable, but a one-time step that is slow, with the precomputed data applied quickly afterwards, would work IMO. Of course, the data needs to be reasonably sized to easily fit into the XMP sidecars. But those are details; first we need to find out if the algorithm can be split like that.

It’s pixel-level stuff, with an iterative process trying to solve an equation by finite differences, converging toward the no-blur solution step by step. The solution is the deconvoluted image itself; there is no intermediate parameter ready to be saved as a future input. Every iteration depends on the previous one, and the next can’t be computed separately. It’s similar to the HDR or panorama workflow: the whole process has to be done at once.
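To make that concrete, here is a minimal sketch of that kind of iterative scheme. This is plain Richardson-Lucy deconvolution with a known PSF (a much simpler stand-in for the actual TV-regularized blind algorithm), but it shows why nothing intermediate can be cached:

```python
# Minimal Richardson-Lucy sketch: a stand-in, not the prototype's algorithm.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=100):
    """Converge step by step toward the deblurred image. `blurred` is a
    float image in [0, 1]; each iterate depends entirely on the previous
    one, so the only storable result is the final image itself."""
    estimate = np.full_like(blurred, 0.5)   # flat grey starting guess
    psf_mirror = psf[::-1, ::-1]            # adjoint of the blur operator
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)   # compare with the observation
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration runs two convolutions; with the PSF spectra cached, that comes to roughly the 4 FFTs per iteration mentioned earlier in the thread.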

I thought maybe we could store the deconvoluted image in the cache or as a hidden sidecar file and use it as a replacement for the RAW in the pipe if it exists.

Could you link us to this research? Thanks. Welcome to the forum, BTW, @anon41087856!

The links to the papers are in the OP’s GitHub repo.

Taking minutes for single-digit megapixels is certainly impractical for a darkroom-mode iop. It might be useful as a lighttable button, or it might be possible to speed it up substantially; after all, this is still Python.

Silly me, I was viewing the repo on my small-screened mobile well past bedtime. Yes, it is there all right!

Could you also show the result without all those other enhancements? From that image it’s hard to tell what improvements were actually made by your algorithm.

I agree with @houz. I have seen other methods that deconvolve well, but it would be hard to compare when multiple enhancements might have been applied.


In the meantime, this is from the poster of the 2014 research (TV-PAM).


After deconvolution, without any sharpening/local contrast except the deconvolution:

Original, before deconvolution, no sharpening/local contrast:

This one (12 Mpx, 100 iterations) took 1 hour to process on a single CPU thread with the base quality setting. Keep in mind that the deconvolution doesn’t bring back the contrast, so the result is subtle unless you pixel-peep or… add more local contrast.

I find it more interesting to compare the deconvoluted enhanced picture to the original image with the same Laplacian + wavelets filters but without the deconvolution:

Left/right comparison (right: deconvolution + Laplacian + wavelets; left: Laplacian + wavelets)

So the deconvolution effect is just multiplied by the local contrast enhancement. I believe that’s what Photoshop does in its Smart Sharpen tool.

@anon41087856 Thanks for the examples :slight_smile:. BTW, you could just drag and drop the images into your post editor; no need to upload them elsewhere.

OK, I’m not familiar with this forum yet :wink:

TBH, uploading them to your own sites doesn’t harm either.


Also, the masking option lets me solve complicated motion blur where several motions happen in the same picture (subject + camera). A global approach would lead to crossed noise, with the solver trying to optimize blurs of different directions and magnitudes.

See an original here [WARNING: NSFW pic, naked girl in a non-sexual setup], and the slightly deblurred version. In this image, the girl is moving to the right and bending backwards, with myself following the movement to the right (thus background lens + motion blur), and the perspective has been corrected to make the floor look straight, so it’s a non-physical blur with no global solution. In addition, the image is noisy. This picture is my benchmark for the worst possible blur setup.
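For illustration, here is a sketch of how a priority zone could restrict the blur estimation. This is an assumed workflow, and `estimate_psf` and `deconvolve` are placeholders for the blind-estimation and deconvolution steps, not real functions from the prototype:

```python
# Sketch of masked blur estimation (assumed workflow, placeholder functions).
def deblur_with_priority_zone(image, zone, estimate_psf, deconvolve):
    """Estimate the PSF on the user's priority zone only, then deblur the
    whole frame with it, so competing motions elsewhere in the frame can't
    pull the solver in conflicting directions."""
    x, y, w, h = zone                # user-drawn box around the subject
    crop = image[y:y + h, x:x + w]
    psf = estimate_psf(crop)         # the solver only sees the subject's blur
    return deconvolve(image, psf)    # the kernel is then applied globally
```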


Is it possible to run the algorithm on a lower-resolution image to get a preview? Then it wouldn’t take much time when you play with the parameters. Once you’re finished, you hit a button and the algorithm runs on the high-res image.

Unfortunately that will only help so much when doing it in darktable. The image also needs to be recomputed every time any setting of a module coming before this one changes. That implies that having it as a lighttable operation to pre-process the whole image and cache it would require this module to be the first in the pipe. In the end it’s probably only feasible to add this if it’s fast enough for real-time use – maybe with a less precise mode for interactive use and a better one for exporting.

Using a scaled picture as a preview is possible; however, it’s hard to ensure that the preview will be an accurate representation of the final image. The PSF (the blur kernel) is usually 3 to 15 px wide, and thus very sensitive to rounding errors during scaling, since its size can’t be less than 3 px nor an even number. So a variety of sizes may end up rounded to 3 px, adding problems that don’t exist or hiding problems that do exist in the high-resolution process.
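Here is a small sketch of that rounding constraint (an assumed rule, matching the description above):

```python
def scaled_psf_size(size, scale):
    """Scale a PSF width; the result must stay odd (a PSF needs a centre
    pixel) and at least 3 px wide."""
    s = int(round(size * scale))
    if s % 2 == 0:
        s += 1
    return max(s, 3)

# At a 1:4 preview scale, kernels from 3 to 13 px all collapse to 3 px:
# [scaled_psf_size(k, 0.25) for k in range(3, 16, 2)]  ->  [3, 3, 3, 3, 3, 3, 5]
```

At preview resolution, very different blurs become indistinguishable, which is exactly why the preview can misrepresent the final result.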

The deconvolution should be performed early in the pipe, since it has been shown that filtered (denoised) images lead to less robust convergence, halos, and crossed noise. On the other hand, the regularization term in the algorithm is already a denoising term. So recomputing the deconvolution in real time at every change done in the darkroom seems useless to me, regardless of the time it needs.
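For reference, the TV-PAM research linked in the repo regularizes with total variation; here is a sketch of the TV gradient that plays this denoising role inside the iterations (an illustrative discretization, not necessarily the exact one used by the papers):

```python
# Illustrative gradient of the total-variation energy: -div(grad u / |grad u|).
import numpy as np

def tv_gradient(u, eps=1e-8):
    # Forward differences of the image (padded so shapes are preserved)
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    norm = np.sqrt(gx**2 + gy**2 + eps)      # smoothed gradient magnitude
    # Backward-difference divergence of the normalized gradient field
    div_x = np.diff(gx / norm, axis=1, prepend=(gx / norm)[:, :1])
    div_y = np.diff(gy / norm, axis=0, prepend=(gy / norm)[:1, :])
    return -(div_x + div_y)
```

Subtracting a small multiple of this term at each iteration smooths noise while preserving edges, which is why a separate denoising pass beforehand is both redundant and, per the papers, harmful to convergence.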

Considering the compute time, I wonder if it would be better suited as a G’MIC plugin?

My assumption is that it’ll be ported to a faster language than Python :slight_smile: