I’m a photographer from Montréal, Canada and an engineering student. I have been working on and off for the past 7 months on a blind deconvolution algorithm in Python, based on state-of-the-art research (2014/2015), as a prototype for a Darktable module.
The prototype is 90% ready, and the first results are amazing:
Picture © David49100
(the deconvolved version is on the right, enhanced with a Laplacian local-contrast filter and a wavelet high-pass).
It gives “true” sharpness (meaning edge correction, not just local contrast enhancement), recovers motion and lens blur, and should theoretically correct chromatic aberrations (although I have not observed this effect).
The algorithm is self-adapting: most of the parameters are computed internally from statistical assumptions rather than requested from the user, who only needs to enter the size of the blur and the expected quality/time trade-off, and can optionally adjust the noise-correction and iteration factors if the default settings cause problems. The user can also hard-set a priority zone to optimize: for example, if a subject stands in front of a bokeh background, the blur can be estimated only on the relevant portion of the image (which improves computing time as well).
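To illustrate the priority-zone idea, here is a minimal sketch (the function and parameter names are hypothetical, not the prototype's actual interface): the blur kernel is estimated on a small, sharp crop instead of the full frame, then applied to the whole image.

```python
import numpy as np

def crop_priority_zone(image, zone):
    """Return the sub-image on which blur estimation would run.

    `zone` = (top, left, height, width) is a hypothetical parameter,
    not the prototype's real API. Estimating the kernel on a small
    in-focus crop instead of the full frame cuts computing time and
    avoids fitting the kernel on an intentionally blurred background.
    """
    top, left, height, width = zone
    return image[top:top + height, left:left + width]
```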
All the details and the code are here: https://github.com/aurelienpierre/Image-Cases-Studies
I’m now working with Edgardo (https://github.com/edgardoh) to port the code into a Darktable module. The main challenge is the computation time, since we compute 2 gradients and 4 FFTs per iteration. We will see…
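For readers unfamiliar with where those FFTs come from, here is a minimal sketch of a classic FFT-based deconvolution scheme (Richardson–Lucy); it is not the prototype's actual code, but it shows why each iteration costs two forward and two inverse transforms once the PSF spectrum is precomputed.

```python
import numpy as np

def richardson_lucy_fft(blurred, psf, iterations=10):
    """Deconvolve `blurred` with a known PSF via Richardson-Lucy.

    A minimal sketch, assuming a circular (periodic) convolution model
    and a normalized, non-negative PSF. Each iteration performs 4 FFTs:
    2 to blur the current estimate, 2 to correlate the correction ratio.
    """
    psf_f = np.fft.rfft2(psf, s=blurred.shape)   # precomputed once
    estimate = np.full_like(blurred, blurred.mean())
    eps = 1e-12                                   # guard against /0
    for _ in range(iterations):
        # 2 FFTs: convolve the current estimate with the PSF
        reblurred = np.fft.irfft2(np.fft.rfft2(estimate) * psf_f,
                                  s=blurred.shape)
        ratio = blurred / (reblurred + eps)
        # 2 FFTs: correlate the ratio with the PSF (conjugate spectrum)
        correction = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_f),
                                   s=blurred.shape)
        estimate *= correction
    return estimate
```

In a real module the gradients mentioned above would come from regularization terms and blind kernel refinement on top of this basic loop, which is where most of the extra cost hides.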