I have been working on and off on this topic for 1 year and 4 months, and I have great news!
I have greatly improved the maths behind it, so that the algorithm converges 99.99 % of the time, and a lot faster than before (it needs fewer iterations). It is now truly 100 % auto-adaptive, meaning that it computes different metrics to update its internal parameters (to ensure convergence), hiding a lot of Ph.D.-level machinery (the Tikhonov regularization parameter, the Cauchy distribution parameter, the Sobolev space norm) from the basic user. This is a brand-new algorithm, combining several approaches I have seen in various papers, and it seems to perform very well on a wide range of blurs, even in noisy conditions.
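The solver itself isn't published here, but to illustrate the Tikhonov part mentioned above, here is a minimal frequency-domain deconvolution sketch. This is not the author's algorithm: the function name, the closed-form solve, and the fixed `lam` value are all illustrative.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    # Solves  min_x ||k * x - b||^2 + lam * ||x||^2  in Fourier space,
    # whose closed form is  X = conj(K) B / (|K|^2 + lam).
    # lam is the Tikhonov regularization parameter: larger values
    # denoise more but deblur less -- the trade-off discussed below.
    K = np.fft.fft2(psf, s=blurred.shape)  # transfer function of the blur
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

The auto-adaptive part described in the post, i.e. updating `lam` from image metrics at each iteration instead of fixing it, is exactly what this naive sketch lacks.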
It allows refocusing (to a certain extent) on a specific area without affecting (too much) the other areas. This is especially useful when there are different types of blur in the same picture (motion/focus/Gaussian): the algorithm now evaluates the blur in a user-selected area and only corrects the zones where the real blur matches the evaluated one.
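The matching test itself isn't published; purely as a toy sketch of the "only correct the matching zones" idea, the final blend could look like this, where `mask` is assumed to come from whatever blur-matching metric the solver computes:

```python
import numpy as np

def selective_correction(original, deblurred, mask):
    # mask is in [0, 1]: 1 where the local blur matches the one evaluated
    # in the sample window (pixel fully corrected), 0 where it doesn't
    # (pixel left untouched). Intermediate values blend smoothly.
    return mask * deblurred + (1.0 - mask) * original
```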
It lets you choose the desired behavior: denoise, deblur, or an average of both. Deconvolution is, by design, aimed at deblurring. In doing so, it adds more noise and amplifies the noise already there. So this algorithm regularizes (= denoises) and deblurs at the same time. The drawback is that the two are opposing phenomena: if you regularize too much, you don't deblur, you denoise instead. So, why not use the drawback of this method to actually denoise without (de)blurring? Since the regularization parameter is optimized and refined automatically inside the solver, taking into account the variance (a metric of the noise amount) and the residual (a metric of the sharpness), we just have to tell the regularization optimizer to favor the variance, the residual, or an average of both, to adjust the regularization.
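The actual optimizer isn't published, but the idea of steering the regularization with one priority scalar can be sketched like this. The function name and the blending rule are mine, purely illustrative:

```python
def regularization_target(residual, variance, priority=0.5):
    # priority = 0.0 -> weight the residual only (favor sharpness, deblur)
    # priority = 1.0 -> weight the variance only (favor denoising)
    # priority = 0.5 -> average both behaviors.
    # A larger returned value would mean more regularization inside the
    # solver, i.e. more denoising and less deblurring.
    return priority * variance + (1.0 - priority) * residual
```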
It asks for only 3 inputs: the size of the blur (pixels), the size/position of the sample window (to evaluate the blur), and the sharpness/noisiness priority. That's all. Everything else is estimated internally. Other parameters, like the error tolerance, are there too, but more as a clutch, to let you take back control.
The optimal internal parameters are now evaluated until convergence, usually in 15–30 iterations.
Two different metrics are now used to stop the iterations before the solution degenerates:
- one ensures the noise created by the deblurring is white (in the signal-processing sense), so no patterns (ringing, fringing) are created. Since white noise looks natural, it's a fair trade-off to allow some good-looking noise to gain some extra sharpness. This is done by computing the auto-covariance of the picture, ensuring it decreases monotonically, and stopping the iterations when it increases again by a certain amount. The user can set the tolerance on that amount (more tolerance = more sharpness, too much tolerance = a degenerate solution).
- the other ensures the solution is not stagnating, i.e. convergence has been reached and it's useless to continue.
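Both stopping tests can be sketched roughly as follows. This is illustrative only: the lag-1 statistic, the norms, and the thresholds are my assumptions, not the post's exact metrics.

```python
import numpy as np

def lag1_autocovariance(img):
    # Auto-covariance at a 1-pixel vertical lag: near zero for white
    # noise, clearly positive once the noise develops spatial patterns.
    x = img - img.mean()
    return float(np.mean(x[1:, :] * x[:-1, :]))

def whiteness_stop(prev_cov, curr_cov, tol=0.01):
    # First criterion: stop when the auto-covariance increases again by
    # more than the user tolerance -- the created noise is no longer white.
    return curr_cov > prev_cov * (1.0 + tol)

def stagnation_stop(prev_sol, curr_sol, eps=1e-4):
    # Second criterion: stop when the iterates barely move -- convergence
    # is reached and continuing is useless.
    change = np.linalg.norm(curr_sol - prev_sol)
    return change <= eps * max(np.linalg.norm(prev_sol), 1e-12)
```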
The code is still a Python/Cython mixture, so it's better than pure Python but still not as good as pure C. Don't freak out about the running times. However, compared to what I had 8 months ago, I have seen ×2 up to ×10 improvements, essentially because of the better convergence rate of the algorithm rather than the implementation (maths win). The code is fully parallelized (8 cores), running on a 3.6 GHz Intel Xeon laptop.
Denoise without deblurring: 24 Mpx, 11 min (original on the left). Auto-covariance tolerance set to 1 %.
That one was processed (not by me) in Adobe Camera Raw from a poorly exposed shot, sharpened but not denoised. That's a nightmare to correct. Also, the colors are different because the original is a JPEG, probably with an ICC profile, and my code outputs TIFF and strips everything that is not a pixel.
Deblur without affecting the background: 16 Mpx, 8 min (original on the left). 5 px of motion blur from the camera and/or the horse.
At a more realistic zoom factor :
My face, corrupted with a synthetic Gaussian blur and Gaussian noise (std = 5). 4 Mpx, 45 sec.
Obviously, on that one, some grain gets created because there is already noise in the input.