An Error Reduction Technique in Richardson-Lucy Deconvolution Method

I wonder if this would be helpful in understanding and perhaps improving RL. Personally, I don't use RL because it is slow and produces artifacts I'm uncomfortable with (perhaps I just don't know how to make it work). The paper is licensed as follows:

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Abstract

An error reduction technique for Richardson-Lucy deconvolution (RL-deconv) is proposed. Deconvolution is an indispensable technique for inversely analysing the SRAM fail-bit probability variations caused by Random Telegraph Noise (RTN). The proposed technique reduces the phase difference between the two distributions of the deconvoluted RTN and the feedback gain in the maximum-likelihood estimation (MLE) gradient iteration cycles. This avoids unwanted positive feedback, resulting in a significant decrease in the probability of undesired ringing. Quicker convergence of the RL-deconv algorithm is thus achieved while avoiding ringing. It has been demonstrated that the proposed technique reduces relative deconvolution errors by a factor of 100 compared with conventional RL-deconv. This improves the accuracy of the fail-bit-count prediction by over two orders of magnitude while accelerating convergence by a factor of 33 over the conventional algorithm.

It would be really cool to see that implemented.
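For reference, here is a minimal sketch of the plain RL iteration in Python with numpy/scipy. This is the standard algorithm, not the paper's error-reduction variant; the flat initial estimate and the default iteration count are just assumptions for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Plain RL: estimate <- estimate * ((observed / (estimate (*) psf)) (*) psf_flipped)."""
    estimate = np.full(observed.shape, observed.mean())  # flat start (an assumption)
    psf_mirror = psf[::-1, ::-1]                         # adjoint of the blur operator
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)      # eps guards against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```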

There are also lots of papers about improving the basic RL algorithm to prevent boundary effects, for instance: https://www.aanda.org/articles/aa/pdf/2005/25/aa2717-05.pdf and http://www2.tku.edu.tw/~tkjse/16-3/06-IE10211.pdf
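One common, simple mitigation (much cruder than what those papers do) is to pad the image before deconvolving and crop afterwards. A sketch, where `deconv` is any deconvolution routine (e.g. the RL sketch above) and the pad width of 32 pixels is an arbitrary assumption:

```python
import numpy as np

def deconvolve_padded(observed, psf, deconv, pad=32):
    # Reflective padding gives the algorithm plausible data beyond the frame,
    # so ringing concentrates in the padded margin, which is then cropped off.
    padded = np.pad(observed, pad, mode="reflect")
    restored = deconv(padded, psf)
    return restored[pad:-pad, pad:-pad]
```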

The number of iterations could be determined automatically:
http://iopscience.iop.org/article/10.1088/1742-6596/630/1/012003
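I haven't worked through that paper's criterion, but an illustrative stopping rule based on the relative change between successive RL estimates might look like this (the tolerance value is an assumption):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_auto(observed, psf, max_iter=200, tol=1e-4, eps=1e-12):
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for i in range(max_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        new_estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        # Stop once an update barely changes the estimate any more.
        change = np.linalg.norm(new_estimate - estimate) / np.linalg.norm(estimate)
        estimate = new_estimate
        if change < tol:
            break
    return estimate, i + 1
```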

Other point-spread-function deconvolution methods could also be explored; two have already been implemented as GIMP plugins.

One uses a Hopfield neural network:
http://refocus-it.sourceforge.net/

The other uses a Wiener filter:
http://refocus.sourceforge.net/doc.html
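For comparison, a minimal sketch of frequency-domain Wiener deconvolution; the constant noise-to-signal ratio `nsr` is an assumed tuning parameter, not whatever Refocus actually estimates:

```python
import numpy as np

def wiener_deconvolve(observed, psf, nsr=0.01):
    # Embed the PSF in an image-sized array and shift its centre to the origin.
    psf_padded = np.zeros(observed.shape)
    psf_padded[:psf.shape[0], :psf.shape[1]] = psf
    psf_padded = np.roll(psf_padded,
                         (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                         axis=(0, 1))
    H = np.fft.fft2(psf_padded)
    G = np.fft.fft2(observed)
    # Classic Wiener filter: F = conj(H) / (|H|^2 + NSR) * G
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```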

The assumptions of the RL and Wiener algorithms are different (Poisson versus Gaussian noise, I believe), so I would be curious to see which performs better in practice. From the examples in this paper it seems to be RL:
http://www.ijsett.com/images/Paper(2)1-2.pdf