Quick question on RT Richardson–Lucy implementation

I also aim to get the most halo-free result (fewest halos) out of RL deconvolution. For that reason I created the capture sharpening branch, which already reduces (but does not eliminate) halos; even when gamma is set to 1.0 there are still some.

I would be very happy to have a completely halo-free RL capture sharpening, but I don’t think that’s possible…

In general, fewer halos almost always means more softness, so while it might still be considered RL, it might not necessarily be what we would call sharpening. In any case, I have never been satisfied with the previous RL, to the extent that I rarely use it. Capture sharpening is a big step forward, enough for me to consider it an option in my processing.


I think @afre made an important comment regarding deblurring vs sharpening. My impression is that most of the halos appear when you’re actually pushing the sliders beyond what is physically necessary. What I mean is that capture sharpening is a deblurring (deconvolution) effect to counteract the blurring due to the lens. A light point source captured through a lens on the sensor gives an approximately Gaussian blur. Deconvolution should result in a single bright pixel and no blur.
If you set the wrong radius you’re either underestimating or overestimating the deblurring. So you either have some blur left, or go over the limit and introduce halos. Imho there is only one correct setting here…
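To restate that model in symbols (my notation, nothing from the thread): the captured image d is the latent scene u convolved with a Gaussian PSF,

$$d = u \ast G_\sigma, \qquad G_\sigma(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/(2\sigma^2)}$$

and deconvolution tries to recover u. If the radius/sigma you dial in is smaller than the true σ, some blur remains; if it is larger, the inverse overshoots at edges and you get halos.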

I agree. Btw: maybe in the new tool we should rename the radius to sigma, because that’s what it is…

From a non programmer point of view, radius is a bit more user-friendly.

And even if you set the right radius (and that is a must), the white haloing might disappear, but the dark haloing is still quite strong, and too noticeable.

Yes, that’s still an issue :frowning:

Do you mean a point source that is in focus and affected only by diffraction?

Because the OOF PSF of nearly all lenses (for out-of-focus bokeh) is usually circular in nature. The Sony STF lenses and mirror lenses are some of the only exceptions, and their OOF PSF is still not Gaussian. (And obviously the radius of that point spread is dependent on how far out of focus you are…) http://aggregate.org/DIT/ei20140204.pdf (Edit: In fact, Hank has simulated Gaussian OOF PSFs using aperture bracketing similar to how some of Minolta’s older DSLRs from pre-SLT days did it, and found that a Gaussian OOF PSF was usually visually disturbing to him)

I’m assuming that pixels which have been convolved by an OOF PSF are not your target, as opposed to diffraction or things that can be traced back to an OLPF?

I tried reading the original Richardson paper to understand what RT is doing and see whether they match. Still a little mysterious :upside_down_face:
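For anyone else digging through it: the core of both the Richardson (1972) and Lucy (1974) formulations is one multiplicative update (standard textbook form, not anything RT-specific):

$$u^{(k+1)} = u^{(k)} \cdot \left( \frac{d}{u^{(k)} \ast P} \ast P^{*} \right)$$

where d is the observed (blurred) image, P the assumed PSF, P* its mirrored version, and u^{(k)} the current estimate; the division and multiplication are per pixel. For a symmetric PSF such as a Gaussian, P* = P.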

I’m no expert, so what you say is probably more true than what I said :slight_smile: Gaussian, circular, … maybe we need a fancy algorithm that doesn’t assume any a priori knowledge of the PSF.

But I think you’re right that this tool is primarily aimed at ‘sharpening’ things that should have been in focus in the first place. And not trying to recover detail from things that were (accidentally) out of focus.

I’ve seen references to approaches that don’t require a priori knowledge, but I believe these get very computationally expensive.

It seems like the easiest things to model (and perhaps Gaussian is an OK-enough approximation for them?) would be sensor OLPFs and diffraction, since these both affect in-focus items. They’re unfortunately also a lot harder to measure to confirm one’s assumptions! (Hank has some great techniques for measuring lens OOF PSFs in the paper above, which have the advantage of being one of the only methods I’ve seen for testing lens decentering that is easily separable from other defects such as tilt.)

The MTF Mapper post “Importance sampling: How to simulate diffraction and OLPF effects” may be useful?

Same here :slight_smile:. But as far as I understand it, we will never know the Point Spread Function of our lenses, so the algorithm has to perform a Blind Deconvolution to make a guess at what those PSFs are. As I see it, the radius of an Out Of Focus point is a good(?) starting point, refined by the iterations of the algorithm.

As a general idea, am I too far off?

Just a thought here: set up a laser source perpendicular to your sensor to generate a tiny bright spot. Then sample your entire lens. That should give you a pretty good idea of the PSF, right? Or am I missing something very obvious here?

:slightly_smiling_face: Well, maybe that’s a bit beyond a plain user, isn’t it?

Of course there are methods to get the PSF of each lens–sensor combination we may use, and then introduce it somehow into an RL tool, but I don’t think many users would follow that path. That’s what I was referring to.

You can measure the PSF for your lens with the method described in the paper I linked - but that only gets you the PSF for one value of ObjectDistance on slide 29.

Your actual PSF will vary within the frame depending on ObjectDistance. I’m assuming const1 and const2 can be derived from the current focus position of the lens - but you still don’t know ObjectDistance.

There are ways to attempt to recover ObjectDistance based on knowledge of the OOF PSF behavior (Panasonic’s DFD is one such example, although some papers I’ve seen strongly imply that this requires at least two frames taken at two different focus distance settings), but whether it can be done well enough to deconvolve fully without all sorts of other artifacts is questionable.

Mobile phones go in the opposite direction - estimating ObjectDistance from PDAF sensel information, and using this to convolve with a known OOF PSF to simulate the bokeh of a much wider-aperture lens.

Totally unrelated to object distance, let me explain the current kernels used for deconvolution in capture sharpening. I will restrict the explanation to the radius (sigma) range [0.6;1.0].

For the range [0.6;0.84] a 5x5 kernel is used, but not the whole 5x5 kernel, only these points, of course weighted correctly to give a sum of 1:

k means the weight is > 0

  k k k
k k k k k
k k k k k
k k k k k
  k k k

For the range ]0.84;1.0] a 7x7 kernel is used, but not the whole 7x7 kernel, only these points, of course weighted correctly to give a sum of 1:

k means the weight is > 0

    k k k
  k k k k k
k k k k k k k
k k k k k k k
k k k k k k k
  k k k k k
    k k k 

Now, in the current implementation the 7x7 kernel is only 5% slower than the 5x5 kernel, which means we could always use the 7x7 kernel, and that in turn means we could allow symmetric 7x7 kernels which are not Gaussian without getting a slowdown. For non-symmetric 7x7 kernels there would be a slowdown, but performance would still be usable.

Edit: You may ask why asymmetric 7x7 kernels would be slower. A symmetric 7x7 kernel as described above has only 8 distinct weights, which easily fit into the 16 SSE registers of current x86_64 processors (x86_32 has only 8 of these registers, btw). An asymmetric 7x7 kernel as described above can have 37 distinct weights, which cannot all be preloaded into SSE registers even on an AVX-512 machine with 32 such registers.
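To make the footprints and the weight counting concrete, here is a minimal sketch of building such a truncated, normalized Gaussian 7x7 kernel and counting its distinct weights. This is my own illustration, not RawTherapee’s code, and the |x|+|y| > 4 corner cut-off is just my reading of the diagram above:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <set>
#include <vector>

// Illustrative sketch: a 7x7 Gaussian kernel with the corner taps dropped
// (the footprint shown above), normalized so the remaining weights sum to 1.
std::vector<std::vector<double>> truncatedGaussian7x7(double sigma)
{
    std::vector<std::vector<double>> k(7, std::vector<double>(7, 0.0));
    double sum = 0.0;
    for (int y = -3; y <= 3; ++y) {
        for (int x = -3; x <= 3; ++x) {
            if (std::abs(x) + std::abs(y) > 4) {
                continue; // drop the three taps in each corner
            }
            const double w = std::exp(-(x * x + y * y) / (2.0 * sigma * sigma));
            k[y + 3][x + 3] = w;
            sum += w;
        }
    }
    for (auto &row : k) {
        for (auto &w : row) {
            w /= sum; // normalize to a sum of 1
        }
    }
    return k;
}

int main()
{
    const auto k = truncatedGaussian7x7(0.9);
    std::set<double> distinct;
    int taps = 0;
    for (const auto &row : k) {
        for (const double w : row) {
            if (w > 0.0) {
                distinct.insert(w);
                ++taps;
            }
        }
    }
    // For this symmetric Gaussian footprint: 37 taps, 8 distinct weights.
    std::printf("%d taps, %zu distinct weights\n", taps, distinct.size());
}
```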


Interesting. If I could shave off time from my scripts, it would be awesome. G’MIC scripting, as you may know, is orders of magnitude slower than compiled C++ code.

I know :wink:

Just testing code takes 1e10 times the time, given that my computer is also super low end. :blush:

Hold my beer and watch me :stuck_out_tongue:

Halos mean you are reversing gradients along edges. Guided filters are supposed not to have this side effect.

If you want to deblur the lens, you need to know its point spread function (aka PSF, aka the kernel used to blur, which is not Gaussian as RL assumes), so you either have it profiled by measurement or find it by optimization using a blind deconvolution. This is goddam difficult because it is not homogeneous across the frame, so you actually have different PSFs across the frame, and the local PSFs are not the same for the 3 RGB layers either, because refraction depends on wavelength. @hanatos showed me a paper that did that on image subsets (reference needed).

TL;DR: RL deconvolution with a Gaussian blur is mostly sharpening, not really reverting lens blur (because you don’t use the actual lens blur model and you don’t do it in RGB).
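Just to make the TL;DR tangible, here is a tiny 1D sketch of what “RL deconvolution with a Gaussian blur” boils down to. It is purely illustrative (not RT’s code, not a real pipeline), with a made-up blurred step edge as input:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Naive 1D convolution with edge clipping; with a symmetric kernel this is
// also the correlation needed for the RL correction step.
static std::vector<double> convolve(const std::vector<double> &s,
                                    const std::vector<double> &k)
{
    const int r = static_cast<int>(k.size()) / 2;
    std::vector<double> out(s.size(), 0.0);
    for (size_t i = 0; i < s.size(); ++i) {
        for (int j = -r; j <= r; ++j) {
            const int idx = static_cast<int>(i) + j;
            if (idx >= 0 && idx < static_cast<int>(s.size())) {
                out[i] += s[idx] * k[j + r];
            }
        }
    }
    return out;
}

int main()
{
    // Assumed Gaussian PSF (sigma = 0.8), normalized to sum 1. This is the
    // point of the TL;DR: the kernel is a guess, not the actual lens PSF.
    const double sigma = 0.8;
    std::vector<double> psf;
    double sum = 0.0;
    for (int j = -3; j <= 3; ++j) {
        psf.push_back(std::exp(-(j * j) / (2.0 * sigma * sigma)));
        sum += psf.back();
    }
    for (double &w : psf) { w /= sum; }

    // Observed signal d: a blurred step edge; start the estimate u from it.
    std::vector<double> d = {0.1, 0.1, 0.1, 0.2, 0.5, 0.8, 0.9, 0.9, 0.9};
    std::vector<double> u = d;

    for (int it = 0; it < 10; ++it) {
        const std::vector<double> blurred = convolve(u, psf);
        std::vector<double> ratio(d.size());
        for (size_t i = 0; i < d.size(); ++i) {
            ratio[i] = d[i] / std::max(blurred[i], 1e-12); // avoid div by 0
        }
        const std::vector<double> correction = convolve(ratio, psf);
        for (size_t i = 0; i < u.size(); ++i) {
            u[i] *= correction[i]; // multiplicative RL update
        }
    }

    // The step edge gets steeper (and may overshoot at the edge: halos).
    for (const double v : u) { std::printf("%.3f ", v); }
    std::printf("\n");
}
```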

Actually, it’s closer to a circle blur where the kernel coeffs are constant inside the radius.
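For illustration only (my sketch, not code from anywhere): a “circle blur” of that kind is just a constant weight inside the radius, normalized to sum 1.

```cpp
#include <vector>

// Illustrative disc ("circle blur") kernel: constant weight inside the
// radius, zero outside, normalized so the weights sum to 1.
std::vector<std::vector<double>> discKernel(double radius)
{
    const int r = static_cast<int>(radius);
    const int size = 2 * r + 1;
    std::vector<std::vector<double>> k(size, std::vector<double>(size, 0.0));
    int count = 0;
    for (int y = -r; y <= r; ++y) {
        for (int x = -r; x <= r; ++x) {
            if (x * x + y * y <= radius * radius) {
                k[y + r][x + r] = 1.0;
                ++count;
            }
        }
    }
    for (auto &row : k) {
        for (auto &w : row) {
            w /= count;
        }
    }
    return k;
}
```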

I will buy you a beer the next time we meet for sure :slight_smile:

I agree it is not perfect, but it does revert the lens blur to a degree that leads to an improvement compared to the original.