Quick question on RT Richardson–Lucy implementation

I agree that capture sharpening does indeed sharpen the image, and that it is possible to overdo the sliders.
For the pp3 I posted, I just raised it to an arbitrarily high value. I guess I am just used to the regular RL tool, where moving a slider a small amount produces big changes in the output. With capture sharpening, big slider changes are needed for me to perceive any difference.

I just pushed some changes.

I like the results very much.
I used to use Unsharp Mask sharpening followed by RL sharpening when downsizing. Now I replace the USM with capture sharpening at default settings, and I get slightly better detail.
Good job!

@heckflosse Maybe it’s already working like this, but since capture sharpening takes place on demosaiced data, shouldn’t it also be possible to let this tool work on non-raw images?

That should be possible, but does it make sense? We know nothing about the things already done to non-raw images…

Something that has been on my mind for a couple of weeks is an R-L deconvolution using a guided-filter as surface blur (e.g. a guided filter where guiding mask == guided image), possibly using a multi-resolution pyramid, and, why not, a total variation regularization.

That would avoid sharpening across edges (i.e. no halos) and avoid amplifying noise altogether.

Still no time to test that.
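
Roughly, as an untested sketch (leaving out the multi-resolution pyramid and the TV term), the self-guided filter that would stand in for the blur step inside the RL iterations could look like this:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def self_guided_blur(img, radius=4, eps=1e-3):
    # Guided filter with guide == input: an edge-preserving "surface blur".
    size = 2 * radius + 1
    mean = uniform_filter(img, size)
    var = uniform_filter(img * img, size) - mean * mean
    a = var / (var + eps)   # ~1 at strong edges (keep), ~0 in flat areas (smooth)
    b = (1.0 - a) * mean
    # Average the per-window coefficients, then apply them to the image.
    return uniform_filter(a, size) * img + uniform_filter(b, size)
```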

Fair enough, but do we then need to reassess the way we sharpen non-raw images using RL, since that formally has to happen on ‘linear’ data somehow?

It matters where in the workflow you do RL and in which space. I have always considered RL as deblurring rather than sharpening, so I would do it very early if I decide to use it, regardless of whether it is a raw file or not. As for USM, I don’t use it at all, nor do I understand why it is lumped together with RL.

I have had similar thoughts brewing for a few years. Would be good to see real life results.

All you have to do is ensure non-raw images have their gamma decoded at the beginning of the pipe, and re-encode them at the end.
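
As a sketch of that, assuming the non-raw input uses the sRGB transfer curve (real files may carry other transfer curves or ICC profiles, which the actual pipe would have to honor):

```python
import numpy as np

def srgb_decode(v):
    # sRGB EOTF: map encoded values in [0, 1] back to linear light.
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    # Inverse transform: re-apply the sRGB curve after linear processing.
    v = np.clip(v, 0.0, None)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)
```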

Not only for RL: for the unsharp mask, too, we could allow using Y instead of Lab L.
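
For illustration, a minimal USM on a single channel (a hypothetical helper, not RT’s code); whether you feed it Y or Lab L is exactly the choice in question:

```python
from scipy.ndimage import gaussian_filter

def unsharp_mask(channel, sigma=1.0, amount=0.5):
    # Classic USM: add back a scaled high-pass (original minus blurred).
    return channel + amount * (channel - gaussian_filter(channel, sigma))
```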

Because it was this way from the beginning and no one thought about it.

I also aim to get the best halo-free (or at least halo-reduced) result out of RL deconvolution. For that reason I created the capture sharpening branch, which already reduces (but does not eliminate) halos; even when gamma is set to 1.0 there are still some.

I would be very happy to have a completely halo-free RL capture sharpening, but I don’t think that’s possible…

In general, fewer halos almost always mean more softness, so while it might still be considered RL, it might not necessarily be what we would call sharpening. In any case, I have never been satisfied with the previous RL, to the extent that I rarely use it. Capture sharpening is a big step forward, enough for me to consider it an option in my processing.

I think @afre made an important comment regarding deblurring vs. sharpening. My impression is that most of the halos appear when you’re actually pushing the sliders beyond what is physically necessary. What I mean is that capture sharpening is a deblurring (deconvolution) effect to counteract the blurring due to the lens. A light point source captured through a lens onto the sensor gives an approximately Gaussian blur. Deconvolution should result in a single bright pixel and no blur.
If you set a wrong radius you’re actually underestimating or overestimating the deblurring. So you either have some blur left, or go over the limit and introduce halos. Imho there is only one correct setting here…
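
To make that concrete, here is a bare-bones sketch of the RL iteration with a Gaussian PSF (an illustration, not RT’s actual implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rl_deconvolve(observed, sigma, iterations=30, eps=1e-7):
    # Richardson–Lucy with a Gaussian PSF. A Gaussian is symmetric, so the
    # mirrored PSF equals the PSF and both convolutions use the same blur.
    estimate = observed.astype(float)
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)
        estimate = estimate * gaussian_filter(observed / (blurred + eps), sigma)
    return estimate
```

Blurring a synthetic point source with, say, sigma 1.2 and then deconvolving with sigma 0.8, 1.2, and 1.6 shows exactly the three cases: residual blur, a near-perfect point, and halos.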

I agree. Btw: maybe in the new tool we should rename the radius to sigma, because that’s what it is…

From a non programmer point of view, radius is a bit more user-friendly.

And even if you set the right radius (and that is a must), the white haloing might disappear, but the dark haloing is still quite strong and too noticeable.

Yes, that’s still an issue :frowning:

Do you mean one that is in focus, affected only by diffraction?

Because the OOF PSF of nearly all lenses (the out-of-focus bokeh) is usually circular. The Sony STF lenses and mirror lenses are among the only exceptions, and their OOF PSF is still not Gaussian. (And obviously the radius of that point spread depends on how far out of focus you are…) http://aggregate.org/DIT/ei20140204.pdf (Edit: in fact, Hank has simulated Gaussian OOF PSFs using aperture bracketing, similar to how some of Minolta’s older DSLRs from pre-SLT days did it, and found that a Gaussian OOF PSF was usually visually disturbing to him.)

I’m assuming that pixels which have been convolved by an OOF PSF are not your target, as opposed to diffraction or things that can be traced back to an OLPF?

I tried reading the original Richardson paper to understand what RT is doing and see whether the two match. Still a little mysterious :upside_down_face:
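
For reference, the update rule from the Richardson (1972) / Lucy (1974) papers is

$$\hat{u}^{(t+1)} = \hat{u}^{(t)} \cdot \left( \frac{d}{\hat{u}^{(t)} * P} * P^{*} \right)$$

where $d$ is the observed image, $P$ the PSF, $P^{*}$ the mirrored PSF, and $*$ denotes convolution. The update is multiplicative, which keeps the estimate non-negative; for a symmetric PSF such as a Gaussian, $P^{*} = P$.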

I’m no expert, so what you say is probably more true than what I said :slight_smile: Gaussian, circular, … maybe we need a fancy algorithm that doesn’t assume any a priori knowledge of the PSF.

But I think you’re right that this tool is primarily aimed at ‘sharpening’ things that should have been in focus in the first place. And not trying to recover detail from things that were (accidentally) out of focus.

I’ve seen references to approaches that don’t require a priori knowledge of the PSF (blind deconvolution), but I believe these get very computationally expensive.

It seems like the easiest things to model (and perhaps a Gaussian is a good-enough approximation for them?) would be sensor OLPFs and diffraction, since both affect in-focus items. Unfortunately, they’re also a lot harder to measure to confirm one’s assumptions! (Hank has some great techniques for measuring lens OOF PSFs in the paper above, which have the advantage of being one of the only methods I’ve seen for testing lens decentering that is easily separable from other defects such as tilt.)

The MTF Mapper article “Importance sampling: How to simulate diffraction and OLPF effects” may be useful?