# Quick question on RT Richardson–Lucy implementation

Yes, thanks for that! I have only been able to follow it loosely so far. Will examine it further once real life is less dire. Since I am not a code, math or science wiz, it has taken me a while to process the information. Recently, I have been able to build the guided filter and extend it. Quite the feat for a newbie like me. A good exercise, with lessons that I can apply elsewhere… hopefully.


I realize this, but I don’t think it actually addresses my question. With Richardson–Lucy deconvolution, they assume a Poisson noise model, and this only makes sense physically (as photon noise) in a linear space. It’s been a while since I’ve looked at wavelets, but I don’t recall any such assumption—or any other assumption that holds only in a linear color space—in their derivation. It’s just “input signal, output wavelet coefficients”: no assumption is made on the signal’s structure (besides being in L^2).
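To make concrete where the Poisson assumption enters, here is a minimal numpy sketch of Richardson–Lucy (an illustration only, not RT's C++ implementation). The multiplicative update is the maximum-likelihood step for Poisson-distributed counts, which is physically meaningful (photon shot noise) only when the data are linear:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=20, eps=1e-12):
    """Plain Richardson-Lucy deconvolution. The multiplicative update is
    the ML step under a Poisson noise model, so it only makes physical
    sense on linear (photon-count-proportional) data."""
    estimate = np.clip(observed, eps, None)  # initial guess, strictly positive
    psf_flipped = psf[::-1, ::-1]            # correlation = convolution with flipped PSF
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

Run it on gamma-encoded data and the update still executes, but the noise model it optimizes no longer corresponds to anything physical, which is the distinction being drawn here.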

The argument in my link is that with JPEG compression, the wavelet expansion and truncation of higher coefficients produces more pleasing results in a perceptually uniform space, since minimizing L^2 distance in that space minimizes perceptual distance (more or less), which is what we’re after with good compression.

Accepting this argument for the moment, it seems like the same reasoning applies to, say, wavelet sharpening. We are interested in enhancing high frequency detail as perceived by the viewer, which suggests the use of a wavelet decomposition in a gamma-corrected space. I don’t especially care about features that are high-frequency only in a linear space if enhancing them doesn’t improve perceptible sharpness.
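For concreteness, here is a toy a-trous-style sharpening sketch (numpy; not RT's wavelet code). The decomposition itself makes no linearity assumption, so it applies equally to a linear or a gamma-encoded signal; the `1/2.2` encoding below is only an illustrative stand-in for a perceptual curve, not a claim about what RT does:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(img, levels=3, boost=1.5):
    """Sharpen by boosting detail planes of an a-trous-style decomposition.
    Each plane is the difference between successive Gaussian blurs; with
    boost=1 the residual plus the planes reconstructs the input exactly."""
    planes, current = [], img
    for i in range(levels):
        blurred = gaussian_filter(current, sigma=2.0 ** i)
        planes.append(current - blurred)
        current = blurred
    return current + sum(boost * p for p in planes)

def sharpen_gamma(linear_img, gamma=2.2, **kw):
    """Same sharpening, but on a gamma-encoded signal (1/2.2 as a crude
    stand-in for a perceptual curve -- an assumption for illustration).
    Clipping avoids NaN when overshoots go negative before decoding."""
    encoded = np.clip(linear_img, 0.0, None) ** (1.0 / gamma)
    return np.clip(wavelet_sharpen(encoded, **kw), 0.0, None) ** gamma
```

Which variant "should" be used is exactly the open question above: the math is indifferent, so the choice has to be argued on perceptual grounds.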

So, it’s not clear to me a priori which way is correct. The JPEG example suggests a nonlinear space might be the correct one. Can you point to any features of the wavelet algorithms RT uses that demand a linear signal?

Similar remarks apply to guided filtering. I care about enhancing edges as perceived by the viewer, not necessarily edges in a linear space. Why do you believe they will work erratically outside a linear space? Again, you may be empirically correct, but I don’t recall a telltale assumption like Poisson noise that mandates this.

Even though you have lumped the three in one discussion, I view RL, wavelets and guided filtering as apples and oranges. We have already established that RL does well in linear, so I will let that go for now.

To me, wavelets are their own domain, just as there is a Fourier domain and a gradient one, etc. As long as I get to recompose and return to the original domain, all is well. Of course, it might be simpler to start from linear, as transforming one too many times may introduce discontinuities or artifacts that would shock those of us who would care about those things.

Then there is guided filtering. It and similar filters aim to smooth flat areas while preserving edges. Let me put it this way: what would you consider an edge? In linear space, the edges would be concentrated in the brightest regions and the rest would be regularized; in most cases, that is not what we want. Hence why I linked @Carmelo_DrRaw’s work. There might be objections to the implementation details, but the idea that you need to find the edges remains.
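To show where the "what counts as an edge" question lives in the code, here is a minimal self-guided filter in the spirit of He et al.'s guided filter (a sketch, not @Carmelo_DrRaw’s or RT’s implementation). The edge/flat decision is the comparison of local variance against `eps`, and since variance depends on the encoding, the same `eps` classifies edges differently in linear versus gamma space:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Guided filter (self-guided when guide is src). Each output pixel is
    a local linear model a*guide + b; where local variance >> eps the filter
    keeps the edge (a ~ 1), where variance << eps it smooths (a ~ 0)."""
    size = 2 * radius + 1
    def box(x):
        return uniform_filter(x, size)  # box mean over the window
    mean_g, mean_s = box(guide), box(src)
    var_g = box(guide * guide) - mean_g * mean_g
    cov_gs = box(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)   # edge-preservation weight
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)
```

Because `eps` is compared against variance in whatever encoding you feed in, running this on linear data makes the same threshold far more permissive in highlights than in shadows, which is the point about where the "edges" end up.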


Here’s a first screenshot from R-L deconvolution on Y-channel immediately after demosaic. Left is without R-L, middle (old) is R-L on Lab L channel after exposure compensation and tone curve, right (new) is R-L on Y channel before exposure compensation and tone curve:

Here’s a filebin with the raw and the processed tif files:
https://filebin.net/dddrkxr8dg3bmuco

still wip…


Branch capture_sharpening has a functional UI now.
It’s in the Raw tab, as it works only on raw files atm:

Because capture sharpening is applied to the whole image (not only the part of the image you see in the preview), it takes some processing time, but it also allows previewing the contrast mask at zoom levels < 100%.

For the same reason, when using capture sharpening you also get a sharper preview at less than 100% zoom. Left: old sharpening, right: new sharpening, both at 50% zoom:


There was some progress recently:

1. speedup for the deconvolution
2. improved quality for deconvolution radius > 0.6
3. avoid hue shift introduced in first versions of capture sharpening

Just try it and give feedback


I tried the new branch yesterday, so far I’m liking it. I used to not like RL very much for my camera (Fuji X-T2), but the capture sharpening looks better to me. And it’s really cool to have the sharpened preview below 1:1!


Not sure about halos, but in my blind test I preferred the top right. That image seemed to have richer colors and more contrast, and was the most visually appealing.

The screenshot is outdated. I will have to make a new one…

I suggest actually posting the screenshots blind when asking which is better. After receiving the input you can let us know which is which.


OK, this time blind. One of the sharpened images uses RL deconvolution in the Details tab; the other uses capture sharpening right after demosaic with gamma 1. RL settings are the same for both sharpened images. In the middle, for reference, is the unsharpened image.

The left side is better: fewer gradient reversals and less clipping when zoomed in. When zoomed out, the right side looks sharper at first glance.

I like the left sharpened images; they look a little more natural, with fewer halos.

Same as @afre. So I guess capture RL sharpening is on the left side (fewer halos).
Lower perceived sharpness at lower zoom factors is to be expected, as the new version produces fewer halos.

I see more halos on the right side

Indeed, I meant left side. Post edited.


I think the right looks better, appears to have more detail.

Edit: upon zooming in, the right does have more haloing. Maybe that is why it looks better zoomed out? More defined edges?

Yes, I think we shouldn’t confuse resolution with halo-induced edge separation here. The latter can always be added by unsharp mask and tuned as desired.
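As a sketch of that point (illustrative numpy, not RT's code): unsharp masking adds a scaled high-pass back to the image, so the overshoot ("halo") width and strength are explicit, tunable parameters rather than a side effect baked in by the deconvolution step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.5, amount=0.7, threshold=0.0):
    """Classic unsharp mask: add back a scaled high-pass component.
    sigma controls halo width, amount controls halo strength, and an
    optional threshold suppresses boosting of low-amplitude detail."""
    high_pass = img - gaussian_filter(img, sigma)
    if threshold > 0:
        high_pass = np.where(np.abs(high_pass) >= threshold, high_pass, 0.0)
    return img + amount * high_pass
```

On a clean step edge this produces the familiar over/undershoot on either side, which is exactly the halo-induced edge separation in question.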

The right-side images are worse quality at the pixel level. For the first set, zoom in and compare the foliage in the upper left (move down at a 45-degree angle from the upper left corner until you hit the green “bump” of leaves). There’s more pixelation and artifacting in the leaves of the right-side image, and the dots where the sky peeks through are whiter in a distracting way. For the second set, zoom in and compare the right and top edges of the tower where they meet the sky. There are weird halos in the right-side image.


Left is Capture Sharpening, right is RL deconvolution in Details tab/Sharpening
