Quick question on RT Richardson–Lucy implementation

There is a small difference. I could not detect it using the Fuji GFX50R raw file at 100 ISO from the DPReview link, presumably because the file is already quite high resolution.

However, I get the following results using the D7500 file at 800 ISO.

It’s fairly clear the haloing is worse in the first image of each pair. Unsurprisingly, these are the ones following the default RT pipeline. Reversing the order and doing the deconvolution in linear space gives the second image of each pair, with reduced haloing. The tone curve I used is the one from here: Exposure - RawPedia. (I didn’t use auto-curve because the curve would differ between images.)

Relatedly, I would like to raise the idea of decoupling the unsharp mask and deconvolution options. Right now, unless I am missing something, only one can be active at a time. But they do very different things, increasing resolution versus increasing acutance, and it would be very reasonable to want to apply both. Further, I suggest they belong at different stages of the pipeline. USM should stay at the end, for the usual reasons. However, deconvolution is essentially “capture sharpening”: it compensates for lens diffraction, camera shake, etc., and should be thought of as belonging in the same bucket as the lens corrections (distortion, CA, etc.) which come early in the pipeline. You could conceivably put it right after demosaicing with good results, I think.
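To make the comparison above concrete, here is a minimal 1-D numpy sketch of the two orderings: Richardson–Lucy with a Gaussian PSF, and a plain gamma lift standing in for the tone curve. The kernel size, sigma, iteration count, and curve are my own toy choices, not RT's actual code:

```python
import numpy as np

def gauss_kernel(sigma=1.5, radius=8):
    """Normalized 1-D Gaussian point-spread function."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(signal, kernel):
    """Convolve with edge padding so the output keeps its length."""
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

def richardson_lucy(blurred, kernel, iterations=30):
    """Basic RL deconvolution; the Gaussian PSF is symmetric, so the
    correlation step is just another blur with the same kernel."""
    estimate = np.full_like(blurred, 0.5)
    for _ in range(iterations):
        ratio = blurred / np.maximum(blur(estimate, kernel), 1e-7)
        estimate *= blur(ratio, kernel)
    return estimate

def tone_curve(x, gamma=2.2):
    """Toy stand-in for the tone curve: a plain gamma lift."""
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

kernel = gauss_kernel()
scene = np.where(np.arange(64) >= 32, 0.8, 0.0)  # sharp edge
raw = blur(scene, kernel)                        # linear, as off the sensor

proposed = tone_curve(richardson_lucy(raw, kernel))  # RL first, curve after
default = richardson_lucy(tone_curve(raw), kernel)   # curve first (RT default)
```

The edge profiles of `proposed` and `default` are what the image pairs above compare.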


The same trick (linearizing before applying USM, then delinearizing) reduces haloing with the unsharp mask too, by the way. So there’s another easy way to improve the sharpening. The examples below were oversharpened to emphasize the effect.

The theoretical explanation is that USM is a linear operation, and so works best in a linear space.
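In code terms the trick is just a round trip around the USM step. A toy 1-D numpy sketch, assuming a pure 2.2 gamma as a stand-in for the real encoding (RT's actual transform differs):

```python
import numpy as np

def gauss_kernel(sigma=2.0, radius=8):
    """Normalized 1-D Gaussian used as the USM low-pass."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(signal, kernel):
    """Convolve with edge padding so the output keeps its length."""
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

def usm(signal, kernel, amount=1.0):
    """Classic unsharp mask: add back the high-pass residual."""
    return signal + amount * (signal - blur(signal, kernel))

def usm_in_linear(encoded, kernel, gamma=2.2, amount=1.0):
    """Linearize (undo the gamma), sharpen, then re-apply the gamma."""
    linear = np.clip(encoded, 0.0, 1.0) ** gamma
    sharpened = np.clip(usm(linear, kernel, amount), 0.0, 1.0)
    return sharpened ** (1.0 / gamma)
```

The point is only the order of operations; the USM step itself is unchanged.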

I have to think about how to integrate your method into the RT pipeline. It clearly has advantages, as you can see in this example (left: your method; right: standard method)…

Edit: my example was intentionally oversharpened.


I look forward to seeing what you come up with!

It may also be worth thinking about what other algorithms (e.g. wavelets) would benefit from being done in a linear color space. My guess is most of them, because of how the math works out in their derivations.

You can already do that, though the second sharpening is blind:
[screenshot]


I have a quick question myself. If resize comes later in the pipeline, with post-resize sharpening after that, how do you make the data linear again after nonlinear manipulations? How do RT, dt, PhF, etc., each tackle this?

Not thinking too hard, I can see two options: (a) do all nonlinear operations at the end, or (b) keep all operations in linear space, with only the preview and the final output being nonlinear.


I do this in rawproc, though I didn’t specifically set out to do it. rawproc has a processing chain, and it allows selecting any step in the chain to pipe to the display through the display color/tone transform. So I can stack a bunch of operators on the base raw image and select the last one for display. All the operators then just work, in turn, on the radiometrically linear original data.

But with this great power comes great responsibility… :smile: - the user is responsible for stacking the processing tools in a beneficial order, and there are a lot of orders that are decidedly less-than-beneficial.

Oh, back to the RL intent of the thread: I’d surmise that since 1) sharpening is generically an edge-contrast manipulation, and 2) gamma and other tone transforms definitely mess with contrast, 3) doing any sort of sharpening before the tone transforms will produce outcomes more in line with the sharpening tool’s intent, with fewer egregious outcomes like haloing. This thinking is making me consider moving my output resize/sharpen steps before any nonlinear transforms like filmic…

Imho this thread is about the right place in the pipeline for R-L deconvolution, not about linear versus non-linear processing in general.

I agree with @nik:

Though post-resize R-L deconvolution also has its use cases, and we don’t want to break compatibility with older pp3 files. That means I will add the possibility to apply R-L deconvolution after demosaic as well. It’s really an improvement.

Here is another example (look at the garden hose). Left is the old behaviour (tone curve before RL); right is RL before the tone curve (same RL settings for both).


Sorry; that’s why I created a new thread right after.

Excellent.


@afre No need to apologize :slight_smile:

There is a fundamental difference between the two approaches:

  1. apply RL in a linear color space

  2. apply RL at the right step in the pipeline

Let me explain:

Applying RL at the end of the pipeline (whatever the color space) means RL operates on the result of the transformations done earlier, which in my example include a curve like this (the auto-matched tone curve):

[screenshot: the auto-matched tone curve]

Applying RL before applying the same tone curve results in less haloing. Linear versus non-linear processing is not involved here; it’s just the place in the pipeline…


That was what I was saying earlier: later in the pipeline, the data is no longer linear. Could you clarify what you mean by ↓? I think I know but could you elaborate for us?

@afre Sure. I mean that every processing step (linear or non-linear) that happens before applying RL changes the input to RL.

You’re right, I was definitely speaking inaccurately above. RT doesn’t do “gamma correction” per se; the tone curve is the gamma correction. Thanks for your help!


Nevertheless, applying RL at an earlier step in pipe seems better. Thanks for bringing that to my attention :+1:


Well, wait a second. I may have spoken a little too quickly. Following the original paper, I want to do deconvolution on the linear data captured by the sensor.

So, I think right after the demosaicing you have RGB data that’s basically linear. (Someone please correct me if I’m wrong.) If we were to deconvolve on one of the RGB channels now, then I agree everything would work out mathematically.

However, RT deconvolves on the L channel. According to some data I found, the L channel is not linear in the number of photons captured. So it doesn’t fit the Poisson noise model used in the Richardson paper.

In particular, reflectance percentages of (12.5, 25, 50, 100) correspond to L values of (42.4, 57.0, 75.8, 100). Definitely not linear. I’m not sure how to work around this, or whether it is worth the effort. I’ll think a little and let you know.
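For reference, those L values are close to what the CIE L* cube-root formula gives (the small differences suggest the source I found used a slightly different formula or rounding), which makes the nonlinearity easy to check:

```python
def lstar(Y):
    """CIE L* from relative linear luminance Y in [0, 1]."""
    delta = 6.0 / 29.0
    if Y > delta ** 3:
        f = Y ** (1.0 / 3.0)                      # cube root, normal range
    else:
        f = Y / (3.0 * delta ** 2) + 4.0 / 29.0   # linear toe near black
    return 116.0 * f - 16.0

for reflectance in (0.125, 0.25, 0.5, 1.0):
    print(f"Y = {reflectance:5.3f} -> L* = {lstar(reflectance):6.2f}")
```

Doubling the luminance clearly does not double L*, so a Poisson model stated in photon counts cannot hold on this axis.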

You can definitely test this if you know some C++ (for RT) or gmic scripting.

I believe, though I am not totally sure, that the conversion is going to depend on the illuminant used. That’s a bigger worry to me (with regard to implementing something that could eventually be used in RT) than doing the math/coding.

Bart on the Luminous Landscape forums says the gamma for L is essentially 2.2. More info here.

@heckflosse Would it be possible to check whether accounting for the gamma transform on the L channel further improves the deconvolution? So: convert to L, transform using gamma 0.455, deconvolve as usual, then transform using gamma 2.2 again? I think this may help just as much as, or even more than, putting it earlier in the pipeline does.
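To be explicit about what I’m proposing, here is a toy 1-D numpy sketch (my own PSF and parameters, purely illustrative, not RT’s implementation): treat L as roughly gamma-2.2 encoded, linearize it, run RL there, then re-encode:

```python
import numpy as np

def gauss_kernel(sigma=1.5, radius=8):
    """Normalized 1-D Gaussian point-spread function."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(signal, kernel):
    """Convolve with edge padding so the output keeps its length."""
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

def richardson_lucy(blurred, kernel, iterations=30):
    """Basic RL deconvolution; the symmetric kernel makes the
    correlation step another blur with the same kernel."""
    estimate = np.full_like(blurred, 0.5)
    for _ in range(iterations):
        ratio = blurred / np.maximum(blur(estimate, kernel), 1e-7)
        estimate *= blur(ratio, kernel)
    return estimate

def deconvolve_L(L, kernel, gamma=2.2):
    """Treat L (0..100) as ~gamma-2.2 encoded: linearize, RL, re-encode."""
    linear = np.clip(L / 100.0, 0.0, 1.0) ** gamma        # linearize
    restored = np.clip(richardson_lucy(linear, kernel), 0.0, 1.0)
    return 100.0 * restored ** (1.0 / gamma)              # re-encode
```

The deconvolution itself is untouched; only the space it runs in changes.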

Here are some examples from the LL forum post I linked. The settings are not quite the same, but this is the best I could find without actually coding it myself. The algorithm used is from 2014 but I don’t think any subsequent improvements have addressed the gamma issue.

nonlinear

linear

I’m currently testing some changes. I will report back later…
