Quick question on RT Richardson–Lucy implementation

You’re right, I was definitely speaking inaccurately above. RT doesn’t do “gamma correction” per se; the tone curve is the gamma correction. Thanks for your help!


Nevertheless, applying RL at an earlier step in the pipeline seems better. Thanks for bringing that to my attention :+1:


Well, wait a second. I may have spoken a little too quickly. Following the original paper, I want to do deconvolution on the linear data captured by the sensor.

So, I think right after the demosaicing you have RGB data that’s basically linear. (Someone please correct me if I’m wrong.) If we were to deconvolve on one of the RGB channels now, then I agree everything would work out mathematically.
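For readers following along, the core RL iteration from the 1972 paper is short; here is a rough numpy sketch (not RT's actual code), which assumes the input is linear, i.e. proportional to photon counts:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Plain Richardson-Lucy deconvolution. Assumes `observed` is linear
    (proportional to photon counts), as the Poisson model requires."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

On linear data, each iteration preserves total flux; feed it gamma-encoded values and that guarantee is gone, which is the whole point of this thread.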

However, RT deconvolves on the L channel. According to some data I found, the L channel is not linear in the number of photons captured. So it doesn’t fit the Poisson noise model used in the Richardson paper.

In particular, reflectance percentages of (12.5, 25, 50, 100) correspond to L values of (42.4, 57.0, 75.8, 100). Definitely not linear. I’m not sure how to work around this, or whether it is worth the effort. I’ll think a little and let you know.
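For reference, those numbers are essentially what the standard CIE L* formula gives (quick check below; the small deviations are presumably rounding or measured-patch effects):

```python
def cie_L(Y):
    """CIE 1976 lightness L* from relative luminance Y in [0, 1]."""
    return 116.0 * Y ** (1.0 / 3.0) - 16.0 if Y > 0.008856 else 903.3 * Y

for Y in (0.125, 0.25, 0.5, 1.0):
    print(f"Y = {Y:5.3f}  ->  L* = {cie_L(Y):5.1f}")
```

Note it's a cube root plus an offset, not a pure power law, so any single "gamma" for L* is only an approximation.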

You can definitely test this if you know some C++ (for RT) or gmic scripting.

I believe, though I'm not totally sure, that the conversion is going to depend on the illuminant used. That's a bigger worry to me (regarding an implementation that could eventually be used in RT) than doing the math/coding.

Bart on the Luminous Landscape forums says the gamma for L is essentially 2.2. More info here.

@heckflosse Would it be possible to check whether accounting for the gamma transform on the L channel further improves the deconvolution? So convert to L, approximately linearise it by applying gamma 2.2 (raising the normalised values to the power 2.2), deconvolve as usual, then re-encode with gamma 1/2.2 ≈ 0.455? I think this may help as much as, or even more than, putting it earlier in the pipeline does.
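To make the exponent direction unambiguous (gamma conventions differ between tools), here is a hypothetical sketch of that wrapper; the function name and the `deconvolve` callback are placeholders, not actual RT code:

```python
import numpy as np

GAMMA = 2.2  # approximate effective gamma of the L channel

def deconvolve_L_linearised(L, deconvolve):
    """Hypothetical wrapper: approximately linearise L, deconvolve on the
    (roughly) linear data, then re-encode. `deconvolve` is any RL routine
    that expects linear input."""
    L_norm = np.asarray(L, dtype=float) / 100.0   # L* lives in [0, 100]
    linear = L_norm ** GAMMA                      # decode: ~linear luminance
    restored = deconvolve(linear)
    return 100.0 * np.clip(restored, 0.0, None) ** (1.0 / GAMMA)
```

The L* curve is only approximately a 2.2 gamma, so this is a cheap approximation; inverting the exact L* transfer function would be slightly more faithful.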

Here are some examples from the LL forum post I linked. The settings are not quite the same, but this is the best I could find without actually coding it myself. The algorithm used is from 2014 but I don’t think any subsequent improvements have addressed the gamma issue.



I’m currently testing some changes. I will report back later…


I asked on StackExchange and got a good suggestion here: image processing - What Is the Correct Way to Apply Richardson Lucy Deconvolution to Luminance Data? - Signal Processing Stack Exchange.

That’s funny because I just tried to use RL on Y channel :slight_smile:

Here’s a first screenshot:
Bottom right: default RT RL deconvolution
Top right: As Bottom right, but RL applied on Y instead of L
Bottom left: As Bottom right, but RL applied before the tone curve
Top left: As Top right, but RL applied before the tone curve

If there is interest, I can upload the raw and also the tiffs to a filebin


Imho the top left of my latest screenshot has far fewer halos than the other ones. Though it would be nice to get this verified by other people…


Doing it blind, I strongly preferred the top row to the bottom, but top left and top right seemed comparable.

Looking again after your post, it's clear the object behind the hose is rendered slightly better in the top left. Also, there's less haloing on the left edge of the door.



Also agreed!


Using Y instead of L for RL deconvolution is a simple change (about 20 lines of code), though we need a GUI element to switch between the two methods to ensure backwards compatibility, which means some more lines of code.
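For anyone who wants to experiment outside RT in the meantime, the idea can be sketched like this (Rec. 709 luma weights used for illustration; RT's working-space coefficients differ, and `deconvolve` stands in for the existing RL routine):

```python
import numpy as np

# Rec. 709 luminance weights -- illustrative only; RT's working space
# uses its own coefficients.
W = np.array([0.2126, 0.7152, 0.0722])

def sharpen_on_Y(rgb, deconvolve):
    """Hypothetical sketch: deconvolve the (linear) luminance Y and
    rescale RGB by Y'/Y so chromaticity stays untouched."""
    Y = rgb @ W
    Y_sharp = deconvolve(Y)
    ratio = Y_sharp / np.maximum(Y, 1e-12)
    return rgb * ratio[..., None]
```

Scaling all three channels by the same Y ratio is what keeps colours from shifting when only luminance is sharpened.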

Applying RL immediately after demosaic is a bit more work, but I’m willing to put some effort into that as well.


I have little internet data left, so I cannot see the samples. What I came here to say is that about a year ago I did some experiments on filtering using Y at various encodings, and it seemed to give better results than L* and norms[1]. I haven't found time to test and update my own gmic filters, but it looks like you have confirmed that it works, at least for RL.

PS [1] I typically use norms but perceptually Ys can perform better. It depends on your goals.


One may ask now: why the hell did RT use RL deconvolution on Lab L at all? Well, that's because it was implemented this way in 2013 (maybe even before), no one changed it, and no one thought about it.

Thanks to @nik who brought that to attention, we may get better RL sharpening in RT 5.8 :+1:


It’s 2019 folks.

I keep repeating myself, but convolutions[1] (and so deconvolutions too) are operations that need to conserve the energy of the signal. As such, they need to operate in Hilbert spaces of square-integrable functions, in order to respect Parseval's identity.

That theorem is 215 years old, it’s about time image processing begins to care about it. In image processing, such spaces of square integrable functions are scene-referred because the energy to preserve is the one of the light spectrum. From your camera sensor to your LED screen, it’s light in/light out, so light emissions are all you need to care about.

Lens blur is modeled mathematically by a convolution, and it does not blur gamma-encoded garbage but light emissions, aka photons characterized by their energy. Light is represented in digital imagery by scene-referred/scene-linear encoded RGB sensor readings. Deconvolving the lens blur in any space that has a non-linear transfer function squeezed into it is foolish, theoretically wrong, and visually unpleasant.
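The non-commutation is trivial to demonstrate numerically: blurring then gamma-encoding does not give the same image as gamma-encoding then blurring (toy 1-D sketch, with a pure 2.2 power standing in for a real transfer function):

```python
import numpy as np

gamma_encode = lambda x: x ** (1.0 / 2.2)        # display encoding
kernel = np.array([0.25, 0.5, 0.25])             # simple 1-D blur
edge = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9])  # linear step edge

# physically correct: blur the light, then encode for display
blur_then_encode = gamma_encode(np.convolve(edge, kernel, mode="same"))
# what happens when you convolve gamma-encoded data instead
encode_then_blur = np.convolve(gamma_encode(edge), kernel, mode="same")

print(np.max(np.abs(blur_then_encode - encode_then_blur)))
```

The two results disagree by several percent of full scale around the transition, which is exactly where sharpening halos live.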

Dear opensource devs, please get your maths right before opening your IDE, because image processing is just about that, so you get no choice. That stuff is part of the first 3 years of the curriculum in any applied-sciences university program. It's not new. There are way too many engineers and scientists involved in opensource image software to excuse any corner-cutting with theory. Apply the theory, then get proper results. Don't, and expect problems. [2]

[1] Convolutions are a large family of filters using pixels and their neighbours, including:

  • blurring,
  • deblurring,
  • local averages,
  • guided & bilateral filters
  • highpass and lowpass filters,
  • wavelets/Fourier spectrum decompositions,
  • rotations,
  • interpolations,
  • conformal distortions,
  • etc.

[2] And when you do get problems, don't add threshold and opacity tricks in your broken code to hide halos, fringes and funny stuff under the carpet. Go fix your models instead, or you will waste everybody's time in the community.


Hi Aurélien. While I largely agree, I’ve found references suggesting that certain wavelet transforms are better done in a perceptually uniform space such as LAB, which is not linear.

For example:

JPEG and other lossy image compression algorithms depend on discarding information that won't be perceived. It is vital that the data presented to a JPEG compressor be coded in a perceptually-uniform manner, so that the information discarded has minimal perceptual impact. Also, although standardized as an image compression algorithm, JPEG is so popular that it is now effectively an image interchange standard. Standardization of the transfer function is necessary in order for JPEG to meet its users' expectations.

Do you agree? The argument may extend to e.g. guided filters for edge detection; I’m not sure.

It mostly has to do with backwards compatibility. I tend to go the gmic route, which provides a high degree of freedom, but I have to do everything by hand, so I am mostly lax about getting it right all the way through, at least for the PlayRaws.

It depends on what you want to achieve. See: New shadows/highlights tool with enhanced halo control. It is only a part of what @Carmelo_DrRaw has been working on. He has a way to preserve the ratios at the end, though one might disagree with the method.

:slight_smile: I agree with your suggestions, though changing all this old code won’t be done in one day…


These wavelets are meant to encode a file that will later be decoded, so overall they are a no-op. It's just a trick to compress pictures by removing high frequencies. It's a whole different topic from what I'm talking about (aka picture editing & pixel pushing).

Guided filters for edge detection will behave erratically outside of a linear space. It's yet another case of things that could work fairly well in gamma/display-referred space, until someone finds a case where it fails badly. Scene-linear is just way more robust and simpler. Just stick to physics, and everything should be fine. That's all we know anyway.

I have been working on a similar feature in parallel since last Christmas, also with iterative guided filters, but with a different approach. My code only uses linear operators, and needs neither thresholds to avoid halos nor ratio preservation, since it's basically an exposure compensation.