Sharpening old files really can make a difference

No, as I don’t use the unsharp mask.

For RL deconvolution the default radius of 0.75 is often a bit too high (though it fits perfectly for my Ais 2.8/24mm shots on the D200). It also depends on the aperture of the shot: if you shot past the diffraction limit, a larger radius can recover some detail without introducing artifacts.

But even when using only RL there is no general rule for the settings (though I only change radius and contrast threshold). There are files from sensors like the Nikon D200 where quite large radius settings work fine, which could lead one to assume it's because of the AA filter in the D200 and its relatively low sensor resolution (10 MP APS-C). On the other hand, almost the same settings also work fine on a modern camera, even when using pixel shift. For example, this screenshot shows a 100% crop from a Pentax K1 pixel-shift file. Left without sharpening, right with RL deconvolution:

It also matters what you want to sharpen. I always sharpen for fine details in 100% view. Others sharpen for edges and so on…

1 Like

So you never use unsharp mask?

I believe USM is more widely used than RL. Which one to use depends on the effect you desire. Also, different apps have different implementations, so always compare and contrast. Personally, I prefer RL. I suggest you read the following:

1 Like

I wouldn’t say ‘never’, as I sometimes have to use it when an issue is reported where the pp3 uses an unsharp mask. For my own files I never use it. But that absolutely does not mean that unsharp mask is bad. It’s just that I’ve been using RL for some years and prefer it.

As RL is much slower than USM, an alternative worth trying (compared to RL radius 0.75 / 30 iterations) is USM with radius 0.43 and amount 1000…
Much faster than RL, and very close in detail resolution
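Since RT's exact USM implementation may differ, here is a minimal sketch of the idea behind those settings. It assumes the amount is a percentage (so 1000 means a 10x high-pass boost) and that the radius maps to a Gaussian sigma — both are assumptions, not confirmed details of RT:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=0.43, amount=1000, threshold=0.0):
    """Minimal unsharp mask: add back a scaled high-pass.

    radius    -- Gaussian sigma in pixels (assumed to match RT's Radius slider)
    amount    -- strength in percent, as in RT (1000 -> 10x the high-pass)
    threshold -- zero out high-pass values below this (crude noise guard)
    """
    img = img.astype(np.float64)
    highpass = img - gaussian_filter(img, sigma=radius)
    highpass[np.abs(highpass) < threshold] = 0.0
    return img + (amount / 100.0) * highpass

# A hard vertical edge gets steeper (with over/undershoot) after sharpening.
step = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))
sharp = unsharp_mask(step, radius=1.0, amount=100)
```

The over/undershoot at edges is exactly the halo effect people tune the threshold and radius to control.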

1 Like

You made this thread some time ago.

Is it enough to use the method in this thread, or should I try to learn what was done in the other thread?

1 Like

The method in the thread you mentioned (My sharpening workflow) fits my sharpening workflow.
Other cameras/lenses may need a different workflow, though the basics (contrast threshold…) should not differ much

According to the Wikipedia article on USM: "For deconvolution to be effective, all variables in the image scene and capturing device need to be modeled, including aperture, focal length, distance to subject, lens, and media refractive indices and geometries."

Which values does RawTherapee use / need? Some of my photos are with manual lenses and can lack aperture data, and with manual zooms can even lack focal length data.

None; RT's RL deconvolution just assumes a Gaussian blur.
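In other words, the only "model" is a single Gaussian, so no lens or aperture metadata is needed — a normalized PSF can be built from the radius alone. A sketch (whether RT's Radius slider corresponds exactly to the Gaussian sigma here is an assumption):

```python
import numpy as np

def gaussian_psf(sigma, size=None):
    """Explicit 2-D Gaussian PSF, normalized to sum to 1.

    This is the entire blur model: one sigma, no lens/aperture/distance data.
    """
    if size is None:
        size = int(2 * np.ceil(3 * sigma) + 1)  # cover roughly +/- 3 sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf(0.75)  # the default RL radius from earlier in the thread
```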

3 Likes

Read more about “point spread function”.

The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon Lucy, who described it independently.[1][2]
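The iteration the quote describes is short enough to sketch. This assumes a Gaussian PSF (applied via `scipy.ndimage.gaussian_filter`) and is an illustration only — RT's actual implementation adds refinements such as damping and the contrast threshold:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(observed, sigma=0.75, iterations=30):
    """Richardson-Lucy deconvolution assuming a Gaussian PSF.

    Each iteration: blur the estimate, compare against the observed image,
    and multiply the estimate by the blurred ratio. A Gaussian PSF is
    symmetric, so applying the flipped PSF is again a Gaussian blur.
    """
    observed = observed.astype(np.float64)
    estimate = observed.copy()
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)            # H x
        ratio = observed / np.clip(blurred, 1e-12, None)      # y / (H x)
        estimate *= gaussian_filter(ratio, sigma)             # x *= H^T(ratio)
    return estimate

# Blur a two-level test image with a known sigma, then try to recover it.
truth = np.tile(np.concatenate([np.full(8, 0.1), np.full(8, 1.0)]), (16, 1))
blurred = gaussian_filter(truth, 0.75)
restored = richardson_lucy(blurred, sigma=0.75, iterations=30)
```

With the correct PSF and no noise, the restored edge is measurably closer to the original than the blurred input; with real noisy files the iterations start amplifying noise instead, which is why the radius and threshold matter.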

Personally, for my Fujifilm X-T2 files I prefer the look of USM over RL. RL seems to introduce artifacts more quickly, in my opinion, but maybe I don’t know how to use RL. For USM I use a low radius (0.4 to 0.45) and a high amount (800 to 1000).

For my X-T20 (which has the same sensor and processor) I find that moving the contrast threshold slider up and lowering the radius helps with the artifacts. I also often lower the amount to somewhere between 70 and 85.
Basically, this means the defaults are too high and things need to be dialed back a few notches.

One thing that has puzzled me is why the two algorithms (USM and RL) are an either/or scenario and can’t be stacked. My feeble mind’s logic says that a little bit of each would give the best of both.
Is there a technical reason why they can’t (or shouldn’t) be stacked, or is it just how it was coded?

I never tried stacking them (one on top of the other), but I would guess that leads to artifacts unless you use blend masks to decide where one or the other is applied.

After I posed my question, I did some Googling and found this:

http://www.clarkvision.com/articles/image-restoration2/index.html

Image deconvolution iterations reach a plateau and then only seem to enhance noise. In my experience it is best to find that plateau and stop the iterations just as the plateau is reached. Then run unsharp mask or smart sharpen on that result and try additional image deconvolution iterations using a smaller point spread function. A final unsharp mask on that result can help perceived sharpness. The multiple combinations of image deconvolution and unsharp mask (edge contrast enhancement) produces the best results in my experience with hundreds of images.
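The quoted recipe (deconvolve toward the plateau, sharpen, deconvolve again with a smaller PSF) could be prototyped outside RT along these lines. This is a rough sketch: `multipass` is a made-up helper, and the parameters are arbitrary placeholders, not values from the article:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def usm(img, sigma, amount):
    """Unsharp mask: add a scaled high-pass (amount is a plain multiplier here)."""
    return img + amount * (img - gaussian_filter(img, sigma))

def rl(obs, sigma, iters):
    """Richardson-Lucy deconvolution with a Gaussian PSF (symmetric, so H^T == H)."""
    est = obs.copy()
    for _ in range(iters):
        ratio = obs / np.clip(gaussian_filter(est, sigma), 1e-12, None)
        est *= gaussian_filter(ratio, sigma)
    return est

def multipass(img, psf_sigmas=(1.0, 0.6), iters=20, usm_sigma=0.5, usm_amount=0.5):
    """Alternate RL passes (shrinking PSF) with a USM pass after each.

    The clip keeps the image nonnegative, which RL requires.
    """
    out = img
    for s in psf_sigmas:
        out = rl(out, s, iters)
        out = np.clip(usm(out, usm_sigma, usm_amount), 0.0, None)
    return out

# Blur a two-level test image, then run the combined pipeline on it.
truth = np.tile(np.concatenate([np.full(8, 0.1), np.full(8, 1.0)]), (16, 1))
blurred = gaussian_filter(truth, 1.0)
out = multipass(blurred)
```

Whether the extra passes actually beat a single well-tuned RL run on real files is exactly the open question in this thread; on noisy images the artifacts may win.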

Hey, it’s on the internet so it must be true, right? :thinking:

Seriously though, it looks like something worth investigating when I have spare time. I guess I can export the file from RT and then reopen it?

1 Like

Yes, that’s a proper way to test it.

I have read conflicting accounts on this topic. Recent papers seem to indicate that stacking and iterating doesn’t improve sharpness (or acutance) all that much because artifacts tend to overtake the benefits very quickly. It might be better to optimize the parameters and perform the processing once. The real answer is that it depends, or the algorithm could be extended to adapt to more scenarios.

Personally, although RT has powerful tools, I prefer doing some custom sharpening (which is time and brain consuming), or not doing any at all. But that is just me.

Now I just need the free time – and have the computer free at the same time. (It’s not easy sharing a computer with teenagers :scream:)

Get them interested. Nothing more satisfying than having others do your lab tests, for free!

You can sort of stack them by using post-resize sharpening.