That’s a trick I discovered yesterday. The first instance compresses contrast locally, trying to balance lightness areas relative to each other. The second instance is used as a tone curve (which is why it goes after the input color profile) that increases contrast globally. The result is an increase in acutance. It’s also the principle behind the local contrast module (although the implementation is quite different), but done in a scene-referred RGB way.
Why do you think it’s not reasonable to get the maximum fine detail out of a shot?
Until you want to print it. Not everything is screen related. I know, I know! Printing comes with its own problems.
Sorry, but I want my images crispy sharp. Not 4K/8K unnatural sharp, mind you.
Like Ingo, I would love to read your reasoning behind your comment.
Because when you zoom to 1:1, the physical image you are looking at is more than 1-1.5 m wide. When you look at a painting or a print, the effective area of sharp vision is the central 2-10° of your visual field. So if you want to fit a 1.5 m print into those 10°, you have to step back several meters, which makes most of your pixel-level sharpening endeavors completely useless.
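The geometry behind that claim is easy to check with a bit of trigonometry. Here is a quick Python sketch; the 6000 px width is my own assumption for a typical ~24 Mpx, 3:2 sensor, not a number from the post above:

```python
import math

def viewing_distance(print_width_m, fov_deg):
    """Distance needed to fit a print of the given width
    inside the given angle of the visual field."""
    return print_width_m / (2 * math.tan(math.radians(fov_deg / 2)))

# Fit a 1.5 m print into the sharp central 10 degrees of vision:
d = viewing_distance(1.5, 10)  # roughly 8.6 m

# At that distance, one pixel of a 6000-px-wide print subtends:
pixel_width_m = 1.5 / 6000
pixel_arcmin = math.degrees(math.atan(pixel_width_m / d)) * 60  # ~0.1 arcmin

# The eye resolves roughly 1 arcminute, so detail at the pixel
# level sits well below what the viewer can actually see.
```

From that distance a single pixel subtends about a tenth of the eye's resolving power, which is the whole point: the sharpening you see at 1:1 is invisible at a sane viewing distance.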
May I point you to @s7habo’s tutorials (you will find them here in this forum)…
He can stack quite a lot of instances of the same module, and that makes very good sense.
Addendum: or here https://www.youtube.com/results?search_query=s7habo+darktable
Claes in Lund, Sweden
@Baibomo Photos in JPG format, when posted on social networks / messengers, might end up compressed again and again… ad nauseam, leading to a loss of sharpness.
For example, if you’re posting photos on Facebook, they’ll stay sharper if you use PNG instead:
The downside of this approach is that your photos will consume more storage space, and take longer to upload.
Well, isn’t that calculation dependent on sensor resolution? I still shoot with a 12 Mpx Nikon D700, where your calculation does not hold.
Sure, but since 2013 the average resolution is around 24 Mpx, and the high-end tier easily reaches 36-52 Mpx.
Sure, but does that mean we don’t want to get this increase in resolution through deconvolution?
If we don’t need this increase in resolution of sensors, there’s no point in discussing deconvolution at all. …
No. Deconvolution sucks: it’s heavy on the CPU and full of artifacts. I see the point of salvaging a back-focused shot or bringing old glass into the 21st century, but fighting pixels is doomed to fail. Sharpened pictures are usually uglier than their blurry originals.
Ok, I’m out now
Deconvoluted, I agree
CPU power is getting cheap these days. Used Intel Xeon server CPUs and matching dual socket motherboards with plenty of triple or quad channel RAM are very nice for rendering. Those CPUs can also be used in single socket desktop motherboards and overclocked to 4 GHz and beyond.
Socket 1366 Xeons (X56xx, 6 cores / 12 threads) have a good price-to-performance ratio.
Unless you have a team of specialists profiling the point-spread function of your lens/imaging system, your Richardson-Lucy (R-L) deconvolution is merely a bank of iterative unsharp-masking filters trying to fight a uniform or Gaussian blur.
So it may work if you don’t iterate too much and there is no noise. But the reconstruction will degenerate after a while and turn noise into weird periodic patterns.
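For reference, the core R-L iteration is small: re-blur the current estimate, compare it with the observation, and back-project the ratio. A minimal 1-D NumPy sketch (the function name and the toy spike-train signal are mine, not from any of the tools discussed here):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=30):
    """Richardson-Lucy deconvolution of a 1-D, non-negative signal.
    Each pass re-blurs the estimate, compares it with the observation,
    and multiplies the estimate by the back-projected ratio."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + 1e-12)       # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: a spike train blurred by a known Gaussian PSF.
x = np.arange(-4, 5)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
truth = np.zeros(64)
truth[[20, 32, 44]] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(observed, psf, iterations=50)
```

On noiseless data like this the estimate sharpens toward the original spikes; add noise and keep iterating, and it degenerates exactly as described above.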
Thanks, I’ll try to use that and see how it works in the field
I’ve already seen some of his edits, and even though I understand some uses of multiple instances, others remain mysterious to me… I guess I need more practice
Got it, not a magical solution (even if it’s amazing in certain cases)
Generally I use capture sharpening in RawTherapee with only 1 or 2 iterations. I wouldn’t use it alone as the main sharpening tool; that’s the job of the unsharp mask.
For this photo, however, I used 7 iterations and a small-radius unsharp mask.
DSC09835.ARW.pp3 (14.2 KB)
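For what it’s worth, a small-radius unsharp mask boils down to adding back a scaled high-pass of the image. A minimal 1-D NumPy sketch of the idea; the parameter names are mine, not RawTherapee’s:

```python
import numpy as np

def unsharp_mask(signal, sigma=1.0, amount=0.5):
    """Classic unsharp mask: subtract a Gaussian blur from the signal
    and add the difference (the high-pass detail) back, scaled."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A step edge gains local contrast (over/undershoot) after sharpening:
edge = np.repeat([0.2, 0.8], 32)
sharpened = unsharp_mask(edge, sigma=1.5, amount=0.7)
```

The overshoot on either side of the edge is what reads as "crisper"; push the radius or amount too far and the same overshoot becomes visible halos.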
Ugh, reminds me of attempts to profile the differential interference contrast (DIC) PSF of a microscopy lens using glass beads. I tried to deconvolve the DIC artifacts out of the pictures with it, but it never really worked.
Totally not fun
Well, all I can say to this is: improve it
I have been trying for the past 2 years with blind deconvolution. Results within reasonable runtimes are still unsatisfactory; they only become good at around 15 min per picture. So…