Well, Aurélien said at one point that all modules should eventually be moved before the tone mapper (filmic, of course). The reasoning was that once that’s done, you’d be able to instantly switch the output between SDR and HDR, for example.
I think the ‘only’ problem with that is you’d also need THE perfect tone mapper, which ‘understands’ human vision perfectly, and is thus able to generate the optimal output, perfectly suited to the output device’s capabilities.
I don’t think that is sufficient for all purposes, at least not without a very roundabout module workflow in scene-referred.
Consider, for example, local contrast enhancements and similar. In the very flat parts of the tonal curve, especially the highlights but also the shadows, you need a much more aggressive transformation than in the midtones to get the same effect. Now, I understand that you rarely want exactly the same effect; the point is that you have to go out of your way to get any effect at all (see the little sketch at the end of this post).
Of course one can do this with masks… but that takes a lot of manual fiddling around.
A (hypothetical) solution I could imagine is a 3-way tab for all modules, so each could apply a different strength per tonal range. Currently only color balance rgb does this, but it could also lead to a very crowded GUI.
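To make the "flat part of the curve" point concrete, here is a tiny Python sketch with a generic smoothstep S-curve standing in for the tone mapper (it is not darktable's actual filmic curve, just an illustration of the slope argument):

```python
# Toy S-shaped tone curve (NOT darktable's filmic), just to illustrate the
# slope argument: smoothstep y = 3x^2 - 2x^3 on a normalized [0, 1] input.
# The flatter the curve (small dy/dx), the harder you have to push a
# local-contrast tweak before it shows up in the output.

def tone(x):
    return 3 * x**2 - 2 * x**3           # generic S-curve: flat toe and shoulder

def slope(x, eps=1e-4):
    return (tone(x + eps) - tone(x - eps)) / (2 * eps)

mid = slope(0.5)
for label, x in [("deep shadow", 0.05), ("midtone", 0.50), ("highlight", 0.95)]:
    s = slope(x)
    print(f"{label:12s} input {x:.2f}: slope {s:.2f} "
          f"-> needs ~{mid / s:.1f}x the midtone strength for the same effect")
```

With this toy curve the toe and shoulder are roughly 5x flatter than the midtones, so a tweak applied before the tone mapper has to be pushed about 5x harder there to produce the same visible change.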
THE perfect tone mapper (in both senses) would understand human vision perfectly, so it’d be non-linear and know where and how to tweak the mapping. Of course, I’m being a tiny bit sarcastic here.
I feel the same when it comes to sharpening in my G9 vs Darktable. But mind you – the Diffuse or Sharpen module works its miracles on the preview, not on the full-resolution image, so what you see during editing and what you export can be very different.
That’s why you have the button to turn on full-resolution processing (or you can zoom in). It’s also important to enable high-quality resampling in the export module.
My interest in sharpening is declining at the same rate as my eyesight.
I’ve been to some exhibitions recently where oversharpening has spoiled the photos. (Looking at many wildlife photographers in particular, who seem obsessed with noise reduction and sharpening.)
Global and well placed local contrast do more for me.
Not to mention good management of colour and tonality.
Here is a pic taken with my phone of a photo of polar bears that was exhibited recently in my area.
Is this a known issue? Has it been investigated? Has anyone posted screenshots comparing the darkroom view with the export?
I’m quite disturbed by the idea that the darkroom view could differ wildly from the export…
Then, if you export at 100%, or at a lower resolution but with high-quality resampling enabled, you’d have (nearly) identical results.
It’s not only diffuse or sharpen, though.
Without zooming in to 100% or enabling full-resolution preview processing, the image is scaled down to fit the view and processed at that reduced size. Pixel-wise operations therefore don’t work on individual pixels from the input, but on pixels derived from multiple input ones. Operations that affect an area (e.g. a blur with a radius of 3 pixels, to give a simple example) will try to shrink the ‘size of the operation’ (the blur radius) proportionally to how much the image was downscaled, but this cannot always be done perfectly: if the operation only works with whole-pixel radii and you downscale 1:3, a radius of 3 pixels becomes 1; 2 should become 2/3, but will probably be rounded to 1; 1 should become 1/3, but may be rounded to 0 and the operation omitted completely.
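A rough sketch of that rounding problem, continuing the 1:3 example (the exact scaling policy differs from module to module; this is just an illustration, not darktable’s actual code):

```python
# Illustration only, not darktable's actual code: what happens to an
# integer pixel radius when the pipeline runs on a downscaled preview.

def preview_radius(full_res_radius, scale):
    """Scale a whole-pixel radius to the preview size, rounding to whole pixels."""
    return round(full_res_radius * scale)

scale = 1 / 3                              # darkroom preview shown at 1:3
for r in (3, 2, 1):
    rp = preview_radius(r, scale)
    note = "  <- operation effectively skipped" if rp == 0 else ""
    print(f"full-res radius {r} px -> preview radius {rp} px{note}")
```

So a small-radius operation can look weaker in the zoomed-out darkroom view, or vanish entirely, compared with a 100% export – which is exactly the mismatch described above.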
RawTherapee and ART also have such operations, marked with a 1:1 icon.
There have been some recent improvements, e.g. in this PR for color equalizer:
Slightly reduce saturations dependency on roi->scale.
Makes sure there are almost no visible differences in zoomed-out darkroom between HQ mode toggled on or not.
Also reduces subtle artifacts while non-HQ exporting with strong downscaling.
I’m not sure if you consider that an example of over-sharpening, but to me it looks nice and sharp; the narrow clear band around the bears is due to the back-lighting on the fur.
The banding around the sun is another story, that is something I would have tried to avoid (then again, my images are not selected for publication!)
It is known but not discussed enough. I actually created an issue on GitHub some time ago (should you want to see some examples of the quirk), but it seems unlikely that it will get fixed (reworked), as that’s just how the module is supposed to work.
Personally, I’ve returned to the contrast equalizer module. It does the job well enough and I can see the final result immediately.