This isn’t module behaviour, it’s darktable behaviour. The darkroom editing view has always used a downscaled version of the image for UI display, because it can’t process the whole image on every slider change without significantly slowing down editing. This has always affected any module that needs to take account of neighbouring pixels for its operation, including contrast equalizer (because on a downscaled image the neighbouring pixels are different from those in the full-size view). It doesn’t impact modules that treat pixels independently (like exposure, for example). IIRC the diffuse|sharpen module is actually better than the other modules in this respect, because the original designer attempted to take account of this normal darktable behaviour and make the downscaling effect less noticeable.
These days it is possible to make darktable process the whole image (by clicking the “high quality processing” button). This will make the darkroom view look exactly the same as the final (full-size) export, since it processes the whole image and downscales at the end (instead of downscaling then processing). But this makes editing a painfully slow process.
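As a rough illustration (not darktable code), here is a minimal sketch that uses a fixed-radius box blur as a stand-in for any neighbour-dependent module, and scipy’s resizer as a stand-in for the preview downscale. It shows why downscale-then-filter (the normal darkroom path) and filter-then-downscale (the ordering the HQ button restores) don’t give the same result:

```python
# Minimal sketch, not darktable's pipeline: a fixed-radius filter applied to a
# downscaled preview differs from filtering at full size and downscaling last.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

rng = np.random.default_rng(0)
full = rng.random((400, 400))   # stand-in for a full-size image
radius = 5                      # neighbourhood size measured in *pixels*

# Darkroom preview path: downscale first, then run the pixel-radius filter.
preview = uniform_filter(zoom(full, 0.25), size=radius)

# Export / "high quality processing" path: filter at full size, downscale last.
export = zoom(uniform_filter(full, size=radius), 0.25)

print(np.abs(preview - export).mean())  # non-zero: the two orderings differ
```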
If you want an editor that always does full-image processing and works quickly, you might want to look at vkdt, but you’ll need a GPU to use that program.
Wouldn’t the impact of scaling also depend to some extent on the display? If you have a 4K versus a standard-resolution display and your image is, say, 6K by 4K pixels, the amount of scaling needed for a zoomed-out full-screen view will vary, and so the result could also differ for those modules that use neighbouring pixels, unless you used HQR?
Yes, the extent of the issue depends on how much the image needs to be downscaled for display. Obviously if you have a screen that can display all of the pixels in the image then you won’t even need the HQ button, but you’ll probably have very slow darkroom interactivity.
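For a rough sense of the numbers (fit-to-width only, ignoring window chrome and zoom level, so treat these as ballpark figures for the 6000 × 4000 px example above):

```python
# Rough fit-to-width preview scale for a 6000 x 4000 px image
image_width = 6000
for display_width in (1920, 3840):   # "standard" HD vs 4K display width
    scale = display_width / image_width
    print(f"{display_width}px wide display -> preview scale ~ {scale:.2f}")
# 1920px wide display -> preview scale ~ 0.32
# 3840px wide display -> preview scale ~ 0.64
```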
Guys, is there a way to process in DT first and then just sharpen it in RT? Apart from sharpening I am fully on board with DT, but I tried capture sharpening in RT and it certainly turns out better than in DT!
Well then, open the raw in RT, apply the sharpening, export a high-bit-depth TIFF, and continue editing in DT. Seems like a lot of extra work for minimal gain, but if that’s what the OP must have…
For a usual case we have a darkroom scaling ratio of less than 0.25, so we interpolate at least 4 photosites into one pixel (you can check via -d pipe and watch for the processed full-pipe modules reporting the scale).

This interpolation is done right after demosaicing, and the interpolator is chosen from preferences. At least the two lanczos variants do some inherent sharpening - you could also call this adding subtle artifacts at sharp transitions that affect the data for all following modules in the pipe (leaving out moiré issues that would require further filtering before downscaling). Thus I would recommend staying with bicubic.
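To illustrate why the lanczos variants behave that way, here are the textbook 1-D kernels (just an illustration, not darktable’s actual resampling code): kernels with negative lobes overshoot at sharp transitions, which reads as extra sharpening, and the Lanczos-3 lobes are deeper and extend further than those of a common Keys bicubic (a = -0.5):

```python
# Illustration only: compare the negative lobes of two interpolation kernels.
import numpy as np

def lanczos(x, a=3):
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)     # np.sinc(x) = sin(pi x)/(pi x)
    return np.where(np.abs(x) < a, out, 0.0)

def keys_bicubic(x, a=-0.5):
    x = np.abs(np.asarray(x, dtype=float))
    inner = (a + 2) * x**3 - (a + 3) * x**2 + 1          # |x| <= 1
    outer = a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a  # 1 < |x| < 2
    return np.where(x <= 1, inner, np.where(x < 2, outer, 0.0))

xs = np.linspace(-3, 3, 601)
print("deepest lanczos-3 lobe:", lanczos(xs).min())       # ~ -0.15
print("deepest bicubic lobe:  ", keys_bicubic(xs).min())  # ~ -0.07
```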
We have discussed implementing Capture Sharpening in darktable before. It works on non-interpolated raw data, and even if the results differ only “slightly” it would be “the way” to improve data for the whole pixelpipe, including demosaicing (theory also says so).

I have looked into this a lot recently; the first preparations have been done, though more complicated maths is involved to make it work at acceptable speed. But I am pretty sure it will be implemented.
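For anyone curious what the underlying technique looks like: as far as I understand, RawTherapee’s capture sharpening is based on Richardson-Lucy deconvolution, and a bare-bones version of that iteration is quite short. The sketch below assumes a gaussian PSF with a hand-picked sigma, skips the PSF estimation, and works on a single plane, so it is purely illustrative and not the planned darktable code:

```python
# Minimal Richardson-Lucy deconvolution sketch (gaussian PSF assumed).
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(observed, sigma, iterations=10, eps=1e-7):
    """Deblur `observed`, assuming a gaussian point-spread function."""
    estimate = observed.copy()
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)
        ratio = observed / np.maximum(blurred, eps)
        # A gaussian PSF is symmetric, so correlating with the flipped PSF
        # is again just a gaussian blur.
        estimate *= gaussian_filter(ratio, sigma)
    return estimate

# Toy usage: blur a synthetic single-channel plane, then try to recover it,
# and compare reconstruction error before/after deconvolution.
rng = np.random.default_rng(1)
sharp = np.clip(rng.random((64, 64)), 0.05, 1.0)
soft = gaussian_filter(sharp, 0.8)
restored = richardson_lucy(soft, sigma=0.8, iterations=20)
print(np.abs(soft - sharp).mean(), np.abs(restored - sharp).mean())
```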
I actually don’t have a big problem with dt’s current sharpening, but I’m always excited about new features/improvements. And the less I have to use the D&S module, the better as far as I’m concerned.
I have been interested in this for a while. It already runs here in a preliminary but working state, but for sure it will only find its way into dt if the results are definitely better than what we already have.
To me the last comment in your GitHub issue is worrying.
The DoS module is very complex, with really weird labels on its sliders. I have been trying to find something somewhere that actually explains it in detail, but there is no such thing. Mr. Hajdukovic does a pretty good lab on the module in episode 51, but his adjustments and use of the module are vastly different from the presets that now exist. Like, not even close, to be honest. There is a thread with 161(!) replies, and even at the end there are debates about the ins and outs of the module.

I would consider myself pretty decent when it comes to DT, and no other module comes close in the number of times I’ve googled its settings. I am dependent on a module, but I have a ridiculously low understanding of the logic behind it.

I’ve been doing some presentations on DT for local user groups, and getting into sharpening is a pain. The audience becomes one big collective “what?” when it comes to DoS. Try it yourself: try to explain that the third slider affects the “gradient of laplacian” in 30 seconds without sounding like you should be locked up in a padded room. The module can do amazing things, no doubt. It’s just really, really difficult to understand.

If someone could apply reasonable labels to the settings in the module, that would help tremendously. I would if I could, but I honestly don’t know what to call the settings in a more understandable way. Or maybe I do, but in my own way, and that’s not useful for the general public, so to speak.
You can try the original developer’s video, but to be honest, I just use the presets, maybe tweaking the radius and iteration parameters now and then.
The most powerful modifiers in my opinion are the threshold sliders. They can really attenuate or bump up the effect so they are worth playing around with in the presets to tweak the results…
That’s understandable, but if the results are at least similar, then I still think there’s merit in implementing it so that the D&S module can be ignored by those who prefer not to use it.