As you can see, the zoom preview looks very different from the actual image (better, IMHO).
At first I thought the zoom preview was based on the RAW preview produced by the camera, but it changes when I adjust the filters, which wouldn’t make sense if it were based on the already-processed in-camera version.
Could anyone tell me what the difference is, and maybe how I get the actual image to look like the preview?
The center preview and the navigation preview don’t use the same pipeline. I’ll have to dig up a link for you on the different previews in DT. 4.8 introduced an option to use the full image data for the preview, which was previously only used during export and could lead to a mismatch there at times before it was available. If you’re on 4.8, try toggling it on. It will be slow, so you might not want to leave it on, but it might show a better match. The ground truth is the 100% zoom.
If it’s not possible to “just get” the left-side result, I’d love to optimise the right side to look like the left side. It’s a bit annoying that filters also get applied to the left side, but I could work with a screenshot as a reference. However, I don’t really have a clue about what the left side does differently from the right side, or how to get rid of the red pixels. I’d be thankful for any hint in that regard.
The flatpak isn’t that difficult. For nVidia, flatpak must install a GL extension matching the host driver version. That’s why there are so many org.freedesktop.Platform.GL.nvidia-@@NVIDIA_VERSION@@ extensions.
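In case it helps, here’s a rough sketch of how to work out which extension name matches your host driver. It assumes the nvidia kernel module is loaded (so `/sys/module/nvidia/version` exists) and that the extension follows the usual dash-separated naming seen on Flathub; treat both as assumptions and double-check against `flatpak search`:

```python
# Hypothetical helper: print the flatpak NVIDIA GL extension name that should
# match the host driver, e.g. org.freedesktop.Platform.GL.nvidia-550-142-02.
from pathlib import Path

def nvidia_gl_extension() -> str:
    # Driver version as reported by the loaded kernel module, e.g. "550.142.02".
    version = Path("/sys/module/nvidia/version").read_text().strip()
    # Flathub extension names replace the dots with dashes (assumption).
    return "org.freedesktop.Platform.GL.nvidia-" + version.replace(".", "-")

if __name__ == "__main__":
    print(nvidia_gl_extension())
    # Then install it with: flatpak install flathub <printed name>
```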
Darktable runs several pipelines, which differ in settings (it’s not only a difference of image size). The navigation preview is scaled down early, so there’s less data to process, in order to speed things up (there’s a toy sketch of this after the list below).
The editor (and 2nd screen) pipelines are run separately. The image is processed fully only if:
- in the export pipeline, when you export at original (or larger) size;
- in the export pipeline, when you export at a downscaled size but high-quality resampling is enabled;
- in the darkroom, when the full (slower) preview is enabled.
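To make the “scaled down early” point concrete, here is a toy sketch in plain NumPy. This is not darktable code: `process()` is just a stand-in for whatever pixel operation a module would run, and the box downscale is a crude substitute for the real resampling.

```python
import numpy as np

def process(img):
    # Stand-in for a pipeline pixel operation (not a real darktable module).
    return np.clip(img * 1.2 - 0.05, 0.0, 1.0)

def downscale(img, factor=8):
    # Crude box downscale: average factor x factor blocks.
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
full = rng.random((1200, 1600))          # pretend this is the full-size image

preview = process(downscale(full))       # preview pipe: shrink first, then process (fast)
reference = downscale(process(full))     # export-style: process at full size, shrink last (slow)

print("mean difference:", np.abs(preview - reference).mean())
```

The two orders of operations are cheap vs. expensive, and they don’t produce identical results, which is exactly why the small previews and the full-quality output can disagree.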
If you want deeper blacks, process the image accordingly. I recall there are some modules that use that downscaled preview to provide some information (including statistical info, like max/min brightness), and for difficult images, some colour picking may not be exact. You have to trust your eyes to tune the image. Maybe post it as a PlayRaw, so people can help.
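Just to illustrate that point about statistics taken from the downscaled preview (pure NumPy toy data, nothing darktable-specific): isolated extreme pixels that set the true max/min can get averaged away in the small version, so stats read off the preview may be slightly off.

```python
import numpy as np

rng = np.random.default_rng(1)
full = rng.normal(0.4, 0.1, (1600, 2400))        # toy "image"
full.flat[rng.integers(0, full.size, 20)] = 1.0  # a few isolated near-clipped pixels

# 8x8 block average, standing in for the downscaled preview.
small = full.reshape(200, 8, 300, 8).mean(axis=(1, 3))

print("full-res max:", round(float(full.max()), 3))  # sees the isolated bright pixels
print("preview max:", round(float(small.max()), 3))  # they get averaged away
```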
I just tried the AppImage, to make sure my setup is as close as possible to what upstream provides.
When starting the AppImage, I immediately got a message about an OpenCL initialisation error, but it was shown too briefly for me to copy, and it didn’t show up again.
So it seems like OpenCL is broken on my system (I have both an Intel and an Nvidia GPU).
However, the red pixels are also in the exports of the image.
I just discovered that when I move the image around, the newly revealed parts look good at first, but then the red noise gets applied to them:
So it seems like some processing step introduces the problem.
However, I could not find the problematic step by deactivating modules: no single deactivated module gives me the image I briefly see while moving it around or in the zoom preview.
Deactivating illumination gets me the closest, as it removes the red pixels, but it also makes the lighter parts of the image much darker than in the zoom preview or in the parts I briefly see while moving the image.
Maybe someone who knows which parts of the pipeline take longer to be applied could give me a hint about which rendering step introduces the problem?
No: when you move the image, first a lower-resolution, quickly processed version is shown; then, full processing kicks in. It’s just noise in your image.
Here’s darktable (left) and RawTherapee (right). The default exposure and tone curve are different, but the noise is there in both:
Processing typically doesn’t add noise (it can make it more or less visible, but that’s not the same thing).
And down-sampling reduces noise, as it gets “averaged out”, which is why the initial low-resolution version appears less noisy. Same with the “zoom preview”: it’s reduced in size, so (heavily) down-sampled, and the noise disappears.
(Note: this works for random or Gaussian noise; some cameras show patterned noise, which behaves differently. Why it works: the noise values of neighbouring pixels are statistically independent, so there’s a good chance that two neighbouring pixels have noise contributions of opposite sign. Averaging those pixels then reduces the noise.)
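A small NumPy check of that averaging argument (synthetic Gaussian noise only, not raw data): averaging 2x2 blocks is the mean of four independent samples, so the noise standard deviation drops by roughly a factor of two.

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 0.05, (1000, 1000))  # pure Gaussian "sensor" noise, sigma = 0.05

# Downsample by averaging 2x2 blocks, like a crude half-size preview.
half = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print("full-res noise sigma:", noise.std())   # ~0.05
print("half-size noise sigma:", half.std())   # ~0.025, i.e. sigma / sqrt(4)
```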
Strong denoise (profiled) in darktable, exported with bounds of 1024x1024, then upscaled 4x in ChaiNNer using the ‘codeformer’ model. About 10 MPx, 2 MB (I hope that’s OK).
You could export the original at the same size as the ChaiNNer output, load them as layers into Gimp, and mix according to taste (to avoid over-smoothing).
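If you’d rather do the mixing outside Gimp, here’s a quick sketch of the same idea in Python using Pillow and NumPy. The file names are placeholders, both images are assumed to have identical dimensions (export the original at the ChaiNNer output size, as above), and the blend weight is just a starting point to tune by eye:

```python
import numpy as np
from PIL import Image

# Placeholder file names: the full-size export and the ChaiNNer upscale, same dimensions.
original = np.asarray(Image.open("export_fullsize.png").convert("RGB"), dtype=np.float32)
upscaled = np.asarray(Image.open("chainner_upscaled.png").convert("RGB"), dtype=np.float32)

weight = 0.6  # how much of the smoothed/upscaled version to keep; adjust to taste
mix = weight * upscaled + (1.0 - weight) * original

Image.fromarray(np.clip(mix, 0, 255).astype(np.uint8)).save("mixed.png")
```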