Zoom preview looks better than the actual image?

Hello everyone,

my Darkroom view in Darktable looks like this:

As you can see, the zoom preview looks very different from the actual image (better, IMHO).

First, I thought the zoom preview was based on the RAW preview as produced by the camera, but it changes when I adjust the filters, which I think wouldn’t make sense if it was based on the already processed version from the camera.

Could anyone tell me what the difference is, and maybe how to get the actual image to look like the preview?

Thanks in advance and kind regards!



Welcome …what version are you using…

The center preview and the navigation preview don’t use the same pipeline… I’ll have to find a link for you on the different previews in DT… 4.8 introduced the use of the full image data for the preview, which previously was only used during export and could lead to a mismatch there at times… if you are on 4.8, try toggling it on… it will be slow, so you might not want to leave it on… this might show a better match… the ground truth is 100% zoom…

icon to the right of the light bulb…


Thanks a lot for your reply, Todd!

Yes, I have DT 4.8.0. The full image data preview indeed improves the quality, thanks for that!

But it still seems like there is a lot more red in the actual preview than in the zoom preview.

With full image data preview:

And it’s not that the red pixels get stripped because of the size – if I open a small preview in another window, it looks like this:

If it’s not possible to “just get” the left-side result, I’d love to optimise the right side to look like the left side. It’s a bit annoying that filters also get applied to the left side, but I could work with a screenshot as a reference. However, I don’t really have a clue about what the left side does differently from the right side, or how to get rid of the red pixels. I’d be thankful for any hint in that regard.


Did you ever tweak anything in preferences? What happens if you turn off OpenCL?

I never touched the settings.

I just touched them for the first time to disable OpenCL. It didn’t change anything, even after restarting Darktable.

But now, if I open the settings, I can’t activate OpenCL any more, as it says “not available”:

Maybe there’s something wrong with my GPU/driver? Or could it be a limitation of the Flatpak package?

I think maybe you’re right… a flatpak issue, but I’m not sure. Maybe try updating your driver.

With the flatpak, the drivers need to be installed, and kept up-to-date, in the flatpak as well as in the OS.

Isn’t the official AppImage easier to use? The team does not release a flatpak version.

The flatpak isn’t that difficult. For NVIDIA, flatpak must install an extension matching the host driver. That’s why there are so many org.freedesktop.Platform.GL.nvidia-@@NVIDIA_VERSION@@ extensions.

Darktable runs several pipelines, which differ in settings (it’s not only a difference of image size). The navigation preview is scaled down early, so there’s less data to process, in order to speed things up.
The editor (and 2nd screen) pipelines are run separately. The image is processed fully only if:

  • in the export pipeline, you export at original (or larger) size;
  • in the export pipeline, you export at a downscaled size, but high quality resampling is enabled;
  • in the darkroom, if the full (slower) preview is enabled.
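For clarity, those conditions can be sketched as a small predicate. This is only an illustration of the rules listed above; the function and parameter names are made up for this sketch, not darktable’s actual API:

```python
def uses_full_image_data(pipeline, *, export_scale=1.0,
                         high_quality_resampling=False,
                         full_preview_enabled=False):
    """Illustrative only: when does darktable process the full-size image?

    `pipeline` is "export" or "darkroom"; all names here are invented
    for this example, they are not darktable's real API.
    """
    if pipeline == "export":
        # Export at original (or larger) size, or at a downscaled size
        # with high-quality resampling enabled.
        return export_scale >= 1.0 or high_quality_resampling
    if pipeline == "darkroom":
        # The full (slower) preview toggle.
        return full_preview_enabled
    return False
```

In every other case (navigation preview, normal darkroom view, fast export), the pipeline runs on downscaled data, which is where the mismatch comes from.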

If you want deeper blacks, process the image accordingly. I recall there are some modules that use that downscaled preview to provide some information (including statistical info, like max/min brightness), and for difficult images, some colour picking may not be exact. You have to trust your eyes to tune the image. Maybe post it as a PlayRaw, so people can help.

I just tried the AppImage, to make sure my setup is the closest to what upstream provides.

When starting the AppImage, I immediately got a message about an OpenCL initialisation error, but it was shown too briefly for me to copy, and it didn’t show up again.

So it seems like OpenCL is broken on my system (I have both an Intel and an NVIDIA GPU).

However, the red pixels are also in the exports of the image.

I just discovered that when I move the image around, new parts of the image look good at first, but then the red noise gets applied to them:

So it seems like some processing step introduces the problem.

However, I could not find the problematic step by deactivating modules: no single deactivated module reproduces the image I briefly see when moving it around, or in the zoom preview.

Deactivating illumination gets me the closest, as it removes the red pixels, but it also makes the lighter parts of the image much darker than in the zoom preview or in the parts I briefly see when moving the image.

Maybe someone who knows which parts of the pipeline take longer to apply could give me a hint about which rendering step introduces the problem? :)

Here is the raw file btw.:
DSC08571.ARW (23.8 MB)

I hereby grant access to modify the image and post copies of the original or modified versions in this forum

No: when you move the image, first a lower-resolution, quickly processed version is shown; then, full processing kicks in. It’s just noise in your image.

Here’s darktable (left) and RawTherapee (right). The default exposure and tone curve are different, but the noise is there in both:

The amount of noise is consistent with DPReview’s findings:

Processing typically doesn’t add noise (it can make it more or less visible, but that’s not the same thing).
And down-sampling reduces noise, as it is “averaged out”, which is why the initial low-resolution version appears less noisy. The same goes for the “zoom preview”: it’s a reduced size, so (heavily) down-sampled, and the noise disappears.

(Note: this works for random, Gaussian-like noise; some cameras may show patterned noise, which behaves differently. Why it works: the noise values of neighbouring pixels are independent, so there’s a good chance that two neighbouring pixels have noise contributions of opposite sign, and averaging them reduces the noise.)
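The averaging effect is easy to demonstrate numerically. A minimal NumPy sketch, using a synthetic flat grey image with additive Gaussian noise rather than real raw data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat grey "image" with additive Gaussian noise (std = 0.05).
full = 0.5 + rng.normal(0.0, 0.05, size=(1024, 1024))

# Naive 4x4 box downsample: average each 4x4 block into one pixel,
# roughly what a scaled-down preview does.
small = full.reshape(256, 4, 256, 4).mean(axis=(1, 3))

# Averaging n independent samples shrinks the noise std by sqrt(n);
# here n = 16 pixels per block, so expect roughly 0.05 / 4 = 0.0125.
print(full.std())   # roughly 0.05
print(small.std())  # roughly 0.0125
```

That factor-of-four drop in noise standard deviation is why the downscaled preview looks so much cleaner than the 100% view of the same pixels.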

Strong denoise (profiled) in darktable, exported with bounds of 1024x1024, then upscaled 4x in ChaiNNer using the ‘codeformer’ model. About 10 MPx, 2 MB (I hope that’s OK).

DSC08571.ARW.xmp (8.2 KB)
jano-chainner.zip (924 Bytes)

An alternative version, with a somewhat more complex ChaiNNer setup: exported strongly denoised at 1024x1024, upscaled using the ‘RealESRGAN_x4plus’ model (see GitHub - xinntao/Real-ESRGAN: Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.), and then the face was refined using ‘codeformer’ (see GitHub - sczhou/CodeFormer: [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer).

jano2-chainner.zip (1.2 KB)
DSC08571.ARW.xmp (9.3 KB)

A crop to the drummer:

You could export the original at the same size as the ChaiNNer output, load them as layers into GIMP, and mix according to taste (to avoid over-smoothing).
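If you’d rather do the mix outside GIMP, the same layer blend is a one-liner on arrays. A sketch with tiny synthetic images standing in for the two same-sized exports (the `blend` helper is invented for this example):

```python
import numpy as np

def blend(original, restored, alpha=0.6):
    """Linear layer mix, like GIMP's opacity slider:
    alpha=0 keeps the original, alpha=1 keeps the restored image."""
    return np.clip((1.0 - alpha) * original + alpha * restored, 0.0, 1.0)

# Two fake 2x2 grey "exports" in [0, 1] instead of real files:
orig = np.full((2, 2), 0.2)
rest = np.full((2, 2), 0.8)
mixed = blend(orig, rest, alpha=0.5)
print(mixed)  # every pixel 0.5
```

A 50/50 mix keeps some of the original grain, which helps avoid the over-smoothed, “plastic” look of a pure AI-upscaled result.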