So I’ve noticed a step down in performance, and the only things that have changed are the darktable version (4.8.1) and a newer Ubuntu (24.04). Disk performance looks fine, and other resource-heavy applications run without problems.
Opening a single image in the darkroom takes about 10 seconds at best. I started darktable with logging enabled, opened a photo from the latest gallery (260 files), and stopped logging when the file was ready for editing. The output is here:
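For reference, this is roughly how I captured the log (the log file name is just whatever I picked):

$ darktable -d all > ~/darktable.log 2>&1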
The image I opened is taken with a Canon EOS R8 (CR3).
If I go back to an older gallery where I used an EOS RP (CR3), the files open significantly faster, even though they may be more heavily edited.
Can you see anything in the log that would cause this slowdown?
Reading a -d all log is very hard; maybe use -d common instead. When you say slower, do you have any terminal output from the older system? This image took 3.9 s (row 5173); of that, 2 s was spent in diffuse and color equalizer. Nothing too atypical in that pipe execution.
I do see local copies and the full copy path in the log. Maybe there is an issue with the image locations.
The example image uses a Diffuse preset. In the past I have used Diffuse the way Boris Hajdukovic does in his videos, which is less complex than the presets. I copied the history stack from the example image in the log and applied it to an older RP image, and that one still opens faster. The images have the same resolution and file size, so I can’t really see why they differ in performance.
Disks aren’t an issue either; they are plenty fast, and the images are in two parallel directories on the same disk.
I can’t really see that the graphics card is being pushed to its limit, but maybe I need a faster GPU?
I have an NVIDIA GeForce GTX 1650 (4 GB); memory usage never goes above 1 GB, and I can’t see the card going above 45 % utilization when processing images.
$ hdparm -Tt /dev/nvme0n1p2
/dev/nvme0n1p2:
Timing cached reads: 23156 MB in 1.99 seconds = 11609.09 MB/sec
Timing buffered disk reads: 2782 MB in 3.47 seconds = 802.28 MB/sec
Here’s the same procedure but with a -d common log instead
That could be a problem: it can lead to excessive tiling, which kills performance. What’s your resource size setting? Maybe try increasing it to ‘large’ if it’s currently ‘default’.
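For what it’s worth, the setting lives under preferences > processing (‘darktable resources’); in darktablerc it should end up as something like the line below (key name from memory, so double-check on your side):

resources=large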
I changed resources to ‘large’ and may have seen a boost; it’s hard to tell whether it made a real difference or whether it was placebo.
I then changed the OpenCL scheduling profile to ‘very fast GPU’, and that made a difference for sure. I have had ‘use all device memory’ enabled for a long time already.
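For anyone following along, the scheduling profile can also be set directly in darktablerc; I believe the line looks roughly like this (key name from memory, so treat it as approximate):

opencl_scheduling_profile=very fast GPU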
Still, it’s not as if my poor little GPU is very fast, nor is it starved for resources, but now I can at least see darktable peaking at around half of the GPU memory.
Maybe it is time to get a speedier GPU?
I am curious, though, why I don’t see the same effect with photos in older libraries. Maybe that’s one of those intergalactic mysteries I’ll never get to understand, like whether there are cats inside black holes.
Oh, BTW, for anyone interested in monitoring NVIDIA GPUs under Linux:
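Two easy options from a terminal, for example (nvidia-smi ships with the proprietary driver; nvtop is a separate package):

$ watch -n 1 nvidia-smi
$ nvtop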
The resource setting was just a shot in the dark; I hadn’t checked the log file, as I was on the road. I also didn’t know you had the ‘use all device memory’ setting enabled.
Have you adjusted the ‘headroom’ (memory reserved for the OS and apps)?
But as long as you don’t see tiling, the low memory consumption is not a problem.
I normally use -d common -d perf -d tiling -d opencl (common probably already implies at least one of the others; I’m not at my computer to check right now).
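So something along these lines (the output file name is just an example):

$ darktable -d common -d perf -d tiling -d opencl > ~/dt-debug.log 2>&1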
Do you see OpenCL timeouts? If yes, set opencl_mandatory_timeout to a high value in darktablerc.
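If you do see timeouts, that change is a one-liner in darktablerc; the value below is just an arbitrarily high example, not a recommendation:

opencl_mandatory_timeout=20000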
I’m not really sure what to look for in the log output, but I can’t see anything resembling tiling or timeouts, at least not after the recent config changes.
Color Eq and Diffuse now take about 0.3 s in total, and the UI feels much better than it did yesterday.
Still, I guess I should get a faster GPU to speed things up even more.