Recommended hardware for version 4.4.2

Yes, if you already have a capable GPU, a CPU upgrade may not show much of an improvement (unless of course you upgrade to something like a Threadripper, with which sometimes even a GPU may not be needed).

One hardware upgrade that does make a noticeable difference is memory with a higher GT/s rating (“frequency”). I have noticed a difference even from overclocking existing DIMMs.

Hi stuntflyer

Try the following in the configuration section CPU / GPU / memory:
change Darktable resources from default to large;
activate prefer performance over quality;
change tune OpenCL performance to memory size and transfer.
And see if you get any better results.
Note: try to have only Darktable running when editing and (especially) when exporting your final files.
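
For reference, these switches are stored as plain entries in the darktablerc config file. As a minimal sketch, and assuming the key name from my own install (the performance-over-quality and OpenCL tuning options have their own entries there too, whose exact names I won’t guess, so check your file):

resourcelevel=large

The other documented values for that entry are default, small and unrestricted.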

Best regards,

This might actually be detrimental.

1 Like

It seems like the jury is out on this one?

‘Transfer’ tuning, if I recall correctly, is about something called ‘pinned memory’, which was never required by NVidia cards, but required by some (early?) AMD/Radeon cards/drivers.

I would not activate prefer performance over quality, unless there is no other way.

@kofa

I ordered the RTX 2060 XC Ultra. I think it will take care of any issues I had before. I will try it without activating performance over quality and ‘Transfer’ tuning.

Just got my RTX 2060 6GB up and running with the latest drivers. Absolutely amazing performance compared to GTX1050 2G.

I did a lot of runs and no tuning was best for my GPU, a 3060 Ti… One big one was setting the micro-nap to zero (I think it defaults to 250): no crashes on my card and the bench runs were much faster… I don’t recall the other settings having much impact, but using the tuning setting was always slower…

@priort Need a bit more information on the software used to change the micro-map if available.

Micro-nap, not map. The software is darktable. :slight_smile:

https://darktable-org.github.io/dtdocs/en/special-topics/mem-performance/#device-specific-opencl-configuration

I’m not sure micro-nap is something to mess with. I used a large .raf file and exported with micro-nap at 250 and at 0 using the same xmp. My system is Fedora 38 KDE, Nvidia 3060 12gb

micronap at 0
26.7139 [dev_process_export] pixel pipeline processing took 2.991 secs (8.968 CPU)

micronap at 250
13.5380 [dev_process_export] pixel pipeline processing took 3.007 secs (9.065 CPU)

0.016 seconds is not that big of an improvement. Test on your system.
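
For anyone who wants to reproduce this: timings like the ones above are what you get when darktable is started from a terminal with performance debugging enabled, something like

darktable -d perf

(adding -d opencl as well shows whether the pipeline actually ran on the GPU), then doing the export and reading the [dev_process_export] line from the console.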

1 Like

It’s not software but a manual tweak of the OpenCL settings. I would have to go back and read the manual and also compare with what has been done recently, as some of the code was just changed, but it might not impact things; if you are just running the latest release you should be okay…

This section explains the manual tweaking you can do… you edit the darktablerc config file….

https://darktable-org.github.io/dtdocs/en/special-topics/mem-performance/#device-specific-opencl-configuration
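
If you are not sure where that file lives, it is in the darktable config directory, typically:

~/.config/darktable/darktablerc (Linux)
%LOCALAPPDATA%\darktable\darktablerc (Windows)

Close darktable before editing it, since darktable rewrites darktablerc on exit…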

My experience was the same, back when I tried tuning OpenCL. pinned transfer was definitely bad for NVidia, the clroundup had some effect, I think. Micro-nap just made my UI a bit laggy.

I had this old thread. Some of the params were not present, and they were configured differently, so take care:

I don’t see the micronap setting in the darktablerc config file. Appdata\local\darktable\

That’s because all of those parameters are now on a single line, as described in the manual I linked above:

Since darktable 4.0 most of the OpenCL-related options are managed with a “per device” strategy. The configuration parameter for each device looks like:

cldevice_v4_quadrortx4000=0 250 0 16 16 1024 0 0 0.017853 20.000

or, more generally

cldevice_version_canonicalname=a b c d e f g h i j

Look for a line starting with cldevice. micro_nap is the 2nd parameter (b) in the list. Do not expect miracles from it.
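
A quick way to find that line is to search the file from a terminal, e.g. (Linux first, then the Windows equivalent):

grep '^cldevice' ~/.config/darktable/darktablerc
findstr cldevice %LOCALAPPDATA%\darktable\darktablerc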

It’s worth a try. What would I change here?
cldevice_v5_nvidiacudanvidiageforcertx2060=0 250 0 16 16 128 0 0 0.185014 0.000
cldevice_v5_nvidiacudanvidiageforcertx2060_building=-cl-fast-relaxed-math
cldevice_v5_nvidiacudanvidiageforcertx2060_id0=400

Did you read the manual we linked?

1 Like

So, the line with ‘gtx1050’ is for your old card. You should simply delete that one to reduce the confusion. The rest of those have ‘rtx2060’, so they are for your new card.

You are looking for a line that resembles what the documentation says, and what I have quoted above. I’m intentionally not telling you which one, so you put in the work and learn.

cldevice_v4_quadrortx4000=0 250 0 16 16 1024 0 0 0.017853 20.000

or, more generally

cldevice_version_canonicalname=a b c d e f g h i j
Look for a line starting with cldevice. micro_nap is the 2nd parameter (b) in the list.

That was for an older version of darktable (the documentation is a bit lagging behind), so the version won’t be _v4_, but _v5_. Only modify the 2nd parameter.
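
To make it concrete with the manual’s example (rather than with your own lines): setting micro_nap to 0 there would turn

cldevice_v4_quadrortx4000=0 250 0 16 16 1024 0 0 0.017853 20.000

into

cldevice_v4_quadrortx4000=0 0 0 16 16 1024 0 0 0.017853 20.000

with everything after the 2nd number left exactly as darktable wrote it.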

1 Like

Yeah, I needed to put in a bit of work here. I tried the micro-nap setting at “0”; the change, if any, is barely noticeable. I’ll just leave it at 250. Perhaps someone with a similar graphics card has found settings that improve performance.

Until then, things are so much better overall, so I have no complaints.

So it might not be something you just notice; in fact, exporting is often where you see the most improvement, since darktable already tries to optimize the display and previews, while exporting runs the full high-quality pipeline… I’m not sure how you are testing, but you can run a file and an xmp from the command line and it will show you which processing steps were used and how long each one took, so that is how you can gauge whether any changes are improvements… I can’t recall, but you might also have to restart darktable to see the impact of any changes…

You can see the process explained a bit here; you can just use one or two of your images and try any xmp with any set of modules… It can be good to include a few taxing modules to see whether there are improvements…
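
For example, something along these lines (file names are just placeholders) prints the per-module timings to the terminal:

darktable-cli test.raf test.raf.xmp test-export.jpg --core -d perf -d opencl

Run it once with your current micro_nap value and once with the edited one and compare the reported pipeline times…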

1 Like