Recommended hardware for darktable version 4.4.2

There is another similar thread from last week. 2 GB of VRAM is likely your main issue; 4 to 6 GB would be the main benefit, since the whole raw file could then be loaded into GPU memory instead of being processed in tiles.


Thanks, @g-man

In case this would help: [screenshot attached]
That doesn’t help much. Output from darktable-cltest might.

There are benchmarking files in the source code that can be used to generate output; I will look up the link. You can run them from the command line, as sketched below. One thing to be careful about: it's not always as easy as just buying a new GPU. If the rest of your hardware is older, the benefit can be smaller, or it might not even be compatible with the new card, so do some research to get the best bang for your buck that matches the configuration and potential of the rest of your hardware. The 2 GB card will for sure be a bottleneck.
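As a quick starting point from a terminal: darktable-cltest reports whether darktable can use your GPU for OpenCL, and darktable-cli can time a single export. A minimal sketch, with placeholder file names (use your own raw and output paths):

```
# Check whether darktable detects a usable OpenCL device
darktable-cltest

# Export one raw file with performance and OpenCL debug output
darktable-cli input.raf output.jpg --core -d perf -d opencl
```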


I agree! I’ve been thinking about an upgrade for a while now. I will check compatibility with my current system hardware.

Usually, Nvidia seems to be the best supported platform.

I have a low-end GTX 1060 with 6 GB, and it works very well. But now, with GPU prices back to normal, more capable cards should also be affordable.
See the recent thread.


Switching to Linux for darktable made a huge performance difference for me. I should add that I came from macOS, which probably has fewer developers allocated to it than the Windows version.

In my experience, with a decent GPU, a CPU upgrade makes hardly any difference. When I went from a 2 GB Nvidia GTX 770 to an 8 GB Nvidia GTX 1080 the difference was massive, but when I subsequently upgraded from a 4th-gen i5 to an 11th-gen i5 I couldn't really notice much impact (though I had plenty of improvements elsewhere to make the upgrade worthwhile).

Yes, with an already capable GPU, CPU upgrades may not show much of an improvement (unless, of course, you upgrade to something like a Threadripper, with which sometimes even a GPU may not be needed).

One hardware upgrade that does make a noticeable difference is memory with a higher transfer rate, i.e. more GT/s ("frequency"). I have noticed a difference even when overclocking existing DIMMs.
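If you are on Linux and want to check what speed your DIMMs are actually running at before and after such a change, dmidecode can report it. A quick sketch (requires root; the exact field names vary by vendor):

```
# List installed memory modules with their size and configured speed (MT/s)
sudo dmidecode -t memory | grep -i -E 'size|speed'
```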

Hi stuntflyer

Try the following in the preferences, configuration section CPU / GPU / memory:
change darktable resources from default to large;
activate prefer performance over quality;
change tune OpenCL performance to memory size and transfer.
And see if you get any better results.
Note: try to have only darktable running when editing and (especially) when exporting your final files.
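For reference, these settings live in the darktablerc file (close darktable before editing it by hand). A sketch of the corresponding entries, assuming the key names from my 4.4 install; verify against your own file:

```
# ~/.config/darktable/darktablerc
resourcelevel=large       # 'darktable resources'
ui/performance=TRUE       # 'prefer performance over quality'
# the 'tune OpenCL performance' choice is also stored in this file;
# its key name varies between versions, so search the file for 'opencl'
```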

Best regards,

This might actually be detrimental.


It seems like the jury is out on this one?

‘Transfer’ tuning, if I recall correctly, is about something called ‘pinned memory’, which was never required by Nvidia cards, but was required by some (early?) AMD/Radeon cards/drivers.

I would not activate prefer performance over quality, unless there is no other way.

@kofa

I ordered the RTX 2060 XC Ultra. I think it will take care of any issues I had before. I will try it without activating ‘prefer performance over quality’ or ‘transfer’ tuning.

Just got my RTX 2060 6 GB up and running with the latest drivers. Absolutely amazing performance compared to the 2 GB GTX 1050.

I did a lot of runs, and no tuning was the best for my GPU, a 3060 Ti. One big one was setting the micro-nap to zero; I think it defaults to 250. No crashes on my card, and the bench runs were much faster. I don't recall the other settings having too much impact, but using the tuning setting was always slower.

@priort Need a bit more information on the software used to change the micro-map if available.

Micro-nap, not map. The software is darktable. 🙂

https://darktable-org.github.io/dtdocs/en/special-topics/mem-performance/#device-specific-opencl-configuration

I'm not sure micro-nap is something to mess with. I used a large .raf file and exported with micro-nap at 250 and at 0, using the same XMP. My system is Fedora 38 KDE with an Nvidia 3060 12 GB.

micronap at 0
26.7139 [dev_process_export] pixel pipeline processing took 2.991 secs (8.968 CPU)

micronap at 250
13.5380 [dev_process_export] pixel pipeline processing took 3.007 secs (9.065 CPU)

0.016 seconds is not that big of an improvement. Test on your own system.
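If anyone wants to run the same kind of comparison, one way is to export from a terminal with performance debugging enabled and compare the reported pipeline times. A sketch, with placeholder file names:

```
# Export a raw file using its existing XMP, printing pixelpipe timings
darktable-cli large.raf large.raf.xmp /tmp/out.jpg --core -d perf
```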


It's not separate software but a manual tweak of the OpenCL settings. I would have to go back and read the manual, and also compare with what has been done recently, as some of that code was just changed; it might not affect things, though, and if you are running the latest release you should be okay.

This section of the manual explains the tweaking you can do by editing the darktablerc config file:

https://darktable-org.github.io/dtdocs/en/special-topics/mem-performance/#device-specific-opencl-configuration
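As a purely illustrative sketch (the canonical device name below is made up, and the field order and defaults depend on your darktable version, so check the page above before editing anything), setting the micro-nap for one device might look like this:

```
# one darktablerc line per OpenCL device; name and field order per the manual
# the second numeric field is micro_nap, changed here from the default 250 to 0
cldevice_v4_nvidiacudanvidiageforcertx3060ti=0 0 0 16 16 128 0 0 0.000
```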