Is there a methodology for getting the best performance out of your CPU/GPU combination? As per my understanding there is some intelligence built into darktable (DT) for the OpenCL settings, but I am still not clear which settings to go for.
Could someone please provide a guide/tutorial for these settings and explain what exactly is going on behind each of them?
compile options:
bit depth is 64 bit
normal build
SSE2 optimized codepath enabled
OpenMP support enabled
OpenCL support enabled
Lua support enabled, API version 8.0.0
Colord support disabled
gPhoto2 support enabled
GraphicsMagick support enabled
ImageMagick support disabled
OpenEXR support enabled
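For reference (an assumption on my side about where the block above comes from), a listing like this is printed on the command line, which is also a quick way to check the OpenCL environment on a given machine:

```
# print version and compile options
darktable --version

# probe the OpenCL setup and list the devices darktable can use
darktable-cltest
```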
that methodology on a mac is called “trial and error”
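Since trial and error is the method anyway, a concrete way to compare two configurations is to run darktable with its debug flags and compare the reported pixelpipe timings; the flags are standard darktable debug options, the workflow around them is just a suggestion:

```
# start darktable from a terminal with OpenCL and performance logging;
# open the same image and apply the same history stack for each
# configuration, then compare the per-module timings printed to the
# terminal (they also show whether a module ran on the CPU or a GPU)
darktable -d opencl -d perf
```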
if you’re using a fairly recent 3.9 version the defaults might be a good start, but if you have a mac with a dedicated GPU then finding the optimum device priorities via opencl_device_priority in darktablerc is the first step (darktable doesn’t prefer the fastest GPU by default); a sketch of such a line follows below
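As a rough sketch of what such a line can look like: the syntax, as documented in the manual, is one comma-separated device list per pixelpipe type, with the lists separated by “/”; “!” excludes a device and “*” stands for any device not named explicitly. The device numbers below are made up for illustration (the real ones show up in darktable’s startup output with -d opencl), and the number of fields can differ between darktable versions (newer builds add a field for the second darkroom preview). The # lines are annotations for this post only; darktablerc itself is a plain key=value file, so only the opencl_device_priority line belongs in it, edited while darktable is not running:

```
# fields: full darkroom image / preview / export / thumbnails
# here device 1 is assumed to be the fast dedicated GPU:
# prefer it for the full image and for export, and keep the
# small preview pipe off it so both pipes can run in parallel
opencl_device_priority=1,*/!1,*/1,*/*
```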
then you might try to change the “tune OpenCL performance” settings
I read the manual from the above link but was not able to understand certain parts, so I asked the question and raised this topic. I normally post here when I don’t understand something after reading the available material.
I’ll test the 4.1 version once it is available here.