darktable 3.8.0 on macOS reports low GPU memory for OpenCL

darktable -d opencl reports just 512 MB of graphics memory under macOS 11.6, although the GPU has 2 GB (iMac 2014 with AMD Radeon R9 290X).
On the same machine, under Windows 10, it reports 1330 MB and is significantly faster.

Does anybody have a hint? Thanks!

I believe there is a suggestion in the manual…go figure…


I have played with those parameters in darktablerc, unfortunately with no change. Also, I don’t know if/how to set environment variables like GPU_MAX_ALLOC_PERCENT on macOS; it sounds very Windows-like, and anyway on Windows everything seems fine. I am a bit clueless.
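For the record, environment variables on macOS work the same way as on Linux: export them in a Terminal session, then launch darktable from that same shell so the process inherits them. A minimal sketch (the app path is the usual install location; whether Apple’s OpenCL driver honors GPU_MAX_ALLOC_PERCENT is unverified):

```shell
# Set the variable for the current shell session ...
export GPU_MAX_ALLOC_PERCENT=100

# ... then launch darktable from this same shell so it inherits the value:
# /Applications/darktable.app/Contents/MacOS/darktable -d opencl

# Verify the variable is set in this session:
echo "$GPU_MAX_ALLOC_PERCENT"
```

Note that variables exported in Terminal do not affect apps started from Finder or the Dock; the launch has to happen from the shell that did the export.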

Sorry, I don’t really know the nuts and bolts of the Apple stuff… I expect you have the latest driver? Maybe pore through all its settings to see if something is off on the Apple side?

Not sure 2 GB is enough for the system and OpenCL.

The iMac has 16 GB of memory. The 2 GB are dedicated memory on the GPU; that’s my problem, that OpenCL uses just 512 MB of it.

A short update about my findings so far: it seems that the OpenCL implementation (or GPU driver?) on the Mac takes a quite conservative interpretation of the specification and assigns just the minimum admissible value of 512 MB (one quarter of the graphics card’s 2 GB). The specification reads:

Max size of memory object allocation in bytes. The minimum value is max(1/4th of
CL_DEVICE_GLOBAL_MEM_SIZE, 128 * 1024 * 1024) for devices that are not of type
CL_DEVICE_TYPE_CUSTOM.

I don’t know if this is the reason for the significantly worse performance compared to the Windows 10 partition on the same Mac, or if Apple’s OpenCL implementation is just not well optimized.

BTW, darktable’s OpenCL compiler options on the Mac show -DUNKNOWN for the vendor ID instead of -DAMD; not sure whether this has an effect. On Windows it’s correctly -DAMD.

The numerical vendor ID is reported not as 4098 (AMD) but as 16915456 (Apple?), which is unknown to darktable. Patching it to 4098 to force -DAMD does not improve the performance, though.