I’m considering buying a mini PC – without a dedicated GPU – to run Linux, and Darktable is among the applications I expect to use. I’m wondering whether Darktable would benefit from choosing an AMD APU and using Rusticl as the OpenCL driver for the iGPU, but the trouble is I have no idea whether this will work.
Breaking my question down: does Darktable work with Rusticl – that is, do all modules produce the same results as if I’d used the CPU?
Does Rusticl support AMD iGPUs? I think the answer is yes, but it would be great if someone could confirm this.
I haven’t found any benchmarks for Darktable performance with Rusticl, so I wonder how it performs compared to other OpenCL drivers – and, when used on AMD iGPUs, how it compares to just using the CPU instead.
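For anyone who wants to check on hardware they already have access to: my understanding (please correct me if wrong) is that Mesa ships Rusticl disabled by default and gates it behind the RUSTICL_ENABLE environment variable, with radeonsi being the AMD driver. A rough test would then look like this:

```bash
# Mesa ships Rusticl disabled by default; opt in per driver.
# radeonsi covers AMD GPUs, integrated or discrete.
export RUSTICL_ENABLE=radeonsi

# List OpenCL platforms/devices; the iGPU should appear under a "rusticl" platform.
clinfo

# Ask Darktable itself whether it can use the device.
darktable-cltest
```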
I don’t think you should expect a huge speedup from using OpenCL on an iGPU in general. The Rusticl driver is still alpha, so I also wouldn’t expect it to work 100%.
I have an AMD 7040 CPU in my Framework, and without OpenCL the editing speed is very workable.
Many thanks. I didn’t expect a ‘huge speedup’ from an iGPU. Rusticl still being in alpha simplifies matters – I can just focus on price/CPU performance ratio without the wildcard of OpenCL on a more powerful iGPU.
I am not a computer expert or DT expert, so please take my answer with a large pinch of salt. If I were buying a computer to run DT I would want a good dedicated graphics card. Money and other factors aside, I would look for a gaming-capable computer in the hope of getting the best performance. I base this on comparing an expensive work laptop with no dedicated graphics card against a better-value private laptop with a dedicated graphics card.
Due to the size and cost involved I find it hard to justify a dedicated GPU, but I understand what you’re saying and thank you for your input. If I were buying the computer specifically for Darktable, a dedicated GPU would be a compelling option.
Very interesting! Thanks for bringing this post to my attention.
If I’m reading those benchmarks right, rendering times at screen resolution – i.e. how responsive the editing experience is – seem to be less affected by the GPU than I’d supposed. Does this change when using more demanding modules?
Does anyone know which factors (such as memory bandwidth or floating point math) are most relevant to Darktable’s CPU performance?
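In the meantime I may just measure this on the machines I can borrow. As far as I can tell, darktable-cli plus the -d perf debug flag lets you compare CPU and GPU export timings on the same raw – something along these lines (file names here are placeholders for one of your own raws):

```bash
# Export once with OpenCL enabled; -d perf prints per-module timings.
darktable-cli input.raw gpu.jpg --core -d perf

# And once on the CPU only, for comparison.
darktable-cli input.raw cpu.jpg --core --disable-opencl -d perf
```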
As far as I can tell, there seems to be a latency overhead to (non-integrated) GPU processing. It seems to take some fixed time to engage the GPU, but once it is engaged, it runs faster. If the image is large enough, the overhead is amortised quickly; but the smaller the image, the more visible the overhead.
To my understanding, this overhead is caused by CPU-GPU memory transfers. It takes some time to push the image from main memory to GPU memory and back. For large images (export), the computation is dominated by image processing. But for small images (interactive), the processing is quick, and the overhead becomes more noticeable.
This does not affect integrated GPUs, as they share main memory with the CPU, so images don’t need to be transferred. Modern memory (DDR5) also helps a lot, as it is simply much faster (100–500 GB/s) than older memory (DDR4 was ~40 GB/s).
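To put rough numbers on that (back-of-envelope only, assuming Darktable’s usual internal format of four 32-bit floats per pixel and a PCIe 3.0 x16 link at roughly 16 GB/s): a 24 MP image is about 24e6 × 16 B ≈ 384 MB, so a one-way copy to the card costs on the order of 25 ms. That’s negligible against a multi-second export, but it’s comparable to rendering a ~2 MP screen-resolution preview, which is why the overhead mostly shows up in interactive editing.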