Thinking of getting a new desktop able to run OpenCL… does OpenCL improve the speed of setting parametric masks?
My current home desktop NUC running Linux is now about five to six years old and works well enough for pretty much everything I do day-to-day other than darktable (especially parametric masks). General rendering isn’t so bad, nor are most of the modules (5-10 sec). But masks are definitely a “go get coffee” activity at times (often 40 sec+).
Online, OpenCL seems to help, with many benchmarks showing improvements for complete image rendering compared to CPU alone. But I don’t see timings for individual actions like parametric masks. Likewise, it seems that in many cases OpenCL is actually slower than the CPU.
One of the main recommendations online is getting a suitable OpenCL card with enough memory to fully store the processed raw… how do I work out the required amount? Or do I just assume that anything over a few GB is enough?
As I don’t have a chance to “try before you buy”, I’m looking for feedback before I commit. Any comments from experience are appreciated.
I am unsure about parametric masks, which have never really slowed my machines, but the diffuse or sharpen module really needs OpenCL for decent performance. Others may be able to add more weight to this.
I don’t think OpenCL will help masking directly, but OpenCL will take a lot of load off your CPU, and everything feels faster with it. If you’re getting a new system, I’d recommend at least 8 GB of RAM on the GPU. I have a 4 GB card, and with 24 Mpx files it sometimes still falls back to the CPU, and you can really tell.
Doesn’t every change to a parametric mask trigger a reprocessing of the complete pipeline? I would guess OpenCL should therefore have a positive impact on working with masks.
Changing a parametric mask changes something in the image produced by the module the mask is applied to. So the input to all subsequent modules in the pipe changes, and they need to be reprocessed.
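To illustrate the point, here is a toy sketch (not darktable’s actual pixelpipe code, and the module names are just placeholders): once a module’s mask or parameters change, everything downstream of it has to be recomputed, so the GPU gets used even though the mask itself isn’t the expensive part.

```python
# Toy illustration: a mask edit dirties one module, and everything
# after it in the pipe must be re-run on the new intermediate result.

class ToyPipe:
    def __init__(self, modules):
        self.modules = modules   # ordered list of (name, function) pairs
        self.cache = {}          # name -> cached output for that module

    def process(self, image, changed=None):
        """Re-run the pipe; only modules at or after `changed` are recomputed."""
        dirty = changed is None  # first run: everything is dirty
        buf = image
        for name, fn in self.modules:
            if name == changed:
                dirty = True     # this module's mask/params were edited
            if dirty or name not in self.cache:
                self.cache[name] = fn(buf)   # the expensive work happens here
            buf = self.cache[name]
        return buf

# Editing the mask on "tone equalizer" re-runs it and every module after it,
# even though "exposure" is untouched and comes from the cache.
pipe = ToyPipe([
    ("exposure",       lambda b: b + 1),
    ("tone equalizer", lambda b: b * 2),
    ("color balance",  lambda b: b - 3),
])
pipe.process(10)                             # first full run
pipe.process(10, changed="tone equalizer")   # mask edit: 2 of 3 modules re-run
```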
What I can tell you is that until 2021 I used a machine with a Core 2 Duo CPU, 4 GB RAM, and an Nvidia 1060 card with 6 GB of memory that I still have today. The GPU made a huge difference, keeping darktable usable on that ancient computer. Even today, with the Ryzen 5 5600X CPU, the modest GPU often brings a 5x speed-up compared to the CPU.
So it seems that even though OpenCL may not directly improve parametric mask setting speed, it will still have an influence, because a parametric mask relies on the processing of the preceding modules in the stack. Anecdotal evidence suggests that alone has a significant impact on usefulness.
From anecdotal evidence, for 24 Mpx raw images any video card with more than 6 GB of RAM seems to be OK.
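As a rough sanity check on that number, here is a back-of-envelope estimate. As far as I understand, darktable works on 4-channel 32-bit float buffers internally; the “buffers” headroom factor below is just my guess at how many full-size copies a module might hold at once, not darktable’s real allocation strategy:

```python
# Back-of-envelope VRAM estimate (assumption: 4-channel float32 buffers,
# with a guessed headroom factor for input/output/scratch copies).

def gpu_mem_estimate_gb(megapixels, channels=4, bytes_per_channel=4, buffers=4):
    """Rough GPU memory needed: one full-size buffer times a headroom factor."""
    one_buffer_bytes = megapixels * 1e6 * channels * bytes_per_channel
    return one_buffer_bytes * buffers / 1024**3

print(f"{gpu_mem_estimate_gb(24):.1f} GB")  # roughly 1.4 GB for a 24 Mpx image
```

That lands well under 6 GB, which is consistent with the anecdotes above; the extra headroom presumably covers the desktop itself using some VRAM, larger sensors, and modules that need more scratch space. As far as I know darktable can also fall back to tiling when GPU memory runs short, but that costs speed.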
Again - thanks for everyone’s input - much appreciated.
Not all cards are created equal, and beyond that, something often overlooked is OS support and the driver… a bad driver can really mess things up… so you can’t just go by the amount of RAM on the card… also, many cards have the same amount of RAM but much faster memory, so there are lots of elements to consider when selecting a card…
That’s likely fine if you are running Windows, but for Linux I can’t say… drivers for video cards and OpenCL seem a bit less straightforward, but I also don’t have much experience dealing with drivers in Linux, so take it with a grain of salt…
And there are different types, so that’s all part of the “lots of elements”. I am not sure to what extent CUDA cores vs tensor cores vs RT cores contribute to processing in DT.
Warning acknowledged - I’ll have to do some googling and try to make a decision.
At home I only run Linux - it does seem to be a corner case to use Linux & OpenCL for darktable/Blender/DaVinci Resolve etc… I’ve found some step-by-step instructions for OpenCL on Linux for Blender or DaVinci Resolve, so I’m happy that with an OpenCL card that is now one to two years old there is probably a reasonable chance of getting it working - but I appreciate that it probably won’t be plug-and-play.
It would appear the only way is to test it and see… unfortunately.