Does OpenCL improve the speed of setting parametric masks?

Thinking of getting a new desktop able to run OpenCL… does OpenCL improve the speed of setting parametric masks?

My current home desktop, a NUC running Linux, is now about five to six years old and works well enough for pretty much everything I do day-to-day other than darktable (especially parametric masks). General rendering isn’t so bad, nor are most of the modules (5-10 sec). But masks are definitely a “go get coffee” activity at times (often 40 sec+).

Online, OpenCL seems to help: many benchmarks show improvements for complete image rendering compared to CPU-only. But I don’t see times for individual actions like parametric masks. Conversely, it seems in some cases OpenCL is actually slower than the CPU.

One of the main recommendations online is getting a suitable OpenCL card with enough memory to fully store the processed raw… how do I work out the required amount? Or do I just assume that anything over a few GB is enough?

As I don’t have a chance to “try-before-you-buy” - looking for feedback before I commit. Any comments on experience appreciated.

I am unsure about parametric masks, which have never really slowed my machines, but the diffuse or sharpen module really needs OpenCL for decent performance. Others may be able to add more weight to this.


I don’t think OpenCL will help masking directly, but OpenCL will take a lot of load off your CPU and everything feels faster with it. If you’re getting a new system, I’d recommend at least 8 GB of RAM on the GPU. I have a 4 GB card, and with 24 Mpx files it sometimes still falls back to the CPU, and you can really tell.


Doesn’t every change to a parametric mask trigger a reprocessing of the complete pipeline? I would guess OpenCL should therefore have a noticeable positive impact on working with masks.


Changing a parametric mask changes the output of the module the mask is applied to. So the input for all subsequent modules in the pipe changes, and they need to be reprocessed.


Certainly not the complete pipeline, though. It’ll recompute only the modules after the current module in the processing pipeline.


Thanks for correcting. I knew how it behaves, but it’s too early here for me to use the correct English terms. :fearful:

What I can tell you is that until 2021 I used a machine with Core2 Duo CPU, 4 GB RAM – and an NVidia 1060 card with 6 GB memory that I still have today. The GPU meant a huge difference, keeping darktable usable on that ancient computer. Even today, with the Ryzen 5 5600X CPU, the modest GPU often brings a 5x speed-up when compared to the CPU.


Thanks, everyone, for your input.

So it seems that even though OpenCL may not directly speed up setting a parametric mask, it will still have an influence, because changing a mask forces the module stack downstream of it to be reprocessed. Anecdotal evidence suggests that alone has a significant impact on usability.

From anecdotal evidence, for 24 Mpx raw images any video card with more than 6 GB of RAM seems to be OK.
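For a rough sanity check of that 6 GB figure: darktable’s pixelpipe works on 4-channel 32-bit float pixels, so one full-size buffer for a 24 Mpx image is about 384 MB. A module needs at least input and output buffers plus scratch space; the head-room factor of 4 below is my own rough assumption, not a darktable-documented constant.

```python
# Back-of-envelope VRAM estimate for darktable's OpenCL pipeline.
# Assumption: 4 channels x 4 bytes (float32) per pixel, which matches
# darktable's pixelpipe; the 4x head-room factor is a guess to cover
# input/output/scratch buffers, not an official number.

def estimate_vram_mb(megapixels: float, headroom_factor: float = 4.0) -> float:
    bytes_per_pixel = 4 * 4                       # 4 channels x float32
    one_buffer = megapixels * 1e6 * bytes_per_pixel
    return one_buffer * headroom_factor / (1024 ** 2)

print(round(estimate_vram_mb(24)))  # 24 Mpx -> roughly 1465 MB
```

Even with generous head-room that lands well under 6 GB, which is consistent with the anecdotes above; bigger sensors, HiDPI previews and memory kept by the driver itself eat into the margin, so more VRAM still buys comfort.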

Again - thanks for everyone’s input - much appreciated.

Not all cards are created equal, and beyond that, OS support and the driver are often overlooked… a bad driver can really mess things up. So you can’t just go by the amount of RAM on the card; many cards have the same amount of RAM but much faster memory. There are lots of elements to consider when selecting a card…


Warning acknowledged - so the question then becomes: how do I evaluate these elements prior to purchase and testing?

Would something with the following be OK?:

  • AMD Ryzen 7 5700G (8C/16T) @ 3.8-4.6 GHz / 20 MB cache
  • AMD Radeon RX 6700 XT 12 GB @ 2620 MHz boost

Nvidia is usually better supported, at least you find more problems reported on this forum from AMD Radeon users. But I could be wrong.


Likely fine if you are running Windows, but for Linux I can’t say… drivers for video cards and OpenCL seem a bit less straightforward there. I also don’t have much experience dealing with drivers on Linux, so take it with a grain of salt…

The number of processors on the graphics card is very important. For example, mine has 1,920 cores, and some have many more than that.

And there are different types of cores, so that’s part of the “lots of elements” too. I am not sure to what extent CUDA cores vs tensor cores vs RT cores contribute to processing in DT.

Warning acknowledged - I’ll have to do some googling and try to make a decision.

At home I only run Linux - it does seem to be a corner case to use Linux and OpenCL for darktable/Blender/DaVinci Resolve etc… I’ve found some step-by-step instructions for OpenCL on Linux for Blender and DaVinci Resolve, so I’m fairly confident that with an OpenCL card that is now one to two years old there is a reasonable chance of getting it working - but I appreciate that it probably won’t be plug-n-play.
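For what it’s worth, once the driver and OpenCL runtime are installed there are a couple of standard commands to check whether darktable can actually use the card. `clinfo` is a separate package in most distros; `darktable-cltest` ships with darktable itself. The grep pattern is just illustrative, exact `clinfo` output varies by driver.

```shell
# Does the OpenCL runtime see the GPU at all?
clinfo | grep -i 'device name'

# darktable's own OpenCL self-test: compiles the kernels and reports
# whether OpenCL will be enabled.
darktable-cltest

# Run darktable with verbose OpenCL logging for a real editing session.
darktable -d opencl
```

If `darktable-cltest` reports that OpenCL is unavailable even though `clinfo` lists the device, it usually points at a driver/runtime mismatch rather than the card itself.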

It would appear the only way is to test-it-&-see… unfortunately.