Thinking of getting a new desktop able to run OpenCL… does OpenCL improve the speed of setting parametric masks?
My current home desktop, a NUC running Linux, is now about five to six years old and works well enough for pretty much everything I do day-to-day other than darktable (especially parametric masks). General rendering isn’t so bad, nor are most of the modules (5-10 sec). But masks are definitely a “go get coffee” activity at times (often 40 sec+).
Online, OpenCL seems to help, with many benchmarks showing improvements for complete image rendering compared to CPU-only. But I don’t see timings for individual actions like parametric masks. Conversely, it seems in some cases OpenCL is actually slower than the CPU.
One of the main recommendations online is getting a suitable OpenCL card with enough memory to fully store the processed raw… how do I work out the required amount? Or do I just assume that anything over a few GB is enough?
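For a rough back-of-the-envelope answer: darktable processes images internally as 4-channel 32-bit float, and a module typically needs at least an input and an output buffer (plus scratch space). A minimal sketch of that arithmetic, assuming a 24 Mpx sensor (6000 × 4000):

```python
# Rough VRAM estimate for full-image buffers in darktable's pipeline.
# Assumption: 4 channels x 4 bytes (32-bit float) per pixel, which is
# darktable's internal working format.
def buffer_mb(width_px, height_px, channels=4, bytes_per_sample=4):
    """Size in MiB of one full-image working buffer."""
    return width_px * height_px * channels * bytes_per_sample / 1024**2

single = buffer_mb(6000, 4000)          # one buffer for a 24 Mpx image
print(f"one buffer: {single:.0f} MiB")  # ~366 MiB
print(f"headroom for ~4 buffers: {4 * single:.0f} MiB")
```

So a single working buffer for a 24 Mpx image is only a few hundred MiB, but with multiple buffers in flight (and the desktop compositor also using VRAM), a card with several GB of headroom is what keeps darktable from falling back to the CPU.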
As I don’t have a chance to “try before you buy”, I’m looking for feedback before I commit. Any comments from experience appreciated.
I am unsure about parametric masks, which have never really slowed my machines, but the diffuse or sharpen module really needs OpenCL for decent performance. Others may be able to add more weight to this.
I don’t think OpenCL will help masking directly, but OpenCL will take a lot of load off your CPU, and everything feels faster with it. If you’re getting a new system, I’d recommend at least 8 GB of RAM on the GPU. I have a 4 GB card, and with 24 Mpx files it sometimes still falls back to the CPU, and you can really tell.
What I can tell you is that until 2021 I used a machine with a Core2 Duo CPU, 4 GB RAM - and an Nvidia GTX 1060 card with 6 GB memory that I still have today. The GPU made a huge difference, keeping darktable usable on that ancient computer. Even today, paired with a Ryzen 5 5600X CPU, the modest GPU often brings a 5× speed-up compared to the CPU.
So it seems that even though OpenCL may not directly improve parametric mask setting speed, it will still have an influence, because a parametric mask relies on the processing of the preceding modules in the stack. Anecdotal evidence suggests that alone has a significant impact on usefulness.
From anecdotal evidence, for 24 Mpx raw images any video card with more than 6 GB of RAM seems to be OK.
Again - thanks for everyone’s input - much appreciated.
Not all cards are created equal, and beyond that, what’s often overlooked is OS support and the driver… a bad driver can really mess things up… so you can’t just go by the amount of RAM on the card… also many cards have the same amount of RAM but much faster memory, so there are lots of elements to consider in card selection…
Likely fine if you are running Windows, but for Linux I can’t say… drivers for video cards and OpenCL seem a bit less straightforward, but I also don’t have much experience dealing with drivers in Linux, so take it with a grain of salt…
Warning acknowledged - I’ll have to do some googling and try to make a decision.
At home I only run Linux - it does seem to be a corner case to use Linux & OpenCL for darktable/Blender/DaVinci Resolve etc… I’ve found some step-by-step instructions for OpenCL on Linux for Blender or DaVinci Resolve, so I’m happy that with an OpenCL card that is now one to two years old there is probably a reasonable chance of getting it working - but I appreciate that it probably won’t be plug-and-play.
It would appear the only way is to test-it-&-see… unfortunately.
Well - after test-it-&-see I can now categorically say “YES”: having a working OpenCL-capable card does significantly improve the performance of creating and modifying parametric masks!
To complete the discussion - I ended up with the following:
AMD Ryzen 7 5700G Eight Core CPU with Radeon™ Graphics (3.8GHz-4.6GHz/20MB CACHE/AM4)
GIGABYTE B550I AORUS PRO AX: DDR4, USB 3.2 - ARGB Ready
64GB PCS PRO DDR4 3200MHz (2 x 32GB)
12GB NVIDIA GEFORCE RTX 3060 - HDMI, DP, LHR
1st M.2 SSD Drive
2TB SOLIDIGM P41+ GEN 4 M.2 NVMe PCIe SSD (up to 4125MB/sR, 3325MB/sW)
2nd M.2 SSD Drive
2TB SOLIDIGM P41+ GEN 4 M.2 NVMe PCIe SSD (up to 4125MB/sR, 3325MB/sW)
Did a preliminary Linux + darktable install and had OpenCL working (almost) out of the box. It seems to work well enough for standard editing with default values. Though I have found that if I copy *.xmp history to a hundred or so images in the lighttable, DT will complain about “inconsistent data” and error with something like “disabling opencl for this session”. If encountered again, I’ll need to dig further.
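For anyone hitting the same “disabling opencl for this session” message: once darktable disables OpenCL it stays off until re-enabled in preferences or in `darktablerc`. The key names below are from the darktable versions I’ve looked at and may vary between releases, so treat this as a sketch, not gospel:

```ini
; ~/.config/darktable/darktablerc  (edit only while darktable is NOT running)
; Re-enable OpenCL after darktable has switched it off for a session:
opencl=TRUE
; Scheduling profile; values such as "default" or "very fast GPU"
; exist in some versions (check your release's preferences dialog):
opencl_scheduling_profile=default
```

Starting darktable from a terminal with `darktable -d opencl` prints per-device and per-module OpenCL diagnostics, which usually reveal why a device was rejected or why the fallback happened.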
I realize that this is not a new revelation to many… but after working with RT for a while and DT this year, this is the first time I’ve been able to actually see the change in the image in sync with the sliders in real time! I had been editing by numbers previously, as the sliders were in many cases too painful. Wow - this changes everything!
Looks like a decent bump. I wish I could use my “mini-GPU” for what little it’s worth. I’ve got an AMD Radeon adapter on my Ryzen 7 5700U 16 GB Windows 11 laptop. It has a cheesy little integrated GPU with 0.5 GB dedicated RAM (plus system RAM).
I primarily use ART, so it’s not a factor there. However, Affinity Photo supports OpenCL, so I’d like to be able to use it for whatever minor bump it would provide, but I’ve seen almost identical fatal crashes in both darktable and AP with OpenCL enabled. Both were as I made back-and-forth adjustments, e.g., pushing a slider repeatedly back and forth while looking at various parts of the image to watch the effect. The screen went blank and then both displays were covered in a small herringbone / checkerboard-type pattern with zero response to any input. The only way out was a button push.
Disabled OpenCL and it hasn’t done it again.
I’m not at all knowledgeable in terms of GPU drivers, etc., but I ran a utility from AMD which upgraded the chipset drivers, among others. After that I experienced frequent USB disconnections with my dock. Removed the AMD drivers and that stopped.
Now that I’ve canned that goofy WavLink dock, I might try again I guess. It can be confusing trying to extract “photo-useful” tidbits from all the gaming information, since GPUs are so game-centric.
Acer laptop. Yeah, I know… but to be fair I’m not a gamer so it’s more than plenty for everything except image processing and it’s actually been fine otherwise.
I spent two+ decades in IT but desktop (particularly gaming) hardware was never an area of interest for me.
I’m not a gamer (except for a brief moment with Doom while at Uni). I have a small 3rd bedroom - which when I moved into the house I repurposed as a computer room/study/hobby room. It is small and with a reasonable desk, chair, bookcase - there really isn’t a lot more room. So I’ve been happy with a small Intel i7 (4 Core) NUC running Linux stuck under the desk for the last six years. Zero extra space used - everything compact.
It does everything well enough. It’s old enough to be fully supported in Linux. Fast enough for financial analysis with R, and more than enough for spreadsheets and documents, casual web browsing & watching videos on YouTube/Vimeo. It was also fast enough for RT with my old camera. I was adamant that I was not going to spend money until the old computer died completely.
I upgraded the camera last Christmas, and that encouraged me to update my image-processing capability. The desktop was good enough for general editing - but the thing that pushed me over the edge was DT’s parametric masks. Waiting for a minute to see if the settings had selected the targeted part of the image, only to have to try again, was painful.
Exactly - I’ve been around telecoms (my mother would tell her friends “it’s something to do with computers”) & IT since school. I’ve built my own computers and servers etc… but the effort required to figure out whether things were supported (and by which toolkit), compatible, available in stock, not end-of-life etc… was more than the effort and time that I had available. I’d rather be doing other things. So I found and spoke to a custom PC builder company - and went from there. I could have got slightly better quality/features by sourcing parts myself and putting them together - but this way I got something that they put their reputation on, that should work.
The thinking behind an AMD APU with integrated graphics as well as Nvidia was that AMD graphics are better supported under Linux for displaying stuff, whereas Nvidia has much better support for compute (CUDA/OpenCL) for calculating stuff. Time will tell if it was the right decision.