Trying to wrap my head around GPU considerations

So I am finally going to break down and buy a GPU for my desktop. Budget is very tight ($300 or less), so my options are pretty limited. I am considering the Arc B-series, RTX 5050/5060, and RX 9060 XT. The Nvidia and AMD cards have 8 GB of VRAM; the Intel cards have 10 GB or 12 GB.

This is for darktable image processing; I don't game and don't use LLMs, but I may add AI denoise once the open-source options mature a little.

Phoronix sometimes includes darktable in their comparison articles, but it is hit or miss. I have looked into general OpenCL benchmarks but, tbh, I am not sure which are the most relevant to darktable's modules. I am assuming FP16 or FP32 throughput matters most, but that is mere speculation on my part…

I have read here, and elsewhere, that AMD and Intel had driver issues in some instances when they came out. Do I still need to worry about that 1+ year later?

As a quick test I enabled OpenCL and configured it to use 8 GB of system memory. I ran a few tests and did not see any tiling notifications in the logs (they do show up unless I force the 8 GB allocation). So, 8 GB of GPU memory should be OK…? Honestly, I have no idea how to gauge requirements here. I know the general idea is more is better, but that only holds if all else is equal… and I don't know if all else is equal with these cards performance-wise. Some of the benchmarks I see show big differences!

Am I overthinking this? Is real-world performance likely similar enough that it won't matter which card I get, barring issues with getting OpenCL recognized?

In case it matters: I'm using darktable 5.4 from my distro, or the build service now, so no Flatpak concerns. I am running Debian 13 (stable) with the 6.16 kernel and the Xfce desktop.
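For what it's worth, a quick way to check whether a card's OpenCL stack is recognized is darktable's bundled test tool (a sketch; exact output varies by darktable version and driver):

```shell
# Probe the OpenCL runtime the same way darktable does at startup;
# prints the detected platforms/devices and whether they are usable.
darktable-cltest

# Or start darktable itself with OpenCL diagnostics printed to the terminal.
darktable -d opencl
```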

1 Like

Yeah, at the same price it's usually all within ±20% performance depending on the workload. If future AI denoise is important I'd say go Nvidia, even if things might change and become less CUDA-centric.

If that is not a hard requirement, I would go AMD or Intel instead merely due to having open drivers.

2 Likes

https://pcpartpicker.com/product/pD8bt6/msi-geforce-rtx-3060-ventus-2x-12g-geforce-rtx-3060-12gb-12-gb-video-card-rtx3060ventus2x12goc

If you can get another $30, get this.

1 Like

Any reason to choose a 3000-series card over any of the ones I mention? I assumed that the newer architectures would be more performant. Was that an error?

1 Like

I only considered the nVidia cards, since I assume you want it to mostly work with OpenCL. I don't love nVidia or their proprietary driver, but it's worked better for me, so that's what I go with. If AMD could get their shit together with GPUs, I'd happily use one, but that doesn't seem to be the case.

The 3060 I linked has 4 GB more VRAM than the 5050/60. If you're going to drive your monitors off this card as well, then more VRAM, IMHO, is better than a speed increase. The thing that'll slow your processing down the most is swapping between CPU/GPU and GPU tiling. 12 GB should be enough to drive your monitors, have your browser open, and still give dt enough VRAM not to have to tile or fall back to the CPU. I don't remember what camera you have, but I am processing files from my GFX 100S II on the 3060 and it's plenty fast for me.

In short, I’ll take a slower card with more VRAM over a faster card with less VRAM.

4 Likes

I had not thought about the display overhead at all. I am just using a Sony A6500 (24 MP), so my files are much smaller than yours, but an 8 GB card might cause issues down the line if I win the lottery and upgrade :laughing:

Thanks for the pointer!!

My experience with AMD on Linux (Fedora) was very positive, including OpenCL and dt. Can't say the same for Nvidia. First, you need a separate repo (RPM Fusion). Second, RPM Fusion is sometimes slow to update for new kernels, which causes driver compatibility issues. A couple of times these issues were so serious that I had to reinstall the whole system. Maybe it is all Fedora specific.

1 Like

I have been using different Nvidia cards on Fedora for >10 years; apart from one three-week phase with issues, Nvidia has always been rock stable and the reference. >95% of darktable OpenCL issues are on AMD and Intel.

5 Likes

Are there any problems with Intel cards you have seen over and over?

I will go nvidia if needed, but I think I have been secretly hoping the Intel B-cards were on par with the nvidia stuff.

I have the NVIDIA GeForce RTX 3070, which can be purchased for around $300 right now. It has 8 GB of VRAM as opposed to @paperdigits' 12 GB.

I run three monitors from the card: two UHD and one 1080p, all at 60 Hz. My machine also has 32 GB of physical RAM, although that doesn't seem to play a big factor when running dt.

When I installed the card in about 2021, I was using Wayland on Kubuntu and it was a hassle running the graphics card and three monitors. Between that and the fact that I could never get two of my Windows-only applications to run nicely with Wine (et al.), I went back to Windows, and I have no problems with the card and the three-monitor setup.

I typically have dt, at least one browser, LibreOffice, and possibly another app or two open at the same time.

Like Mica, I shoot with a GFX, but the 100s as opposed to the 100s II, using 14 bit files, so I imagine our file sizes are about the same.

In dt I always have all four display profiles set to sRGB.

I do not have speed tests for how fast dt completes tasks (is there a way to get that info, and has anyone compiled a table of some kind, a database even, of dt speeds with different configurations?). It runs quickly enough that I'm not annoyed, but I would love for it to be a little snappier.

I’m in darkroom right now with an image with 22 modules, including 4 instances of local contrast (3 with masks), 1 d&s, denoise (profiled), 1 contrast equalizer, etc.

My GFX files (usually 100-106 MB) take about 4-5 seconds for each new module to render, regardless of which module it is: turning off the more GPU-intensive modules really doesn't change the speed.

My ORF files (about 14-16 MB) take approximately 1-2 seconds to perform the same tasks.

Hopefully some of that information helps.

1 Like

With 8 GB you will be fine with 40 MP images; tiling is no issue.

2 Likes

IIRC, if you start darktable with the -d opencl and -d perf options from a terminal you can see completion times. Add the -d tiling option to see if things are tiling.

I am going from memory regarding the actual commands. When I had to build a new machine in 2022, I used them to find out if OpenCL was useful on my iGPU. It wasn't… my CPU did things faster lol
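For reference, the invocation would look roughly like this (flag names as I remember them from darktable's debug options; the log format differs between versions):

```shell
# Print OpenCL setup, per-module timings, and tiling decisions
# to the terminal while darktable runs.
darktable -d opencl -d perf -d tiling
```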

Thanks for the detailed info about your experience. It was indeed helpful!

1 Like

I’ve been using an Arc B580 for about 6 months on Fedora (previously used a GTX 1070). It seems to work fine. I see no issues in my Darktable use. From memory all I had to do was dnf install intel-opencl.
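On Fedora that amounts to something like the following (package name from memory; verify it against your distro's repos):

```shell
# Install Intel's OpenCL runtime for Arc GPUs (package name as recalled above).
sudo dnf install intel-opencl

# Confirm the card is visible to OpenCL applications.
clinfo | grep -i 'device name'
```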

Speed-wise, for full res exports with profiled denoise and diffuse or sharpen, the GTX 1070 is about 3x faster than CPU (Ryzen 5800X) and the B580 is about 2x faster again. This sounds like a lot, but in interactive use the differences aren’t so great (at least with a 1440p screen, maybe different at 4k). If budget is a concern I think a used 1070 would still be a pretty good choice.

If you have any image + xmp you’d like me to benchmark, happy to do so.

If you go with Intel, make sure you can get at least version 33944 of the Intel OpenCL drivers; whatever version Fedora packaged prior to that had some performance problems that made diffuse or sharpen very slow.

2 Likes

I’ve been using AMD cards for several years now and haven’t had really any issues. Getting OpenCL working with darktable on CachyOS just required installing the mesa-opencl package (or opencl-mesa? I forget…).

AMD has a reputation for driver issues but I haven’t experienced that in a long time. Nvidia has had lots of driver issues in the past year. I know Intel card drivers weren’t great for gaming for the first year or so but AFAIK that’s improved. It’s hard to say long term who is the most rock solid. If I had to guess, Intel graphics will be unexciting but reliable as time goes on.

It’s also just really hard to say with the state of the industry right now. Nvidia clearly does not care about consumer graphics at the moment, and it’s hard to have faith that future support will remain. AMD is in a similar boat, but they are poised to be the leader in consumer graphics if they could stop tripping over themselves. Intel similarly could lead the way, but Intel lately has not been inspiring confidence.

2 Likes

What problems exactly should I expect with my AMD card? I've never had anything else, and the problems with darktable are long gone. Other stuff I don't care about too much, but all the newer cards seem to be officially supported by ROCm for inference. In my opinion AMD problems are a thing of the past.

1 Like

AMD has waffled several times on ROCm, their open driver, what cards are supported, etc. Admittedly I have not had a card since they were ATI, but that waffling is what caused me to buy nVidia after ATI was acquired by AMD. nVidia has their issues and I think they suck, but their drivers have been generally good, OpenCL support has been great, and I generally know what I'm getting.

1 Like

I understand where you are coming from. Luckily all the darktable problems were fixed by @hannoschwalm by clamping variables so the calculations don't return NaN. After that darktable worked just fine.

1 Like

AMD is also stuck on HDMI 2.0. So if you use a TV as a monitor and need HDMI 2.1 or higher for high refresh rates at 4K, you can forget about it. Intel works around this with a built-in DP → HDMI converter on their cards, and Nvidia by having most of the driver run as firmware inside the card. Nvidia cards now have RISC-V cores which run most of the driver code, which is why their open-sourcing of the drivers wasn't as impactful as it should have been. Of course this is not so much AMD's fault as the HDMI Forum's, since they forbid open-source drivers for the spec.

2 Likes

This is helpful, @benp

Currently I have an Nvidia GTX 1660 GPU, and have been curious as to what to replace it with.

I don't think there will be new driver support for the 10xx series anymore. A 30xx should be future-proof for a while longer.

4 Likes