My PC doesn’t have a discrete GPU at all at the moment (unless you count the built-in Intel HD graphics, which I don’t).
I need a card that can handle OpenGL 3.3 for running video-editing software (VideoProc), and I’m hoping to speed up dt at the same time.
How much GPU puff do I need to take advantage of OpenCL?
The video software I have in mind suggests that 2GB of VRAM and OpenGL 3.3 is comfortable for the sort of basic HD editing I plan to do. BUT is that enough to do darktable any good?
A further complication is that it needs to be a low-profile card to fit in my old SFF Optiplex 790, and I think the power consumption mustn’t be more than ~35W.
Looking at a GTX 1050 4GB… or even a GT 1030 2GB if I could get away with it… cheaper. I really don’t want to spend too much on it atm.
As @kofa said, less than 4GB would be slow due to tiling. To give you another data point: the GTX 960M with 4GB in my notebook is around 4 times faster than its i7-4720HQ (both were mid-to-high-end at the time of purchase). How cheap you can go and still get a meaningful speed-up will ultimately depend on the CPU and on the thermal limits before throttling kicks in.
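To put a rough number on why VRAM matters (this is my own back-of-envelope sketch, not darktable’s exact memory accounting): darktable works internally on 4-channel 32-bit float pixels, so a single full-resolution buffer costs megapixels × 4 channels × 4 bytes, and a module may need several such buffers at once.

```shell
# Back-of-envelope VRAM cost of one full-size working buffer in darktable.
# Assumes 4 channels * 4 bytes (32-bit float) per pixel; real usage is
# higher because modules hold several buffers at once.
mpix=16                                   # a 16 MPx image
echo "$(( mpix * 4 * 4 )) MB per full-size buffer"
```

So a 16 MPx image already wants a quarter-gigabyte per buffer, which is why 2GB cards end up tiling on anything large.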
I have an i5-2400 3.1 GHz quad-core… if that helps as a reference.
Thanks very much to you both, @paperdigits and @kofa, for the advice. I’ll do a bit more research before I click ‘add to cart’, but it sounds like something around the lower end would be fine as long as it’s got 4GB of RAM.
I use a second-hand GTX 1060 with 6 GB. With some of the 50 MPx photos posted here, it needs tiling for diffuse or sharpen. No problems with my own 16 MPx images.
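If you want to see whether your own card is tiling, darktable’s debug output will tell you. A minimal sketch; note the sample log line below is illustrative stand-in text so the snippet is self-contained, not verbatim darktable output:

```shell
# Start darktable with OpenCL/performance debugging and keep the log:
#   darktable -d opencl -d perf 2>&1 | tee dt.log
# then grep the log for tiling messages. Here we grep a sample line
# instead of a real log file.
line='[default_process_tiling_cl] use tiling on module diffuse'
if printf '%s\n' "$line" | grep -q 'use tiling'; then
  echo "tiling in use: a card with more VRAM would avoid this"
fi
```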
GPU prices are coming down fast at the moment. I bought a second-hand 6 GB 1060 and it was great (though I now have an 8 GB 1080 to improve performance on my 4K monitor). Generally, prefer Nvidia (less chance of issues, since most darktable devs have Nvidia cards); the more memory the better, and the more CUDA cores the better, though you will hit diminishing returns. I didn’t notice a vast difference between the 1060 and the 1080, but 1050 to 1060 will probably be more noticeable.
I think generally the ‘50’ series is more designed to be used for driving a display, rather than doing the sort of parallel computations that darktable (and gaming) relies on. The ‘60’ series is probably the lowest I would go. I’m mostly basing that on the advertised number of cores.
That’s slower than my CPU, so the 1050 with 4GB would probably be fine. But one thing I heard back when I was buying mine was that the ‘50’ card of one generation should be on par with the ‘60’–‘70’ of the previous one. That’s why I went for the 960 instead of the 950; even if it meant more money, it seemed to be the sweet spot for lightweight compute.
Which is one of the reasons I started to experiment with a reimplementation of the processing graph. darktable’s logic for stringing together the OpenCL kernels has grown a fair bit over the years and, let’s say, has accumulated a bit of sand between the gears, which makes it less suitable for top performance on top hardware.
A note if you want to go team red (AMD): you’ll want Vega or newer, as AMD recently dropped OpenCL support for the RX 500 series and older. I’m running ROCm on Fedora with an RX 6900 XT and it’s been pretty solid. ROCm was hit or miss until recently, so before that I was running the OpenCL drivers from AMD’s proprietary stack.
Having the graphics driver built into the kernel is really nice. I know nVidia is still king of GPGPU land, and NVENC is still way ahead of AMD’s encoders.
Thanks. I’d already got the impression that AMD seemed a bit problematic for OpenCL. I’ve more or less decided on a GTX 1650 4GB… slightly faster than a 1050 and actually cheaper…
I’ll give an update when I actually get it!
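For the record, once the card is in, this is roughly how I plan to check that darktable actually sees it: `darktable-cltest` ships with darktable itself and runs the same OpenCL detection as the main program. A sketch, guarded so it degrades gracefully if darktable isn’t installed yet:

```shell
# Let darktable run its own OpenCL self-test; it prints the detected
# devices and whether OpenCL initialisation succeeded.
if command -v darktable-cltest >/dev/null 2>&1; then
  darktable-cltest
  status="ran"
else
  echo "darktable-cltest not found - is darktable installed?"
  status="missing"
fi
```

`clinfo` (a separate package on most distros) is also handy for seeing what the OpenCL runtime exposes before blaming darktable.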
It depends on what you want to do with it. My impression has been that proprietary packages lean more towards CUDA; DaVinci Resolve, I think, didn’t like AMD for a long time. Most of the stuff I use (darktable and OBS Studio) works fine with AMD cards.
I think AMD’s path is the better one going forward, as they’re taking a more open-standard/open-source route, but they’re playing catch-up. They tie into VAAPI, for example, and ROCm and amdgpu are more friendly to open-source software than nVidia’s approach. Neither is perfect, but I’d rather support AMD’s efforts at this point.
I don’t really know enough about this stuff, but from what you say I’d agree. Still, I think I’ll go with a tried-and-tested nVidia, especially as my options are limited with my small-form-factor PC.
Yeah, not trying to convince you otherwise; I didn’t know you’d already made the purchase when I posted. There aren’t a lot of small-form-factor AMD cards until you get into the Radeon Pro line, and those cost a bundle. I don’t know if that’s a cost-saving measure on the AIB side (use the same basic heatsink for everything) or what. For the last couple of generations the AMD RX cards have run cooler and drawn less power than nVidia’s, so in theory it should be possible to ship single-slot RX 6600s, for example, but you just don’t see them.
But if you want the greatest compatibility, GPGPU-software-wise it’s pretty much just nVidia right now on the PC platform. I’m just out here taking the lumps to try to break the cycle of “nVidia is dominant, therefore everyone buys nVidia, and they can charge or do whatever they want”. My theory is that AMD and Intel need to see uptake from customers to keep providing support, so I’ll keep at least one machine around with their cards in it. My laptop has an RTX 3060, as very few portable configurations ship with AMD discrete cards these days.
I’m excited about the Intel Arc lineup too. It’s kind of mediocre performance-wise at the moment, but Intel has traditionally had really nice Linux support for their integrated graphics. Not so much OpenCL, but with only integrated options out there I don’t think there was much demand for it. Hopefully, with more powerful discrete cards out from them now, they’ll add OpenCL support and we’ll have another option.
More power to you! And all good, I really appreciate the input. I decided to go ahead and buy as I didn’t want too long a wait… and GPUs hold their value quite well on the second-hand market, as far as I can tell, if the worst happens and it doesn’t fit!
I did briefly consider getting a new(er) PC in a tower format, just to give me more options, but that’s not really practical atm. It’s all interesting stuff; I always learn something with this sort of upgrade too. I don’t really have a massive level of interest in computer insides until I want to do something and can’t, and then I find out how to fix it!
With varying degrees of success occasionally, I admit.