I guess I’ll re-run the AMD “detect and install everything appropriate” tool once more. The thing is, it assumes I’m gaming, so I have to go through all the options it wants to enable (color, etc.), figure out what does and doesn’t make sense for color-sensitive photo work, and then enable only what should be on.
Oh well, at least it’s time to re-calibrate my displays so maybe I can just start over.
My setup uses the NVIDIA GeForce RTX 3060 as a compute card only, with no video responsibilities, so in theory the full 12 GB of VRAM is available to OpenCL / darktable if needed. Display duties are handled solely by the integrated graphics of the AMD Ryzen 7 5700G APU.
The first screenshot is DT 4.4.2 bulk-processing one of my Canon R6 Mk2 (24 megapixel) image directories. The green bumps are NVIDIA OpenCL usage. NVIDIA memory usage for these 24 megapixel images doesn’t seem to exceed 3 GB.
Therefore, if you are using a 4 GB video card for both display duties and OpenCL compute, it may still fall back to CPU even when processing 24 megapixel images that only need 3 GB of VRAM, because of the additional VRAM required for the display.
The second screenshot is DT 4.4.2 bulk-processing my local_copy directory full of PlayRAW images of various megapixel counts. Some of the PlayRAW images exceed 6 GB of VRAM used!
That is not completely correct, I think, as darktable can apply tiling (breaking the image into smaller pieces and processing each one separately). It’s not as efficient as having enough VRAM, of course.
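As a toy illustration of the idea (this is not darktable’s actual tiling code, and the numbers and helper names here are made up; the real logic is far more involved):

```python
# Toy sketch of tiling: if an image's working buffer would exceed the
# available VRAM budget, split it into strips so each strip fits.
def tiles_needed(image_mb, vram_budget_mb):
    """Smallest number of strips such that each strip's buffer fits."""
    tiles = 1
    while image_mb / tiles > vram_budget_mb:
        tiles += 1
    return tiles

# Hypothetical numbers: a pipeline needing ~6000 MB of working buffers
# on a card with ~2500 MB free would be processed in 3 strips.
print(tiles_needed(image_mb=6000, vram_budget_mb=2500))  # -> 3
```

Each extra tile costs overhead (overlap regions, repeated transfers), which is why tiling works but is slower than simply having enough memory.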
You can play with the tuning parameters. If the example given there is still valid, then with the default resource allocation only (12 GB - 400 MB) * 700 / 1024 ≈ 8 GB of your card’s 12 GB of VRAM would ever be used. Of course, since you only hit 3 GB of peak usage, it’s unlikely you came near this limit.
If you’re shelling out money for a completely new rig, I’d agree that you should target a GPU with at least 8 GB of VRAM.
The advice generally given in our chat room is: “How much money do you have? The higher-spec’d the GPU you can get, the faster it’ll be and the longer it’ll last you as you move through camera bodies that will only gain more megapixels as time goes on.”
Some of these are 100 MB RAW files, which I then played with, doing what I’d normally do for a typical edit: masks, vignettes, highlight recovery…
What I got as an average across 5 different large 61 MP RAW files was:
AVERAGES:
pixel pipeline processing = 2.9316 sec
CPU took = 19.8548 sec
The pixel pipeline timing for these 61 MP images is not significantly different (<1 sec) from my original test with 24 MP images. However, the CPU took a while longer. The max VRAM usage I saw for any of these 61 MP images was only about 7 GB.
. [dev_process_export] pixel pipeline processing took 2.272 secs (16.462 CPU)
. [export_job] exported to `/darktable_exported/DSC00703_01.jp2'
. [dev_process_export] pixel pipeline processing took 3.218 secs (26.190 CPU)
. [export_job] exported to `/darktable_exported/DSC01308_01.jp2'
. [dev_process_export] pixel pipeline processing took 2.866 secs (21.270 CPU)
. [export_job] exported to `/darktable_exported/DSC01364_01.jp2'
. [dev_process_export] pixel pipeline processing took 4.236 secs (20.355 CPU)
. [export_job] exported to `/darktable_exported/DSC01388_01.jp2'
. [dev_process_export] pixel pipeline processing took 2.066 secs (14.997 CPU)
. [export_job] exported to `/darktable_exported/DSC01466_01.jp2'
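The quoted averages fall straight out of those `-d perf` log lines; a small sketch that parses them and recomputes the numbers:

```python
import re

# The five [dev_process_export] lines from the export log above.
log = """\
[dev_process_export] pixel pipeline processing took 2.272 secs (16.462 CPU)
[dev_process_export] pixel pipeline processing took 3.218 secs (26.190 CPU)
[dev_process_export] pixel pipeline processing took 2.866 secs (21.270 CPU)
[dev_process_export] pixel pipeline processing took 4.236 secs (20.355 CPU)
[dev_process_export] pixel pipeline processing took 2.066 secs (14.997 CPU)
"""

# Pull out (wall-clock, CPU) second pairs from each line.
pairs = re.findall(r"took ([\d.]+) secs \(([\d.]+) CPU\)", log)
wall = [float(w) for w, _ in pairs]
cpu = [float(c) for _, c in pairs]

print(f"pixel pipeline processing = {sum(wall) / len(wall):.4f} sec")
print(f"CPU took = {sum(cpu) / len(cpu):.4f} sec")
```

This reproduces the 2.9316 sec / 19.8548 sec averages quoted earlier.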
Since the current crop of 50 MP+ cameras is significantly out of my price range, I suspect the computing setup I now have will be enough for the foreseeable future.