I’m not sure about that. For example, my Nvidia card has 6 GB, but can only allocate 1.5 GB in one chunk (‘allows GPU allocations…’). When used for processing, however, darktable ends up using all the available memory, just not in a single allocation. See
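If you want to check those two numbers on your own card, clinfo (or darktable-cltest, which prints similar OpenCL device details during its detection run) reports both the total device memory and the largest single allocation the driver will allow. A rough sketch, assuming clinfo is installed:

```
# total device memory vs. the largest single buffer the driver will hand out
clinfo | grep -iE 'global memory size|max memory allocation'

# darktable's own OpenCL self-test prints similar device details
darktable-cltest
```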
Which benchmarks provide an estimate to enable me to compare and decide which GPU to buy for image processing? Hardware
What @paolod said. Plus, there’s nvidia-smi:

```
root@eagle:~# nvidia-smi
Thu Nov 25 22:07:39 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.82.00    Driver Version: 470.82.00    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
…
```
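If you want to watch GPU memory while darktable is actually exporting, the query mode of nvidia-smi is handier than the full table; a small sketch using standard nvidia-smi query fields and the -l refresh interval:

```
# print used/total GPU memory and utilisation once per second
nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv -l 1
```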
Of course, that integrated GPU may simply be way too slow for diffuse or sharpen.
gpagnon:
Still, what puzzles me is why diffuse or sharpen renders correctly in the developing section of darktable within acceptable times (even when zooming in 100%
The ‘preview pipeline’ is different from the one used for export; it only does a partial rendering, and if you are not zoomed to 100%, it does what export does when you run a scaled-down export with ‘high quality resampling’ disabled. See
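If you want to reproduce that ‘scaled-down export without high-quality resampling’ yourself, darktable-cli exposes the same switches as the export module; a minimal sketch (the file names are placeholders):

```
# scaled-down export with high-quality resampling off – roughly what the darkroom preview does
darktable-cli input.NEF output.jpg --width 1920 --hq false

# same size with high-quality resampling on, for comparison
darktable-cli input.NEF output_hq.jpg --width 1920 --hq true
```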
While processing the raw from Venezia, from the Rialto bridge, I found that the darkroom image displayed zoom artefacts that were luckily absent from the identically zoomed exported image. Does darktable not do full-resolution processing of the visible area, using the selected method, and then scale it to the preview area?
darktable image, zoomed to 39%: [image]
exported image, identically scaled during export: [Screenshot_20220115_223427]
export options: [image]
prefer performance over quali…