PR #20676 adds a streamlined way to install GPU-accelerated ONNX Runtime for darktable’s AI features (object masks, neural denoise, upscale). I need help from the community testing this on different hardware and platforms.
What darktable bundles
darktable ships with a CPU-only ONNX Runtime that works out of the box – no extra install needed. On macOS (Apple Silicon), CoreML acceleration is also bundled by default. On Windows, DirectML GPU acceleration is bundled and works with any DirectX 12 GPU.
Linux users who want GPU acceleration currently need to manually download the right ONNX Runtime build and point darktable to it via preferences. This PR aims to simplify that.
What this PR adds
- Install button in preferences → processing → AI – detects your GPU (NVIDIA/AMD/Intel), checks if the required drivers and runtime are installed, and downloads the matching GPU-enabled ONNX Runtime package automatically.
- Install scripts (`tools/ai/`) – shell scripts that can be run standalone to download and set up GPU-accelerated ONNX Runtime:
  - NVIDIA (CUDA): requires CUDA Toolkit 12.x and cuDNN 9.x
  - AMD (ROCm): requires ROCm 6.x and MIGraphX (`apt install migraphx migraphx-dev` on Ubuntu)
  - Intel (OpenVINO): on Linux, included in the ONNX Runtime OpenVINO package; on Windows, requires installing the OpenVINO Toolkit
- Auto-detection – the install dialog probes your system for available GPUs, checks driver versions, and tells you what’s missing before downloading.
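If you want a rough idea of whether your machine meets the per-vendor requirements before trying the PR, you can check for the usual CLI entry points of each stack from a terminal. This is only a hedged sketch of such a pre-flight check – the PR's install dialog does its own, more thorough detection (driver versions, runtime libraries, etc.):

```shell
#!/bin/sh
# Rough pre-flight sketch: report whether the common CLI entry points of
# each GPU stack are on PATH. This is NOT the PR's detection logic, just a
# quick manual check. (Intel/OpenVINO is omitted: on Linux it ships inside
# the ONNX Runtime OpenVINO package.)
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found:   $1 ($2)"
  else
    echo "missing: $1 ($2)"
  fi
}
check nvcc            "CUDA Toolkit (NVIDIA)"
check rocminfo        "ROCm (AMD)"
check migraphx-driver "MIGraphX (AMD)"
```

A "missing" line does not necessarily mean the install will fail – the runtime libraries may be present without the developer tools – but it is a useful first hint when reporting results.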
How it works
The script installs the GPU-enabled ONNX Runtime into user space (`~/.local/lib/` on Linux). After that, darktable can detect it automatically from the AI preferences tab, or the user can manually browse to the library. On next startup, darktable loads the GPU library instead of the bundled CPU version and auto-detects the best execution provider (CUDA, MIGraphX, OpenVINO, etc.). You can verify it’s working by running darktable with `-d ai`:

```
[darktable_ai] loaded ONNX Runtime 1.24.4 (/path/to/libonnxruntime.so)
[darktable_ai] execution provider: CUDA
```
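You can also check for the installed library directly, outside of darktable. A small sketch, assuming the `~/.local/lib/` install location described above:

```shell
#!/bin/sh
# Sketch: look for a user-space ONNX Runtime as installed by the script
# (~/.local/lib on Linux, per the PR description). If none is found,
# darktable falls back to its bundled CPU-only build.
ORT_LIB=$(ls "$HOME"/.local/lib/libonnxruntime.so* 2>/dev/null | head -n1)
if [ -n "$ORT_LIB" ]; then
  echo "user-space ONNX Runtime: $ORT_LIB"
else
  echo "no user-space ONNX Runtime found (bundled CPU build will be used)"
fi
```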
Help needed
I’d really appreciate testing on different setups. If you have a Linux or Windows machine with a dedicated GPU, please try the PR and report:
- Does the install button detect your GPU correctly?
- Do the download and setup complete successfully?
- Does AI inference actually run on the GPU after the install? (check with `-d ai`)
- Any issues with the standalone install scripts in `tools/ai/`?
Particularly looking for feedback from:
- Linux NVIDIA users (different GPU generations, driver versions, distros)
- Linux AMD users (RDNA2/CDNA, different ROCm versions)
- Linux Intel Arc users
- Windows NVIDIA users (CUDA)
- Windows Intel Arc users (OpenVINO)
- Different Linux distros (Ubuntu, Fedora, Arch, openSUSE, etc.)
You can also test the install scripts standalone without building from source – just download the script and the manifest from the PR:
```shell
# Linux
curl -O https://raw.githubusercontent.com/andriiryzhkov/darktable/ort_scripts/tools/ai/install-ort-gpu.sh
curl -O https://raw.githubusercontent.com/andriiryzhkov/darktable/ort_scripts/data/ort_gpu.json
chmod +x install-ort-gpu.sh
./install-ort-gpu.sh --manifest ort_gpu.json
```

```powershell
# Windows (PowerShell)
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/andriiryzhkov/darktable/ort_scripts/tools/ai/install-ort-gpu.ps1" -OutFile install-ort-gpu.ps1
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/andriiryzhkov/darktable/ort_scripts/data/ort_gpu.json" -OutFile ort_gpu.json
.\install-ort-gpu.ps1 -Manifest ort_gpu.json
```
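Once the script has run and darktable is pointed at the library, the quickest confirmation is the `-d ai` log mentioned above. A guarded one-liner sketch (darktable starts its normal UI; the log line is printed during startup, so quit it afterwards):

```shell
#!/bin/sh
# Sketch: grep the AI debug log for the selected execution provider.
# The log line format is taken from the PR description.
if command -v darktable >/dev/null 2>&1; then
  darktable -d ai 2>&1 | grep -m1 'execution provider'
else
  echo "darktable not found in PATH"
fi
```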
CoreML on macOS (Apple Silicon) and DirectML on Windows are bundled and work automatically – no testing needed for those.
Thanks for any help!

