Trying to get GraXpert to run on the GPU

Sorry in advance for the long post.

I have GraXpert installed and running; however, I cannot get it to use my GPU. Other tools such as CosmicClarity run on my GPU via CUDA without issue.

Here is an overview of my setup:

  • Arch Linux
  • Siril 1.5.0 commit faaf267f6
  • Created a Python 3.12 venv for Siril (Python 3.14 is the system default) because of online reports that torch, onnx, etc. have issues with Python 3.14. Note: I saw the same issue with the venv that Siril creates by default.
  • python --version (from the venv): Python 3.12.12
  • NVIDIA K2220 GPU; runs CosmicClarity and Sycon without issue
  • nvcc --version (run from the venv):

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2025 NVIDIA Corporation
    Built on Tue_Dec_16_07:23:41_PM_PST_2025
    Cuda compilation tools, release 13.1, V13.1.115
    Build cuda_13.1.r13.1/compiler.37061995_0

Siril console output when I run GraXpert denoise:

Starting script /home/grahams/.local/share/siril-scripts/processing/GraXpert-AI.py
15:09:45: Failed to load libcublasLt.so.12: libcublasLt.so.12: cannot open shared object file: No such file or directory
15:09:45: Failed to load libcublas.so.12: libcublas.so.12: cannot open shared object file: No such file or directory
15:09:45: Failed to load libnvrtc.so.12: libnvrtc.so.12: cannot open shared object file: No such file or directory
15:09:45: Failed to load libcufft.so.11: libcufft.so.11: cannot open shared object file: No such file or directory
15:09:45: Failed to load libcudart.so.12: libcudart.so.12: cannot open shared object file: No such file or directory
15:09:45: Please follow https://onnxruntime.ai/docs/install/#cuda-and-cudnn to install CUDA and CuDNN.
15:09:45: Detected ONNX Runtime with providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
15:09:45: ONNX Runtime is already installed: onnxruntime-gpu
15:09:45: opencv-python is installed
15:09:46: Running command: requires
15:09:46: OK! This script is compatible with this version of Siril.
15:09:52: Starting denoising
15:09:53: Failed to load libcublasLt.so.12: libcublasLt.so.12: cannot open shared object file: No such file or directory
15:09:53: Failed to load libcublas.so.12: libcublas.so.12: cannot open shared object file: No such file or directory
15:09:53: Failed to load libnvrtc.so.12: libnvrtc.so.12: cannot open shared object file: No such file or directory
15:09:53: Failed to load libcufft.so.11: libcufft.so.11: cannot open shared object file: No such file or directory
15:09:53: Failed to load libcudart.so.12: libcudart.so.12: cannot open shared object file: No such file or directory
15:09:53: Please follow https://onnxruntime.ai/docs/install/#cuda-and-cudnn to install CUDA and CuDNN.
15:09:53: Using cached execution providers from /home/grahams/.config/siril/siril_onnx.conf
15:09:53: === ONNX Execution Provider Tester ===
15:09:53: Creating ONNX model...
15:09:53: Model saved to temporary file: /tmp/tmp65yyg76g.onnx
15:09:53: Running reference on CPU...
15:09:53: OK: CPU output computed.
15:09:53: Available execution providers:
15:09:53:   - TensorrtExecutionProvider
15:09:53:   - CUDAExecutionProvider
15:09:53:   - CPUExecutionProvider
15:09:53: Testing each provider without fallback...
15:09:53: Testing TensorrtExecutionProvider...
15:09:53: *************** EP Error ***************
15:09:53: EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:539 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(PySessionOptions&, const onnxruntime::ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
15:09:53:  when using ['TensorrtExecutionProvider']
15:09:53: Falling back to ['CPUExecutionProvider'] and retrying.
15:09:53: ****************************************
15:09:53: (x) TensorrtExecutionProvider: fallback occurred (used CPUExecutionProvider)
15:09:53: Testing CUDAExecutionProvider...
15:09:53: (x) CUDAExecutionProvider: fallback occurred (used CPUExecutionProvider)
15:09:53: Testing CPUExecutionProvider...
15:09:53: OK: CPUExecutionProvider ran successfully
15:09:53: === Summary ===
15:09:53: OK: Working providers (in priority order):
15:09:53:   - CPUExecutionProvider
15:09:53: → Best available provider: CPUExecutionProvider
15:09:53: Cached execution providers to /home/grahams/.config/siril/siril_onnx.conf
15:09:57: Using inference providers: ['CPUExecutionProvider']

I can see that several CUDA libraries are missing (libcublas.so.12, libcudart.so.12, etc.), but I do not know how to resolve this.
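To see concretely which of these libraries the dynamic linker can find from inside the venv, I put together this small check (a minimal sketch of my own; the library names are copied from the log above, and the helper name is mine, not from Siril or GraXpert):

```python
import ctypes

# Libraries that onnxruntime-gpu complained about in the log above.
CUDA_LIBS = [
    "libcublasLt.so.12",
    "libcublas.so.12",
    "libnvrtc.so.12",
    "libcufft.so.11",
    "libcudart.so.12",
]

def check_cuda_libs(names=CUDA_LIBS):
    """Return {library name: True if dlopen succeeds, else False}."""
    status = {}
    for name in names:
        try:
            ctypes.CDLL(name)  # same mechanism the runtime uses to load them
            status[name] = True
        except OSError:
            status[name] = False
    return status

if __name__ == "__main__":
    for name, ok in check_cuda_libs().items():
        print(f"{name}: {'loaded' if ok else 'NOT found'}")
```

On my machine all five report NOT found, matching the "Failed to load" lines in the log.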

From the venv, pip list shows the following packages installed:

Package                  Version
------------------------ -------------------
appdirs                  1.4.4
astropy                  7.2.0
astropy-iers-data        0.2026.2.23.0.48.33
certifi                  2026.2.25
charset-normalizer       3.4.4
filelock                 3.20.0
flatbuffers              25.12.19
fsspec                   2025.12.0
idna                     3.11
Jinja2                   3.1.6
MarkupSafe               3.0.2
ml_dtypes                0.5.4
mpmath                   1.3.0
networkx                 3.6.1
numpy                    2.4.2
nvidia-cublas-cu11       11.11.3.6
nvidia-cuda-cupti-cu11   11.8.87
nvidia-cuda-nvrtc-cu11   11.8.89
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cudnn-cu11        9.1.0.70
nvidia-cufft-cu11        10.9.0.58
nvidia-curand-cu11       10.3.0.86
nvidia-cusolver-cu11     11.4.1.48
nvidia-cusparse-cu11     11.7.5.86
nvidia-nccl-cu11         2.21.5
nvidia-nvtx-cu11         11.8.86
onnx                     1.20.1
onnxruntime-gpu          1.24.2
opencv-python            4.13.0.92
packaging                26.0
pillow                   12.0.0
pip                      26.0.1
platformdirs             4.9.2
protobuf                 6.33.5
pyerfa                   2.0.1.5
PyQt6                    6.10.2
PyQt6-Qt6                6.10.2
PyQt6_sip                13.11.0
PyYAML                   6.0.3
requests                 2.32.5
scipy                    1.17.1
setuptools               70.2.0
sirilpy                  1.1.3
sympy                    1.14.0
tensorrt                 10.15.1.29
tensorrt_cu13            10.15.1.29
tensorrt_cu13_bindings   10.15.1.29
tensorrt_cu13_libs       10.15.1.29
tifffile                 2026.2.24
tiffile                  2018.10.18
torch                    2.7.1+cu118
torchvision              0.22.1+cu118
triton                   3.3.1
typing_extensions        4.15.0
urllib3                  2.6.3

Everything except tensorrt was installed by Siril or GraXpert-AI.py. One thing I notice: the NVIDIA wheels in the venv are all -cu11 variants (pulled in by torch 2.7.1+cu118), the tensorrt wheels target cu13, and system nvcc reports CUDA 13.1, yet the libraries onnxruntime-gpu fails to load are the .so.12 (CUDA 12) versions. Could that mismatch be the cause?
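For what it's worth, this is the sketch I used to group the installed NVIDIA wheels by the CUDA generation their name targets (cu11/cu12/cu13); the helper name is my own, not from any of the tools involved:

```python
from importlib import metadata

def cuda_suffixes(dist_names):
    """Group nvidia-* distribution names by their CUDA suffix (cu11, cu12, ...)."""
    groups = {}
    for name in dist_names:
        if not name.startswith("nvidia-"):
            continue
        suffix = name.rsplit("-", 1)[-1]
        if suffix.startswith("cu") and suffix[2:].isdigit():
            groups.setdefault(suffix, []).append(name)
    return groups

if __name__ == "__main__":
    # Names of everything installed in the active environment.
    installed = [d.metadata["Name"] or "" for d in metadata.distributions()]
    print(cuda_suffixes(installed))
```

In my venv this prints a single "cu11" group, confirming that no CUDA 12 runtime wheels are present.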

Thanks in advance for any suggestions.