I have the same problem. I realized that my darktable version is no longer “5.5.0+XYZ” but “ae456d37b8”, which is probably due to my limited git skills…
But could this cause the error message? How can I get the version in the format “tag+number”?
Just tried it on my M4 MacBook Air with CoreML enabled and it was just as slow. DXO Pure Raw takes about 8 seconds to process an image with full de-noise at any ISO. Even if darktable takes 16 seconds that’s ok. But 2 minutes isn’t worth it, honestly.
One possible option: you can force a specific version string at CMake configure time (useful as an override regardless of git state):
cmake .. -DPROJECT_VERSION="5.5.0"
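If you would rather restore the normal "tag+number" version string, a hash-only version usually just means your local clone has no release tags, so git describe falls back to the bare commit. A sketch — "origin" is an assumption, use whatever remote name points at the official darktable repository:

```shell
# Fetch the release tags so `git describe` can name the nearest tag again
# (remote name "origin" is an assumption -- adjust to your setup)
git fetch origin --tags

# Should now print something like 5.5.0-<commits>-g<hash> instead of a bare hash
git describe --tags
```

After re-running CMake, the build should pick up the tag-based version instead of the bare commit hash.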
Do you have a throw away image that you can share…or maybe I missed something… I have what is now a pretty low end machine with a 12th gen i5 processor on Win 11 and a 3060Ti GPU and the few images that I tried only took a few seconds…I haven’t timed any but I could for reference… And as noted I apologize if I missed something and this is a Mac specific issue and ask…in that case sorry for the noise…
Wow, that’s fast. Try this one on your machine.
20260224_0903.ORF (21.4 MB)
GPU: 8 seconds.
EDIT: CPU: 36 seconds.
- GPU: NVIDIA Quadro RTX 4000 8 GB (card from 2019), NVIDIA CUDA, running UNet denoiser trained on the NIND dataset.
- CPU: Intel i9-14900K, 64 GB RAM
- OS: Linux Debian 13.4
Now we know it is either a problem unique to my Mac Studio M2 Max or a Mac compute problem overall.
My specs: 12-core CPU, 30-core GPU, 32 GB RAM.
It takes a tad over 4 seconds on my machine with the GPU…
I have the same slowness problem.
Running nightly (903). Macbook M1 pro.
Takes close to 2 minutes for the ORF file (after clicking the Process button).
So, looks like only CPU is being used even when I have CoreML enabled. And I don’t see any activity in the GPU section of Activity Monitor.
Am I missing some other setting?
If you run darktable with -d ai logging, you will see whether CoreML was enabled.
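A quick way to check from a terminal — a sketch, assuming the darktable binary is on your PATH and that the -d ai output mentions CoreML by name (if grep prints nothing, read the unfiltered log instead):

```shell
# Dump AI debug logging and keep only the lines mentioning CoreML
# (assumes the -d ai log names the execution provider; if nothing
# matches, scan the full output by hand)
darktable -d ai 2>&1 | grep -i coreml
```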
2 minutes is not slow for this model. There’s no “slowness problem”, just don’t expect it to be very fast.
GPU acceleration helps, but it strongly depends on vendor capabilities. NVIDIA CUDA provides the best acceleration possible, even on slightly dated hardware. AMD MIGraphX works for many cases, but not all neural operators are supported yet. Comparing across hardware does not make much sense.
Thank you. I ran darktable -d ai and looked to see if CoreML is enabled and it isn’t. For some reason, CoreML is not working on macOS.
And I would say that 2+ minutes is VERY slow to denoise a photo compared to Luminar NEO, DXO PureRaw, and every other denoising algorithm I have ever used. It is also MUCH slower on macOS than on Linux using NVIDIA and AMD GPUs.
Comparing across hardware makes a lot of sense, because if it works in a few seconds on NVIDIA and AMD and in a few minutes on a Mac, no one will use it on a Mac.
Yes, the screenshot you shared shows that CoreML acceleration is not available on your machine. How did you install darktable?
I installed it from a build provided by @MStraeten. The AI mask algorithm is also quite slow. Perhaps the missing CoreML is endemic to the macOS builds, because most darktable users are on Linux and the devs neglected to turn it on in the nightly builds that @MStraeten depends on for his weekly macOS builds?
Please, try installing from this:
I work on macOS. Our nightly builds contain working ONNX Runtime with CoreML acceleration on macOS.
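If you want to double-check what a given build actually ships, one rough test is to look for CoreML strings inside the bundled ONNX Runtime library. A sketch — the library path here is purely hypothetical, adjust it to wherever your darktable.app keeps libonnxruntime:

```shell
# If this prints CoreML-related strings, the bundled ONNX Runtime was most
# likely built with the CoreML execution provider compiled in
# (the path below is an assumption -- locate the dylib in your own bundle)
strings /Applications/darktable.app/Contents/Frameworks/libonnxruntime.dylib | grep -i coreml
```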
That is faster. Denoise takes about 30 seconds and the quality is very good at ISO 6400. At ISO 12800, the quality is not great. But the speed improvement is great. Is it possible the code can be optimized for macOS further?
Interestingly, the image segmentation models show no change in analysis speed. Is it possible that only the ONNX runtime is optimized for CoreML on macOS, and the AI masking models are not?
20260224_1563_denoise.tif (58.3 MB)
The AI object mask is forced to run on the CPU due to speed constraints. That may change on Linux and Windows, but on macOS, CoreML acceleration makes it slower, not faster.
The model tends to be pretty aggressive on very noisy images. You can try the detail recovery slider to bring back some details from the original image.
What do you mean?
As I said before, our baseline in terms of speed is CPU processing. We don’t have much control over GPU acceleration in the DT code beyond simply enabling it. Optimization happens on the vendor side, in both software (CoreML, CUDA, MIGraphX, and OpenVINO) and hardware (Apple Neural Engine, GPUs).
Thank you for providing another compelling reason to upgrade my Mac Studio to a faster CPU!
Now that this AI denoising option is available in DT 5.5, I am interested in trying it out. However, is there any documentation to guide a new user, or do I just ask questions here? My first question: should I apply it at the end of editing, since it outputs a TIFF file? I am not keen on applying it early and then having to edit a TIFF file.
I think you answered your question.
User documentation will be available later, before the release date. You can definitely ask questions here.
Regarding your question, it is recommended to apply AI denoise at the end or at least after tone mapping.