You’re right that it seems like a low probability cause. Again, without more information from you, we’re going to be speculating here.
Debian 12.
Great to see Benoit updated the network to work on raw input, and also that you folks are working on streamlining the use of the code.
Did any of you succeed in downloading the test dataset for training? For context, I have a very similar joint denoising and demosaicing network wired into the image processing graph in vkdt, but it uses simple nearest-neighbour upsampling instead of transposed convolutions. I thought it should be simple to change the network architecture and retrain.
It appears both dl_ds_1.py and the script given on the Wikipedia page don’t work any more due to robot restrictions. The JSON response reads: “Please set a user-agent and respect our robot policy https://w.wiki/4wJS. See also T400119.”
Or do any of you maybe have a Dropbox (or similar) link for the training set? I remember some years ago Benoit opened a port on a private machine, but that was super slow, and I assume the dataset has changed since?
curl -s "https://dataverse.uclouvain.be/api/datasets/:persistentId/?persistentId=doi:10.14428/DVN/DEQCIM" | jq -r '.data.latestVersion.files[] | "wget -c -O \"\(.dataFile.filename)\" https://dataverse.uclouvain.be/api/access/datafile/\(.dataFile.id)"' | bash
That’s what’s in the readme of his new paper. I have a Python script lying around somewhere that’s a little more robust, but you probably get the idea ^^
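In the same spirit, here is a stdlib-only sketch of what such a downloader could look like (my own rough equivalent of the curl|jq|wget one-liner above, not the actual script — it uses the same public Dataverse file-listing and file-access endpoints):

```python
# Rough stdlib-only downloader for the UCLouvain Dataverse dataset,
# equivalent to the curl | jq | wget one-liner above.
import json
import urllib.request
from pathlib import Path

API = "https://dataverse.uclouvain.be/api"
DOI = "doi:10.14428/DVN/DEQCIM"

def file_urls(listing: dict) -> list[tuple[str, str]]:
    """Turn a Dataverse dataset listing into (filename, download-url) pairs."""
    return [
        (f["dataFile"]["filename"], f"{API}/access/datafile/{f['dataFile']['id']}")
        for f in listing["data"]["latestVersion"]["files"]
    ]

def download_all(dest: Path = Path(".")) -> None:
    """Fetch every file in the dataset, skipping files already on disk."""
    with urllib.request.urlopen(f"{API}/datasets/:persistentId/?persistentId={DOI}") as r:
        listing = json.load(r)
    for name, url in file_urls(listing):
        target = dest / name
        if target.exists():  # crude resume: skip anything already present
            continue
        print("fetching", name)
        urllib.request.urlretrieve(url, target)
```

Calling `download_all()` then pulls everything into the current directory; wrapping the `urlretrieve` call in a try/except would make it tolerant of individual failures.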
I was kind of assuming it’s a superset of the original, but honestly I’m not sure. I haven’t tried to retrain the original net.
I do have a copy of this one that I can mirror if it comes to it
Would you be so kind as to tell me the output of the following?
python -c "import torch; print(torch.accelerator.is_available())"
From inside the venv. Stuff like the output of clinfo | grep device would also be interesting. If it’s related to that AMD card (which should be supported in theory but is untested), those might give us some clues.
Of course.
(nind-denoise-master) a@debian12:~/nind-denoise-master$ python
Python 3.13.7 (main, Sep 2 2025, 14:21:46) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
...
>>> torch.accelerator.is_available()
False
>>>
a@debian12:~/nind-denoise-master$ clinfo | grep device
Number of devices: 1
Max on device events: 1024
Queue on device max size: 8388608
Max on device queues: 1
Queue on device preferred size: 262144
Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
clinfo.txt (4.0 KB)
Finally had a chance to try out @rengo’s script; very nicely packaged and easy to install.
@rengo: please take a look at this commit that fixes a few things. I also switched the parameter to allow specifying multiple files, so shell wildcard/brace expansion on Linux will also work, e.g. *.NEF, DSC{100..123}.RAF, … Exceptions are caught so that it keeps processing the remaining files.
I’m not sure if I followed your design/conventions correctly, so I didn’t create a PR; I’m leaving it as a reference and letting you make the final decision on whether to bring the fixes in.
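The “keep processing on errors” part amounts to something like this (a simplified sketch of the idea, with a hypothetical process() standing in for the real per-file pipeline):

```python
# Process each input file independently; log failures instead of aborting
# the whole batch, and report which files failed at the end.
import sys

def process(path: str) -> None:
    """Placeholder for the real per-file denoise pipeline."""
    if not path:
        raise ValueError("empty path")

def run(paths: list[str]) -> list[str]:
    """Process every file; return the list of files that failed."""
    failed = []
    for p in paths:
        try:
            process(p)
        except Exception as e:  # catch everything so one bad file can't stop the batch
            print(f"skipping {p}: {e}", file=sys.stderr)
            failed.append(p)
    return failed
```

With a loop like this, `script.py *.NEF` simply skips any file that raises and carries on with the rest.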
@mikrom: I have bad news: I ran @rengo’s script through all my test cases and couldn’t reproduce your problem. Even the test case for highlight reconstruction that needs 32-bit TIFF works fine. I even tested your NEF and XMP files as-is (without opening them in darktable) and it still works fine. At this point, I can only recommend starting from scratch, just in case it’s an earlier bug that has since been fixed.
Reinstalled again according to the instructions at GitHub - CommReteris/nind-denoise: Image denoising with darktable using the Natural Image Noise Dataset
I configured PyTorch on my AMD Radeon using https://phazertech.com/tutorials/rocm.html:
pip3 uninstall torch torchvision torchaudio
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4
export PYTORCH_ROCM_ARCH="gfx1031"
export HIP_VISIBLE_DEVICES=0
export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=10.3.0
It works now.
But there is still no result: the spots remain.
Hmm, can you try modifying denoise.py as below to force it to run on CPU instead? I don’t expect it to make a difference (as you mentioned earlier that it runs on CPU), but just in case.
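Something along these lines near the top of denoise.py should do it (my guess at a minimal change; where exactly the script picks its device may differ):

```python
# Hide all GPUs from PyTorch before it is imported, so that any
# torch.device / torch.accelerator logic falls back to CPU.
# CUDA builds honour CUDA_VISIBLE_DEVICES; ROCm builds also honour
# HIP_VISIBLE_DEVICES, so set both to be safe.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
os.environ["HIP_VISIBLE_DEVICES"] = ""

# ...the existing `import torch` and the rest of denoise.py follow here...
```

The important bit is that the environment variables are set before torch is imported, since device discovery happens at import/init time.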
If you can find some temporary storage (e.g. GDrive) and have the bandwidth, I’d like to take a look at your clean/spotless _s1.tif file, too. That would help in trying to reproduce the problem.
thanks
I think I figured out the problem.
If I understand correctly, nind-denoise works in the sRGB color space. On my Nikon, and when working in darktable, I use the Adobe RGB color space. When converting Adobe RGB to sRGB, color channel values can become negative; in my example it’s the blue channel.
If I switch the gamut clipping (in “input color profile” module) to sRGB, the spots don’t appear.
But that’s not good.
Is there a way to make it work in other color spaces besides sRGB?
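For what it’s worth, the effect is easy to reproduce numerically with the standard D65 conversion matrices (my own illustration, not code from nind-denoise): a fully saturated Adobe RGB green lies outside the sRGB gamut, and the blue (and red) channels go negative after conversion.

```python
# Convert linear Adobe RGB -> XYZ -> linear sRGB with the standard
# D65 matrices, without clipping, to show the out-of-gamut negatives.
ADOBE_TO_XYZ = [
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
]
XYZ_TO_SRGB = [
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def matvec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def adobe_to_srgb_linear(rgb):
    """Linear Adobe RGB triple -> linear sRGB triple (no gamut clipping)."""
    return matvec(XYZ_TO_SRGB, matvec(ADOBE_TO_XYZ, rgb))

r, g, b = adobe_to_srgb_linear([0.0, 1.0, 0.0])  # pure Adobe RGB green
print(f"sRGB linear: R={r:.3f} G={g:.3f} B={b:.3f}")  # R and B come out negative
```

So anything that assumes non-negative sRGB values (as nind-denoise apparently does) will misbehave on saturated Adobe RGB colors unless the values are clipped first, which is exactly what the gamut-clipping switch does.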
Great troubleshooting! And yes, it’s really hacky just to get this working, so I had to set some constraints (sRGB only). You can try commenting out the part where it replaces the colorin values (it basically uses a fixed preset/colorspace), or replacing them with the Adobe RGB colorin value from your XMP.
I’ll play around with different colorspaces and see if I can make it more generic/flexible, but IIRC it was quite tricky, especially with different versions of darktable and iop-orders. We’re both just hacking blindly here, so it’s just a matter of who’s having more luck.
Any DT devs can give us a finger (pointing in the right direction, not the middle one)?
Hey congrats! You are the first person to report success running with AMD! I’ll have to update the readme.
This will be far easier to handle with the rawnind version (when that’s done), so don’t work too hard on this.
Right now I’m thinking I’m going to continue minimal bug-fixes to this and try and get the lua plugin to integrate properly. Then I’ll probably put this project in maintenance mode and work on the next version in a separate repo so as not to break this workflow.
Just FYI
In the Lua script, you can include the string library and replace all the string substitution code with library calls.
What is this? Can you give details? What’s the roadmap?
PS: Don’t forget about the bug.




