Introducing a new FOSS raw image denoiser, RawRefinery, and seeking testers.

Hi all,

I’ve been working hard on RawRefinery, a raw image quality-enhancement program. It currently supports denoising and some deblurring, and I plan to add highlight reconstruction and more.

The application works best with CUDA or MPS but can also run on the CPU, and it saves its results as a DNG that can be edited in your favorite raw image editing program.

Here is an example of its denoising performance on an ISO 102400 photo!


Currently, the program is in an alpha state, and while I have tested it on macOS and in an Ubuntu VM, I am seeking people to test the app on their systems with their own raw files and report any issues they find. You can report issues either here or on GitHub.

Currently, the easiest way to try it out is via PyPI: rawrefinery · PyPI

Instructions for installing from source on Linux can be found on GitHub.

A .dmg to install on macOS is also provided. I will be adding instructions to install from source on Mac and Windows shortly, but I’ll focus my efforts on whichever OSes are most requested here first.

I will also be providing more detailed usage instructions after I establish that people can install and run the app, although I hope the app is reasonably intuitive to use.

I really appreciate anyone who tries out the application! I love FOSS, and I want to give something cool back to the community.


Hi,

Looks great! I’m just a bit concerned: why does this need OpenSSL and cryptography libraries?

I sign and verify all the model weights I distribute. Since the program downloads model weights and executes them, I see that as a potential security vulnerability.

Here is the code that uses the cryptography library:

The model weights themselves are not encrypted, they are just signed.
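For anyone curious what sign-then-verify looks like, here is a minimal sketch using the `cryptography` package’s Ed25519 API (illustrative only; the actual key handling and function names in the app may differ):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative only: in a real release the private key stays with the
# release process, and only the public key ships with the program.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

weights = b"...model weight bytes..."
signature = private_key.sign(weights)

def weights_verified(data: bytes, sig: bytes) -> bool:
    """Return True only if the signature matches the downloaded bytes."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(weights_verified(weights, signature))            # True
print(weights_verified(b"tampered bytes", signature))  # False
```

The point is that verification happens before the weights are ever deserialized, so tampered downloads are rejected up front.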

If you find anything concerning, let me know; I want to provide trustworthy code.


I’ve tried using CUDA (NVIDIA GTX 1060 / 6 GB), as well as the CPU.

With CUDA:

$ CUDA_LAUNCH_BLOCKING=1 .venv/bin/rawrefinery 
/home/kofa/src/RawRefinery/.venv/lib/python3.13/site-packages/colour/utilities/verbose.py:340: ColourUsageWarning: "Matplotlib" related API features are not available: "No module named 'matplotlib'".
See the installation guide for more information: https://www.colour-science.org/installation-guide/
  warn(*args, **kwargs)
Using Device cuda from cuda
Loading model: /home/kofa/.local/share/RawRefinery/ShadowWeightedL1.pt
Model /home/kofa/.local/share/RawRefinery/ShadowWeightedL1.pt verified!
/home/kofa/src/RawRefinery/.venv/lib/python3.13/site-packages/torch/cuda/__init__.py:283: UserWarning: 
    Found GPU0 NVIDIA GeForce GTX 1060 6GB which is of cuda capability 6.1.
    Minimum and Maximum cuda capability supported by this version of PyTorch is
    (7.0) - (12.0)
    
  warnings.warn(
/home/kofa/src/RawRefinery/.venv/lib/python3.13/site-packages/torch/cuda/__init__.py:304: UserWarning: 
    Please install PyTorch with a following CUDA
    configurations:  12.6 following instructions at
    https://pytorch.org/get-started/locally/
    
  warnings.warn(matched_cuda_warn.format(matched_arches))
/home/kofa/src/RawRefinery/.venv/lib/python3.13/site-packages/torch/cuda/__init__.py:326: UserWarning: 
NVIDIA GeForce GTX 1060 6GB with CUDA capability sm_61 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_70 sm_75 sm_80 sm_86 sm_90 sm_100 sm_120.
If you want to use the NVIDIA GeForce GTX 1060 6GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(

I haven’t checked that URL yet.
Clicking the thumbnail again dumps core.

If I switch to CPU after the initial CUDA error, it dumps core, too. If I start with the CPU, I’m able to generate the preview. I’m still waiting for the DNG to be generated. :slight_smile:

I’ll check my PyTorch setup.

Thank you! It looks like the PyTorch build I’m asking for doesn’t support your GPU’s compute capability. I wonder if I can relax the versioning to increase compatibility.
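In packaging terms, relaxing the pin could look something like this (a hypothetical pyproject.toml fragment, not the project’s actual metadata):

```toml
[project]
# A loose lower bound instead of an exact pin lets users install whichever
# torch build matches their CUDA toolkit and GPU architecture.
dependencies = ["torch>=2.0"]
```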

If I switch to CPU after the initial CUDA error, it dumps core, too. If I start with the CPU, I’m able to generate the preview. I’m still waiting for the DNG to be generated.

Ah, I’m probably not cleaning up well after the CUDA error.

P.S. The “super light” model will run much, much faster on CPU than the default one. It should be the third option in the model drop-down menu. Thanks so much for the feedback.

I’ve put up a new version, 1.3.3, on GitHub and PyPI that allows for user-specified torch versioning. I am not sure which version of torch you need for your GPU, as I am not able to test it myself. If you find a working version, let me know, and I will document it.

In addition, I put in a function that checks the CUDA arch against the torch version to avoid the crash you found earlier.
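Presumably that check boils down to comparing the device’s compute capability against the arch list the installed torch binary was compiled for. A self-contained sketch (the helper name is mine; the real function would query torch directly, as noted in the docstring):

```python
def cuda_arch_supported(capability: tuple[int, int], arch_list: list[str]) -> bool:
    """Check whether a GPU's compute capability is among the architectures
    the installed PyTorch binary ships kernels for.

    In the app these values would come from torch itself:
        capability = torch.cuda.get_device_capability(0)  # e.g. (6, 1)
        arch_list  = torch.cuda.get_arch_list()           # e.g. ['sm_70', ...]
    """
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# A GTX 1060 (sm_61) against a build compiled for sm_70 and newer:
print(cuda_arch_supported((6, 1), ["sm_70", "sm_75", "sm_80", "sm_86"]))  # False
```

If the check fails, the app can fall back to CPU instead of handing the device to torch and crashing.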

chaiNNer runs fine with PyTorch like so:

That doesn’t say much to me (I’m not a Python guy), but maybe it’ll help you.

And (sorry, the window is not resizeable, I tried to fit in most text):

I think this might work for you:

Within the torch virtual environment:

  1. Uninstall the previous torch (or just restart the install from scratch):

python3 -m pip uninstall torch

  2. Reinstall with:

pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/test/cu126

  3. Verify the right torch is installed:

pip show torch

It should say:

Name: torch
Version: 2.6.0+cu126

Based on what you provided from chaiNNer and a quick Google search, torch 2.6 should be compatible with your GPU architecture, and it is easily installable with pip.

Seriously though, thanks for the detailed error reporting.

To play devil’s advocate:

You wrote a frontend to other people’s AI models, which then do the denoising?

That works, thanks!

Nikon D7000, ISO 1600, heavily underexposed, needed +4.25 EV in darktable.


I trained the models myself!

I’m more of a machine-learning person than a front-end person, although this project has been good for learning some front-end basics.

The architecture is based on NAFNet, modified to fit the feature set and performance requirements of the application. Primarily, I changed the block structure and modified the channel attention module to accept conditioning signals.
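As a rough illustration of what “channel attention with a conditioning signal” means here, a toy NumPy sketch (not the actual model code; the weight shapes and the conditioning vector are made up for the example):

```python
import numpy as np

def conditioned_channel_attention(x, w_pool, w_cond, cond):
    """Rescale each channel of x by a gate computed from the pooled
    features plus a conditioning signal (e.g. an ISO embedding).

    x      : feature map, shape (C, H, W)
    w_pool : (C, C) weights applied to the pooled channel descriptor
    w_cond : (C, K) weights projecting the conditioning vector
    cond   : (K,) conditioning signal
    """
    pooled = x.mean(axis=(1, 2))            # global average pool -> (C,)
    gate = w_pool @ pooled + w_cond @ cond  # mix features with conditioning
    scale = 1.0 / (1.0 + np.exp(-gate))     # sigmoid gate in (0, 1)
    return x * scale[:, None, None]         # per-channel rescale

out = conditioned_channel_attention(
    np.ones((4, 2, 2)), np.zeros((4, 4)), np.zeros((4, 3)), np.zeros(3)
)
print(out[0, 0, 0])  # 0.5 (zero weights -> sigmoid(0) gate)
```

The conditioning path lets the same weights adapt the per-channel gating to, say, the noise level of the input.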

There is a link to the repo with the model and training code in the README, but as I look at it I realize it’s a bit out of date haha. I’ll update it because I want the recipe to be very easy to follow so that I can produce more and more models (e.g. highlight reconstruction, banding removal, etc…)

The models are (mostly) trained on the RAWNIND dataset:

All the models are trained on rented Google GPUs or on Kaggle (maybe I’ll get a GPU sometime haha).

I just realized that while I have an acknowledgement of RAWNIND at the bottom of the README, I only acknowledged NAFNet in the Restorer repo. They are both acknowledged now. I appreciate both of their work greatly! Open source research is awesome.


Awesome!

@kofa your result on this extremely noisy image looks promising. I look forward to trying this when a Windows installer is available.


Really nice! It seems the .dmg link is not currently up. I will try to compile it myself, but wanted to let you know, @RawRefinery.

I have followed your revised steps but am having a problem getting RawRefinery to run. This is how far I am able to get:

Any suggestions? Thanks.

Thanks, it should be fixed now.


That’s on the list; I’ll be sure to post here when I build one.

It looks like some dependencies that Qt (which runs the GUI) needs might be missing. That might be expected, depending on what Linux Mint includes by default.

Something like this MIGHT work; however, I have not tested it on Mint.

sudo apt update
sudo apt install libxcb-cursor0 

Or, maybe:

sudo apt install \
    libxcb-cursor0 \
    libxcb-xinerama0 \
    libxcb-xkb1 \
    libxkbcommon-x11-0 \
    libxcb-icccm4 \
    libxcb-image0 \
    libxcb-keysyms1 \
    libxcb-render-util0

If you do test it, let me know, and I can include a note on Linux Mint. Otherwise, I can try to spin up a Linux Mint VM tomorrow and test it myself.


I will give it a try and get back to you. Thanks very much for your efforts.

Edit - OK, that seemed to do the trick. I have RawRefinery running now and will do some testing. Once again, thank you very much.


I had to do something like that to get CUDA to work with @agriggio’s Python code that uses SAM2. I could change the config file, but I could only use the CPU until I got PyTorch with CUDA correctly installed. That was all on Windows, though…