Introducing a new FOSS raw image denoiser, RawRefinery, and seeking testers.

Wouldn’t that need an nvidia GPU? That would not be welcome from this AMD user.

Sorry to be thick… how did you get it to work? I didn’t follow… any and all attempts at installing PiDNG didn’t work… I got some error about something being deprecated??

For the record, I am sure the devs would consider AMD as well in this hypothetical; I just see potential headaches for the devs trying to support ML models.

But, they know better than I do. Maybe it’s easy to export the model as ONNX and run it in OpenCL? I’m not sure.

I had to recompile PyTorch from source for the 1060, since the PyPI version does not support older hardware anymore. Unfortunately, my nvidia system is currently broken due to a very noisy fan.

My thoughts:
I have my personal problems with the results of most AI denoising. See here:

I second @Terry’s answer that maybe a combination of the results would be a smart way to go.

Interestingly @sillyxone demonstrated here as well the capabilities of ninddenoise:

Comparing it to the result of RawRefinery, I prefer the look of ninddenoise. The result here is kept darker and therefore isn’t a fair comparison, but to me it looks like more details were kept. It still has a somewhat plasticky look, but it still looks less artificial than most other AI denoising tools.

That said, I would in any case welcome FOSS AI denoising capabilities for dt. Unfortunately, ninddenoise and RawRefinery are not made for AMD GPUs.
I tried to run ninddenoise on my PC but it didn’t work. I’ll give RawRefinery a try in the next few days, if I can get it to run on my CPU.

I have a feature request in that case! Have two grain sliders:

  1. “Exposure grain” - add grain along the same direction as the denoised color of the pixel. Doing it that way makes the result white-balance invariant, which is pretty nice.
  2. “Chroma grain” - any remaining grain after removing the pixel-color-aligned component.

I think most users would put the first slider at around 10-50% to remove the plastic feeling and the second at 0%, because we usually focus on denoising the chroma component. Oh, and make the grain sliders interactive by not reprocessing the full model!
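A rough numpy sketch of the split I have in mind (purely illustrative, not anything RawRefinery actually does today):

import numpy as np

def split_grain(noisy, denoised, eps=1e-8):
    # residual noise between the noisy input and the denoised output
    residual = noisy - denoised
    # unit vector along the denoised pixel color
    direction = denoised / (np.linalg.norm(denoised, axis=-1, keepdims=True) + eps)
    # component of the residual aligned with the pixel color ("exposure grain")
    magnitude = np.sum(residual * direction, axis=-1, keepdims=True)
    exposure_grain = magnitude * direction
    # whatever is left over ("chroma grain")
    chroma_grain = residual - exposure_grain
    return exposure_grain, chroma_grain

def apply_grain(denoised, exposure_grain, chroma_grain, exp_amount=0.3, chroma_amount=0.0):
    # two independent sliders blending the components back in
    return denoised + exp_amount * exposure_grain + chroma_amount * chroma_grain

Since only the two blend amounts change when you move a slider, this part could stay interactive without re-running the network.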

For the color space and white balance discussion: clipping can become really bad once it happens, and a strong argument for the camera space is that it’s guaranteed not to clip if you really need to keep values positive. Another option is to opt for an even wider gamut like ACES AP0 that covers the entire spectral locus. You also seem keen on conditioning; maybe have slight random alterations of the white point/balance as part of the training to make the model less reliant on assuming what light the scene was lit with? A second thought on that is that any transform or balance change will alter the expected noise of the image and kind of destroy the ISO conditioning…
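Something like the following is what I have in mind for the augmentation. Just a sketch: the gain range is a guess, and per the caveat above the noise conditioning has to be scaled along with the gains or it loses its meaning.

import numpy as np

rng = np.random.default_rng(0)

def random_wb_augment(clean, noisy, noise_sigma):
    # small random per-channel gains around 1.0 (range is an assumption)
    gains = rng.uniform(0.8, 1.25, size=(1, 1, 3))
    # a gain scales the noise standard deviation by the same factor,
    # so the conditioning value has to follow it
    return clean * gains, noisy * gains, noise_sigma * gains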


very cool! great results and also nice that you provided training infrastructure. this looks like a very refined piece of code :slight_smile:

as to integration into photography programs: i’m absolutely no machine learning person, but i recently integrated a convolutional u-net without dependencies. this works directly on the cooperative matrix / tensor core extensions, so it will run on newer nvidia and amd hardware (potentially with some pain points on intel too). i had a quick look over your model, and while it’s a fair bit more complicated, there are no crazy operations that would require implementation (pretty much sum, multiplication, and 1x1 convolution on top of the 3x3 i have). cuda and torch are more like 10GB (?) of extra dependencies, and even onnx runtimes aren’t tightly integrated, so for my part i’d like to avoid this (don’t know about darktable).
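for illustration: the heaviest extra op is tiny anyway. a 1x1 convolution is just a per-pixel matrix multiply, which maps straight onto the cooperative matrix path (numpy sketch for clarity, obviously not my shader code):

import numpy as np

def conv1x1(feat, weights, bias):
    # feat: H x W x C_in feature map, weights: C_out x C_in, bias: C_out
    # a 1x1 convolution is nothing but a matrix multiply per pixel
    return np.einsum('hwc,oc->hwo', feat, weights) + bias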


Have you considered creating synthetic data for training? Darktable has a database of sensor noise profiles for most cameras. Would you be able to take example RAW files shot at base ISO and generate noisy versions from them? Perhaps with multiple randomisations to avoid over-fitting.
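Something along these lines is what I’m imagining, using the usual Gaussian+Poissonian sensor model. Only a sketch: the parameter names are made up and this is not darktable’s actual profile format.

import numpy as np

rng = np.random.default_rng(0)

def synthesize_noisy(clean_raw, poissonian, gaussian):
    # heteroscedastic Gaussian approximation of the Poisson+Gaussian model:
    # the variance grows linearly with the signal level
    variance = poissonian * np.clip(clean_raw, 0.0, None) + gaussian
    return clean_raw + rng.normal(0.0, np.sqrt(variance))

# toy stand-in for a clean base-ISO frame; in practice a real raw mosaic
clean = rng.uniform(0.0, 1.0, size=(64, 64))

# several randomised noise strengths from one clean frame, so the network
# does not over-fit to a single noise realisation
noisy_versions = [synthesize_noisy(clean, a, b)
                  for a, b in [(0.001, 1e-5), (0.005, 5e-5), (0.02, 2e-4)]]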

Another question, would it help if the noise profile information were passed to the denoise network as an extra parameter so that it has a better understanding of how much noise needs to be removed?

see david’s approach here. it’s not easy to teach the network about a noise level that you supply, but he managed to make it work.


Thanks. Does that just work with a single value, or is it using the full three colour parameter set that dt uses?

to tell the truth it was a bit of a pragmatic decision to make the dt values coloured. actually these sensors should have no real difference in noise behaviour between the colour channels. pixels may be a bit darker due to CFA absorption, so the dng spec has NoiseProfile (51041) in colour, too, for the gauss+poisson parameters. in any case the dt denoising is done after black point subtraction and even white balancing, so these values are probably not meaningless but subject to some transformation before you can apply them somewhere else.
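back of the envelope, that transformation is at least benign: the gauss+poisson shape survives black subtraction and a white balance gain, only the two parameters change (sketch, not actual dt code):

# raw value x with var(x) = a*x + b. after black subtraction and gain g,
#   y = g*(x - black)
# the variance is still affine in the signal, just with new parameters:
#   var(y) = g*a*y + g*g*(a*black + b)
def transform_noise_params(a, b, black, g):
    return g * a, g * g * (a * black + b)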

again, i’m no machine learning person, but as i understand @David_Tschumperle’s architecture, there is another small MLP involved trying to make sense of the input parameters before the values are handed to the next stage of the network. so i’m guessing yeah hand it whatever and the MLP will figure it out. also i believe he added a full-res buffer with per-pixel variances? these would be evaluations of the noise model, not the parameters.

There are also the various ACES color spaces, that might be better.

I used uv run rawrefinery. Since I already had the non-Python dependencies, that worked without issue.

It’s certainly “practical”, but there are a number of things that would have to be figured out. It would probably have to be based on ONNX, but apparently there’s something about the DT build system that makes it a little tricky at the moment. Search the AI issues on GitHub for a bit more on that. Then there’s the question of distributing the models and integrating that into the UI.

@Jens-Hanno_Schwalm is the expert on that, I believe.

I’m actually using the so-called FiLM technique: it just applies a simple linear modulation \alpha\;x + \beta on the feature-map with coefficients \alpha and \beta that are learnt with a simple MLP (having as input the estimated noise level). It works quite OK and allows to modulate the strength of the denoising process.
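A minimal PyTorch sketch of what that looks like (layer sizes and names are only illustrative, not the exact architecture):

import torch
import torch.nn as nn

class FiLM(nn.Module):
    # predict per-channel (alpha, beta) from the conditioning value,
    # here the estimated noise level, and modulate the feature map
    def __init__(self, n_channels, cond_dim=1, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_channels))

    def forward(self, x, cond):
        # x: (B, C, H, W) feature map, cond: (B, cond_dim) noise level
        alpha, beta = self.mlp(cond).chunk(2, dim=-1)
        return alpha[:, :, None, None] * x + beta[:, :, None, None]

Each such block lets the supplied noise level modulate how strongly that stage of the network denoises.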

@Popanz

Unfortunately, ninddenoise and RawRefinery are not made for AMD GPUs.

I am thinking of making AMD instructions.

Comparing it to the result of RawRefinery, I prefer the look of ninddenoise. The result here is kept darker and therefore isn’t a fair comparison, but to me it looks like more details were kept.

RawRefinery does allow users to change the conditioned noise level, which roughly controls how much detail is kept. I also have plans to produce a model that is more aimed at a “natural” look with more detail and primarily chroma reduction. But one thing at a time!

@jandren

Good ideas. I tried something similar before, working in an HSL space, but I like your idea. Basically, project the noisy pixel value onto the denoised pixel value, and then apply that as a magnitude for the denoised image?

You also seem keen on conditioning, maybe have slight random alterations of white point/balance as part of the training to make the model less reliant of assuming what light the scene was lit with?

Hmm, interesting. BTW, I mostly have conditioning to give the users some sort of control over the process.

However, I will be retesting the model in the camera color spaces for sure first.


@hanatos

Thanks! As a note, the training code is a tad out of date. It’s not incorrect, but it might be annoying for someone to adapt it at the moment. Updating it is also on the to-do list, but between job applications and all…

as to integration into photography programs: i’m absolutely no machine learning person, but i recently integrated a convolutional u-net without dependencies

That’s interesting! I’ll look over your code.

@Toast
I experimented with synthetic data, but prefer real data when I have the chance! However, it’s quite possible that a sensor noise profile would help, and so might including some synthetic noise profiles from less-represented cameras. I’ve already seen that the results can be subpar on some Pentax cameras, for example. Do you have a link to the profiles?

I would avoid any intermediate color space if possible, otherwise some of the data will get clipped. Most cameras have relatively similar spectral responses, so I would find it surprising if the native space of each camera differed enough to give bad results with the model.
This is one of the parts I don’t like about black boxes: it’s hard to know what will have a big impact. But the results are clear that this model is doing something right.

I get a syntax error in utils.py, line 11:
print(f"Found CUDA arch {“sm_{major}{minor}”}. Using Cuda")
SyntaxError: f-string: expecting ‘}’

I guess this should be changed to
print(f"Found CUDA arch sm_{major}{minor}. Using CUDA")
Similar in line 14.

I am wondering why others do not get this error…
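My guess, for what it’s worth: reusing the same quote character inside an f-string expression only became legal in Python 3.12 (PEP 701), so whether line 11 parses probably just depends on the interpreter version.

# parses on any modern Python:
major, minor = 6, 1
print(f"Found CUDA arch sm_{major}{minor}. Using CUDA")

# reusing double quotes inside the braces, as on the original line 11,
# only parses on Python >= 3.12 (PEP 701); older interpreters stop with
# the "f-string: expecting '}'" SyntaxError quoted above.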

Both statements sound fantastic. Thanx for your brilliant work!


@deekay

I apologize. I pushed an update that should fix the issue (to GitHub only, pip remains without the offending code). If someone could test, that would be great. I don’t actually have a gpu on hand.

@jandren

I liked the idea, so I implemented a mock-up in a local version. It projects the noisy RGB vector onto the denoised vector. This results in a nice monochromatic noise… as long as you don’t adjust the color space or white balance further. I then use the grain slider to blend in the new grain.

This ideally would be done post wb, and that could be added in, but I can show the results of the simple test prototype at different grain blending amounts (0, 20, 50), along with the original noisy version. I tried to upload more, but I can only embed 4 images as a new user haha.

Qualitatively, I like the results at the 20/50% levels, although in some ways the grain actually makes some of the remaining splotchy noise from the denoiser more obvious in the 20% version. If people like it, I’ll push the update as a change to how the grain slider works until I have time to produce a new version.

Also, I am response limited, so sorry if I haven’t replied to you!





No need to apologize. This is why we test, right?
Actually, I installed from pip, so the error is still there.
When installing from GitHub it works fine.

I am really impressed by the results, and I like the additional noise-level control, which helps to achieve a more natural look.

Thanks a lot for your work!
