After your edit, the noisy image disappeared. Was that intentional?
You and I have different definitions of "very noisy". So far the images are generally well framed, lit, and thought out. It would be nice to train on images that aren't so ideal; that was what I was implying.
The noisy ones are at File:NIND MuseeL-Bobo ISO6400.jpg - Wikimedia Commons (and the “other versions” links there). I originally had one clean-noisy pair here, but I added more ISO values and didn’t want to take up too much space on this board since the noisy ones are already available there (and I haven’t found how to make a small gallery yet).
The noise values can go (and do go) pretty far: beyond the maximum ISO of this camera, and to much greater values in the full-frame images (I haven’t made one of those for the test set yet, because everything I can use for training is precious, but I could use some regular images and just rely on visual comparison without a metric …).
I’m not sure the framing and such would help much, because training works on roughly 220×220-pixel patches which are cropped pretty much at random, but more variety definitely helps. The only limitation is that the scene can’t move at all, and that includes the lighting, which has to stay constant across all shots.
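The paired random-patch cropping described above can be sketched roughly as follows (the 220-pixel patch size comes from the thread, but the sampling code itself is an illustration, not the actual NIND training code):

```python
import numpy as np

def random_paired_patch(clean, noisy, size=220, rng=None):
    """Crop the same random size x size window from an aligned clean/noisy pair."""
    rng = rng or np.random.default_rng()
    h, w = clean.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return clean[y:y + size, x:x + size], noisy[y:y + size, x:x + size]

# Toy aligned pair; real inputs would be the registered clean/noisy exposures.
clean = np.zeros((600, 800), dtype=np.uint16)
noisy = np.ones((600, 800), dtype=np.uint16)
cp, npatch = random_paired_patch(clean, noisy)
print(cp.shape, npatch.shape)  # (220, 220) (220, 220)
```

Because both crops use the same window, any residual misalignment between the shots carries straight into the training pair, which is why the scene and lighting must stay perfectly static.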
I tried to include some images that are pleasant to look at, because I expect that other researchers, readers, and I will be looking at these images a lot, so it makes the job more pleasant.
Would it be possible to use .pgm files in your software? When I tried DxO DeepPRIME yesterday it generated a DNG.
dcraw -c -4 -E -j -t 0  # -c: write to stdout, -4: linear 16-bit output, -E: totally raw output including masked margin pixels, -j: don't stretch/rotate raw pixels, -t 0: no flip
I have used pgm2dng before, and it would be interesting to see how much better the NR would be. pgm2dng needs to be configured for each camera (black level), so starting with one camera model would be a good start.
Not out of the box but it shouldn’t be too hard to implement.
PGM files should be handled by OpenCV (which should now be used in all read/write operations) and ImageMagick (cropping).
I believe those are single-channel images, so the networks would have to be adapted from 3 to 1 channel (the first parameter, in_channels, or the second, out_channels, of torch.nn.Conv2d and torch.nn.ConvTranspose2d).
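The 3-to-1-channel adaptation only touches the layers at the edges of the network. A sketch (the layer sizes here are illustrative, not the actual NIND architecture):

```python
import torch
import torch.nn as nn

# RGB version: first layer takes 3 input channels
rgb_in = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

# Bayer/PGM version: only in_channels of the first layer and
# out_channels of the last layer change from 3 to 1
mono_in = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1)
mono_out = nn.ConvTranspose2d(in_channels=64, out_channels=1, kernel_size=3, padding=1)

x = torch.randn(1, 1, 220, 220)  # one 220x220 single-channel patch
y = mono_out(mono_in(x))
print(y.shape)  # torch.Size([1, 1, 220, 220])
```

With stride 1, kernel 3, and padding 1, both layers preserve the spatial size, so the 220-pixel patches pass through unchanged; intermediate layers keep their channel counts.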
I think the second biggest issue (after non-generalization) is that exposure and alignment may not be equal across shots, but that could be a non-issue with a camera’s native ISO values and easy static scenes, and less time processing means more time can be spent acquiring data.
Meanwhile I have been trying to work with PGM files so I can noise-reduce before demosaicing and before black-level subtraction. I got stuck with PGM files from the 7D: when I try to create DNG files from PGM I get the wrong colours. Perhaps the wrong CFA pattern? It works with the M5, M6, R, etc.
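A wrong CFA pattern can sometimes be spotted directly from the raw data, assuming the PGM really holds the undemosaiced Bayer mosaic: the two green sites of a Bayer 2×2 cell usually have very similar means on a natural scene, so comparing the means of the four phases hints at RGGB vs. GBRG, etc. This is a hypothetical diagnostic sketch, not part of pgm2dng:

```python
import numpy as np

def bayer_phase_means(raw):
    """Mean pixel value at each of the four 2x2 Bayer phases."""
    return {
        "(0,0)": raw[0::2, 0::2].mean(),
        "(0,1)": raw[0::2, 1::2].mean(),
        "(1,0)": raw[1::2, 0::2].mean(),
        "(1,1)": raw[1::2, 1::2].mean(),
    }

# Synthetic RGGB-like frame: the two green phases (0,1) and (1,0)
# are equal and brighter than the red and blue phases.
demo = np.zeros((4, 4), dtype=np.uint16)
demo[0::2, 0::2] = 1000   # R sites
demo[0::2, 1::2] = 3000   # G sites
demo[1::2, 0::2] = 3000   # G sites
demo[1::2, 1::2] = 500    # B sites
print(bayer_phase_means(demo))
```

On a real capture, the pair of phases with nearly identical means marks the green diagonal; if the pattern declared in the DNG puts the greens elsewhere, the colours come out wrong, which would be consistent with the 7D symptom above.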
Will do. It might take me a while (a month or so?) because I’m currently focusing a lot on File:ElectricBoatDiagram.png - Wikimedia Commons, and I will hopefully be moving soon. I’m excited to start working on raw(-ish) denoising though. Thank you for working on this!