Exporting/importing photos in full 32-bit representation (to train a neural network for image denoising)

I think that’s fixed, thank you for testing! The broken imports were random autocomplete typos I sometimes get using Eclipse (and from cats), and I had forgotten an “import numpy as np” in the shared pt_helpers.py library.


Works! Some XCF files for anyone who wants to see the difference: https://drive.google.com/file/d/1nAUmntZcIqAFGysX6LGQKCouuaCoSrL1/view?usp=sharing

I had to try the CPU: 203 seconds with a Threadripper 1950X and 32 threads at around 40% load. The GPU (an Nvidia 2080) took 13 seconds. It’s not important to me since I have the graphics card, but have you tried whether fewer threads make it faster?

About the colour space issue I mentioned before: I think the problem is that the ICC profile tags are not copied.

I added -all and -icc_profile in denoise_image.py and have no issues anymore in GIMP or darktable.
cmd = ['exiftool', '-TagsFromFile', args.input, args.output, '-all', '-icc_profile', '-overwrite_original']
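For reference, here is a minimal sketch of wrapping that call from Python; the helper names are illustrative, not the actual denoise_image.py API:

```python
import subprocess

def build_exiftool_cmd(input_path, output_path):
    """Build the exiftool call that copies all tags, including the ICC
    profile, from the source image onto the denoised output."""
    return [
        "exiftool", "-TagsFromFile", input_path, output_path,
        "-all", "-icc_profile", "-overwrite_original",
    ]

def copy_tags(input_path, output_path):
    # Requires exiftool to be installed and on the PATH.
    subprocess.run(build_exiftool_cmd(input_path, output_path), check=True)

print(build_exiftool_cmd("noisy.jpg", "denoised.jpg"))
```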

The image is from this PlayRaw:


Thank you, I’ve added the -icc_profile exiftool argument.

I don’t know whether processing time has increased from the previous model; the model file is somehow twice as big. (My system is constantly busy, so I can’t benchmark runtime reliably at the moment.)

The number of CPU threads can’t really be set in PyTorch as far as I know; it probably depends on the underlying math library (which is often Intel’s, unfortunately). The batch_size is currently set to 1, so each crop is processed sequentially; it can be increased so that multiple crops are processed in parallel. The code would need some adjustment to output something useful (args.batch_size != 1 raises an error at the moment), but the timing should be correct, so we can get an idea of the effect of more parallel jobs (faster here). This would benefit GPU runs as well.
I’m adding a commit which should change the single-image data loader’s number of threads when batch_size > 1, so that the timing isn’t influenced by I/O.
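To illustrate the batching idea (this is a sketch, not the actual script’s code): the image is split into fixed-size crops, which are then grouped into batches of batch_size, one forward pass per batch:

```python
def crop_origins(width, height, crop_size):
    """Yield the (x, y) origin of each non-overlapping crop covering the image."""
    for y in range(0, height, crop_size):
        for x in range(0, width, crop_size):
            yield (x, y)

def batched(crops, batch_size):
    """Group crops into lists of at most batch_size, each list
    to be denoised in a single forward pass."""
    batch = []
    for crop in crops:
        batch.append(crop)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # the last, possibly smaller, batch
        yield batch

crops = list(crop_origins(1024, 768, 256))       # 4 x 3 = 12 crops
print(len(crops), len(list(batched(crops, 4))))  # 12 crops -> 3 batches of 4
```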


batch_size > 1 (parallel processing of multiple crops or multithreading) is now fully functional :slight_smile:

If you still have a Canon camera, Magic Lantern and this Lua script will make things easier: Take a photo at each ISO

It takes a picture at each ISO in a few seconds.

That would be wonderful. Unfortunately there’s nothing open-source available on my Nikon and Fujifilm cameras. (The Canon was lent by my university but I don’t think I’ll get support on already published research.)

Canon 400D, ISO 1600, pushed 1.80 EV in darktable.

DxO Deep PRIME
NIND
No NR.



Going through the images quickly, the trained algorithm seems to do well at cleaning up relatively low-noise images, and it can handle fairly uniform pattern noise.

It doesn’t do a good job of recovering detail (maybe this isn’t something it does…?) or of removing low-frequency noise. E.g., Waves at Shore Acres Oregon, which may already have chunky noise, is made even splotchier, taking on the appearance of stipple art after processing.

It would be great if we could train with a larger variety of noise scenarios and subjects.

I think it works well cleaning up very noisy images. See the example below from the test set of a recently trained model:

ISO2500 (noisy: File:NIND MuseeL-Bobo ISO2500.jpg - Wikimedia Commons ):


ISO6400 (noisy: File:NIND MuseeL-Bobo ISO6400.jpg - Wikimedia Commons ):

ISO"H1" (noisy: File:NIND MuseeL-Bobo ISOH1.jpg - Wikimedia Commons ):

ISO"H2" (noisy: File:NIND MuseeL-Bobo ISOH2.jpg - Wikimedia Commons ):

The main issue with the Waves at Shore image is that there is heavy processing after the denoising takes place (heavy use of local contrast makes for a very different image), whereas it’s made to denoise an image that’s nearly done, minus some sharpening (because that should never be done before denoising).

It might help to train a model to denoise images before any processing is done at all (so that one’s processing method does not affect the result). That’s the current goal.

And of course more training data is helpful, especially with different sensors. I checked generalization across different sensors of the same size, but I know that it performs poorly on phone images, which are wildly different, and it needed some additional training data for full frame.

edit: here is the whole denoised test set:
https://drive.google.com/drive/folders/1ynDCpufpGPQ78uromupPfkHa8yZMJW1h?usp=sharing
matching images are on Natural Image Noise Dataset - Wikimedia Commons

After your edit, the noisy image disappeared. Was that intentional?

You and I have a different definition of very noisy. :stuck_out_tongue: So far, the images are generally well framed, lit and thought out. It would be nice to train on images that aren’t so ideal. That was what I was implying.

The noisy ones are on File:NIND MuseeL-Bobo ISO6400.jpg - Wikimedia Commons (and the “other versions” links there). I originally had one clean/noisy pair, but I added more ISO values and didn’t want to take up too much space on this board since the noisy versions are already available there (and I haven’t found out how to make a small gallery yet).

The noise values can (and do) go pretty far: beyond the maximum ISO of this camera, and to much greater values in the full-frame images (I haven’t made one of those for the test set yet, because everything I can use for training is precious, but I could use some regular images and just rely on visual comparison without a metric …).

I’m not sure the framing and such would help much, because training works on patches of about 220 pixels, which end up pretty random, but more variety definitely helps. The only limitation is that the scene can’t move at all, and that includes the lighting, which has to stay constant across all shots.

I tried to include some images that are pleasant to look at, because I (and, I expect, other researchers and readers) will be looking at them a lot, so it makes the job more pleasant.

Would it be possible to use .pgm files in your software? When I tried DxO Deep PRIME yesterday, it generated a DNG.

dcraw -c -4 -E -j -t 0

I have used pgm2dng before, and it would be interesting to see how much better the NR would be. pgm2dng needs to be written for each camera (black level), so starting with one camera model would be sensible.
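The camera-specific part mentioned above is essentially black/white level normalization; a rough sketch of the idea (the level values here are made up, not real calibration data for any camera):

```python
import numpy as np

def normalize_raw(raw, black_level, white_level):
    """Map raw sensor values to [0, 1]: subtract the camera's black level
    and scale so the white (saturation) level becomes 1.0."""
    out = (raw.astype(np.float32) - black_level) / float(white_level - black_level)
    return np.clip(out, 0.0, 1.0)

# Toy 2x2 raw patch with hypothetical 14-bit levels (black=2048, white=16383).
raw = np.array([[2048, 2048], [16383, 9216]], dtype=np.uint16)
print(normalize_raw(raw, black_level=2048, white_level=16383))
```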

3 sets of Canon EOS D60 raw files for you to use https://drive.google.com/file/d/1dJU8glhauUzje7CAy2WjfA40UFlbHORS/view?usp=sharing

If anyone else wants to take pictures with old cameras without moving the camera, I recommend gphoto2.

Run gphoto2 --list-all-config to list the ISO values of your camera.

For D60 I used:
ISO 100
gphoto2 --set-config=/main/settings/iso=1 --capture-image

ISO 200
gphoto2 --set-config=/main/settings/iso=4 --capture-image

ISO 400
gphoto2 --set-config=/main/settings/iso=7 --capture-image

ISO 800
gphoto2 --set-config=/main/settings/iso=10 --capture-image

ISO 1000
gphoto2 --set-config=/main/settings/iso=11 --capture-image
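The five invocations above can also be scripted; this just builds the same command lines (the ISO index values are the camera-specific ones from the list, here for the D60):

```python
# ISO speed -> gphoto2 config index for the Canon D60 (values from the list above)
ISO_INDEX = {100: 1, 200: 4, 400: 7, 800: 10, 1000: 11}

def capture_cmd(index):
    """Command line for one capture at the given ISO config index."""
    return ["gphoto2", f"--set-config=/main/settings/iso={index}", "--capture-image"]

for iso, index in sorted(ISO_INDEX.items()):
    print(iso, " ".join(capture_cmd(index)))
```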


7 sets of Canon EOS 7D raw files for you to use https://drive.google.com/file/d/1o6re7IQ0ZmEY85cTabFpfhvBniJDfMcC/view?usp=sharing

If anyone else wants to take pictures with a 7D and use a script, I recommend Lua together with Magic Lantern.

function test()
    -- one shot at each APEX ISO value (apex 5 = ISO 100 ... apex 12 = ISO 12800)
    for apex = 5, 12 do
        camera.iso.apex = apex
        camera.shoot()
    end
end

keymenu = menu.new
{
    name = "ISO photo bracket",
    select = function(this) task.create(test) end,
}


ISO-BRACKET-7D.zip (372 Bytes)


Not out of the box but it shouldn’t be too hard to implement.

PGM files should be handled by OpenCV (which should now be used in all read/write operations) and ImageMagick (cropping).

I believe those are 1-channel images, so the networks would have to be adapted from 3 channels to 1 (the first, input, or second, output, parameter of torch.nn.Conv2d and torch.nn.ConvTranspose2d).
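Concretely, only the in_channels of the first layer and the out_channels of the last one change; a minimal sketch (the layer sizes here are arbitrary, not the actual network’s):

```python
import torch
import torch.nn as nn

# RGB variant: the first layer takes 3 channels.
first_rgb = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)
# Greyscale (PGM) variant: the same layer with in_channels=1.
first_gray = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
# ...and the final upsampling layer emits 1 channel instead of 3.
last_gray = nn.ConvTranspose2d(in_channels=32, out_channels=1, kernel_size=2, stride=2)

x = torch.zeros(1, 1, 64, 64)  # one single-channel 64x64 patch
y = last_gray(first_gray(x))   # conv keeps 64x64; transpose conv doubles it
print(tuple(y.shape))          # (1, 1, 128, 128)
```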

I think the second biggest issue (after non-generalization) is that exposure and alignment may not be equal across shots, but that could be a non-issue with a camera’s native ISO values and easy static scenes, and less time processing means more time can be spent acquiring data.

Thank you!

PGM is greyscale. Are we dealing with greyscale images, as in medical or remote-sensing imagery?

.CR2 > greyscale .PGM > noise reduction > pgm2dng > .DNG

.DNG will be in colour.

My dcraw keeps crashing so I can’t check, and I don’t remember exactly what

dcraw -c -4 -E -j -t 0

does. I assumed this command would make a colour image since it doesn’t specify any interpolation options and the default is to debayer (though, if I recall correctly, -E is document mode, which skips debayering entirely and would explain a greyscale PGM).
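For what it’s worth, the file header itself tells you whether dcraw wrote greyscale or colour output: PGM (“P5”) is single-channel, PPM (“P6”) is colour. A generic header peek, not part of any of the scripts here:

```python
def pnm_header(data: bytes):
    """Parse the magic number, dimensions and max value from a binary
    PGM/PPM header. 'P5' means greyscale, 'P6' means colour."""
    fields = data.split()
    magic = fields[0].decode()
    width, height, maxval = (int(f) for f in fields[1:4])
    return magic, width, height, maxval

# A 16-bit greyscale header as `dcraw -4 -E` would write (toy dimensions):
print(pnm_header(b"P5\n4032 3024\n65535\n"))  # ('P5', 4032, 3024, 65535)
```

(Note this simple parser ignores the optional `#` comment lines PNM headers may contain.)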