Exporting/importing photos in full 32-bit representation (to train a neural network for image denoising)

After your edit, the noisy image disappeared. Was that intentional?

You and I have different definitions of very noisy. :stuck_out_tongue: So far the images are generally well framed, well lit, and well composed; it would be nice to train on images that aren’t so ideal. That’s what I was implying.

The noisy ones are on File:NIND MuseeL-Bobo ISO6400.jpg - Wikimedia Commons (and in the “other versions” links there). I originally posted one clean–noisy pair, but I added more ISO values and didn’t want to take up too much space on this board since the noisy ones are already available there (and I haven’t found out how to make a small gallery yet).

The noise values can (and do) go pretty far: beyond this camera’s maximum ISO, and to much greater values in the full-frame images (I haven’t made one of those for the test set yet, because everything I can use for training is precious, but I could use some regular images and just rely on visual comparison without a metric …).

I’m not sure the framing and such would help much, because training works on roughly 220-pixel patches cropped at random, but more variety definitely helps. The only limitation is that the scene can’t move at all, and that includes the lighting, which has to stay constant across all shots.
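For context, here is a sketch of what that aligned patch sampling could look like (names and sizes are illustrative; the real training pipeline differs):

```python
import numpy as np

def random_aligned_patches(clean, noisy, patch=220, n=4, rng=None):
    """Crop the same random windows from an aligned clean/noisy image pair."""
    rng = rng or np.random.default_rng()
    h, w = clean.shape[:2]
    pairs = []
    for _ in range(n):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        # identical coordinates on both images: alignment is what matters
        pairs.append((clean[y:y + patch, x:x + patch],
                      noisy[y:y + patch, x:x + patch]))
    return pairs
```

Since each training sample is a small random crop, careful framing of the full photograph matters much less than scene variety.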

I tried to include some images that are pleasant to look at, because I (and, I expect, other researchers and readers) will be looking at them a lot, and that makes the job more pleasant.

Would it be possible to use .pgm files in your software? When I tried DxO DeepPRIME yesterday it generated a .dng.

dcraw -c -4 -E -j -t 0

I have used pgm2dng before and it would be interesting to see how much better the NR would be. pgm2dng needs to be adapted for each camera (black level), so starting with one camera model would be a good start.
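For reference, the black-level dependence is just an offset and scale applied before everything else; a minimal numpy sketch with hypothetical levels (each camera's real black/white levels have to be looked up, e.g. in rawspeed's cameras.xml):

```python
import numpy as np

def normalize_raw(raw, black_level, white_level):
    """Subtract the camera's black level and scale raw values to [0, 1]."""
    raw = raw.astype(np.float32)
    return np.clip((raw - black_level) / (white_level - black_level), 0.0, 1.0)

# hypothetical 14-bit sensor with a black level of 2048
sample = np.array([1500, 2048, 9000, 16383])
normalized = normalize_raw(sample, black_level=2048, white_level=16383)
```

Values below the black level clip to 0; the sensor's saturation point maps to 1.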

3 sets of Canon EOS D60 raw files for you to use https://drive.google.com/file/d/1dJU8glhauUzje7CAy2WjfA40UFlbHORS/view?usp=sharing

If anyone else wants to take pictures with old cameras without moving the camera, I recommend gphoto2.

Run gphoto2 --list-all-config to list your camera’s available ISO settings.

For D60 I used:
ISO 100
gphoto2 --set-config=/main/settings/iso=1 --capture-image

ISO 200
gphoto2 --set-config=/main/settings/iso=4 --capture-image

ISO 400
gphoto2 --set-config=/main/settings/iso=7 --capture-image

ISO 800
gphoto2 --set-config=/main/settings/iso=10 --capture-image

ISO 1000
gphoto2 --set-config=/main/settings/iso=11 --capture-image
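The per-ISO commands above can also be scripted; here is a sketch using Python's subprocess, where the config path and index values are the D60 ones listed above (dry_run=True just returns the commands without needing a camera attached):

```python
import subprocess

# ISO label -> gphoto2 config index for the D60 (from --list-all-config)
D60_ISO = {100: 1, 200: 4, 400: 7, 800: 10, 1000: 11}

def iso_bracket(iso_map=D60_ISO, dry_run=False):
    """Capture one frame at each ISO via gphoto2, lowest ISO first."""
    cmds = []
    for iso in sorted(iso_map):
        cmd = ["gphoto2",
               f"--set-config=/main/settings/iso={iso_map[iso]}",
               "--capture-image"]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```

Since the camera never moves between calls, this produces an aligned ISO series of the same scene.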


7 sets of Canon EOS 7D raw files for you to use https://drive.google.com/file/d/1o6re7IQ0ZmEY85cTabFpfhvBniJDfMcC/view?usp=sharing

If anyone else wants to take pictures with a 7D and use a script, I recommend a Lua script together with Magic Lantern.

function test()
    -- step through APEX speed values 5–12 (≈ ISO 100–12800), shooting at each;
    -- camera.shoot() added here for completeness (the full script is attached)
    for apex = 5, 12 do
        camera.iso.apex = apex
        camera.shoot()
    end
end

keymenu = menu.new
{
    name   = "ISO photo bracket",
    select = function(this) task.create(test) end,
}

ISO-BRACKET-7D.zip (372 Bytes)


Not out of the box but it shouldn’t be too hard to implement.

PGM files should be handled by OpenCV (which should now be used in all read/write operations) and ImageMagick (cropping).

I believe those are single-channel images, so the networks would have to be adapted from 3 channels to 1 (the first, input-channel, parameter of the first torch.nn.Conv2d and the second, output-channel, parameter of the last torch.nn.ConvTranspose2d).
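As a sketch of that change, using a toy encoder/decoder (the real architectures differ), only the outermost layers reference the image channel count:

```python
import torch
import torch.nn as nn

def make_denoiser(channels: int = 1) -> nn.Sequential:
    """Toy encoder/decoder: only the first in_channels and the last
    out_channels depend on whether images are RGB (3) or greyscale (1)."""
    return nn.Sequential(
        nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),
    )

net = make_denoiser(channels=1)          # 1-channel for greyscale PGM data
out = net(torch.zeros(1, 1, 64, 64))     # shape is preserved: (1, 1, 64, 64)
```

Switching back to RGB training is just make_denoiser(channels=3); all the hidden layers stay unchanged.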

I think the second biggest issue (after non-generalization) is that exposure and alignment may not be equal across shots, but that could be a non-issue with a camera’s native ISO values and easy static scenes, and less time spent processing means more time can be spent acquiring data.

Thank you!

PGM is greyscale. Are we dealing with greyscale images, as in medical or remote-sensing imagery?

.CR2 → greyscale .PGM → noise reduction → pgm2dng → .DNG

The .DNG will be in colour.

My dcraw keeps crashing, so I can’t check, and I don’t remember exactly what

dcraw -c -4 -E -j -t 0

does. I assumed this command would make a colour image, since it doesn’t specify any interpolation options and the default is to debayer.
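For what it's worth, dcraw's -E (document mode) writes the sensor data without any colour interpolation, as a single-channel 16-bit binary PGM (type P5, big-endian samples). A minimal stdlib reader, assuming a dcraw-style header without exotic whitespace:

```python
import struct

def read_pgm16(path):
    """Read a binary (P5) 16-bit PGM such as `dcraw -4 -E` output:
    one greyscale sample per photosite, i.e. the undebayered mosaic."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5"
        line = f.readline()
        while line.startswith(b"#"):          # skip comment lines
            line = f.readline()
        width, height = map(int, line.split())
        maxval = int(f.readline())
        assert maxval > 255                   # 16-bit: 2 bytes/sample, big-endian
        samples = struct.unpack(f">{width * height}H",
                                f.read(width * height * 2))
    return width, height, list(samples)
```

Each value is one raw photosite behind one colour filter, which is why the file reads as greyscale even though the scene is in colour.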

I think this model delivers more detail :slight_smile:


Use it with
python denoise_image.py --arch UtNet --model_path generator_650.pt --input <yourimage.ext> --output <destination.tif>

For 1% of the training time I used clean–clean image pairs from Wikimedia Commons Featured pictures with ISO ≤ 200.

If you are using demosaiced images for training, the demosaicing algorithm will affect the noise quality/character, e.g. some algorithms amplify noise more than others.

I also suspect that the demosaicing differences between X-Trans and Bayer sensors will affect performance.

I’ve made two sets of raw images at various ISOs. It’s the same still-life scene under different lighting conditions.


Unfortunately they are from the ‘vintage’ Nikon D40, which is noisy even at its base ISO of 200. I have taken 4 shots at ISO 200 so they can be stacked to get the equivalent of ISO 50 (I think).
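Averaging N aligned shots reduces uncorrelated noise by a factor of sqrt(N), so stacking 4 frames roughly halves the noise standard deviation. A quick numpy illustration with synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((256, 256), 100.0)            # ideal noise-free scene

# four exposures of the same scene with independent Gaussian noise (std 8)
shots = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(4)]
stacked = np.mean(shots, axis=0)

single_std = np.std(shots[0] - scene)         # close to 8
stacked_std = np.std(stacked - scene)         # close to 4, i.e. 8 / sqrt(4)
```

This only holds if the scene and lighting are identical across shots, which is exactly the constraint discussed above for clean–noisy pairs.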

Images are placed into the public domain.

I can make more sets of still life scenes if you want.


Indeed more detail, but it failed to denoise underneath the bird. I tried a couple of other pictures with no issues.

09902 - latest one - python denoise_image.py --input in/*.tif --output out/*.tif --model_path ../../models/4/generator_650.pt --arch UtNet -i in/${i} -o out/$i

09901 - the last one before - python denoise_image.py --input in/*.tif --output out/*.tif --model_path ../../models/4/generator_734.pt --network UNet --cs 660 --ucs 470 -i in/${i} -o out/$i

0990 - source file

ISO 12800 with Canon EOS M5.

CC0 IMG_6990.CR2 (43.0 MB)

Meanwhile I have been trying to work with PGM files, to be able to noise-reduce before demosaicing and before black-level subtraction. I got stuck with PGM files from the 7D: when I try to create DNG files from PGM I get the wrong colours. Perhaps the wrong CFA pattern? It works with the M5, M6, R, etc.
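Noise-reducing before demosaicing usually means treating the CFA mosaic either as one channel or as four half-resolution colour planes; here is a sketch assuming an RGGB layout (the actual 2x2 order is exactly what that CFA value has to encode, and getting it wrong produces the kind of colour shift described above):

```python
import numpy as np

def split_cfa(mosaic):
    """Split a Bayer mosaic (RGGB assumed) into four half-resolution planes."""
    return {
        "R":  mosaic[0::2, 0::2],
        "G1": mosaic[0::2, 1::2],
        "G2": mosaic[1::2, 0::2],
        "B":  mosaic[1::2, 1::2],
    }

mosaic = np.arange(16).reshape(4, 4)
planes = split_cfa(mosaic)
# denoise each plane (or the raw mosaic directly), then reassemble before pgm2dng
```

Swapping which corner is R versus B is a one-line change here, which mirrors how a wrong CFA code scrambles the colours in the resulting DNG.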

Screenshot from 2021-06-27 14-37-25

For my M5 there were no issues.
dcraw -4 -E -j -t 0 IMG_6990.CR2
IMG_6990.zip (38.2 MB)

./pgm2dng IMG_6990.pgm

IMG_6990.DNG (48.7 MB)
CR2 on the left, DNG from pgm2dng on the right.
After that I copied all metadata with
exiftool -tagsFromFile IMG_6990.CR2 -all -icc_profile IMG_6990.DNG

pgm2dng is available from Magic Lantern: http://a1ex.magiclantern.fm/bleeding-edge/pgm2dng.c
Compile it with gcc -o pgm2dng -m32 pgm2dng.c /path/to/magic-lantern/src/chdk-dng.c -I/path/to/magic-lantern/src -lm

Or use the one I compiled for M5.
pgm2dng.zip (8.9 KB)

Thank you @Iain ! I will try to incorporate these in the dataset asap. The base ISO looks quite good.

@Peter The model indeed performs very poorly on this bird :confused: Hopefully that is something that gets fixed by training on unprocessed images.

But works great with the next bird ^^

Could you please try to train on these PGM files? It’s just one set, but that is what I have from the M5 right now, and this is only for testing. https://drive.google.com/file/d/1Q9XV7XiszRZ43ur_LesTJeJDPeA853O5/view?usp=sharing

After that, try to noise reduce IMG_6990.pgm in the post above.

Will do. It might take me a while (a month or so?) because I’m currently focusing a lot on File:ElectricBoatDiagram.png - Wikimedia Commons and I will hopefully be moving soon. I’m excited to start working on raw-ish denoising though. Thank you for working on this :slight_smile:


Meanwhile I will take some new sample series.

Got the 7D to work with pgm2dng. I checked rawspeed/cameras.xml at develop · darktable-org/rawspeed · GitHub for the 7D’s CFA order and used 0x01000201.
Screenshot from 2021-07-03 16-14-45

Screenshot from 2021-07-03 16-14-16