A while ago I made a dataset of clean/noisy image pairs to train a neural network for image denoising, published at https://commons.wikimedia.org/wiki/Natural_Image_Noise_Dataset (there is also an associated paper: http://openaccess.thecvf.com/content_CVPRW_2019/papers/NTIRE/Brummer_Natural_Image_Noise_Dataset_CVPRW_2019_paper.pdf )
The images were processed in darktable with all the usual steps applied except for sharpening (which amplifies noise), then aligned with Hugin tools, and the result was rather excellent. All the code is posted at https://github.com/trougnouf/mthesis-denoise (a bit messy, I apologize). Update: https://github.com/trougnouf/nind-denoise is a cleaned-up version, a few GB lighter; I will continue from there and keep mthesis-denoise for historical reasons.
My goal has always been to integrate this into image development software such as darktable. For this, I think it would be better to reprocess the raw images and apply a minimal number of processing steps; namely demosaicing (so that the network works with any type of sensor), exposure correction (because the various ISO shots vary in exposure, especially at insanely high ISO), and alignment (with an external tool, because no tripod/remote setup is perfect).
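To illustrate the exposure-correction step: equalizing the different ISO shots can be as simple as a single gain applied in linear light. Here is a rough sketch, assuming linear RGB arrays and using the ratio of mean intensities to estimate the offset in EV (my own simplification for illustration, not what darktable actually does):

```python
import numpy as np

def equalize_exposure(img, ref):
    """Match the overall exposure of `img` to `ref` (both linear RGB, float32).

    Estimates a single exposure offset in EV from the mean intensity ratio
    and applies it as a gain. No clipping is done, so values may
    legitimately fall outside [0, 1] after correction.
    """
    ev = np.log2(ref.mean() / img.mean())
    return img * (2.0 ** ev)

# Example: an image two stops underexposed relative to the reference
ref = np.full((4, 4, 3), 0.5, dtype=np.float32)
img = ref / 4.0
corrected = equalize_exposure(img, ref)
```

In practice the exposure offset between the base-ISO shot and a high-ISO shot would be estimated this way (or taken from metadata) and applied before the pair is fed to the network.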
Unfortunately there doesn’t seem to be a way to export images from darktable with the full internal representation and get them back exactly the same way. This would be needed because the network would be trained to take in 32-bit floats and output corrected ones in the same format (and with less processing I would likely get more values outside of the [0,1] range, unless I purposefully compress the histogram). Some 32-bit float options are available in darktable, but I can never reproduce the histogram I had at export. I can’t readily get the same image back with EXR or PFM, and TIFF does not seem to save negative values (the histogram is clipped). Maybe it’s just a matter of color profile? Otherwise I could work with other software, since the processing steps needed are minimal (demosaic and exposure).
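For what it’s worth, the PFM container itself is trivially lossless for 32-bit floats, including negative values, so any clipping must come from the processing pipeline rather than the format. A minimal writer/reader pair (my own helper names, not darktable code) that round-trips the raw float bytes exactly:

```python
import io
import numpy as np

def write_pfm(buf, img):
    """Write a float32 HxWx3 array as a color PFM.

    Header is "PF", then "width height", then a scale whose negative sign
    means little-endian; pixel rows are stored bottom-to-top as raw
    32-bit floats, so negative values survive untouched.
    """
    h, w, c = img.shape
    assert c == 3
    buf.write(b"PF\n")
    buf.write(f"{w} {h}\n".encode())
    buf.write(b"-1.0\n")
    buf.write(np.flipud(img).astype("<f4").tobytes())

def read_pfm(buf):
    """Read a color PFM back into a float32 HxWx3 array."""
    assert buf.readline().strip() == b"PF"
    w, h = map(int, buf.readline().split())
    scale = float(buf.readline())
    dtype = "<f4" if scale < 0 else ">f4"
    data = np.frombuffer(buf.read(w * h * 3 * 4), dtype=dtype)
    return np.flipud(data.reshape(h, w, 3)).copy()

# Round-trip a buffer containing negative values
img = np.random.randn(4, 5, 3).astype(np.float32)
buf = io.BytesIO()
write_pfm(buf, img)
buf.seek(0)
recovered = read_pfm(buf)
```

So if the PFM export from darktable doesn’t round-trip, the mismatch presumably happens before the file is written (color profile conversion, clipping, or tone mapping), not in the file format.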