Aaaand that’s why full beans denoising isn’t my main priority right now. The model does some denoising and does it well enough, but if I bump the parameters too much I get unpleasant non-uniform patches of noise.
also, how do you tonemap when mastering for hdr output? so far i think i got best results by simply disabling the display referred curve and pushing exposure up by 3-5 stops or so.
I reimplemented OpenDRT in a WGSL shader, but changed middle grey from 0.11 to 0.18 (the tn_Lg parameter). The previous “best” approach was to plop an HLG curve on top and let the display handle the rest, but I got very inconsistent exposure in the output, and it only renders correctly on macOS; on Windows and on SDR displays it looks too washed out.
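Very roughly, the idea is to solve the curve so that scene middle grey lands on a chosen fraction of display peak. A grossly simplified sketch (a plain Michaelis-Menten compression, not OpenDRT’s actual math; names and defaults are mine):

```python
# Grossly simplified sketch, NOT OpenDRT's actual curve: a plain
# Michaelis-Menten compression solved so that scene middle grey (0.18)
# lands on a chosen display middle grey. Parameter names are mine.

def tonescale(x: float,
              scene_mg: float = 0.18,     # what tn_Lg anchors, conceptually
              display_mg: float = 11.0,   # target middle grey in nits (assumed)
              peak: float = 1000.0) -> float:
    """Map scene-linear x to display-linear output as a fraction of peak."""
    g = display_mg / peak                 # normalized middle-grey target
    s = scene_mg * (1.0 - g) / g          # solve x / (x + s) = g at x = scene_mg
    return x / (x + s)
```

With middle grey pinned like this, the manual 3–5 stop exposure push shouldn’t be needed, in theory.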
By the way, I’m currently experimenting with ML highlight reconstruction and it’s rather promising:
This is without any additional reconstruction on top. Here’s the mask view:
Doing inpaint opposed + segmentation-based gets me 90% of the way there but produces harsh artifacts in the boundary regions, while the NN output already looks pleasant (epoch 3, mind you).
You are getting some very nice results.
I had to try one of my torture test images. It is from the 6-megapixel Nikon D40.
It’s not a bad result for such a challenging image.
Regarding the highlight recovery, the image you posted is a tough one for inpaint opposed and segmentation-based highlight recovery, because a lot of the pixels that surround the clipped regions are not part of the same object. Both methods work by looking at pixels next to the clipped area to estimate what the colour of the clipped area should have been.
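As a toy illustration of that idea (nothing like the actual darktable implementation, just the “borrow colour from the unclipped border” concept):

```python
import numpy as np
from scipy import ndimage

# Toy version of the idea, nothing like the real code: flat-fill each
# clipped region with the average colour of its unclipped border pixels.

def fill_from_border(img: np.ndarray, clipped: np.ndarray) -> np.ndarray:
    """img: HxWx3 linear RGB; clipped: HxW bool mask of blown pixels."""
    out = img.copy()
    labels, n = ndimage.label(clipped)               # separate clipped regions
    for i in range(1, n + 1):
        region = labels == i
        border = ndimage.binary_dilation(region, iterations=2) & ~clipped
        if border.any():
            out[region] = img[border].mean(axis=0)   # borrow the border colour
    return out
```

The real algorithms are much smarter about which border pixels to trust, which is exactly why unrelated surrounding objects trip them up.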
I look forward to what your ML highlight recovery can do.
Thank you!
About the shirt – as far as I understand, moiré on Bayer sensors is caused by the OLPF, and Fuji cameras aren’t as susceptible due to the absence of one. I haven’t yet tried tweaking the model to reconstruct that, but I suspect it’ll be tough and will require a separate head and a further increase in processing time.
Speaking of highlight reconstruction, here’s a comparison of the same blown-out sky pulled down to -2.7 EV:
Not too bad, tonal balance is preserved, no pink/green cast. And this is the worst case:
That’s the HL reconstruction head with a wider attention field. I think it wasn’t the best approach. Needs more tweaking.
Great initiative, and I hope that one day something like this can be integrated into darktable. A great demosaicing method for Fuji is missing, and this hits the spot.
If you need more data for training, I have an X100V and an X-T3 and am open to sharing RAWs if need be.
Plus, I think the license of PlayRaw images would be sufficient to use them.
Yeah, licensing is what concerns me here. I use my own photos only because of this. I was looking into the RAISE dataset, but eventually discarded that idea. And besides, it’s not exactly useful due to a lot of clipped highlights, which confuse the model.
I’m traveling until next week and my goal is to build a decent dataset of underexposed images – clouds, sunsets, fog, ocean, trees, all that stuff – so I should be good on RAWs (in addition to the thousands I already have).
Yeah, I understand. They would of course be completely open under CC Attribution-NonCommercial or similar.
It’s the other way round: an optical low-pass filter avoids moiré and other aliasing artefacts. Unfortunately, camera manufacturers often make them underpowered or leave them out altogether.
I think you should consider that highlight reconstruction can be divided into two different problems. If only one or two channels are clipped, the problem is determining the relationship between the unclipped and clipped channels. If you can determine the proper relationship, the unclipped channel(s) can inform the reconstruction of the clipped ones.
The other problem is when all channels are clipped. Then you have to invent something plausible to replace the clipped pixels.
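To make the first case concrete, here’s a crude sketch that estimates the channel relationship as a single global ratio (a real implementation would do this locally, per clipped region):

```python
import numpy as np

# Crude sketch of the one-clipped-channel case: estimate the relationship
# (here a single global ratio; a real implementation would work locally)
# between the clipped channel and the unclipped ones, then extrapolate.

def reconstruct_channel(img: np.ndarray, clipped: np.ndarray, c: int = 0):
    """img: HxWx3 linear RGB; clipped: HxW bool where only channel c is blown."""
    out = img.copy()
    others = np.delete(img, c, axis=2).mean(axis=2)   # mean of intact channels
    valid = ~clipped
    ratio = np.median(img[valid, c] / np.maximum(others[valid], 1e-6))
    out[clipped, c] = ratio * others[clipped]         # extrapolate into the clip
    return out
```

For the all-channels-clipped case there is nothing left to extrapolate from, hence the inventing.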
If you are interested, here are some of the highlight-recovery test images @hannoschwalm and I used to test inpaint opposed and segmentation-based highlight reconstruction.
https://drive.google.com/drive/folders/1SmiQ7E01RaflZxIFpfi5FMCeZGpHirj-?usp=drive_link
All images in the public domain.
Estimating a blown channel from two intact channels feels like reconstruction, but filling in three channels is just guessing.
And that guessing is available in the segmentation-based algorithm.
A bit off-topic probably, but is there a possibility of extending this to quad Bayer (QB) sensors?
I wish darktable, RT, or any other project would add a way to demosaic QB data.
If you can, have a look here.
I don’t have the technical know-how to create something myself, but these smartphone sensors are crippled by the OEMs, and this one has been on the wish list for a while. OmniVision has a nice algorithm/process, but other OEMs have issues like worms and so on, so if there’s an open way to handle this data better, a lot of issues could simply be bypassed. Not that I would be shooting unbinned all day every day, but given enough light these sensors can do well!
@naorunaoru did you experiment with simply augmenting the training data for the demosaicing/denoising networks with some clipped highlights? along the lines of simply multiplying by a constant or three random wb multipliers and clipping the channels at 1.0 (keeping training data multiplied but unclipped)? i mean the number of weights seems to be expressive enough for some local filtering… why not hl reconstruction while we’re at it?
[edit] seems to reduce overall error (??) but the results i get only restore the highlights in purple. maybe i’ll need to set the one-hot channel encoding to some value indicating “please disregard, this pixel is blown”.
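for concreteness, the augmentation i mean is roughly this (toy sketch, ranges arbitrary):

```python
import numpy as np

# toy sketch of the augmentation above; gain/wb ranges are arbitrary.
# input gets pushed and clipped at 1.0, target stays multiplied but unclipped.

def clip_augment(raw: np.ndarray, rng: np.random.Generator):
    """raw: HxWx3 scene-linear patch, roughly normalized to [0, 1]."""
    gain = 2.0 ** rng.uniform(0.0, 3.0)      # push up to ~3 stops
    wb = rng.uniform(0.7, 1.3, size=3)       # three random wb multipliers
    target = raw * gain * wb                 # ground truth: unclipped
    inp = np.minimum(target, 1.0)            # network input: clipped at 1.0
    blown = target > 1.0                     # candidate "disregard" mask
    return inp, target, blown
```

the `blown` mask is what i’d try feeding in as the “please disregard, this pixel is blown” channel.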