Mean/Median Stack of multiple dark frames before demosaicing.

Ok, so I noticed the same black-level issue when I used an original raw file. The issue is present, but hot pixels get fixed regardless of whether the subtraction file is the ‘real’ one or the averaged ‘kludged raw’ file, so my intuition tells me it is an inevitable quirk of dark frame subtraction with the BMPCC, caused by elevated effective black levels. If that is true, I should just keep adding the negative black point compensation in the raw tab after enabling subtraction. The other, less plausible but still possible, explanation is that the black point I manually inserted into the subtraction frame’s metadata is not correct, and I should keep fiddling with it. I was just curious whether you had an opinion either way.

Nonetheless, I got the pseudo-raw dark frame working effectively, in large part thanks to your much-appreciated guidance, and the question posed above is more a theoretical minutia about the optimal way of doing things than a technical obstacle that must be surmounted.

Hmm… There might be a bug in RT regarding black level handling when DFS is performed.

This looks suspicious:

If someone can provide example files from a camera with a black level > 0 I can check what’s wrong…

Edit: IIRC, at this stage of the RT pipeline the data is still before black level subtraction, meaning each file is something like BL + data. If we now subtract image1 (the dark frame) from image2, we get image2 - image1, which is equivalent to the data after black level subtraction. For this reason the black level is added back to the result in the code I posted above.
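To make the bookkeeping concrete, here is a tiny sketch of the arithmetic described above, with made-up pixel values standing in for real sensor data (the variable names are mine, not from the RT source):

```python
# Illustrative values; "bl" stands in for the camera's metadata black level.
bl = 256        # metadata black level (the BMPCC value from this thread)
signal = 1200   # true scene signal at some pixel
dark = 40       # dark-current / hot-pixel contribution

# At this stage of the pipeline, both files still carry the black offset.
image = bl + signal + dark
darkframe = bl + dark

diff = image - darkframe   # = signal: the subtraction removed the offset too
corrected = diff + bl      # re-add bl so later stages can subtract it as usual
```

After the re-add, `corrected` is back to `bl + signal`, which is what the rest of the pipeline expects to see before its own black level subtraction.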

For example files you’ll need to talk to @Waveluke, although from reading that code, it should be re-adding the black offset so it remains after subtraction.

Unless W/H are mismatched for some reason, in which case the black level correction wouldn’t be applied.

But then the darkframe subtraction also would not be applied.

ERROR, insufficient caffeine at this time.

Good call.

My latest PlayRaw NEF is from a camera with metadata black > 0, a Nikon Z6 (black level = 1008):
https://glenn.pulpitrock.net/Individual_Pictures/DSZ_4168.NEF

I also need a dark frame…

Care about shutter speed, ISO, etc?

Shutter speed and ISO should be the same as in the other image. Lens and aperture should not matter in this case.

Here’s one, 1/3sec, ISO 400, same as DSZ_4168.NEF:
Lens cap on, dark room, camera wrapped in a black towel to avoid leakage…

https://glenn.pulpitrock.net/Individual_Pictures/DSZ_4437.NEF
This file is licensed: Creative Commons, By-Attribution, Share-Alike.

I call it “Black Cat Eating Licorice in a Coal Mine”. As a PlayRaw, I’d be interested to see what folk can do with it… :smile:

About dark frame subtraction in RT:

Love that…

OK, here are some tests of the issue. Linked below are three raw files, from or derived from the Blackmagic Pocket Cinema Camera OG. According to the metadata, the black level on them is supposed to be 256.

The raw files include one with image content but exposed near the noise floor, which thus has prominent hot pixels and amp glow, named “Image in need of dark frame subtraction.dng”. Then there is an authentic BMPCC dark frame, labeled “Authentic camera dark frame.dng”. Finally, there is a fake dark frame with less random noise, created following the advice of @entropy512 earlier in the thread, with the metadata imported from a real BMPCC raw file, including the black and white levels and all relevant raw interpretation metadata.

Also included is the output of RawTherapee with various permutations of dark frame subtraction: one with no dark frame subtraction, two with the real dark frame, and two with the kludge dark frame (with consistent black levels in metadata). Each dark frame output set has one image as-is, showing the bug of near-complete blackness despite excessive exposure compensation, and another showing that a black point compensation of -512, not so coincidentally twice the value found in the metadata, seems to fix the problem.
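One hedged reading of that -512 observation (a guess at the mechanism, not a confirmed diagnosis from the RawTherapee source): each point in the pipeline where the 256 offset goes missing darkens the result by one black level, so needing twice that as compensation hints the offset is lost at two stages. The arithmetic for a single loss looks like this, with illustrative values:

```python
bl = 256       # BMPCC metadata black level
signal = 1000  # true scene signal at some pixel

raw_pixel = signal + bl   # raw data still carries the offset
dark_pixel = bl           # idealised dark-frame pixel (noise ignored)

# Correct path: subtract dark frame, re-add bl, then normal BL subtraction.
correct = (raw_pixel - dark_pixel + bl) - bl   # = signal

# Buggy path: the re-add is skipped, so the shadows end up bl too low.
buggy = (raw_pixel - dark_pixel) - bl          # = signal - bl
```

A single skipped re-add costs one `bl` (256); the renders needing -512 = 2 × bl would suggest the offset disappears at two such points.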

https://drive.google.com/open?id=1sHx5F0WFi1-jyifwxTDFLx6x1E-IbX6v

Just posted below are some details of the problem with more test raw files. Is there anywhere else I should post the writeup and Google Drive link, i.e., did you open an issue for the bug on GitHub? The GitHub link you posted looks like an optimization issue, not a bug issue, so I wanted to ask your thoughts.

Some of you may squawk that I was using an out-of-date version of RawTherapee, so I quickly downloaded and installed 5.7, the latest stable release, straight from https://rawtherapee.com/. The results were the same with the update: the pure black result from just the dark frame, and the expected result with -512 black point compensation, so there is no point in re-exporting the example JPEGs.

Update: just added .pp3 profiles to the Google Drive link, with self-explanatory titles.

I’ll try to take a look later this week.

Positives of vacation: Not distracted by work.

Negatives: Frequent unpredictable family distractions. :slight_smile:

I too would love a way to pre-average multiple dark frames into a single file for RawTherapee to use. I have been using RawTherapee and Hugin for astrophotography processing with good results. I assume the “hugin_stacker” command-line utility would work perfectly to average dark frames; it can take 16-bit TIFF files. What I’m not sure about is which RawTherapee settings to use when converting the dark frame raw files to TIFF before stacking. Obviously we don’t want demosaicing turned on… or pretty much anything else… but I’m still unsure how best to make sure nothing else happens.

None of this would be necessary if there was a proper way to select multiple dark frames in RawTherapee. :slight_smile:

Thanks for remembering to include a license with your PlayRaw. :wink: