I’m beginning with photography, with a Nikon D3200 I’ve just bought. I’m using Siril for bias/dark/flat processing, stacking and registration.
My flats aren’t working properly, so I decided to study my frames a bit. I began with bias frames, and some questions arose:
Siril says my NEF images are 6034x4012 pixels. If I use libraw (https://www.libraw.org/) I see 6080x4012, and the last 46 columns correspond to pixels around the 248-263 level, most of them at 256. Does anybody know what those columns in the original NEF data are used for before Siril discards them?
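For reference, this is roughly how I’m inspecting the raw file from Python with rawpy (the libraw binding); the file name is just a placeholder:

```
import rawpy
import numpy as np

# Open one NEF file (file name is a placeholder)
with rawpy.imread("bias_0001.NEF") as raw:
    full = raw.raw_image              # full sensor area, including masked borders
    visible = raw.raw_image_visible   # the area libraw considers "visible"

    print("full raw area (rows, cols):   ", full.shape)
    print("visible area (rows, cols):    ", visible.shape)
    print("margins (top, left):          ", raw.sizes.top_margin, raw.sizes.left_margin)

    # Look at the last 46 columns of the full frame, which on my files
    # sit around the 248-263 level
    border = full[:, -46:]
    print("last 46 columns: min/median/max =",
          border.min(), int(np.median(border)), border.max())
```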
My camera manual says the “large” format is 6000x4012 pixels, but as said above I get 34 extra columns. Is this a documentation issue on Nikon’s side, or are there “special purpose” pixels that should be processed and discarded in some way?
After keeping just 6034x4012 pixels (as Siril does) I get bias frames with pixel values below 26. The non-zero values seem to form a “concentric circles + centre grid” pattern. Has anyone with this kind of camera observed this? Does it correspond to shutter noise? My bias frames are taken at 1/4000 s, the shortest exposure time possible with my Nikon D3200.
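This is the kind of quick check I’m doing on a converted frame to see the pattern (file name is a placeholder):

```
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np

# One bias frame after conversion to FITS (file name is a placeholder)
with fits.open("bias_0001.fits") as hdul:
    data = hdul[0].data

print("value range:", data.min(), data.max())
print("fraction of non-zero pixels:", np.count_nonzero(data) / data.size)

# Show where the non-zero pixels sit; this is what reveals the
# "concentric circles + grid" pattern for me
plt.imshow(data > 0, cmap="gray", interpolation="nearest")
plt.title("Non-zero pixels in a single bias frame")
plt.show()
```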
I stack my bias frames selecting “Median stacking” and “No normalization”. After that I read the resulting FITS image (from Python) and the value range seems to change from integer in the original bias FITS frames to float32. Is this expected? It seems to me that subtracting these 0.00000xxx floats from my darks (integers) would do nothing…
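This is roughly how I’m reading the values with astropy (file names are placeholders):

```
from astropy.io import fits

# Compare one raw bias frame with the stacked master bias
# (file names are placeholders)
for name in ("bias_0001.fits", "master_bias.fits"):
    with fits.open(name) as hdul:
        data = hdul[0].data
        print(name, data.dtype, data.min(), data.max())

# If the master bias is float32 in [0, 1] while the single frames are
# 16-bit integers, rescaling by 65535 puts them back on the same scale
with fits.open("master_bias.fits") as hdul:
    master = hdul[0].data
    if master.dtype.kind == "f":
        print("master bias rescaled to 16-bit range:",
              (master * 65535).min(), (master * 65535).max())
```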
So it seems that, apart from shutter noise, this camera gives non-biased images. Could this be? Should I stop using bias frames?
Welcome!
I only have part of the answer: it is common for sensors to be larger than the pictures coming out of DSLR cameras. Libraw has access to all pixels, but some are trimmed to keep image quality high for some reason.
In Siril, stacking with average gives more precision if the result is kept as floating-point values. During preprocessing, Siril converts the input images to float in [0, 1] to do the maths correctly, and leaves the output as float as well.
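As a rough numeric sketch of the idea (values made up, assuming a 16-bit range): once both the bias and the darks are on the same [0, 1] scale, those tiny floats subtract out just as the integer DN would:

```
import numpy as np

# Bias and dark values as they come out of the camera (16-bit DN);
# the numbers here are made up for illustration
bias_dn = np.array([5, 12, 26], dtype=np.uint16)
dark_dn = np.array([40, 55, 90], dtype=np.uint16)

# Normalization to float in [0, 1], as done during preprocessing
bias_f = bias_dn.astype(np.float32) / 65535.0
dark_f = dark_dn.astype(np.float32) / 65535.0

# The bias values look tiny (e.g. 26 -> ~0.0004), but the darks are on
# the same scale, so the subtraction is still meaningful
calibrated = dark_f - bias_f
print(calibrated * 65535)   # back in DN: approximately [35, 43, 64]
```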
The additional columns contain data that were relevant at the time of capture, for instance optical black pixels, which indicate the correct black level for the given shooting conditions (in this case about 256 DN, it seems).
The D3200 subtracts the black level from the raw data before writing it to file. This can be good or bad depending on the application (typically bad for astro).
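If you want to verify this yourself, a rawpy sketch along these lines should show it (the file name is a placeholder, and the 46-column figure is taken from your post):

```
import rawpy
import numpy as np

# Compare the reported black level, the optical black border and the
# visible image area of a single bias frame (file name is a placeholder)
with rawpy.imread("bias_0001.NEF") as raw:
    visible = raw.raw_image_visible
    optical_black = raw.raw_image[:, -46:]

    print("libraw reported black level:", raw.black_level_per_channel)
    print("optical black columns, median:", int(np.median(optical_black)))
    print("visible area, median:", int(np.median(visible)))
    # If the visible median sits near 0 while the border sits near 256,
    # the camera has subtracted the black level before writing the NEF
```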
The concentric circles you are seeing are most likely due to raw data pre-processing (e.g. WB, vignetting, microlens/CA corrections, etc.).
Thank you Jack, I supposed something like that was going on (those high pixels are a reference; the bias in the rest of the matrix is subtracted on-chip and the high columns are left for reference)…
After reading you, I really think I will go without bias frames, but that part of my question wasn’t really about Siril anyway, and the answer would probably be too much a matter of opinion.
I’ll check what happens when subtracting a float image from an integer FITS in Siril, which is the only remaining question I’ve got, and now I have the toolset to check for myself.
Ah, OK! I just misinterpreted you (I thought you meant this applied only when stacking with average; now I see you meant that’s the rationale behind always working with float values)… Thank you.