Check your settings and you may find 12- and 14-bit options as well as uncompressed, lossless compressed, and compressed. You may have compression turned on.
Compression greatly reduces battery life and increases write time. With a 128GB memory card you can shoot through many battery packs, and while you have 2 memory card slots, you can't put in a bigger battery. Besides, you will find that 7-Zip with LZMA2 run on a real processor provides more efficient compression, and it can be done before archiving!
Here is the C code:

// =====================================================================
// Read NEF's Bayer data (including junk) and stuff into matrix provided
int read_nef_bayer_rgjb(char *fname, int bayer_offset, int xres, int yres,
                        uint16_t **nrbmat, EV_TIME_STR *tsa, int debug)
{
    int   fsr = 0;    // FSeekRtn value, s/b=0. FSize from OS in B
    long  sread = 0;  // SHORTs read from raw file
    FILE *stream;     // FILE pointer to input file
    int   brow;       // Bayer_ROW

    if(tsa) time_event(E_READ_NEF_BAYER_RGJB, tsa, E_TIME_EVENT, 1, debug);
    if((stream = fopen(fname, "rb")) == NULL) {  // READ BINARY
        printf("RNBM: Cannot open input file \"%s\"!\n", fname); exit(-3); }
    if(bayer_offset && (fsr = fseek(stream, (long) bayer_offset, SEEK_SET))) {
        printf("RNBM: FSeek -> %d, terminating\n\n", fsr); exit(-12); }
    for(brow = 0; brow < yres; brow++) {
        // Read 1 row of xres uint16s from the input file at a time.
        // fread() returns the number of elements successfully read as a
        // size_t, which is an integral type.
        sread += fread(nrbmat[brow], sizeof(uint16_t), xres, stream);  // -> XRes!
    }
    fclose(stream);
    printf("RNBM: Read %ld USHORTs from BAYER file %s\n", sread, fname);
    return (int) sread;  // Return number of UINT16s read, NOT BYTES!
} // End read_nef_bayer_rgjb().
The IEEE 754 spec for float (32-bit) includes 24 significand bits, so you are correct that it can absorb all 16 bits of a uint16_t with perfect fidelity.
The problem arises after the floating-point calcs are done and a uint48_t TIFF pixel (actually a uint16_t[3]) gets written to disk. How do you smash all ~7.22 decimal digits of float precision into a 16-bit storage variable which can hold no number larger than 65535? You truncate or round, which is irreversible.
This is why working with the Pristine Bayer data has inherent advantages over working with the TIFF data which has already been processed once.
Here is a trivia question for you:
The float spec allows for only 1 sign bit. How do you get a negative sign on the exponent if the only sign bit has already been used for the number itself? Ex: -123E-04? There are 2 negative signs! Where is the second one stored?
If you look at the PPM file Dcraw creates, you will find a 4 line header that looks a lot like this:
head -n4 pf-269361.ppm =>
P6
7360
4912
65535
After which you will find 48 bit TIFF pixels encoded as uint16[3] RGB triplets.
Do a size analysis:
lsr -sa pf-269361.ppm => 216913939 < Total disk file size
head -n4 pf-269361.ppm | wc => 4 4 19 < 19 bytes of header
216913939 - 19 = 216913920 < bitmap size
mult 216913920 /7360 /4912 == 6 < Divide bitmap size by XY Res…
Looks like 6 bytes per pixel which === 48 bits!
The Dcraw TIFF file is quite similar except that its bloated header is 1376 bytes
#define DCRAW_TIF_HEADER_SIZE 1376 // Tif overhead above raw, rgb uint16s
And, you can peek into Dave’s TIFF header to get the XY res at these offsets:
#define HDR_GRAY_XRES_IDX 16 // uint_t_ara[15] → GRAY_Image_XRES
#define HDR_GRAY_YRES_IDX 22 // uint_t_ara[21] → GRAY_Image_YRES
Yes! Think of 2 completely new pixels between pixels (0, 0) and (0, 1), at (0, ⅓) and (0, ⅔). Of course you would scale your output matrix by a factor of 3, so the new pixels would appear at (0, 1) and (0, 2) between the old pixels, which are remapped to (0, 0) and (0, 3).
This would provide a 3x3 matrix of pixels where each original pixel was. When aligning multiple frames that are off by a non-integer number of pixels, you would have 9 points to select from to find the best fit.
If you can conjure up Red and Green on a Blue pixel at (1, 1), how hard would it be to apply a similar process to prestidigitate all 3 values at (1, ⅓) and (1, ⅔)?
For a single frame, this entire Quixotic exercise would be no better than an ImageMagick resize. But, when merging many layers via transparency, the data from all of the frames could be added with accurate alignment.
This should average digital errors toward zero and provide super-resolution.