Pixel-shift is not a possible solution outside of a studio. It requires exact alignment between frames. As stated, I need a monopod-only (hand-held?) solution.
The awesome Google Research results are useful if I want to upgrade to a Google Pixel to get them.
From the Google paper referenced by Lain,
"The input must contain multiple aliased images, sampled at different subpixel offsets. "
They seem to be using the “subpixel offsets” inherent in hand-held shots in much the same way the sensor-shift trick works: you try to position Red, Green, Blue, and Jade pixels from various frames over each “molecule” of the subject to obviate the need for demosaicing. If you have more than 1 sample per channel per molecule, average them to increase the signal-to-noise ratio. Brilliant!
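In other words, it is accumulate-and-average. A minimal sketch of that step in C, assuming each frame's offset has already been reduced to a whole-pixel shift on the output grid (the names and structure are illustrative, not Google's code):

/* Accumulate one channel from an aligned frame onto shared sum/count planes,
 * then average. sum and cnt are caller-allocated, zeroed, each w*h entries.
 * (dx,dy) is that frame's whole-pixel offset on the output grid. */
#include <stdint.h>

void accumulate_frame(const uint16_t *frame, int w, int h, int dx, int dy,
                      uint32_t *sum, uint16_t *cnt)
{
    for (int y = 0; y < h; y++) {
        int oy = y + dy;
        if (oy < 0 || oy >= h) continue;
        for (int x = 0; x < w; x++) {
            int ox = x + dx;
            if (ox < 0 || ox >= w) continue;
            sum[(long)oy * w + ox] += frame[(long)y * w + x];
            cnt[(long)oy * w + ox]++;
        }
    }
}

void average_planes(const uint32_t *sum, const uint16_t *cnt, long n,
                    uint16_t *out)
{
    for (long i = 0; i < n; i++)
        out[i] = cnt[i] ? (uint16_t)(sum[i] / cnt[i]) : 0;  /* mean raises SNR */
}

Averaging k independent samples of the same molecule cuts the noise standard deviation by a factor of sqrt(k), which is where the signal-to-noise gain comes from.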
From the Google paper referenced by Lain,
“we refine the block matching alignment vectors by three iterations of Lucas-Kanade [1981] optical flow image warping.”
This is a CGI fix to warp the image to force a fit.
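For the record, a single Lucas-Kanade step for a pure global translation boils down to solving a 2x2 system built from image gradients. A hypothetical sketch, not the paper's code (the paper re-warps and repeats this three times):

/* One Lucas-Kanade step estimating a global translation (u,v) that moves
 * content from frame I to frame J: build the 2x2 normal equations from image
 * gradients and solve. In a real refinement loop, J is re-warped by the
 * current (u,v) before each pass. */
#include <math.h>

int lk_translation_step(const float *I, const float *J, int w, int h,
                        double *u, double *v)          /* in/out estimate */
{
    double sxx = 0, sxy = 0, syy = 0, sxt = 0, syt = 0;
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            long i = (long)y * w + x;
            double Ix = 0.5 * (I[i + 1] - I[i - 1]);   /* central differences */
            double Iy = 0.5 * (I[i + w] - I[i - w]);
            double It = (double)J[i] - I[i];           /* temporal difference */
            sxx += Ix * Ix;  sxy += Ix * Iy;  syy += Iy * Iy;
            sxt += Ix * It;  syt += Iy * It;
        }
    }
    double det = sxx * syy - sxy * sxy;
    if (fabs(det) < 1e-12) return -1;                  /* gradient too flat */
    *u += -(syy * sxt - sxy * syt) / det;
    *v += -(sxx * syt - sxy * sxt) / det;
    return 0;
}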
Google is attacking the much more general problem of aligning pictures taken out the window of a moving bus. Useful for perspective changes, which I don't see. Vastly more complicated than my situation and akin to General Relativity.
This is not necessary in my case when aligning 2 frames at a time taken from an almost perfectly stationary monopod a tenth of a second apart. I am working in a non-accelerating inertial reference frame akin to the Relatively simpler Special Relativity. Other than leaf wobble, my scenes are practically stationary. I shoot mountains, not moving traffic.
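Given that, a brute-force whole-pixel search may be all the alignment my case needs, with Lucas-Kanade at most polishing the remainder. A hypothetical sketch: find the (dx, dy) within ±r that minimizes the sum of absolute differences between the two frames:

/* Exhaustive whole-pixel alignment for two nearly stationary frames: try
 * every shift in [-r, r] and keep the one with the smallest SAD over the
 * common interior region. */
#include <stdint.h>
#include <stdlib.h>

void best_shift(const uint16_t *a, const uint16_t *b, int w, int h,
                int r, int *best_dx, int *best_dy)
{
    uint64_t best = UINT64_MAX;
    *best_dx = 0; *best_dy = 0;
    for (int dy = -r; dy <= r; dy++) {
        for (int dx = -r; dx <= r; dx++) {
            uint64_t sad = 0;
            for (int y = r; y < h - r; y++)            /* stay inside both frames */
                for (int x = r; x < w - r; x++)
                    sad += (uint64_t)abs((int)a[(long)y * w + x]
                                       - (int)b[(long)(y + dy) * w + (x + dx)]);
            if (sad < best) { best = sad; *best_dx = dx; *best_dy = dy; }
        }
    }
}

For r = 4 that is 81 SAD passes over the overlap; subsample the frames first if that is too slow.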
From the Google paper referenced by Lain,
“Mobile devices provide precise information about rotational movement measured by a gyroscope, which we use in our analysis.”
I don't know if I had my Nikroscope® gyroscope energized.
This blows this approach right out of the water!
And I didn't notice a link to the actual code. But they supplied all of the formulas and the process.
How long will it take you to code it? Forever? Me too!
The assertion that this is a “currently available technical solution” does not appear to be supported by the available evidence.
I wrote this years ago to dump RGBJ directly from a NEF:
USAGE: nef2rgjb NEF [-DH] [-I] [-O output_dir] [-M mult]
[-P] [-S] [-T Right_Row_Trim_Pixels] [-U] [-V] <enter>
And then a program to extract individual channels as PGM files including Green, Jade and (Green+Jade)/2:
USAGE: bay2rgb Bayer.rgbj.raw [-D] [-A] [-B basename]
[-G 1*|2|3] [-I] [-M Mult] -x X_Res [junk_spec] [-S]
[-T trim_top_bytes] [-V] <enter>
Is 64 milliseconds very fast to directly extract RGBJ uint16s from a 36.6 MPix .NEF?
It could be somewhat faster if I nuked Firefox and some other processes.
bb_a Run 2020-05-09 18:10:48
N2P: FSize=77.398016MB, bay_off=4286464, bmsize=73111552
Wrote 73.111571MB to PPM file pf-2017.0620-269369.pgm
TE: Report: Accum time for 5 events=63.56 ms Run 2020-05-09 18:10:48
Time= 0.011 sec = 17.008%, READ_BAYER_RGBJ , hits=1
Time= 0.052 sec = 82.046%, SYS_WRITE_TO_DISK , hits=1
Time= 0.065 ms = 0.103%, ALIGN_ALLOC_MATRIX , hits=1
Time= 0.170 ms = 0.267%, INIT_MEM_MAIA_STR , hits=1
Time= 0.190 ms = 0.299%, FREE_BB_MEM_ALLOC , hits=1
identify pf-2017.0620-269369.pgm
pf-2017.0620-269369.pgm PGM 7424x4924 7424x4924+0+0 16-bit
Grayscale Gray 69.7246MiB 0.090u 0:00.089
It's very dark because it has not been engammarated! And you can see the speckling effect if you zoom in, characteristic of viewing Bayer data directly.
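For a quick preview, the missing gamma can be applied through a lookup table. A minimal sketch using a straight power law (real sRGB adds a linear toe near black, so this is approximate):

/* Brighten linear 16-bit data for display by applying a power-law gamma
 * through a 64K lookup table. */
#include <math.h>
#include <stdint.h>

void engammarate(uint16_t *px, long n, double gamma)
{
    static uint16_t lut[65536];               /* 128 KB, so keep it static */
    for (int i = 0; i < 65536; i++)
        lut[i] = (uint16_t)(65535.0 * pow(i / 65535.0, 1.0 / gamma) + 0.5);
    for (long i = 0; i < n; i++)
        px[i] = lut[px[i]];
}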
As predicted, the Bayer bitmap offset can be calculated directly from the EXIF XY res:
int  xres = D800E_BAYER_X_RES;                 // 7424: NEF raw X res, 46 junk uint16s on row end
int  yres = D800E_BAYER_Y_RES;                 // 4924: NEF raw Y res
long fsize        = file_size(neffn);          // total .NEF size in bytes
long bayer_offset = fsize - 2L * xres * yres;  // Bayer bitmap sits at the file tail
long mbyte        = 2L * xres * yres;          // bitmap size in bytes (2 per uint16)
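Putting that formula to work, here is a minimal standalone sketch of the dump, assuming only what the formula asserts: the Bayer plane is the last 2 * xres * yres bytes of the file, stored as uint16s in the machine's byte order (byte-swap if yours differs). Error handling is trimmed for brevity:

/* Seek to the computed offset and read the raw Bayer uint16s. Constants are
 * the D800E values quoted above. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define XRES 7424   /* includes the 46 junk uint16s per row */
#define YRES 4924

uint16_t *read_bayer(const char *neffn)
{
    FILE *f = fopen(neffn, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long fsize = ftell(f);
    long nvals = (long)XRES * YRES;
    fseek(f, fsize - 2 * nvals, SEEK_SET);       /* Bayer plane is at the tail */
    uint16_t *bay = malloc(nvals * sizeof *bay);
    if (bay && fread(bay, sizeof *bay, nvals, f) != (size_t)nvals) {
        free(bay);
        bay = NULL;
    }
    fclose(f);
    return bay;
}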
It takes ImageMagick 263 ms to convert the PGM to TIF:
timeit magick pf-2017.0620-269369.pgm pf-2017.0620-269369.tif
TI: Elapsed time = 263.148 ms
ImageMagick 7.0.8-20 Q16 x86_64 2018-12-25
Dcraw nukes the 46 junk shorts on the end of each row (ignoring the EXIF data). Dave's row length agrees with my channel-split lengths. Good on Mr. Coffin!
#define D800E_BAYER_RIGHT_FUNKY_UINT16 46 // Garbage on R side of Bayer GRBJ
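Trimming those junk columns is one overlapping copy per row. A sketch using the constants above, operating in place on the buffer from the dump:

/* Copy each row's first 7378 values forward, skipping the 46-uint16 tail
 * that dcraw also discards. memmove handles the overlap. */
#include <string.h>
#include <stdint.h>

#define RAW_X   7424
#define JUNK_X    46
#define GOOD_X  (RAW_X - JUNK_X)   /* 7378, matching dcraw's output width */

void trim_rows(uint16_t *bay, int yres)
{
    for (int y = 0; y < yres; y++)
        memmove(bay + (long)y * GOOD_X, bay + (long)y * RAW_X,
                GOOD_X * sizeof(uint16_t));
}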
timeit dcraw -v -4 -D pf-2017.0620-269369.nef
Loading Nikon D800E image from pf-2017.0620-269369.nef ...
Building histograms...
Writing data to pf-2017.0620-269369.pgm ...
TI: Elapsed time = 444.447 ms
identify pf-2017.0620-269369.pgm
pf-2017.0620-269369.pgm PGM 7378x4924 7378x4924+0+0 16-bit
Grayscale Gray 69.2926MiB 0.060u 0:00.059
And it can be split into 4 individual channels quite easily, again with purely raw data (see the sketch after the listing):
nef2rgjb_d pf-2017.0620-269369.nef -d -P
Run 2020-05-09 16:34:38
PAV: -P -> Making RGJB Portable_Gray_Map PGMs
PAV: NEF Size=77.398016MB -> Bayer at 4286464, FQP= pf-2017.0620-269369.nef
PAV: Final_Bayer_XYRes=(7378,4924), chan_xy_res=(3689, 2462)
R2PGM: Wrote 19 Hdr_B, 18164655 tot_B to pf-2017.0620-269369.red.3689x2462.pgm
R2PGM: Wrote 19 Hdr_B, 18164655 tot_B to pf-2017.0620-269369.gre.3689x2462.pgm
R2PGM: Wrote 19 Hdr_B, 18164655 tot_B to pf-2017.0620-269369.jad.3689x2462.pgm
R2PGM: Wrote 19 Hdr_B, 18164655 tot_B to pf-2017.0620-269369.blu.3689x2462.pgm
Elapsed Nef2RGJB time 157.395 mSec
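The split itself is pure index arithmetic. A sketch following the GRBJ comment above (G at even row/even column, R beside it, B and J on the odd row; shuffle the four offsets if your CFA phase differs):

/* Split the trimmed Bayer plane into four half-resolution channel planes,
 * each (w/2) x (h/2); w and h must be even (7378 x 4924 here). */
#include <stdint.h>

void split_grbj(const uint16_t *bay, int w, int h,
                uint16_t *gre, uint16_t *red,
                uint16_t *blu, uint16_t *jad)
{
    int cw = w / 2;
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            long o = (long)(y / 2) * cw + x / 2;
            gre[o] = bay[(long)y * w + x];
            red[o] = bay[(long)y * w + x + 1];
            blu[o] = bay[(long)(y + 1) * w + x];
            jad[o] = bay[(long)(y + 1) * w + x + 1];
        }
    }
}

The (Green+Jade)/2 plane mentioned earlier is then just a per-pixel average of the gre and jad planes.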
I am not going to wait for the demosaic folks to implement the new pixels. It appears to be far more difficult than I had hoped. I can do it entirely in floating point now on the rawest data, but without the AMaZE process.
It would be interesting to compare the results with and without demosaicing.