In a Nikon camera raw file, each Red/Green/Blue/Jade (2nd green) pixel holds only 1 of the 4 colors as a 16-bit int. The 2 missing colors needed to make a 48-bit TIFF pixel are interpolated from neighboring values with methods such as AMaZE, LMMSE or IGV.
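To make the idea concrete, here is a toy sketch of the crudest possible demosaic (a normalized 3x3 neighbor average over an assumed RGGB layout) in NumPy. This is NOT how AMaZE/LMMSE/IGV work; all function names here are invented for the illustration:

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean site masks for an assumed RGGB layout: R, G, 'Jade' (2nd green)
    and B each occupy 1 of every 4 sensor sites."""
    y, x = np.mgrid[0:h, 0:w]
    return ((y % 2 == 0) & (x % 2 == 0),   # R
            (y % 2 == 0) & (x % 2 == 1),   # G
            (y % 2 == 1) & (x % 2 == 0),   # Jade (2nd green)
            (y % 2 == 1) & (x % 2 == 1))   # B

def box_fill(vals, mask):
    """Fill the missing sites of one channel by averaging whatever known
    neighbors fall inside each 3x3 window -- crude bilinear demosaicing."""
    h, w = vals.shape
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    v = np.pad(vals * mask, 1)             # known values, zero elsewhere
    m = np.pad(mask.astype(float), 1)      # 1 where a value is known
    for dy in range(3):
        for dx in range(3):
            num += v[dy:dy + h, dx:dx + w]
            den += m[dy:dy + h, dx:dx + w]
    return num / den                       # every 3x3 window holds >= 1 known site

# A flat test scene survives the round trip exactly.
h, w = 4, 6
r_m, g_m, j_m, b_m = bayer_masks(h, w)
mosaic = 0.2 * r_m + 0.5 * (g_m | j_m) + 0.8 * b_m   # what the sensor records
red   = box_fill(mosaic, r_m)              # full-resolution R plane
green = box_fill(mosaic, g_m | j_m)        # both greens pooled
blue  = box_fill(mosaic, b_m)
```

The point is only that each missing channel value is conjured from known neighbors of the same color; the real algorithms do the same thing with far smarter kernels.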
If 2 of the 3 channel values can be conjured up at a given XY position, it should be possible to make up all 3 channels at some nearby point using similar interpolation.
I would like to introduce 2 brand-new points between every pair of adjacent points. This would nearly triple both the X and Y resolutions, resulting in almost 9 times as many pixels.
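As a counting check: inserting 2 new points between each adjacent pair turns N samples into 3N-2 per axis, i.e. nearly 3x each dimension and nearly 9x the pixel count. A 1-D sketch, using plain linear interpolation as a stand-in for a fancier kernel (the function name is made up):

```python
import numpy as np

def triple_grid_1d(samples):
    """Insert 2 interpolated points between each adjacent pair: N -> 3N - 2."""
    n = len(samples)
    fine_x = np.linspace(0, n - 1, 3 * n - 2)   # steps of 1/3 on the original grid
    return np.interp(fine_x, np.arange(n), samples)

row = np.array([0.0, 3.0, 6.0])
fine = triple_grid_1d(row)                      # 3 samples -> 7 samples
```

Applied along both axes, 3 samples per row become 7, and a full NxN image becomes roughly (3N)x(3N).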
“You would be better off using Bi-Cubic interpolation in Photoshop/ImageMagick/Gimp/… to make your picture seem larger”.
Would it not be better to use the rawest raw data once, rather than to reprocess a TIFF that has already been demosaiced and rounded to uint16 quanta? The calculations could be performed in float, or even double, just once and then rounded to uint16, saving a secondary processing pass and another round-off error. It would not look like a Xerox of a Xerox.
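The round-off claim is easy to check numerically. A toy sketch, where the gamma curve is an arbitrary stand-in for whatever processing follows demosaicing:

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.uniform(0.0, 1.0, 1_000_000)     # stand-in for linear float sensor data

def quantize(x):
    """Round to the nearest uint16 step (kept as float so the math stays easy)."""
    return np.round(x * 65535) / 65535

def gamma(x):
    """Arbitrary stand-in for later processing."""
    return x ** (1 / 2.2)

exact = gamma(raw)
once  = quantize(gamma(raw))               # compute in float, round a single time
twice = quantize(gamma(quantize(raw)))     # round to a 16-bit TIFF first, then process

err_once  = np.abs(once  - exact).mean()
err_twice = np.abs(twice - exact).mean()   # the "Xerox of a Xerox" path
print(err_once < err_twice)                # True: the two-pass path carries extra error
```

The one-pass error stays at the quantizer's own level, while the two-pass path also carries the first rounding's error through the rest of the pipeline.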
“Your picture won’t be any sharper. There is only so much sensor data to work with. You can’t cheat physics!”
With just 1 image, that is correct. But with 2 or more images from a burst, taken by hand or on a monopod, you now have 9 times as many points at which to align them. This should greatly help the registration/alignment process.
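To see why the extra points help alignment, here is a hedged 1-D toy (the signal, the brute-force matcher and the 1/3-pixel shift are all invented for illustration): a sub-pixel offset that is invisible on the coarse grid is recovered exactly on the tripled grid.

```python
import numpy as np

def best_shift(a, b, max_shift):
    """Brute-force registration: the integer shift of b that best matches a.
    Edges are trimmed so np.roll's wraparound cannot bias the score."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: np.sum((a[max_shift:-max_shift]
                                     - np.roll(b, s)[max_shift:-max_shift]) ** 2))

fine_a = np.sin(np.linspace(0.0, 20.0, 300))   # scene sampled on the 3x grid
fine_b = np.roll(fine_a, 1)                    # same scene, shifted 1/3 "coarse" pixel

coarse_a, coarse_b = fine_a[::3], fine_b[::3]  # what the original grid would record

print(best_shift(coarse_a, coarse_b, 3))  # 0  -- the 1/3-pixel shift is invisible
print(best_shift(fine_a, fine_b, 3))      # -1 -- recovered exactly on the fine grid
```

On the coarse grid the best alignment simply rounds the shift away; on the tripled grid the matcher lands on the true offset.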
Is there already a tool to do this?