Browsing some sites I don’t visit very often, I came across the following on the ‘other’ fuji x forum. https://wilecoyote2015.github.io/VerySharp/
I don’t know if it’s of interest to anyone, but it’s open source.
I agree… I’ve been thinking about this for the past few days, and have come up with a possible algorithm to test.
The idea of pixel-shift techniques is, as far as I know, to extract each RGB channel of a given pixel from the shot where the corresponding color is directly measured, thus “undoing” the artefacts introduced by the Bayer interpolation.
So we should talk about super-sharpness rather than super-resolution, because the native pixel size is not changed.
My idea would be the following:
for each shot, output the interpolated image as well as an image showing the Bayer pattern, with pixels that are either fully red, green or blue depending on the corresponding Bayer color
apply the same lens corrections, sub-pixel alignment and, optionally, upscaling to both images
use the corrected and aligned Bayer images to decide from which shot a given RGB channel of a given pixel should be taken, in order to minimize the effect of color interpolation
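The last step above could be sketched roughly like this. This is just a minimal NumPy sketch of the channel-selection idea, not anything from verysharp’s actual code; the function name and the one-hot mask representation are my own assumptions, and it averages directly measured samples rather than picking a single shot.

```python
import numpy as np

def fuse_channels(images, bayer_masks):
    """Fuse a burst of demosaiced shots (each an HxWx3 float array).

    bayer_masks[k][y, x, c] == 1 where channel c of shot k was measured
    directly by the sensor (after alignment), 0 where it was interpolated.
    For each pixel/channel, average only the directly measured samples;
    fall back to the plain interpolated average when no shot measured it.
    """
    images = np.stack(images)      # (K, H, W, 3)
    masks = np.stack(bayer_masks)  # (K, H, W, 3)
    measured_sum = (images * masks).sum(axis=0)
    measured_cnt = masks.sum(axis=0)
    fallback = images.mean(axis=0)  # average of interpolated values
    return np.where(measured_cnt > 0,
                    measured_sum / np.maximum(measured_cnt, 1),
                    fallback)
```

In practice the hard part would be the alignment: the Bayer masks have to be warped with exactly the same lens correction and sub-pixel shift as the images, or the “directly measured” flags end up pointing at the wrong pixels.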
From what I understand, super-resolution techniques exploit the fact that what cameras capture is not properly band-limited, so there is some aliasing (if that’s true, super-resolution should work a lot better with cameras that have a weak or no low-pass filter). In that case it should be possible to increase the resolution beyond the limit imposed by the pixel size.
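To make that concrete, here is a toy shift-and-add sketch of the idea: aliased low-res frames taken at distinct sub-pixel offsets carry genuinely new information, so dropping each sample onto a finer grid at its shifted position recovers detail beyond one frame’s pixel grid. The function name and the nearest-neighbour accumulation are my own simplifications, assuming the shifts are already known (in practice they come from registration), not a real super-resolution pipeline.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy multi-frame super-resolution by shift-and-add.

    frames: list of HxW low-res arrays of the same scene.
    shifts: per-frame sub-pixel offsets (dy, dx) in low-res pixels.
    Each low-res sample is placed onto a scale-x finer grid at its
    shifted position; overlapping samples are averaged.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # map each low-res sample centre onto the high-res grid
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    # average overlapping contributions; unfilled cells stay 0
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), acc)
```

With four frames shifted by half a pixel in each direction and scale=2, every high-res cell gets filled; with fewer or badly distributed shifts, holes remain and real methods would interpolate or regularize them away.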
I don’t have the theoretical background to go much beyond that, so I’ll have some studying to do if I want to look into this more deeply. Which I might want to: I have some plans to get a DJI Mavic, which has an essentially horrible camera, and I’d be curious how much I can squeeze out of it with clever image processing. It’s an interesting problem for sure, and there has been a lot of research in that area to look into.
It looks like their goal is mostly to deal with the noise and (resulting) limited dynamic range of mobile phones in near real time. Still, it looks like a very interesting paper.
I was initially excited because this is a Qt/Python app and I thought I might be able to run it on Android. Then I was disappointed that it depends on OpenCV, which is a beast to wrangle on any system, let alone Android. This would be great to deploy on a smartphone to increase the sharpness and decrease the noise from their small sensors…