Quad-Bayer demosaicing

The QHY294C camera I own uses the Sony “Quad Bayer” CFA and offers the option to unlock the 4.6 um pixels, turning each one into four 2.3 um pixels. Apparently this makes it possible to pair the camera both with short focal length instruments (for example 400 mm) in 47M mode, and with medium focal length instruments in normal mode.
Unfortunately the Bayer matrix is necessarily the same in both modes. So, in normal mode you have an RGGB matrix, but in 47M mode, when each pixel is split into a 2x2 array, you are left with a weird RRGG/RRGG/GGBB/GGBB Bayer matrix.
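For what it's worth, that repeating 4x4 tile can be written as a small lookup table. A minimal sketch of the idea (my own illustration, with hypothetical names, not from any camera SDK):

```cpp
#include <cassert>

// Hypothetical helper: returns the CFA color at sensor coordinates (x, y)
// for the quad-Bayer tile described above (0 = R, 1 = G, 2 = B).
// The 4x4 tile repeats across the whole sensor.
int quad_bayer_color(unsigned x, unsigned y) {
    static const int tile[4][4] = {
        {0, 0, 1, 1},   // R R G G
        {0, 0, 1, 1},   // R R G G
        {1, 1, 2, 2},   // G G B B
        {1, 1, 2, 2}    // G G B B
    };
    return tile[y % 4][x % 4];
}
```

A conventional debayer assumes a 2x2 repeat, which is why it cannot interpret this 4x4 pattern directly.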
The point is that there is currently no demosaicing algorithm available for such a pattern, and Siril is no exception (AFAIK). Bottom line: I spent two nights outside only to discover during post-processing that my images are worthless.

May I ask if there is any plan to introduce an algorithm that allows such “Quad Bayer” CFAs to be debayered? Since they are currently used in mobile phones, some debayering algorithm must exist somewhere.

The fun part of this pixel arrangement is that in each square of 4 pixels with the same filter, the two pixels on each diagonal can have different gain settings. The goal of this was to make HDR 4K daylight cameras, with half of the pixels capturing the bright areas and half the darker areas.
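To illustrate the dual-gain idea, a toy sketch of how such a diagonal pair might be merged into one HDR value (my own illustration under assumed names and a hypothetical gain ratio, not the sensor's actual pipeline):

```cpp
// Illustrative merge of a same-color diagonal pair read at two gains.
// 'ratio' is a hypothetical high/low gain ratio; 'saturation' is the
// clipping level of the high-gain reading.
float merge_dual_gain(float high_gain, float low_gain,
                      float saturation, float ratio) {
    // If the high-gain pixel clipped, fall back to the low-gain pixel,
    // scaled up to the same exposure scale.
    if (high_gain >= saturation)
        return low_gain * ratio;
    return high_gain;
}
```

In practice real cameras blend the two readings more smoothly near saturation, but the principle is the same.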

Hello and welcome.

Hence this is not a Bayer pattern anymore.

Algorithms probably exist, but I don’t think they are open source.

In Siril we use the librtprocess library to debayer our images, and I don’t think there is such a plan to support these sensors.
(ping @heckflosse).

Cheers,

@Renato_Talucci, welcome to the forum.

Can you post a raw file of this format? Preferably a picture of something easily recognizable, not a star field.


Hello Glenn,
here I’m posting one exposure (M31, SW evostar 72Ed, N.I.N.A. capture software, more details in the FITS header).
I thought that to salvage something from those exposures I could “downsample” the raw data, averaging each 2x2 pixel block into one pixel.
60.000000.fits (90.8 MB)

I’m not an expert on debayering & Co., and maybe I misunderstood it, but as far as I can tell, this could be such an algorithm:

The approach is based on artificial intelligence. No idea whether something like that could be applied to night astrophotography, as it is of course dedicated to daylight cell-phone photography.

I converted the FITS image to TIFF with ImageMagick’s convert and observed the quad pattern. Assuming:

RRGG
RRGG
GGBB
GGBB

I’m endeavoring to write a quadtiff2rgb.cpp that walks each quad, collects the sums of the Rs, G1s, G2s, and Bs, then divides those sums by 4 to get averages, puts all that in a 1/4-sized RGB image, and writes a TIFF. Not working yet, in a rather un-intuitive way…


Ha! Got it to work. Here’s the demosaic part:

unsigned qfarray[4][4] = {
			{0,0,1,1},
			{0,0,1,1},
			{3,3,2,2},
			{3,3,2,2}
		};

		const unsigned arraydim = 4;

		std::vector<pix> quadimage;
		quadimage.resize((h/arraydim)*(w/arraydim));

		for (unsigned y=0; y<h-(arraydim-1); y+=arraydim) {
			for (unsigned x=0; x<w-(arraydim-1); x+=arraydim) {
				unsigned Hpos = (x/arraydim) + (y/arraydim)*(w/arraydim);
				float sum[4] = {0.0, 0.0, 0.0, 0.0};
				for (unsigned r=0; r<arraydim; r++) {  //walk the 4x4 image subset, collect the channel values
					for (unsigned c=0; c<arraydim; c++) {
						unsigned pos = (x+c) + (y+r) * w;
						sum[ qfarray[r][c] ] += image[pos];
					}
				}

				for (unsigned i=0; i<4; i++) sum[i] /= 4.0;  //calculate quad average

				sum[1] = (sum[1] + sum[3]) / 2.0; //make a single green of G1 and G2

				//put the result in the appropriate place in the quarter-size image:
				quadimage[Hpos].r = (unsigned short) sum[0];
				quadimage[Hpos].g = (unsigned short) sum[1];
				quadimage[Hpos].b = (unsigned short) sum[2];
			}
		}

I’ll clean up the whole program and post it somewhere this evening…

Edit: tried to post a separate entry for this, pixls.us complained about three consecutive posts…

Okay, I made a github repo with the program and a makefile:

https://github.com/butcherg/quadtiff2rgbtiff

Something like this should maybe end up in librtprocess, but probably not this simple iteration…


Oh, and here’s the RGB image:

foo.tif (17.0 MB)


Hello Glenn,
I binned the original images (factor 2) with IRIS and recovered the color. In fact, binning the 2.3 um pixel matrix is the same operation that this sensor performs when operated in 12 Mp (4.6 um pixels) mode.
Nevertheless, neither your approach nor mine can be considered a real debayering solution for the Quad Bayer CFA.
I complained to QHYCCD about the way they advertise this camera, as it sounds like it can take images at two different resolutions at the user’s choice. In the absence of a demosaicing algorithm for the Quad Bayer this is not true, and the 48 Mp mode gives only useless data.

Not quite: if binning is performed on the sensor in the charge domain, there is an additional SNR gain to be had.

This is, for example, why some mobile sensors can now output 12-bit RAW in quad binning mode instead of the traditional 10-bit…
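The SNR argument can be sketched numerically. A back-of-the-envelope comparison (my own illustrative model, assuming shot noise plus Gaussian read noise in electrons): charge-domain binning sums four pixels' electrons before readout and so pays read noise once, while digital binning pays it four times, added in quadrature.

```cpp
#include <cmath>

// signal_e: mean signal per unbinned pixel, in electrons
// read_noise_e: read noise per readout, in electrons
double snr_charge_bin(double signal_e, double read_noise_e) {
    double s = 4.0 * signal_e;                              // charge of 4 pixels summed
    return s / std::sqrt(s + read_noise_e * read_noise_e);  // shot noise + one readout
}

double snr_digital_bin(double signal_e, double read_noise_e) {
    double s = 4.0 * signal_e;
    return s / std::sqrt(s + 4.0 * read_noise_e * read_noise_e);  // four readouts
}
```

For a faint signal of 10 e- per pixel and 2 e- read noise, the charge-binned SNR comes out noticeably higher, which is exactly the regime astrophotography cares about.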

Yeah, I never thought of this as the final solution; it is simply a framework within which to try something better than a simple average. Within the r and c loops the 4x4 pixel array can be referenced and munged to one’s satisfaction.

I just skimmed over literature I could find yesterday, but I wouldn’t give up on the camera just yet. It’s just a little ahead of the software… :laughing:

So, if I have this right, what’s being presented in this mode are 2x2 ‘bins’ of the same channel, arranged in a regular RGGB mosaic of 2x2 bins. So, if I just delivered an RGGB mosaic of “de-binned” pixels, couldn’t it be fed to any “regular” demosaic routine?

I’m thinking of what would be effective to add to librtprocess; maybe it’s just a debin routine…
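The de-bin idea above can be sketched quite compactly: average each 2x2 same-color bin of the quad-Bayer raw into one pixel, and the result is a half-size frame carrying an ordinary RGGB mosaic that any regular demosaicer should accept. A minimal sketch of that idea (my own illustration, not librtprocess code):

```cpp
#include <vector>
#include <cstddef>

// raw: w x h quad-Bayer frame, row-major. Returns a (w/2) x (h/2) frame
// where each output pixel is the average of one 2x2 same-color bin;
// the output carries a standard RGGB mosaic.
std::vector<float> debin_quad_bayer(const std::vector<float>& raw,
                                    std::size_t w, std::size_t h) {
    std::vector<float> cfa((w / 2) * (h / 2));
    for (std::size_t by = 0; by < h / 2; ++by) {
        for (std::size_t bx = 0; bx < w / 2; ++bx) {
            std::size_t x = 2 * bx, y = 2 * by;  // top-left of the 2x2 bin
            float sum = raw[y * w + x]       + raw[y * w + x + 1]
                      + raw[(y + 1) * w + x] + raw[(y + 1) * w + x + 1];
            cfa[by * (w / 2) + bx] = sum / 4.0f;  // average of the bin
        }
    }
    return cfa;  // half-size RGGB mosaic, ready for a standard demosaicer
}
```

Summing instead of averaging would preserve the full bit depth, at the cost of needing a wider output type; either way the downstream demosaicer only sees a plain RGGB pattern.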

I contacted QHYCCD technical support, and according to them the issue is actually quite tricky:

This needs to solve the debayer algorithm for quadbayer under sdk. This part of the work is planned, but because quardbayer is a new type of bayer format, conventional algorithms cannot be implemented. We are researching new algorithms, which are difficult and require a certain amount of time.

I have asked whether existing (probably proprietary) algorithms implemented in cellphone cameras could be applied to astrophotography as well. If I get an answer, I will share it.