Bypassing demosaicing when there is no Bayer filter

I think the illustrations at Wikipedia are quite good.

If you switch demosaic to passthrough (monochrome), you’ll see something similar to illustration #2 from Wikipedia (something like R = G = B = value of pixel from the raw file, but with the per-channel multipliers from the white balance module still applied, and then processed further by the input colour profile). By switching to photosite color, you’ll get #3 (for a pixel covered by a red filter patch, only R will be set, with G = B = 0; the same logic applies to the other colour components; white balance and input colour profile are still applied). I think these are great ways to understand the colour filter arrays - and to appreciate what real demosaic algorithms do.
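For illustration, here is a minimal numpy sketch of what those two options conceptually produce (a simplified model assuming an RGGB Bayer layout; it ignores the white balance and input colour profile steps that the real pipeline still applies):

```python
import numpy as np

def passthrough_monochrome(raw):
    # Illustration #2: every output pixel gets R = G = B = raw photosite value.
    return np.stack([raw, raw, raw], axis=-1)

def photosite_color(raw):
    # Illustration #3: only the channel of the covering filter patch is set;
    # the other two channels stay at zero. Assumes an RGGB layout.
    rgb = np.zeros(raw.shape + (3,), dtype=raw.dtype)
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]   # red photosites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]   # green photosites (red rows)
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]   # green photosites (blue rows)
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]   # blue photosites
    return rgb
```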

And as you (or someone else) pointed out, physically removing the color filter array also removes the microlenses, which leads to inferior results compared to a dedicated monochrome sensor (which has microlenses but no color filters).

Dedicated-monochrome cameras are extremely rare outside of industrial computer vision applications.

Not so rare in astrophotography, where stacking images is de rigueur. The stacking is mostly done to flatten the noise floor, but with monochrome cameras it is also used to integrate images captured with various bandpass filters, both visible and not…

In my little business of making spectral profiles, I bought a monochrome camera for the Raspberry Pi to make a spectrophotometer. Alas, it has a 600nm IR filter, and after reading up on all the scraping/melting alternatives to remove overlay filters, I just gave up and will eventually buy an i1Studio… :laughing:

Just for grins I set demosaic to passthrough (monochrome) for a normal raw file from a Bayer sensor. Of course, I know that this option isn’t meant for this case, but I was curious to see what would happen. The result was somewhat unexpected, though. I expected a monochrome image, but since 50% of the sensels are covered by a green filter, 25% by a red filter, and 25% by a blue filter, I thought there might occasionally be an almost undetectable tone difference between monochrome pixels. That isn’t what I get, though: what I get is a checkerboard monochrome image. I don’t understand why. Maybe someone here can explain? Thank you.

The monochrome checkerboard is due to the difference in intensities of the different wavelengths of light passed by the mosaic filters.

Think about it: in a developed image there’s a three-channel pixel at each location. Unless the pixel is displaying a neutral (gray) value, the channel values will be different. If you were to change that image to display only one of the three channels at each location (the Bayer mosaic), the resulting pattern looks “checkerboardy”, as the sketch below shows.
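A tiny sketch of that “re-mosaicing” idea (assuming an RGGB layout; the pixel values are made up): even a perfectly uniform reddish patch becomes a checkerboard of intensities once each location keeps only its own Bayer channel.

```python
import numpy as np

def remosaic(rgb):
    # Keep only the Bayer channel at each location (assumed RGGB),
    # viewed as a single-channel image.
    mono = np.empty(rgb.shape[:2], dtype=rgb.dtype)
    mono[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mono[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mono[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mono[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mono

# A uniform reddish patch: R=200, G=120, B=100 everywhere.
patch = np.tile([200, 120, 100], (4, 4, 1))
print(remosaic(patch))
# [[200 120 200 120]
#  [120 100 120 100]
#  [200 120 200 120]
#  [120 100 120 100]]   <- the "checkerboardy" pattern
```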

And the Leica M10 Monochrom? And how about the Sigma Foveon sensor? It’s colour, but as far as I know there is no Bayer filter in front of the sensor – and Foveon sensors make excellent b/w images.

Here is an image which I hope explains it:

Top: Original Image
Middle: Image filtered by a colour filter array
Bottom: Intensities of each pixel captured by a camera.


Yes, I expected there might be some of that, but actually I am seeing very little of it. What I am seeing is a bunch of black squares. The insides of the squares (which had been covered with red, green, or blue filters) show very little tone difference between adjacent pixels. It is especially easy to see when the subject is white or a very light, even tone.

Yes, that bottom image is more like what I see. Notice all the black lines. A monochrome sensor, I think, is made up of a bunch of sensels also. A color sensor is the same with a Bayer filter array on top, right? I assume that a monochrome sensor image that is not demosaiced does not have a bunch of lines in the image. I am just sort of confused about what is going on. I can probably post an image, but I assume anyone who converts a color raw with demosaic set to passthrough (monochrome) will easily be able to see the same thing. Zoom in to 100% or 200% to make it easier to see.

All sensors are monochromatic. Cameras have a colour filter array so only certain pixels respond to certain colours. The way to get a colour image is for software to interpret which pixels have which coloured filters and demosaic the image.

If you remove the colour filter array, the software does not know that and will still apply white balance (i.e. scale the pixels it thinks are red, green and blue by different amounts). This will create strange patterns like the ones you describe.

Yes, I think if you read my posts in this thread you will see that I understand that already.

Hmm, so it is the white balance that creates the perfect squares or lines? So white balance is applied when using passthrough (monochrome)? Even if the image is from a real monochrome sensor? Here is a 100% crop of a color raw that I demosaiced with passthrough (monochrome) and no sharpening. To see it better you will need to zoom in.

Imagine you photograph a homogeneous surface. If there is no colour filter array, under perfect conditions all sensor pixels will be illuminated equally; for example, they’d all hold the value 12345 (of course there’s noise and vignetting, the illumination will probably not be perfectly even, and neither will the surface – but any, say, 10×10 square taken from the sensor will have nearly identical values).

White balance, if you apply it without knowing about the lack of a filter, will treat sensor locations as being ‘R’, ‘G’ or ‘B’ and, unless you set the multipliers to the same value, will apply a different gain to each subgroup. With the Bayer pattern and multipliers 1.5 (for R), 1 (for G), 2 (for B):

R G R G R G
G B G B G B
R G R G R G
G B G B G B

this means your

12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 

values will become

18517 12345 18517 12345 18517 12345 
12345 24690 12345 24690 12345 24690 
18517 12345 18517 12345 18517 12345 
12345 24690 12345 24690 12345 24690 

producing a grid pattern.
So set your RGB multipliers to 1, or disable the white balance module.
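For concreteness, the same computation as a few lines of numpy (a sketch, assuming the RGGB layout above):

```python
import numpy as np

# Uniform readout of 12345 everywhere, as in the example above.
raw = np.full((4, 6), 12345.0)

# Multipliers the software would apply if it believed the RGGB layout:
wb = np.ones_like(raw)   # G positions keep a gain of 1
wb[0::2, 0::2] = 1.5     # positions it thinks are R
wb[1::2, 1::2] = 2.0     # positions it thinks are B

print((raw * wb).astype(int))
# [[18517 12345 18517 12345 18517 12345]
#  [12345 24690 12345 24690 12345 24690]
#  [18517 12345 18517 12345 18517 12345]
#  [12345 24690 12345 24690 12345 24690]]
```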

And you should probably also set the input colour profile to linear Rec2020 (or whatever your working profile is), as normally the conversion it applies depends on the properties of the camera’s colour filter. If it’s a known monochrome camera, maybe there’s a suitable profile available, but for ‘converted’ (stripped) cameras I think the input profile will be invalid.

An image from a real monochrome sensor should not produce a pattern like the one you are seeing if the software is designed to work with the camera.

The pattern in your image is expected behaviour for a colour image when using ‘passthrough’.

The brick wall in your image is slightly reddish; that means the red pixels are brighter than the green and blue ones right next to them. This is what produces the pattern.


Okay, I tried that. I turned off the White Balance module and for good measure I turned off the Color Calibration module too. Then in the Input Color Profile module I selected linear Rec2020 RGB for the input profile and the working profile. Here is the result. The squares/lines are even more distinct now.

Maybe what I get is what should be expected, but it was just not expected by me. :grinning: I am still having trouble understanding why this happens though.

Does your raw file come from a sensor that has no CFA? Can you post it here?


And you are saying that since the software is darktable 3.6, it is designed to work with monochrome cameras, so it won’t produce squares/lines? Maybe I will try to find a monochrome raw file somewhere to play around with.

I still have some trouble understanding this, but okay.

That wall is gray cinder blocks. Note the white or whitish walls also along the top. Maybe I will later take a photo of a plain, evenly illuminated white surface. Would you expect to have no squares/lines in this case?

See my first post about this subject above. This is a normal color raw file from an Olympus PEN-F. I suspect just about any color raw file from any company will display similar things.

I just now tried it with a Fuji raw file, a Nikon raw file, a Sony raw file, a Panasonic raw file, and a Canon raw file. They all do the same thing.

OK. Then, if it’s the output of a camera with a CFA, an evenly illuminated area will most certainly NOT produce identical values. For a given mix of photons of different frequencies, the R filter will block some of them and let others (those with longer wavelengths) through. G will mostly pass those of ‘middle’ wavelengths. B will mostly pass the short-wavelength photons.
So, the sensor readout will not be 12345 for all, but rather (arbitrary numbers)

16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 
16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 

In theory, you could use white balance to reduce the pattern somewhat, but highlights in your image will be hit by light with a different spectral distribution (e.g. direct sunlight) than your shadows. Therefore, when you tune the WB coefficients to get R = G = B in one part of the image, you’ll still get a pattern in the other, and vice versa.
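To put numbers on that (reusing the arbitrary readouts above for the sunlit area; the shade values are equally made up, just shifted towards blue):

```python
import numpy as np

# Per-filter readouts (R, G, B) of a neutral surface in two areas:
sunlit = np.array([16789.0, 12345.0, 8765.0])   # direct sunlight, as above
shade  = np.array([11000.0, 12345.0, 14000.0])  # made-up values: bluer light

# Tune the WB coefficients so the sunlit area comes out neutral (R = G = B):
wb = sunlit[1] / sunlit
print(np.round(sunlit * wb))   # [12345. 12345. 12345.] -- pattern gone here
print(np.round(shade * wb))    # [ 8088. 12345. 19718.] -- still patterned
```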

If you download a monochrome raw (e.g. from https://the.me/leica-m-monochrom-raw-files-for-download/), you’ll find that darktable does not allow you to apply:

  • demosaic
  • white balance