By-passing demosaicing when no Bayer filter

Yes, I expected there might be some of that, but actually I am seeing very little of that. What I am seeing is a bunch of black squares. The inside of the squares (which had been covered with red, green, or blue filters) has very little tone difference between adjacent pixels. It is especially easy to see when the subject is white or very light, even tone.

Yes, that bottom image is more like what I see. Notice all the black lines. A monochrome sensor, I think, is made up of a bunch of sensels also. A color sensor is the same with a Bayer filter array on top, right? I assume that a monochrome sensor image that is not demosaiced does not have a bunch of lines in the image. I am just sort of confused about what is going on. I can probably post an image, but I assume anyone who converts a color raw with demosaic set to passthrough (monochrome) will easily be able to see the same thing. Zoom in to 100% or 200% to make it easier to see.

All sensors are monochromatic. Cameras have a colour filter array so only certain pixels respond to certain colours. The way to get a colour image is for software to interpret which pixels have which coloured filters and demosaic the image.

If you remove the colour filter array, the software does not know that and will still apply white balance (i.e. scaling the pixels it thinks are red, green, and blue by different amounts). This will create strange patterns like the ones you describe.

Yes, I think if you read my posts in this thread you will see that I understand that already.

Hmm, so it is the white balance that creates the perfect squares or lines? So white balance is applied when using passthrough (monochrome)? Even if the image is from a real monochrome sensor? Here is a 100% crop of a color raw that I demosaiced with passthrough (monochrome) and no sharpening. To see it better you will need to zoom in.

Imagine you photograph a homogeneous surface. If there is no colour filter array, under perfect conditions, all sensor pixels will be illuminated equally, for example, they’d all hold the value 12345 (of course there’s noise and vignetting, and the illumination will probably not be perfectly even, and neither will the surface – but taking any say 10x10 square from the sensor will have nearly identical values).

White balance, if you apply it, without knowing about the lack of filter, will think of sensor locations as being ‘R’, ‘G’ or ‘B’, and, if you don’t set the multipliers to the same value, will apply a different gain to each subgroup. With the Bayer pattern and multipliers 1.5 (for R), 1 (for G), 2 (for B):

R G R G R G
G B G B G B
R G R G R G
G B G B G B

this means your

12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 
12345 12345 12345 12345 12345 12345 

values will become

18517 12345 18517 12345 18517 12345 
12345 24690 12345 24690 12345 24690 
18517 12345 18517 12345 18517 12345 
12345 24690 12345 24690 12345 24690 

producing a grid pattern.
So set your RGB multipliers to 1, or disable the white balance module.
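A minimal NumPy sketch of that arithmetic (the 12345 value and the 1.5/1.0/2.0 multipliers are just the example numbers above, not what any real camera uses):

```python
import numpy as np

# Uniform "no-CFA" raw patch: every sensel reads 12345, as in the example.
raw = np.full((4, 6), 12345.0)

# Per-site white-balance gains for an assumed RGGB layout: R=1.5, G=1.0, B=2.0.
gains = np.ones_like(raw)
gains[0::2, 0::2] = 1.5   # 'R' sites
gains[1::2, 1::2] = 2.0   # 'B' sites
# the remaining sites are 'G' and keep gain 1.0

balanced = raw * gains
print(balanced)  # grid of 18517.5 / 12345 / 24690, as in the table above
```

Even though the sensor saw a perfectly flat surface, the per-site gains stamp the CFA layout into the image.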

And you should probably also set the input color profile to Rec2020 linear (or whatever your working profile is), as normally the conversion it applies depends on the properties of the camera’s colour filter. (If it’s a known monochromatic camera, maybe there’s a suitable profile available, but for ‘converted’ (stripped) cameras, I think the input profile will be invalid).

An image from a real monochrome sensor should not produce a pattern like the one you are seeing if the software is designed to work with the camera.

The pattern in your image is expected behaviour for a colour image when using ‘passthrough’.

The brick wall in your image is slightly reddish, which means the red pixels are brighter than the green and blue ones right next to them. That is what produces the pattern.


Okay, I tried that. I turned off the White Balance module and for good measure I turned off the Color Calibration module too. Then in the Input Color Profile module I selected linear Rec2020 RGB for the input profile and the working profile. Here is the result. The squares/lines are even more distinct now.

Maybe what I get is what should be expected, but it was just not expected by me. :grinning: I am still having trouble understanding why this happens though.

Does your raw file come from a sensor that has no CFA? Can you post it here?


And you are saying that since the software is darktable 3.6, which is designed to work with monochrome cameras, it won’t produce squares/lines? Maybe I will try to find a monochrome raw file somewhere to play around with.

I still have some trouble understanding this, but okay.

That wall is gray cinder blocks. Note the white or whitish walls also along the top. Maybe I will later take a photo of a plain, evenly illuminated white surface. Would you expect to have no squares/lines in this case?

See my first post about this subject above. This is a normal color raw file from an Olympus PEN-F. I suspect just about any color raw file from any company will display similar things.

I just now tried it with a Fuji raw file, a Nikon raw file, a Sony raw file, a Panasonic raw file, and a Canon raw file. They all do the same thing.

OK. Then, if it’s the output of a camera with a CFA, an evenly illuminated area will most certainly NOT produce identical values. For a given mix of photons of different frequencies, the R filter will block some of them and let others (those with longer wavelengths) through. G will mostly pass those of ‘middle’ wavelengths, and B will mostly pass the short-wavelength photons.
So, the sensor readout will not be 12345 for all, but rather (arbitrary numbers)

16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 
16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 

In theory, you could use white balance to reduce the pattern somewhat, but highlights in your image will be hit by light with a different spectral distribution (e.g. direct sunlight) than your shadows. Therefore, when you tune the WB coefficients to get R = G = B in one part of the image, you’ll still get a pattern in the other, and vice versa.
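A small NumPy sketch of why one set of WB gains can’t flatten both regions (all numbers are made up for illustration; the two regions just need different R:G:B ratios):

```python
import numpy as np

def mosaic_from_rgb(r, g, b):
    """Lay a single RGB colour out on a 4x4 RGGB Bayer mosaic."""
    m = np.empty((4, 4), dtype=float)
    m[0::2, 0::2] = r
    m[1::2, 1::2] = b
    m[0::2, 1::2] = m[1::2, 0::2] = g
    return m

sunlit = mosaic_from_rgb(16789, 12345, 8765)   # highlight spectrum
shaded = mosaic_from_rgb(10000, 12345, 14000)  # bluer shadow spectrum

# Gains chosen so the sunlit region becomes flat (R and B pulled to G):
gains = np.ones((4, 4))
gains[0::2, 0::2] = 12345 / 16789
gains[1::2, 1::2] = 12345 / 8765

print(np.ptp(sunlit * gains))  # ~0: the pattern is gone here
print(np.ptp(shaded * gains))  # large spread: the pattern remains in the shadows
```

Tuning the gains for the shadows instead would simply move the residual pattern into the highlights.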

If you download a monochrome raw (e.g. from https://the.me/leica-m-monochrom-raw-files-for-download/), you’ll find that darktable does not allow you to apply:

  • demosaic
  • white balance

Just a reminder this is my first post in this thread.

What you describe is what I expected. See way up above in one of my earlier responses. I would expect that the inside of each of the squares might have slightly different gray tonal values since some of them had a green filter, some had a red filter, and some had a blue filter. But, what I see and what you can see is that in adjacent pixels there isn’t much change in the gray tone. But there are distinct squares/lines. Sharp, distinct squares/lines. That is what I still don’t understand. Why are these distinct squares/lines? I will say again: The inside of the adjacent squares have pretty much the same gray tone.

Here is your image, as you posted it, properly demosaiced.

Here is your image with the colour filter array assigned to the pixels.

Here is another way to look at it. In a colour image, each colour is represented by 3 values. One for red, one for green and one for blue. Let’s say you take a picture of a tomato. The colour values might be 200 for Red, 50 for Green and 30 for Blue.

On a Bayer sensor these values are not stacked on top of each other in one pixel; they sit next to each other. That means the values of the pixels for the tomato would be like this:

[200]  [50] [200]  [50]
 [50]  [30]  [50]  [30]
[200]  [50] [200]  [50]
 [50]  [30]  [50]  [30]

This produces a checkerboard pattern if you just consider the values, which is what happens in ‘passthrough’.

Edit: Fixed bayer pattern so that it is actually a bayer pattern
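The tomato example can be reproduced in a few lines of NumPy (RGGB layout assumed, values straight from the example above):

```python
import numpy as np

# The tomato colour from the example: R=200, G=50, B=30,
# laid out on an RGGB Bayer mosaic instead of stacked per pixel.
r, g, b = 200, 50, 30

mosaic = np.empty((4, 4), dtype=int)
mosaic[0::2, 0::2] = r   # red sites
mosaic[1::2, 1::2] = b   # blue sites
mosaic[0::2, 1::2] = g   # green sites on the red rows
mosaic[1::2, 0::2] = g   # green sites on the blue rows
print(mosaic)            # checkerboard of 200 / 50 / 30
```

Viewed as plain intensities, the bright red sites and dark blue sites form exactly the grid the screenshots show.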

You have red-, green- and blue-filtered pixels, arranged in a square grid:
rgrgrgrgrg
gbgbgbgbgb
rgrgrgrgrg
gbgbgbgbgb

If you send e.g. pure red light on this, the pixels marked x will get a signal, o means no signal:
xoxoxoxoxo
oooooooooo
xoxoxoxoxo
oooooooooo

Or, with different symbols:
░█░█░█░█
████████
░█░█░█░█
████████

So, black lines where pixels did not receive light through the colour filters, gray or white where the pixels did receive light.

You can get continuous gray in this situation, but only when the incoming light is composed of equal amounts of red, green and blue light (equal after correction for filter characteristics, sensor sensitivity, etc.)
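The pure-red-light case above can be sketched directly (RGGB layout assumed; 1 = signal, 0 = dark):

```python
import numpy as np

# Pure red light on an RGGB mosaic: only the 'R' sites see a signal.
signal = np.zeros((4, 10), dtype=int)
signal[0::2, 0::2] = 1   # red sites respond; every other site stays dark

for row in signal:
    print(''.join('x' if v else 'o' for v in row))
```

This prints the same xoxo… / oooo… rows as above: entire rows of dark pixels, i.e. the black lines.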

When you say ‘pixel’, do you mean raw pixel value? In Iain’s post above, the 200 values would show up as white lines, the 30 values as rather dark lines. Our brains are very good at recognising patterns and continuing lines. That’s why we see shapes ‘that are not there’ in images like
[image]

As I said - extremely rare

Industrial monochrome sensors are far more common (in relation to other industrial sensors) than Leica Ms and Foveons are in non-industrial applications.

Worth noting - due to the nature of silicon photodiode response (peak response/efficiency is for green light hitting the green CFA sites), “neutral grey” for most sensors corresponds to something that looks quite magenta to a human eye before white balance scaling is applied.

Which is also why, for most sensors, if you treat “neutral grey” from the sensor as being the D65 white point (such as in one of @Iain’s examples, i.e. if you assume the sensor data is sRGB gamut with a linear transfer instead of camera-native), it looks like a heavy green tint: what looks white to us is significantly greenish relative to the sensor’s actual white point.

Another example would be fluorescence microscopy. Because the light intensities are low overall, and the fluorophores in use determine which wavelengths arrive at the camera, there is no filter array in front of the sensor; instead, “proper” exchangeable interference filters sit in front of the camera.

(Link to insanely expensive Zeiss low megapixel cameras)

Here’s the deal: you won’t get an image captured from a camera with a CFA to behave as monochrome until you convert the RGB channels to a single value representing intensity. Really, you can’t erase the CFA with white balance pet tricks. That’s why there’s a sub-culture of CFA scraping enthusiasts, sharing the lore and ways of removing the pesky thing.

“Passthru” just turns off the process of encoding color, which leaves you with the uneven mosaic of measurements. That cannot ever be monochrome, with the smoothness of textures you seek, until something is done resembling demosaic to turn each pixel into a true representation of the luminance at that point.

Simple as that.
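One minimal sketch of that “something resembling demosaic” is superpixel binning: collapse each RGGB quad into a single luminance value so the mosaic pattern disappears. The Rec.709 luma weights here are my assumption for illustration, not what any particular raw converter does, and a real pipeline would handle white balance and colour matrices first:

```python
import numpy as np

def superpixel_luma(mosaic):
    """Collapse each 2x2 RGGB quad into one luminance value."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma weights

# The tomato mosaic from earlier in the thread (R=200, G=50, B=30):
tomato = np.array([[200,  50, 200,  50],
                   [ 50,  30,  50,  30],
                   [200,  50, 200,  50],
                   [ 50,  30,  50,  30]], dtype=float)

print(superpixel_luma(tomato))  # every quad collapses to the same smooth value
```

The checkerboard of 200/50/30 becomes a flat 2x2 patch of identical grey values, at the cost of half the resolution in each direction.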
