By-passing demosaicing when no Bayer filter

Does your raw file come from a sensor that has no CFA? Can you post it here?


And you are saying that since the software is darktable 3.6, it is designed to work with monochrome cameras, so it won’t produce squares/lines? Maybe I will try to find a monochrome raw file somewhere to play around with.

I still have some trouble understanding this, but okay.

That wall is gray cinder blocks. Note the white or whitish walls also along the top. Maybe I will later take a photo of a plain, evenly illuminated white surface. Would you expect to have no squares/lines in this case?

See my first post about this subject above. This is a normal color raw file from an Olympus PEN-F. I suspect just about any color raw file from any company will display similar things.

I just now tried it with a Fuji raw file, a Nikon raw file, a Sony raw file, a Panasonic raw file, and a Canon raw file. They all do the same thing.

OK. Then, if it’s the output of a camera with a CFA, an evenly illuminated area will most certainly NOT produce identical values. For a given mix of photons of different frequencies, the R filter will block some of them and let others (those with longer wavelengths) through. G will mostly pass those of ‘middle’ wavelength. B will mostly pass the short-wavelength photons.
So, the sensor readout will not be 12345 for all, but rather (arbitrary numbers)

16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 
16789 12345 16789 12345 16789 12345 
12345  8765 12345  8765 12345  8765 

In theory, you could use white balance to reduce the pattern somewhat, but highlights in your image will be hit by light with a different spectral distribution (e.g. direct sunlight) than your shadows. Therefore, when you tune the WB coefficients to get R = G = B in one part of the image, you’ll still get a pattern in the other, and vice versa.
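This can be sketched in a few lines of Python (a toy illustration, not darktable’s actual pipeline; the readout values and the RGGB layout are the arbitrary numbers from the table above, and the coefficients are simply tuned to that light):

```python
# Toy sketch: white balance can flatten the Bayer checkerboard, but
# only for ONE illuminant. Numbers are the arbitrary ones from above.
R, G, B = 16789, 12345, 8765  # raw readouts under some fixed light

mosaic = [
    [R, G, R, G],
    [G, B, G, B],
    [R, G, R, G],
    [G, B, G, B],
]
pattern = [
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
]

# Coefficients tuned so R = G = B for exactly this light:
wb = {"R": G / R, "G": 1.0, "B": G / B}

balanced = [
    [mosaic[y][x] * wb[pattern[y][x]] for x in range(4)]
    for y in range(4)
]
# Every entry is now ~12345, so the checkerboard disappears -- but
# change the light without changing wb and the pattern comes back.
```

Change the raw readouts (i.e. the light) while keeping `wb` fixed, and the checkerboard returns — which is exactly the highlights-vs-shadows problem described above.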

If you download a monochrome raw (e.g. from https://the.me/leica-m-monochrom-raw-files-for-download/), you’ll find that darktable does not allow you to apply:

  • demosaic
  • white balance

Just a reminder this is my first post in this thread.

What you describe is what I expected. See way up above in one of my earlier responses. I would expect that the inside of each of the squares might have slightly different gray tonal values since some of them had a green filter, some had a red filter, and some had a blue filter. But, what I see and what you can see is that in adjacent pixels there isn’t much change in the gray tone. But there are distinct squares/lines. Sharp, distinct squares/lines. That is what I still don’t understand. Why are these distinct squares/lines? I will say again: The inside of the adjacent squares have pretty much the same gray tone.

Here is your image, as you posted it, properly demosaiced.

Here is your image with the colour filter array assigned to the pixels.

Here is another way to look at it. In a colour image, each colour is represented by 3 values. One for red, one for green and one for blue. Let’s say you take a picture of a tomato. The colour values might be 200 for Red, 50 for Green and 30 for Blue.

On a Bayer sensor these values are not stacked on top of each other in one pixel; they are next to each other. That means the values of the pixels for the tomato would be like this:

[200]  [50] [200]  [50]
 [50]  [30]  [50]  [30]
[200]  [50] [200]  [50]
 [50]  [30]  [50]  [30]

This produces a checker-board pattern if you just consider the values, which is what happens in ‘passthrough’.

Edit: Fixed bayer pattern so that it is actually a bayer pattern
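The tomato example above can be written out as a short Python sketch (a toy illustration; the RGGB layout and the 200/50/30 values are from the post, everything else is made up for demonstration):

```python
# Iain's tomato: one RGB colour (200, 50, 30) spread across an RGGB
# Bayer mosaic instead of being stacked in each pixel.
r, g, b = 200, 50, 30

def bayer_value(x, y):
    """Return which single colour sample an RGGB Bayer pixel records."""
    if y % 2 == 0:
        return r if x % 2 == 0 else g
    return g if x % 2 == 0 else b

mosaic = [[bayer_value(x, y) for x in range(4)] for y in range(4)]
for row in mosaic:
    print(row)
# Viewed as plain intensities ('passthrough'), the uniformly red tomato
# becomes a checkerboard of values ranging from 30 to 200.
```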

You have red-, green- and blue-filtered pixels, arranged in a square:
rgrgrgrgrg
gbgbgbgbgb
rgrgrgrgrg
gbgbgbgbgb

if you send e.g. pure red light on this, the pixels with an x will get a signal, o is no signal:
xoxoxoxoxo
oooooooooo
xoxoxoxoxo
oooooooooo

Or, with different symbols:
░█░█░█░█
████████
░█░█░█░█
████████

So, black lines where pixels did not receive light through the colour filters, gray or white where the pixels did receive light.

You can get continuous gray in this situation, but only when the incoming light is composed of equal amounts of red, green and blue light (equal after correction for filter characteristics, sensor sensitivity, etc.)

When you say ‘pixel’, do you mean raw pixel value? In Iain’s post above, the 200 values would show up as white lines, the 30 values as rather dark lines. Our brains are very good at recognising patterns and continuing lines. That’s why we see shapes ‘that are not there’ in images like
image

As I said - extremely rare

Industrial monochrome sensors are far more common (in relation to other industrial sensors) than Leica Ms and Foveons are in non-industrial applications.

Worth noting - due to the nature of silicon photodiode response (peak response/efficiency is for green light at the green CFA sites), “neutral grey” for most sensors corresponds to something with quite a magenta tint to the human eye before applying white balance scaling.

Which is also why, for most sensors, if you treat “neutral grey” from the sensor as the D65 white point (as in one of @Iain 's examples), e.g. by assuming the sensor data is sRGB gamut/linear transfer instead of camera-native, the image looks heavily green-tinted: what looks white to us is significantly greenish relative to the sensor’s actual white point.

Another example would be fluorescence microscopy. Due to the overall low light intensities and knowing which wavelength will arrive at the camera due to the used fluorophores there is no filter array in front of the sensor, but “proper” exchangeable interference filters in front of the camera.

(Link to insanely expensive Zeiss low megapixel cameras)

Here’s the deal: you won’t get an image captured from a camera with a CFA to behave as monochrome until you convert the RGB channels to a single value representing intensity. Really, you can’t erase the CFA with white balance pet tricks. That’s why there’s a sub-culture of CFA scraping enthusiasts, sharing the lore and ways of removing the pesky thing.

“Passthru” just turns off the process of encoding color, which leaves you with the uneven mosaic of measurements. That cannot ever be monochrome, with the smoothness of textures you seek, until something resembling a demosaic is done to turn each pixel into a true representation of the luminance at that point.

Simple as that.
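One minimal way to do “something resembling demosaic” is to collapse each 2×2 Bayer quad into a single luminance value. This is a crude sketch (a ‘superpixel’ approach with Rec. 709 luma weights, not what any particular raw converter does), using the tomato mosaic values from earlier in the thread:

```python
# Collapse each 2x2 RGGB quad into one luminance value. This removes
# the mosaic pattern at the cost of halving resolution in each axis.
def superpixel_luma(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2
            b = mosaic[y + 1][x + 1]
            # Rec. 709 luma weights; any sensible weighting works here
            row.append(0.2126 * r + 0.7152 * g + 0.0722 * b)
        out.append(row)
    return out

mosaic = [
    [200, 50, 200, 50],
    [50, 30, 50, 30],
    [200, 50, 200, 50],
    [50, 30, 50, 30],
]
print(superpixel_luma(mosaic))  # flat: every superpixel has the same luma
```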


Sharp distinct squares/lines? I’m not sure what you mean.

image

This is a 400% zoomed-in portion of your passthrough image. The dot-like pattern you see is due to the different absorptions of the different filters in the CFA. Please clarify if there is something else you have difficulty understanding.

I consider that to fall into the “industrial” bucket, although I guess it’s better to describe it the way many e-commerce sites categorize related products: “Industrial and Scientific”.

@Thanatomanic I think that’s exactly what he meant - I consider those to be distinct squares/lines. It’s exactly what I would expect from a CFA that was illuminated with something other than the camera-native white point, which as I mentioned elsewhere, happens to appear to be magenta to the human eye due to the nature of bayer-on-silicon spectral responses. (which is why white balance is a required step for any Bayer-CFA camera.)


Is there some way to ‘neutralize’ the filter value offsets on a pass-through image?

Okay, thanks to all of you. While I still have some trouble understanding why the squares/lines are there I am sure you are right.

For example, with this image:

I had (I guess wrongly) expected something like this:

You would only get a flat grey if the sensor was illuminated with light at exactly its native “white” point. As mentioned, due to the properties of color filter arrays, the actual native “white” point (i.e. where the red, green, and blue photosites register the same value) of most Bayer-on-silicon sensors looks magenta-ish to us, because sensors are more sensitive to green light than blue or red, so will register a higher green value/more photons when illuminated by white light.

(and yeah, the definition of “white” can vary too, but most typical lighting lies roughly on the Planckian locus, and anything along that will result in higher G values than R or B recorded by the sensor)
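That magenta-ish native white point is easy to see numerically. Here is a toy sketch — the multipliers are illustrative round numbers, not from any real camera, but typical daylight white-balance coefficients do have this shape (R and B boosted, G left alone):

```python
# Hypothetical daylight white-balance multipliers for a typical Bayer
# sensor (illustrative values only, not from a real camera profile).
wb = {"R": 2.1, "G": 1.0, "B": 1.6}

# A patch where the sensor recorded equal raw values in all channels,
# i.e. the sensor's own native "white" point:
raw = {"R": 1.0, "G": 1.0, "B": 1.0}

displayed = {c: raw[c] * wb[c] for c in raw}
print(displayed)  # R and B end up boosted relative to G -> magenta cast
```

Run the multipliers the other way and you get the mirror statement: a scene that looks white to us comes off the sensor with G well above R and B, which is why unbalanced raw data looks green.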

That is pretty cool. How did you do that to get the color back with a monochrome JPEG? Here it is using RCD.

A little while ago I went and took a look at that wall. It is rather old and made from gray concrete blocks, but the color is no longer pure gray anymore. My photo above is closer to the right color. In bright sunlight it can appear a bit more gray.

Using ‘passthrough’ means that the pixel values represent one colour value only. All I had to do was apply the correct colour filter pattern to get the colour, then demosaic it. It’s the same process that darktable (or any raw conversion software) uses.

In other words, you passed the sensor data through to me.

In software, the term pass-through means “do nothing”.