Why does turning off the white balance module give a green image?

I don’t mean in terms of peak sensitivity to any given wavelength, I mean in terms of total white-light-energy passed by the filter.

Here are some measurements.

That’s just it: the bandpass filter doesn’t allow any wavelengths outside the band bounds to pass at a given photosite.

So, in a given exposure, assuming the white light has equal power at all wavelengths (no such thing exists, so that’s calibrated out in practice), the ability to measure that power drops off as the wavelengths approach UV on the low end and IR on the high end. The measured spectral sensitivity chart for the Nikon D70 on page 53 of Christian Mauer’s thesis, figure 4.5, shows this, I think. So, looking at that chart, wouldn’t it be the “money chart” in answering @zerosapte’s question?

This is a good discussion, it’s helping me to try to sort out and assemble all the things one would need to measure spectral sensitivity to make better camera profiles. I hope we’re not losing @zerosapte in all this…

I think the most important thing that people fail to understand is this:

Camera RGB is only metaphoric RGB.

A Bayer sensor splits a light spectrum into 3 spectral slices, in a fashion that is inspired by the physiology of the retina. So each of these slices is called R, G or B by convention.

But the actual spectral sensitivity of the camera RGB is not the same as the one of your display RGB, let alone the spectral sensitivity of your fovea’s cone cells.

To put it another way, the “G” channel of your camera is not green at all. The same applies to the two other channels.

Actually, I wonder if we shouldn’t stop calling that “RGB” at all. It’s light spectrum tier 1, light spectrum tier 2, light spectrum tier 3.

And then, depending on how wide these tiers are in the target color space compared to the input color space, you need to rescale the intensities accordingly. This is done, in a not-so-accurate way, by white balance scaling: you divide the camera RGB by the RGB of your light source expressed in the same space, so the light source becomes achromatic (R = G = B).
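
For what it’s worth, here is a minimal sketch of that division, with made-up numbers standing in for the camera’s measurement of the light source (nothing below comes from a real camera):

```python
import numpy as np

# White-balance scaling sketch: divide each camera-RGB channel by the
# camera-RGB value of the light source, so the light source itself comes
# out achromatic (R = G = B). All numbers are hypothetical.

light_source_rgb = np.array([0.45, 1.00, 0.62])  # illuminant, in camera RGB (made up)
pixel = np.array([0.30, 0.55, 0.20])             # a raw camera-RGB pixel (made up)

# Multipliers are the reciprocals of the illuminant values, conventionally
# normalised so the G multiplier is 1.0.
wb_multipliers = light_source_rgb[1] / light_source_rgb

balanced = pixel * wb_multipliers
print(wb_multipliers)  # roughly [2.22, 1.00, 1.61]
print(balanced)        # a pixel equal to the light source would come out R = G = B
```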

I do get that confusion, but “Red”, “Green”, and “Blue” do have spectral anchors, so I think if we prepend them with that adjective, e.g. “spectral red”, we don’t confuse them with the other renditions, so to speak, and we can also then discuss them in digital camera terms, that is, the Bayer or XTrans color filter array (CFA) mechanisms.

Unless of course you get into Foveon… “White, Yellow, and Red” don’t have the same ring as “Blue, Green, and Red” (from top to bottom).

From one sensor to another:

  • the spectral bandwidth varies,
  • the peak wavelength/anchor varies,
  • the spectral overlapping varies.

Then, there are no “blue”, “red” or “green” wavelengths, unless you look at lasers; the resulting color is a consequence of the spectral distribution as a whole, taking metameric colours into account.

So all in all, it’s as much RGB as my a** is chicken wings.

There are individual wavelengths that, when presented to most humans, will compel them to utter “red”, “green”, or “blue”, thus the distinction between spectral and metameric colors.

These wavelengths exist only in laser beams, not in natural setups, not under standard illuminants or black-body radiation. Everything is only spectra, and the mapping spectra → colours is not reducible to individual wavelengths: it integrates a distribution S through a transmittance \bar{x}, so X = \int_{400}^{800} \bar{x}(\lambda) \cdot S(\lambda) \, d\lambda.
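
A minimal numerical sketch of that integral, with made-up Gaussian curves standing in for a real \bar{x}(\lambda) and S(\lambda); only the integration step is the point:

```python
import numpy as np

# Numerical version of X = integral of x_bar(lambda) * S(lambda) d(lambda)
# over 400-800 nm. Both curves are made-up Gaussians, not real measurements.

wavelengths = np.arange(400, 801, 1)  # nm

def gaussian(lam, centre, width):
    return np.exp(-0.5 * ((lam - centre) / width) ** 2)

x_bar = gaussian(wavelengths, 550, 40)      # stand-in channel transmittance
spectrum = gaussian(wavelengths, 600, 120)  # stand-in light spectrum

# The channel value is the area under the product of the two curves,
# not the value at any single wavelength.
X = np.trapz(x_bar * spectrum, wavelengths)
print(X)
```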

So whenever we are talking about spectra, all notion of colour should be removed. Otherwise, people start searching for meaning in names instead of searching for it in the concepts those names point at.

While this is strictly true of course, it may still be useful to have a simple term to indicate what the spectral response of your color filter mostly refers to. It’s always been my impression that the range of the spectra belonging to the ‘R’, ‘G’ and ‘B’ color filters is clustered around a wavelength that, in pure form, would be considered red, green or blue. Right?

Edit: if you want to distinguish a dummy from an expert in these matters, which question would you ask them?

Beware that [EDIT: some, but not all of] the response curves in Christian Mauer’s thesis are normalised so each one peaks at 1.0. If we are asking why G values are greater than B or R, we should look at unnormalised graphs, e.g. in https://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W25/papers/Prasad_Quick_Approximation_of_2013_ICCV_paper.pdf . The area under each curve represents the value recorded in that channel when the received light has equal power across all wavelengths.

From those graphs, we see that cameras vary, but the G channel encloses an area much larger than B or R. This is why, if we apply no multipliers to the channels, we get a green image.

It isn’t because we have twice as many green sensors as blue or red.
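
A minimal sketch of that “area under the curve” point, using made-up unnormalised sensitivity curves (not from any real camera) and an equal-energy spectrum:

```python
import numpy as np

# Made-up, unnormalised CFA sensitivity curves, chosen so the G curve
# encloses more area than R or B, as the unnormalised graphs in the
# linked paper show. Not data from any real sensor.

wavelengths = np.arange(400, 801, 1)  # nm

def gaussian(lam, centre, width, peak):
    return peak * np.exp(-0.5 * ((lam - centre) / width) ** 2)

r = gaussian(wavelengths, 600, 35, 0.55)
g = gaussian(wavelengths, 535, 45, 0.90)
b = gaussian(wavelengths, 460, 30, 0.50)

flat_light = np.ones_like(wavelengths, dtype=float)  # equal power at every wavelength

# Area under each curve = the value recorded in that channel for equal-energy light.
raw_rgb = [np.trapz(c * flat_light, wavelengths) for c in (r, g, b)]
print(raw_rgb)  # G comes out largest, hence the green cast when no multipliers are applied
```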

On “whenever we are talking about spectra, all notion of colour should be removed”, hmm, well. Grass in my garden is green. True, it isn’t monochromatic, it is a spectrum. But I will still call it green.

So, when we talk about the CFAs, we should then refer to the low-bandpass, mid-bandpass, and high-bandpass filters? Okay…

In measurement terms relevant to the mechanisms that produce renditions we interpret as “color-full”, those wavelengths exist in both the incident and, more importantly, the reflected light that the camera captures. They’re out there, wavelengths of light, and they need to be teased apart to produce renditions that are faithful to our perception. I get what you’re after, not confusing light with color. But we need to be mindful of the mechanisms that help us map that light to color…

For an example image corresponding to the spectral response of a colour sensor, in astronomy we often have these with cameras:

Figure 3.4 does, “scaled to unity”; figure 4.5 is “normalized to 1.0”.

No, I really think mixing single-wavelength-related concepts with spectra-related concepts messes up people’s minds. Proof is here. Call that ABC space or light spectrum tier 1-2-3, but RGB hurts. They are two different mindsets with uncorrelated effects on colour perception. A spectrum with a large bump around the single-wavelength locus of green will not necessarily be perceived as green.

Not sure.

You are completely missing the point. Green is what happens when you recombine the tristimulus in your brain. The point is about no longer calling one component of that tristimulus “green” on its own. Because, even if that could make sense when the tristimulus is an LMS or XYZ cone response, it absolutely doesn’t if you are in a random CFA output space.

And yet they all fail at predicting a lot of things, especially in HDR, so all we can do is map light to light and forget about colour.

If I took my camera apart, and looked closely at the CFA with a daylight back-lit source, I suspect the filters would look red, green or blue. Wouldn’t they?

While I do appreciate that I sparked this amazingly scientific conversation, what I think I should have asked is whether there is a better workflow for cross-referencing my edits against the camera’s white balance than going to the drop-down and switching between “camera/as shot” and “custom”, as I find that a little cumbersome.

Thanks for the correction. I hadn’t got that far in Christian Mauer’s thesis. True, only some of the graphs are normalised.

If I look at a stopped clock at the right moment, it still looks accurate, doesn’t it?

Beware of what you think you see. Besides, looking at a filter is quite different from looking through it.

Unfortunately not. You can’t escape white balance scaling; it’s a fundamental part of image processing, whether it’s hidden in the software or exposed to the user.
