Why does turning off the white balance module give a green image?

From one sensor to another:

  • the spectral bandwidth varies,
  • the peak wavelength/anchor varies,
  • the spectral overlap varies.

So there are no “blue”, “red” or “green” wavelengths, unless you look at lasers; the resulting colour is a consequence of the spectral distribution as a whole, taking metameric colours into account.

So all in all, it’s as much RGB as my a** is chicken wings.

There are individual wavelengths that, when presented to most humans, will compel them to utter “red”, “green”, or “blue”, thus the distinction between spectral and metameric colors.

These wavelengths exist only in laser beams, not in natural setups, not under standard illuminants or black-body radiation. Everything is only spectra, and the mapping spectra → colours is not reducible to individual wavelengths; it integrates a distribution S through a transmittance \bar{x}, so X = \int_{400}^{800} \bar{x}(\lambda) \, S(\lambda) \, d\lambda.
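For a concrete feel of that integral, here is a minimal numerical sketch (the Gaussian transmittance and flat spectrum below are made up for illustration, not real sensor data):

```python
import numpy as np

# Hypothetical transmittance x_bar and spectral distribution S,
# sampled every 1 nm over 400-800 nm.
wl = np.linspace(400, 800, 401)
x_bar = np.exp(-0.5 * ((wl - 550) / 40) ** 2)   # made-up Gaussian transmittance
S = np.ones_like(wl)                            # flat (equal-energy) spectrum

# X = ∫ x_bar(λ) · S(λ) dλ, approximated by a Riemann sum
X = np.sum(x_bar * S) * (wl[1] - wl[0])
print(X)
```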

So whenever we are talking about spectra, all notion of colour should be removed. Otherwise, people start searching for meaning in names instead of searching for it in the concepts those names point at.

While this is strictly true of course, it may still be useful to have a simple term to indicate what the spectral response of your color filter mostly refers to. It’s always been my impression that the range of the spectrum belonging to each of the ‘R’, ‘G’ and ‘B’ color filters is clustered around a wavelength that, in pure form, would be considered red, green or blue. Right?

Edit: if you want to distinguish a dummy from an expert in these matters, which question would you ask them?

Beware that the [EDIT: some, but not all] response curves in Christian Mauer’s thesis are normalised so each one peaks at 1.0. If we are asking why G values are greater than B or R, we should look at unnormalised graphs, eg in https://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W25/papers/Prasad_Quick_Approximation_of_2013_ICCV_paper.pdf . The area under each curve represents the value recorded in that channel when the received light has equal power across all wavelengths.

From those graphs, we see that cameras vary, but the G channel encloses an area much larger than B or R. This is why, if we apply no multipliers to the channels, we get a green image.
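A toy demonstration of that point, with made-up Gaussian response curves (not measured data; only the relative areas matter):

```python
import numpy as np

wl = np.linspace(400, 800, 401)   # wavelengths, 1 nm steps

def response(peak, width, height):
    """Made-up Gaussian stand-in for an unnormalised CFA response curve."""
    return height * np.exp(-0.5 * ((wl - peak) / width) ** 2)

# G is broader and taller than R or B, as in typical unnormalised curves.
curves = {"R": response(600, 30, 0.7),
          "G": response(530, 55, 1.0),
          "B": response(460, 30, 0.6)}

# With equal power at every wavelength, each channel records the area
# under its own response curve.
for name, c in curves.items():
    print(name, round(np.sum(c) * (wl[1] - wl[0]), 1))
# G's area comes out largest, so an image with no channel
# multipliers renders green.
```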

It isn’t because we have twice as many green sensors as blue or red.

On “whenever we are talking about spectra, all notion of colour should be removed”, hmm, well. Grass in my garden is green. True, it isn’t monochromatic, it is a spectrum. But I will still call it green.

So, when we talk about the CFAs, we should then refer to the low-bandpass, mid-bandpass, and high-bandpass filters? Okay…

In measurement terms relevant to the mechanisms that produce renditions we interpret as “color-full”, those wavelengths exist in both the incident and, more importantly, the reflected light that the camera captures. They’re out there, wavelengths of light, and they need to be teased apart to produce renditions that are faithful to our perception. I get what you’re after: not confusing light with color. But we need to be mindful of the mechanisms that help us map that light to color…

To give an example corresponding to the spectral response of a colour sensor: in astronomy, we often have these response curves published for the cameras we use:

[image: camera spectral response curves]

Figure 3.4 does (“scaled to unity”); Figure 4.5 is “normalized to 1.0”.

No, I really think mixing single-wavelength-related concepts with spectra-related concepts messes up people’s minds. Proof is here. Call that ABC space or lightspectrum tier 1-2-3, but RGB hurts. They are two different mindsets with uncorrelated effects on colour perception. A spectrum with a large bump around the single-wavelength locus of green will not necessarily be perceived as green.

Not sure.

You are completely missing the point. Green is what happens when you recombine the tristimulus in your brain. The point is about no longer calling one component of that tristimulus “green” on its own. Because, even if that could make sense when the tristimulus is an LMS or XYZ cone response, it absolutely doesn’t if you are in a random CFA output space.

And yet they all fail at predicting a lot of things, especially in HDR, so all we can do is map light to light and forget about colour.

If I took my camera apart, and looked closely at the CFA with a daylight back-lit source, I suspect the filters would look red, green or blue. Wouldn’t they?

While I do appreciate that I sparked this amazingly scientific conversation, what I think I should have asked is whether there is a better workflow for comparing my edits against the camera’s white balance than going to the drop-down and switching between “camera/as shot” and “custom”, as I find that a little cumbersome.

Thanks for the correction. I hadn’t got that far in Christian Mauer’s thesis. True, only some of the graphs are normalised.

If I look at a stopped clock at the right moment, it still looks accurate, doesn’t it?

Beware of what you think you see. Besides, looking at a filter is quite different from looking through it.

Unfortunately not. You can’t escape white balance scaling; it’s a fundamental part of image processing, whether you hide it in the software or expose it to the user.
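To illustrate what can’t be escaped: at its core, white balance scaling is just per-channel multipliers applied to the raw data, along the lines of this minimal sketch (the multiplier values are illustrative, not from any real camera):

```python
import numpy as np

def apply_white_balance(rgb, multipliers):
    """Scale each channel of an (H, W, 3) float image by its WB multiplier."""
    return rgb * np.asarray(multipliers, dtype=rgb.dtype)

# R and B get boosted relative to G, compensating for the smaller areas
# under their response curves (values here are illustrative only).
raw_rgb = np.random.rand(4, 4, 3).astype(np.float32)
balanced = apply_white_balance(raw_rgb, [2.0, 1.0, 1.6])
```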

Other programs like Lightroom or Capture One allow the user to do this by just bypassing the white balance module. That’s why I sort of expected things to be the same in RT and DT.

darktable uses the white balance as shot by the camera by default; I don’t know what you are talking about. It’s taken from the EXIF metadata if Exiv2 can read it.
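You can check what the camera stored yourself, e.g. with the rawpy bindings to LibRaw (the file name below is a placeholder, not a real file):

```python
import rawpy

# "IMG_0001.CR2" is a placeholder path to any raw file.
with rawpy.imread("IMG_0001.CR2") as raw:
    # As-shot multipliers recorded by the camera, one per CFA channel.
    print(raw.camera_whitebalance)
    # Demosaic using those as-shot multipliers.
    rgb = raw.postprocess(use_camera_wb=True)
```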

Have you tried using the editing history and/or snapshots to compare different settings?

I don’t quite understand how they process it, since WB correction needs to happen… do you have a link to a video where such a thing is demonstrated? I want to see what’s actually happening.

Yeah, but it’s back-lit. The light that comes through a “green” filter looks green. I don’t see what’s wrong with calling it green.

Sure, “green” (like any other colour) is a highly imprecise term, and subjective, and so on. But if a bandpass filter lets through wavelengths that we call green, and massively cuts wavelengths that we call blue and red, then deliberately not calling this a “green” filter seems weird.