Why does turning off the white balance module give a green image?

While this is strictly true of course, it may still be useful to have a simple term to indicate what the spectral response of your color filter mostly refers to. It’s always been my impression that the range of the spectra belonging to the ‘R’, ‘G’ and ‘B’ color filters is clustered around a wavelength that, in pure form, would be considered red, green or blue. Right?

Edit: if you want to distinguish a dummy from an expert in these matters, which question would you ask them?

Beware that the response curves in Christian Mauer’s thesis [EDIT: some, but not all of them] are normalised so each one peaks at 1.0. If we are asking why G values are greater than B or R, we should look at unnormalised graphs, e.g. in https://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W25/papers/Prasad_Quick_Approximation_of_2013_ICCV_paper.pdf . The area under each curve represents the value recorded in that channel when the received light has equal power across all wavelengths.

From those graphs, we see that cameras vary, but the G channel encloses an area much larger than B or R. This is why, if we apply no multipliers to the channels, we get a green image.
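As a toy illustration of that “area under the curve” point, here is a minimal Python sketch. The sensitivity curves are entirely made up, not taken from any real sensor or from the papers above; they just give the G curve the largest area, as the real graphs do:

```python
import numpy as np

# Hypothetical, made-up sensitivity curves -- NOT real camera data --
# just to illustrate the "area under the curve" argument.
wl = np.arange(400.0, 701.0, 10.0)            # wavelengths in nm
r = 0.8 * np.exp(-((wl - 600) / 40) ** 2)     # long-wavelength ("R") filter
g = 1.0 * np.exp(-((wl - 530) / 55) ** 2)     # mid-wavelength ("G") filter
b = 0.7 * np.exp(-((wl - 460) / 35) ** 2)     # short-wavelength ("B") filter

illuminant = np.ones_like(wl)                 # equal power at every wavelength
for name, curve in (("R", r), ("G", g), ("B", b)):
    # crude integral: sensitivity times power, summed over the 10 nm steps
    value = np.sum(curve * illuminant) * 10.0
    print(name, value)                        # G comes out largest
```

With equal-energy light, the channel with the largest enclosed area records the largest value, hence the green cast before any multipliers are applied.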

It isn’t because we have twice as many green sensors as blue or red.

On “whenever we are talking about spectra, all notion of colour should be removed”, hmm, well. Grass in my garden is green. True, it isn’t monochromatic, it is a spectrum. But I will still call it green.


So, when we talk about the CFAs, we should then refer to the low-bandpass, mid-bandpass, and high-bandpass filters? Okay…

In measurement terms relevant to the mechanisms that produce renditions we interpret as “color-full”, those wavelengths exist in both the incident and, more importantly, the reflected light that the camera captures. They’re out there, wavelengths of light, and they need to be teased apart to produce renditions that are faithful to our perception. I get what you’re after, not confusing light with color. But we need to be mindful of the mechanisms that help us map that light to color…

For an example of the spectral response of a colour sensor, in astronomy we often have curves like these for our cameras:


Figure 3.4 does, “scaled to unity”; Figure 4.5 is “normalized to 1.0”.

No, I really think mixing single-wavelength-related concepts with spectra-related concepts messes up people’s minds. Proof is here. Call it ABC space or light-spectrum tier 1-2-3, but RGB hurts. They are two different mindsets with uncorrelated effects on colour perception. A spectrum with a large bump around the single-wavelength locus of green will not necessarily be perceived as green.

Not sure.

You are completely missing the point. Green is what happens when you recombine the tristimulus in your brain. The point is about no longer calling one component alone of that tristimulus “green”. Because, even if that could make sense when the tristimulus is an LMS or XYZ cone response, it absolutely doesn’t if you are in a random CFA output space.

And yet they all fail at predicting a lot of things, especially in HDR, so all we can do is map light to light and forget about colour.

If I took my camera apart, and looked closely at the CFA with a daylight back-lit source, I suspect the filters would look red, green or blue. Wouldn’t they?


While I do appreciate that I sparked this amazingly scientific conversation, what I think I should have asked is whether there is a better workflow for comparing my edits against the camera’s white balance than going to the drop-down and switching between “camera/as shot” and “custom”, as I find that a little cumbersome.

Thanks for the correction. I hadn’t got that far in Christian Mauer’s thesis. True, only some of the graphs are normalised.

If I look at a stopped clock at the right moment, it still looks accurate, doesn’t it?

Beware of what you think you see. Besides, looking at a filter is quite different from looking through it.


Unfortunately not. You can’t escape white balance scaling; it’s a fundamental part of image processing, whether you hide it in the software or expose it to the user.
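To be concrete about what that scaling is, here is a minimal Python sketch; the multiplier values are made up, purely illustrative:

```python
import numpy as np

# White balance scaling is just a per-channel multiplication of the
# (demosaiced) data. Multiplier values below are hypothetical.
rgb = np.random.rand(4, 4, 3)       # stand-in for a demosaiced image
wb = np.array([2.1, 1.0, 1.6])      # daylight-ish R, G, B multipliers
balanced = rgb * wb                 # broadcasts over the channel axis
```

Some multiplication like this happens somewhere in every raw pipeline; the only question is whether the user gets to see and change the numbers.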


Other programs like Lightroom or Capture One allow the user to do this by just bypassing the white balance module. That’s why I sort of expected things to be the same in RT and DT.

darktable uses the white balance as shot by the camera by default; I don’t know what you are talking about. It’s taken from the EXIF metadata if Exiv2 can read it.


Have you tried using the editing history and/or snapshots to compare different settings?

I don’t quite understand how they process it, since WB correction needs to happen… do you have a link to a video where such a thing is demonstrated? I want to see what’s actually happening.

Yeah, but it’s back-lit. The light that comes through a “green” filter looks green. I don’t see what’s wrong with calling it green.

Sure, “green” (like any other colour) is a highly imprecise term, and subjective, and so on. But if a bandpass filter lets through wavelengths that we call green, and massively cuts wavelengths that we call blue and red, then deliberately not calling this a “green” filter seems weird.

The camera records a set of white balance multipliers in the metadata, based on the settings you chose. What goes on behind those settings can be a bit of black magic, something you set, or in some cameras you can actually measure a neutral patch in the scene lighting and the camera will calculate the multipliers. However they are determined by the camera, those numbers are what most refer to as “as-shot”. Then, the software can read them out of the metadata and apply them to the image.

I don’t know this for certain, but it appears a lot of software automatically applies the “as-shot” multipliers. You are then left to regard the image and apply whatever corrections you deem necessary. You might be given the opportunity to modify the “as-shot” numbers, or do a separate, subsequent correction with other numbers.

In my software, the white balance tool is in the tool chain just like any other tool and available for modification, so I can modify the original numbers. But what I really try to do is shoot the scene with the best set of numbers I can determine at the time, so I use the camera’s WB measurement tool.
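If anyone wants to poke at the “as-shot” multipliers themselves, here is a minimal sketch using the rawpy library (Python bindings to LibRaw); the filename is just a placeholder:

```python
import rawpy

with rawpy.imread("IMG_0001.CR2") as raw:   # placeholder filename
    # The "as shot" multipliers the camera wrote into the metadata.
    print("as-shot multipliers:", raw.camera_whitebalance)
    # Render once with the camera's WB applied...
    as_shot = raw.postprocess(use_camera_wb=True)
    # ...and once with unity multipliers: this is the green-looking image.
    unity = raw.postprocess(user_wb=[1.0, 1.0, 1.0, 1.0])
```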

I meant: what actually happens in Lightroom or Capture One, as mentioned? I know full well how WB in darktable works, and it does indeed set “as shot” by default, but disabling it just spews non-modified data (post-demosaicing), so it’s never a good idea to disable it :slight_smile:

If a user wants to quickly move between “as shot” and their own settings, @hannoschwalm recently created a feature where that’s possible :slight_smile:

Sorry, I was probably speaking more to the audience than you… :slight_smile: