Why does turning off the white balance module give a green image?

If I took my camera apart, and looked closely at the CFA with a daylight back-lit source, I suspect the filters would look red, green or blue. Wouldn’t they?


While I do appreciate that I sparked this amazingly scientific conversation, what I think I should have asked is: is there a better workflow for comparing my edits against the camera’s white balance than going to the drop-down and switching between “camera/as shot” and “custom”? I find that a little cumbersome.

Thanks for the correction. I hadn’t got that far in Christian Mauer’s thesis. True, only some of the graphs are normalised.

If I look at a stopped clock at the right moment, it still looks accurate, doesn’t it?

Beware of what you think you see. Besides, looking at a filter is quite different from looking through it.


Unfortunately not. You can’t escape white balance scaling; it’s a fundamental part of image processing, whether you hide it in the software or expose it to the user.


Other programs like Lightroom or Capture One allow the user to do this by just bypassing the white balance module. That’s why I sort of expected things to work the same in RT and DT.

darktable uses the white balance as shot by the camera by default, so I don’t know what you are talking about. It’s taken from the EXIF metadata, provided Exiv2 can read it.


Have you tried using the editing history and/or snapshots to compare different settings?

I don’t quite understand how they process it, since WB correction needs to happen somewhere… do you have a link to a video where such a thing is demonstrated? I want to see what’s actually happening.

Yeah, but it’s back-lit. The light that comes through a “green” filter looks green. I don’t see what’s wrong with calling it green.

Sure, “green” (like any other colour) is a highly imprecise term, and subjective, and so on. But if a bandpass filter lets through wavelengths that we call green, and massively cuts wavelengths that we call blue and red, then deliberately not calling this a “green” filter seems weird.

The camera records a set of white balance multipliers in the metadata, based on the settings you chose. What goes on behind those settings can be a bit of black magic: an automatic estimate, a preset you selected, or, on some cameras, an actual measurement of a neutral patch under the scene lighting from which the camera calculates the multipliers. However they are determined by the camera, those numbers are what most people refer to as “as-shot”. The software can then read them out of the metadata and apply them to the image.

I don’t know this for certain, but it appears a lot of software automatically applies the “as-shot” multipliers. You are then left to inspect the image and apply whatever corrections you deem necessary. You might be given the opportunity to modify the “as-shot” numbers, or to do a separate, subsequent correction with other numbers.

In my software, the white balance tool sits in the tool chain just like any other tool and is available for modification, so I can modify the original numbers. But what I really try to do is shoot the scene with the best set of numbers I can determine at capture time, so I use the camera’s WB measurement tool.
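To make the “apply the as-shot multipliers” step concrete, here’s a minimal numpy sketch. The multiplier values are made up for illustration (real ones come out of the camera’s metadata, e.g. via Exiv2), and `apply_white_balance` is a hypothetical helper, not any particular program’s actual code:

```python
import numpy as np

# Hypothetical "as-shot" multipliers as read from the metadata.
# Values are invented for a daylight shot: red and blue get
# boosted relative to green, which the sensor favours.
wb_multipliers = {"R": 2.1, "G": 1.0, "B": 1.5}

def apply_white_balance(rgb, multipliers):
    """Scale each channel by its multiplier, clipping to [0, 1]."""
    scale = np.array([multipliers["R"], multipliers["G"], multipliers["B"]])
    return np.clip(rgb * scale, 0.0, 1.0)

# A uniform grey patch as the *unbalanced* sensor data sees it:
# green dominates, which is exactly the green cast under discussion.
unbalanced_grey = np.full((4, 4, 3), [0.30, 0.63, 0.42])
balanced = apply_white_balance(unbalanced_grey, wb_multipliers)
# After scaling, all three channels agree: the patch is neutral grey.
```

With these particular numbers every channel lands on the same value after scaling, so the grey patch comes out neutral.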

I meant: what actually happens in the Lightroom or Capture One mentioned? I know full well how WB in darktable works, and it does indeed set “As Shot” by default, but disabling it just spews out unmodified data (post-demosaicing), so it’s never a good idea to disable it :slight_smile:

If a user wants to quickly move between “as shot” and their own settings, @hannoschwalm recently created a feature where that’s possible :slight_smile:

Sorry, I was probably speaking more to the audience than you… :slight_smile:

I was a Lightroom Classic user for many years and I can’t remember it working any differently from DT and RT. The WB defaults to As Shot and you can then choose to change to a different profile like Daylight, Tungsten, Cloudy, etc. if you want.

Why does turning off the white balance module give a green image?

The answer is simple: because a non-white-balanced image looks like that.

The problem is that the term ‘white balance’ is ambiguous:
Many people open an image in JPG format with an editor like Photoshop or Gimp. They see a part of the image that should be grey but has a green tint, and they say ‘this image is not white balanced yet’.
Then they use tools like an RGB curve or RGB levels to make the grey areas neutral and say ‘now this image is white balanced’.
But what these people have actually done is a ‘color correction’.

‘White balance’ is something completely different:
Let’s say you take a photo of a grey wall. Every photosensor (one per pixel) receives the same amount of light and should therefore record the same brightness. Converting this into a grayscale image should theoretically result in a picture with a constant level of brightness.
But in practice the resulting image is not a flat grey area; it looks like a raster. The pixels alternate between dark and light (see figure 5 in the link below).

The reason is the color filter array above the photosensors. They are covered with red, green and blue overlays, each responsive to a different part of the light spectrum: the red filters are most responsive to light in the red part of the spectrum, the green ones to green, and the blue ones to blue.

This array is needed to estimate the colors in the image (demosaicing).
But before that, the white levels need to be balanced (see figure 6 in the link below).
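As a toy illustration of that last step (made-up numbers, not darktable’s actual code): on a uniform grey wall, the raw RGGB Bayer mosaic shows the alternating raster described above, and scaling each filter site by its channel’s white-balance multiplier flattens it out before demosaicing even runs.

```python
import numpy as np

# A toy 4x4 RGGB Bayer mosaic of a uniform grey wall.  Each site
# records only one channel; green sites read higher because the
# green filters pass more of the light, producing the "raster"
# of alternating brightness described above.
mosaic = np.array([[0.30, 0.63, 0.30, 0.63],   # R G R G
                   [0.63, 0.42, 0.63, 0.42],   # G B G B
                   [0.30, 0.63, 0.30, 0.63],   # R G R G
                   [0.63, 0.42, 0.63, 0.42]])  # G B G B

# Hypothetical per-channel white-balance multipliers (invented values).
r_mul, g_mul, b_mul = 2.1, 1.0, 1.5

# Build a multiplier map matching the RGGB pattern and scale the
# mosaic *before* demosaicing.
wb_map = np.tile([[r_mul, g_mul],
                  [g_mul, b_mul]], (2, 2))
balanced = mosaic * wb_map
# After balancing, every site reads the same value: the raster is gone,
# and the demosaicer now sees a genuinely flat grey wall.
```

With these invented sensitivities, all sites land on one value after scaling, which is exactly why the balancing has to happen before the demosaicer tries to estimate colors.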