White balance and JPEG

Thanks Morgan.
Last question and then I close here :slightly_smiling_face:

About the

compensate for the different photosite count under the color filter

statement: so the WB module knows how the CFA is made, be it Bayer, X-Trans or something else, right?

@dafrasaga the implementation differs between programs. In RawTherapee I assume it’s CFA-agnostic and just balances the channels in some space, probably RGB. @heckflosse @jdc would know more.

I suppose you want to know the connection between the various white balance tags (like WB_RGGBLevels, RedBalance, BlueBalance) and what actually happens. Well, again I’m just guessing, and my guess is that there is no connection: the tags are ignored because most raw formats don’t have them.

From my Nikon D7000, with exiftool -G:

[MakerNotes]    WB RB Levels                    : 2.09765625 1.31640625 1 1
[MakerNotes]    WB GRBG Levels                  : 256 537 337 256

The RB tag has the numbers as multipliers, in Red, Blue, Green, Green order; the two Greens are there for completeness, I guess.

The GRBG tag has what I’ll call for lack of a better term ‘relative references’, using Green as the anchor. If you divide the R and B numbers by 256, you’ll get multiplier values like in the RB tag.
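To illustrate that arithmetic, here’s a minimal Python sketch using the D7000 tag values above (nothing camera-specific beyond those numbers):

    # WB GRBG Levels from the D7000 example, in G, R, B, G order
    g1, r_level, b_level, g2 = 256, 537, 337, 256

    # Divide by the green anchor to get the multipliers
    r_mult = r_level / g1   # 537 / 256 = 2.09765625
    b_mult = b_level / g1   # 337 / 256 = 1.31640625

    print(r_mult, b_mult)   # matches the WB RB Levels tag above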

The multipliers are what the camera asserts it takes to multiply the R and B channels by to make a correct reference to ‘white’. Green is usually used as the ‘reference’, but you could make multipliers that use either of the other channels’ values and get a decent, if exposure-shifted, image.

Really the actual temperature of the light is only in play until the scene is measured; after that, it’s R, G, and B, and what it takes to shift them into some notion of white. When you play with temp/tint in a raw processor, it has to be turned into RGB multipliers to be inflicted upon the image. In post-processing, temp/tint is just an abstraction that I think confuses what you’re really after, which is to make R=B=G in any pixel that is supposed to be neutral. To my myopic thinking, the only good assertion of white balance comes from a patch in the scene that you want to be white. After that, it’s just coarse assumptions and messing-around…
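As a rough sketch of that “patch in the scene” idea (the patch values here are made up, and real software works on the linear raw data with more care), the multipliers just come from forcing the patch to R=G=B with green as the anchor:

    # Made-up average linear values sampled from a patch that should be neutral
    r, g, b = 0.31, 0.65, 0.49

    r_mult = g / r          # boost red until it matches green
    b_mult = g / b          # boost blue until it matches green

    # Applying them makes the patch neutral: R = G = B
    print(r * r_mult, g, b * b_mult)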


Why should white balance be influenced by the number of pixels per channel? The signal level in each photosite depends only on the illumination level and not on the number of neighbouring photosites. It is the histograms that are, to my understanding, being shifted to make their center of “gravity” the same for each channel, at least for one WB algorithm (grey world).

Hermann-Josef

…do these multiplications occur before or after the demosaicing? :confused:

Either, depending on the software. They need to be applied before the image data is tone-mapped to something other than its linear relationship.
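A quick made-up illustration of why the order matters: applying the same multiplier before or after gamma encoding gives quite different results, so the balancing has to happen while the data is still linear.

    # Made-up linear red value and a typical-looking red multiplier
    r_linear, r_mult = 0.20, 2.097
    gamma = 2.2

    balanced_then_encoded = (r_linear * r_mult) ** (1 / gamma)   # balance in linear, then encode
    encoded_then_balanced = (r_linear ** (1 / gamma)) * r_mult   # encode first, then balance

    print(balanced_then_encoded, encoded_then_balanced)   # ~0.67 vs ~1.01 (out of range)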

I recently did just this experiment, for another thread:

Let’s go deeper. When light hits the sensor, it can be measured via the charge generated by the photoelectric effect. A measurement is made at every photosite representing a pixel or subpixel. It doesn’t matter how the sensor and circuitry were designed; all you get is a set of signals that could mean anything until the hardware and firmware translate them into something meaningful. Even then we need things like white balance, profiling, calibration, bias, transforms, etc., to get the colours we want.


At the simplest level, the Bayer pattern is usually RGGB. Since we have more Gs, it would make sense not to multiply them up to a larger value. We have fewer Rs and Bs, so we need to make them brighter to match the Gs in intensity. Of course, we don’t just multiply haphazardly: we want to do it so that the outcome has some semblance of the colour we are trying to replicate with our cameras.

Ninja edit: this might have come out the wrong way. Read on to find out why. No more writing from me!

…and if the CFA is X-Trans, the WB module would consider the 2.5 green/blue-red ratio, right?

I am unfamiliar with X-Trans. Maybe it is abstracted so we treat it the same as RGGB in terms of WB multipliers. It also depends on the raw developer. WB can be controlled differently in each app. E.g., some expose 3-4 multipliers while others use temp / tint. Even temp / tint is handled differently among apps.

This is not correct! Imagine that demosaicing creates three separate grey-scale images, one for each channel. The signal you measure in the G-band is not stronger than in the other bands because there are more pixels. It may be stronger because of the spectral sensitivity. The latter has to be corrected for by WB, not the number of pixels!

Hermann-Josef

Oops, I didn’t mean it that way. Maybe I should get some rest first.

What I meant was that the photosite wells fill up at different rates due to their sensitivities. Also, the count and location of filtered pixels do matter, but not in a way that directly affects the WB. I suppose the pixel discrepancy is corrected by the demosaicing and other raw processing steps. Anyway, you should probably do the talking as you may know more. :wink:

Auto white balance calculation needs to take the relation into account when running over the raw (CFA) data. White balance itself does not. Maybe that caused confusion…
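Something like the following made-up grey-world sketch over an RGGB mosaic (not how any particular program actually does it) shows where the relation comes in: green covers twice as many photosites, so you have to average per channel rather than just sum.

    import numpy as np

    def grey_world_multipliers(cfa):
        # Slice the RGGB mosaic into its channels; green gets two sites per 2x2 tile
        r = cfa[0::2, 0::2]
        g = np.concatenate([cfa[0::2, 1::2].ravel(), cfa[1::2, 0::2].ravel()])
        b = cfa[1::2, 1::2]

        # Per-channel means normalise away the differing pixel counts;
        # plain sums would make green look about twice as strong.
        return g.mean() / r.mean(), 1.0, g.mean() / b.mean()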


Just to make sure we were making hay, I went back and read your original question. It’s probably worth describing a few elemental things, then re-visit your question in those terms.

For all the hoo-ha about WB, it is simply about adjusting the red, green, and blue values of the image so that something that we want to be neutral, is. Specifically, for that thing’s R,G, and B values to conform to R=G=B.

So, if you take the pixel at a place you want to be white, and you see the following values (we’ll use 8-bit JPEG values for simplicity): R=122, G=256, B=194, multiplying R x 2.097 and B x 1.316 will make all three channels of that pixel 256=R=G=B. White!
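A quick sketch of that arithmetic in Python:

    r, g, b = 122, 256, 194
    r_mult, b_mult = 2.097, 1.316

    print(r * r_mult, g, b * b_mult)   # ~255.8, 256, ~255.3: close enough to R=G=B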

Now, the camera puts some multipliers in the metadata it thinks will do the job for the camera data. They may not be perfect, but they need to be applied to the raw data before it is moved out of “linear”, its original energy relationship. Really, it can be done either before or after demosaic, but most folk that know better than I advocate before. If they’re not perfect, most raw software lets you change them to achieve that goal, R=G=B for some neutral thing in the scene.

To your original question, camera-determined white balance is rarely “perfect”. So one can either change those numbers, or do a separate set of multipliers on top of the originally applied ones. This is what you’d be doing to a JPEG or TIFF, correcting a less-than-optimal original camera white balance application. I prefer to modify the original numbers if possible, as doing all these back-and-forth multiplications starts to mess with the image’s depicted colors. But, if you need to…

Hope this helps.

You mean B not G. I took a nap, so I am a little sharper now.


White balance is a tricky thing. Our eyes white balance differently than the camera, so even with advanced post-WB, it will likely not match the image we saw with our eyes+mind at every brightness and every point of the scene.

One thing to note is that a camera may employ any number of strategies in finding the WB. It could be based on various technologies and it could be based on the previous frame(s). Choice of innards and branding could be factors as well. The latter is a biggie. There is the Leica, Nikon, Canon, Fuji, Sony, etc., etc. look that people love or love to hate. Some cameras have more subtle biases, such as giving warmer tones to certain skin types. Though not related to WB per se, it is part of what happens in-camera to generate a JPEG.

You mean R and B :wink:

Ha ha ha! Read it again. B not G is right… I hope.

R&B it is. I also corrected my use of the asterisk for multiplication; discuss.pixls.us ate them…

@heckflosse What do you mean by “relation” in this context? I thought that auto white balance does an analysis of the histogram in each channel and then adjusts the highlight point and shadow point accordingly.

I do not have raw camera data available but only scans from my slides. The principle should be the same. Here is what I mean for an image with a strong blue cast:
Histograms of the raw data:
[image: raw-data histograms]
Histogram after auto color correction:
[image: histogram after auto color correction]
And here are the corresponding images:
[image: image before correction]
[image: image after correction]
Am I totally wrong with this interpretation?

Hermann-Josef

@Jossie If you don’t take into account that (for Bayer) there are twice as many green pixels as red or blue pixels, your green histogram will be too high.

@heckflosse Of course, the occupation in all bins will be doubled if you double the number of pixels, but the “center of gravity” of the histogram will not be changed by that.

Hermann-Josef
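Both points can be seen in a tiny made-up numpy sketch: under perfectly neutral light, the raw per-channel sums over a Bayer mosaic are skewed by the doubled green count, while the per-channel means (the histograms’ centers of gravity) are not.

    import numpy as np

    # Bayer (RGGB) mosaic lit by neutral, slightly noisy light
    rng = np.random.default_rng(0)
    cfa = rng.normal(0.5, 0.05, size=(100, 100))

    r = cfa[0::2, 0::2]
    g = np.concatenate([cfa[0::2, 1::2].ravel(), cfa[1::2, 0::2].ravel()])
    b = cfa[1::2, 1::2]

    print(g.sum() / r.sum())    # ~2: sums are skewed by the pixel count
    print(r.mean(), g.mean())   # both ~0.5: the means are not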