I don’t know if this question is stupid, but I’m investigating WB these days and trying to understand when it’s applied… What I think I have understood is that WB is applied to RAW data; it’s simply an amplification of the RAW data to compensate for the green-to-red/blue pixel ratio plus the color of the light.
Why do I find the WB tool active when I open a JPEG or TIFF in RT?
White balance is a simple multiplier; you can apply it to mosaiced and non-mosaiced data.
Two examples from Canon and Sony are here:
Structure is: ‘manufacturer’ ‘model’ ‘red’ ‘green’ ‘blue’ ‘green’
…Structure is: ‘manufacturer’ ‘model’ ‘red’ ‘green’ ‘blue’ ‘green’…
Hi, can you explain this statement in more detail?
Most cameras have a ‘Bayer filter’ on their sensors.
The pixels are covered with colored filters grouped in blocks of 4: red, green, blue, green.
These are needed to estimate colors from the sensor data, which can only measure brightness.
This is ‘demosaicing’.
Before demosaicing, the camera applies correction factors to the 3 (or 4) color channels; this is the white balance.
White balance is a simple multiplier; you can apply it to mosaiced and non-mosaiced data.
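A minimal sketch of that idea in Python (the multiplier values here are made up for illustration): the same per-channel multipliers can be applied either to a single-plane RGGB mosaic or to a demosaiced 3-channel image.

```python
import numpy as np

# Hypothetical white-balance multipliers (R, G, B); green is the anchor.
WB = {"R": 2.0, "G": 1.0, "B": 1.5}

def wb_mosaiced(cfa):
    """Apply WB to a single-plane RGGB Bayer mosaic.

    RGGB layout: row 0 = R G R G..., row 1 = G B G B...
    """
    out = cfa.astype(np.float64).copy()
    out[0::2, 0::2] *= WB["R"]  # red photosites
    out[1::2, 1::2] *= WB["B"]  # blue photosites
    # green photosites keep multiplier 1.0 (the anchor)
    return out

def wb_demosaiced(rgb):
    """Apply the same multipliers to a demosaiced H x W x 3 image."""
    m = np.array([WB["R"], WB["G"], WB["B"]])
    return rgb.astype(np.float64) * m

cfa = np.ones((4, 4))            # flat grey mosaic
rgb = np.ones((2, 2, 3))         # flat grey demosaiced image
print(wb_mosaiced(cfa)[0, 0])    # red site scaled: 2.0
print(wb_demosaiced(rgb)[0, 0])  # [2.0, 1.0, 1.5]
```

Either way, the operation is just a scalar multiply per channel, which is why it doesn’t care whether the data is mosaiced or not.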
What I understood (I think) is: on a Bayer sensor, to compensate for the 1:2 red-blue/green ratio, the luminance of the red and blue photosites is multiplied by two, and afterwards (but I don’t know where in the pipeline) the contribution of the light source (tungsten, sun, cloudy) is taken into account and subtracted to obtain a neutral white.
If I look at the Exif of RAW files, I find the WB RB Levels tag, which has four parameters: one each for red, blue, green, and green.
Hence I can’t imagine it being applied to the three channels of a JPEG.
What am I missing? Or am I completely off track?
White balancing is done not only to compensate for the different photosite counts under the color filter array, but also because human perception adapts to the illuminant while an electronic sensor does not. For the latter reason you might want to re-balance the channels.
Last question and then I close here
compensate for the different photosite count under the color filter
statement, then the WB module knows how the CFA is made; it could be Bayer, X-Trans or something else, right?
I suppose you want to know the connection between the various white balance tags (like
BlueBalance) and what actually happens. Well again I’m just guessing, and my guess is that there is no connection - the tags are ignored because most raw formats don’t have them.
From my Nikon D7000, with exiftool -G:
[MakerNotes] WB RB Levels   : 2.09765625 1.31640625 1 1
[MakerNotes] WB GRBG Levels : 256 537 337 256
The RB tag has the numbers as multipliers: Red, Blue, and then Green, Green for completeness, I guess.
The GRBG tag has what I’ll call for lack of a better term ‘relative references’, using Green as the anchor. If you divide the R and B numbers by 256, you’ll get multiplier values like in the RB tag.
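To make that division concrete, here is the arithmetic on the D7000 values quoted above (the 256 green anchor is taken from the post, the tag order is G R B G):

```python
# WB GRBG Levels from the D7000 MakerNotes, green anchored at 256.
grbg = [256, 537, 337, 256]        # order: G R B G
g, r, b = grbg[0], grbg[1], grbg[2]

r_mult = r / g   # 537 / 256 = 2.09765625
b_mult = b / g   # 337 / 256 = 1.31640625
print(r_mult, b_mult)  # matches the WB RB Levels tag exactly
```

The quotient reproduces the WB RB Levels values bit for bit, which supports the “relative references” reading.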
The multipliers are what the camera asserts it takes to multiply the R and B channels by to make a correct reference to ‘white’. Green is usually used as the ‘reference’, but you could make multipliers that use either of the other channels’ values and get a decent, if exposure-shifted, image.
Really the actual temperature of the light is only in play until the scene is measured; after that, it’s R, G, and B, and what it takes to shift them into some notion of white. When you play with temp/tint in a raw processor, it has to be turned into RGB multipliers to be inflicted upon the image. In post-processing, temp/tint is just an abstraction that I think confuses what you’re really after, which is to make R=B=G in any pixel that is supposed to be neutral. To my myopic thinking, the only good assertion of white balance comes from a patch in the scene that you want to be white. After that, it’s just coarse assumptions and messing-around…
Why should white balance be influenced by the number of pixels per channel? The signal level in each photo site does only depend on the illumination level and not on the number of neighbouring photo sites. It is the histograms that are, to my understanding, being shifted to make their center of “gravity” the same for each channel, at least for one algorithm of WB (grey world).
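A toy version of the grey-world algorithm mentioned above, with green as the anchor (the image values are invented for illustration): scale R and B until each channel’s mean equals the green mean.

```python
import numpy as np

def grey_world_multipliers(rgb):
    """Grey-world assumption: the scene averages to neutral, so derive
    multipliers that make every channel's mean equal the green mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means[1] / means  # [G_mean/R_mean, 1.0, G_mean/B_mean]

# Toy image with a warm cast: red runs hot relative to green and blue.
rgb = np.full((4, 4, 3), [200.0, 100.0, 50.0])
mult = grey_world_multipliers(rgb)
balanced = rgb * mult
print(mult)            # [0.5, 1.0, 2.0]
print(balanced[0, 0])  # [100.0, 100.0, 100.0]
```

Note this shifts the channel histograms, as described: only their relative means matter, not how many photosites contributed to each channel.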
…do these multiplications occur before or after the demosaicing
Either, depending on the software. They need to be applied before the image data is tone-mapped to something other than its linear relationship.
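A small numeric illustration of why the multipliers must be applied while the data is still linear (the gamma value and sample numbers are arbitrary): multiplication does not commute with a tone curve.

```python
gamma = 2.2   # arbitrary tone curve exponent for illustration
x = 0.2       # a linear sensor value, normalized to 0..1
m = 2.0       # a hypothetical WB multiplier

before = (x * m) ** (1 / gamma)   # balance in linear, then tone-map
after = (x ** (1 / gamma)) * m    # tone-map first, then balance
print(before, after)              # different results
```

Applying the multiplier after the gamma encoding over-brightens the channel, so the two orders produce visibly different colors.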
I recently did just this experiment, for another thread:
Let’s go deeper. When light hits the sensor, it can be measured via the charge generated by the photoelectric effect. A measurement is made at every photosite representing a pixel or subpixel. It doesn’t matter how the sensor and circuitry were designed; all you get is a set of signals that could mean anything until the hardware and firmware translate it into something meaningful. Even then we need things like white balance, profiling, calibration, bias, transforms, etc., to get the colours we want.
At the simplest level, the Bayer pattern is usually RGGB. Since we have more Gs, it makes sense not to multiply them to a larger value. We have fewer Rs and Bs, so we need to make them brighter to match the Gs in intensity. Of course, we don’t just multiply haphazardly: we want to do it so that the outcome has some semblance of the colour we are trying to replicate with our cameras.
Ninja edit: this might have come out the wrong way. Read on to find out why. No more writing from me!
…and if the CFA is X-Trans, the WB module would consider the 2.5 green/blue-red ratio, right?
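For reference, the 2.5 figure matches the X-Trans 6×6 tile, which contains 20 green, 8 red and 8 blue photosites; a quick arithmetic check:

```python
# Photosite counts per 6x6 X-Trans tile.
green, red, blue = 20, 8, 8
assert green + red + blue == 36   # the tile is fully covered
print(green / red)                # 2.5 green per red (and per blue)
```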
I am unfamiliar with X-Trans. Maybe it is abstracted so we treat it the same as RGGB in terms of WB multipliers. It also depends on the raw developer. WB can be controlled differently in each app. E.g., some expose 3-4 multipliers while others use temp / tint. Even temp / tint is differently handled among apps.
This is not correct! Imagine that demosaicing creates three separate grey-scale images, one for each channel. The signal you measure in the G-band is not stronger than in the other bands because there are more pixels. It may be stronger because of the spectral sensitivity. The latter has to be corrected for by WB, not the number of pixels!
Oops, I didn’t mean it that way. Maybe I should get some rest first.
What I meant was that the photosite wells fill up at different rates due to their sensitivities. Also, the count and location of filtered pixels does matter, but not in a way that straightforwardly affects the WB. I suppose the pixel discrepancy is corrected by demosaicing and other raw processing steps. Anyway, you should probably do the talking, as you may know more.
Auto white balance calculation needs to take the CFA layout into account when running over the raw (CFA) data. White balance itself does not. Maybe that caused the confusion…
Just to make sure we were making hay, I went back and read your original question. It’s probably worth describing a few elemental things, then re-visit your question in those terms.
For all the hoo-ha about WB, it is simply about adjusting the red, green, and blue values of the image so that something that we want to be neutral, is. Specifically, for that thing’s R,G, and B values to conform to R=G=B.
So, if you take the pixel at a place you want to be white, and you see the following values (we’ll use 8-bit values for simplicity): R=122, G=255, B=194, then multiplying R by about 2.09 and B by about 1.31 will bring all three channels of that pixel to roughly 255. R=G=B. White!
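That arithmetic, spelled out (the patch values are the hypothetical ones from the example): the multipliers are just the green value divided by the other channel values.

```python
# Hypothetical 8-bit reading of a patch that should be neutral.
r, g, b = 122, 255, 194

r_mult = g / r   # about 2.09
b_mult = g / b   # about 1.31

# After applying the multipliers, all three channels land on the green value.
print(round(r * r_mult), g, round(b * b_mult))  # 255 255 255
```

This is the whole trick: pick the anchor channel, divide, multiply, and the chosen patch becomes neutral.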
Now, the camera puts multipliers in the metadata that it thinks will do the job for the camera data. They may not be perfect, but they need to be applied to the raw data before it is moved out of “linear”, its original energy relationship. Really, it can be done either before or after demosaicing, but most folk who know better than I do advocate before. If they’re not perfect, most raw software lets you change them to achieve that goal: R=G=B for some neutral thing in the scene.
To your original question: camera-determined white balance is rarely “perfect”. So one can either change those numbers, or apply a separate set of multipliers on top of the originally applied ones. This is what you’d be doing to a JPEG or TIFF: correcting a less-than-optimal original camera white balance. I prefer to modify the original numbers if possible, as doing all these back-and-forth multiplications starts to mess with the image’s depicted colors. But, if you need to…
Hope this helps.
You mean B, not G. I took a nap, so I am a little sharper now.
White balance is a tricky thing. Our eyes white balance differently than the camera, so even with advanced post-WB, it will likely not match the image we saw with our eyes+mind at every brightness and every point of the scene.
One thing to note is that a camera may employ any number of strategies in finding the WB. It could be based on various technologies, and it could be based on the previous frame(s). Choice of innards and branding can be factors as well. The latter is a biggie. There is the Leica, Nikon, Canon, Fuji, Sony, etc. look that people love or love to hate. Some cameras have more subtle biases, such as giving warmer tones to certain skin types. Though not related to WB per se, it is part of what happens in-camera to generate a JPEG.