The short answer is that it searches for the combination of Red and Blue multipliers that produces the most variance in the Hue channel.
In more detail:
- Throw out pixels that are likely clipped at the high or low end, then subsample the rest to speed things up
- Convert the remaining pixels from RGB to HSV, keep the Hue channel
- Create a copy of the Hue channel and rotate the hues 180 degrees, so that at least one of the two copies has no sharp transition in the reds (where hue wraps from 360 degrees back to 0)
- Find the variance of each of those two images and take the minimum; this is the reference hue variance
- Apply candidate Red and Blue channel multipliers to the image and compute the hue variance with the procedure above. If the variance is greater, these Red and Blue multipliers are better.
- Repeat this procedure with a sensible search strategy to find the best Red and Blue channel multipliers (a rough Python sketch follows this list)
- Apply the multipliers to the original image.
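To make the idea concrete, here is a minimal sketch in Python/numpy. All the names, thresholds, and the grid search are my own illustration, not the actual GMIC script; it assumes a float RGB image in [0, 1] and uses a naive grid search where the real version uses something smarter:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

CLIP_LO, CLIP_HI = 0.02, 0.98   # assumed clipping thresholds
SUBSAMPLE = 8                   # keep every 8th pixel for speed

def usable_pixels(rgb):
    """Drop likely-clipped pixels, then subsample for speed."""
    flat = rgb.reshape(-1, 3)[::SUBSAMPLE]
    ok = (flat.max(axis=1) < CLIP_HI) & (flat.min(axis=1) > CLIP_LO)
    return flat[ok]

def hue_variance(rgb_pixels):
    """Variance of the Hue channel, accounting for the hue wrap-around."""
    h = rgb_to_hsv(rgb_pixels)[:, 0]   # hue in [0, 1)
    h_rot = (h + 0.5) % 1.0            # copy with hues rotated 180 degrees
    # The copy whose hues do not straddle the 0/360 seam gives the
    # smaller, more honest variance, so take the minimum of the two.
    return min(h.var(), h_rot.var())

def score(rgb_pixels, r_mult, b_mult):
    """Hue variance after applying candidate Red/Blue multipliers."""
    scaled = rgb_pixels * np.array([r_mult, 1.0, b_mult])
    return hue_variance(np.clip(scaled, 0.0, 1.0))

def find_multipliers(rgb, candidates=np.linspace(0.25, 4.0, 16)):
    """Naive grid search; the real script uses a smarter strategy."""
    px = usable_pixels(rgb)
    best = max(((score(px, r, b), r, b)
                for r in candidates for b in candidates))
    _, r_mult, b_mult = best
    return r_mult, b_mult

# Usage: multipliers found on the subsample, applied to the full image.
# rgb = ...  # H x W x 3 float image in [0, 1]
# r, b = find_multipliers(rgb)
# balanced = np.clip(rgb * np.array([r, 1.0, b]), 0.0, 1.0)
```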
I also have a version that shifts the ‘a’ and ‘b’ channels in Lab and then measures the hue variance as above.
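The Lab variant might look something like this, reusing `hue_variance()` and numpy from the sketch above. The use of skimage and the offset semantics are my assumptions, not taken from the original script:

```python
from skimage.color import rgb2lab, lab2rgb

def score_lab(rgb_pixels, a_shift, b_shift):
    """Hue variance after shifting the 'a' and 'b' channels in Lab."""
    lab = rgb2lab(rgb_pixels.reshape(1, -1, 3))
    lab[..., 1] += a_shift      # green-magenta axis
    lab[..., 2] += b_shift      # blue-yellow axis
    shifted = lab2rgb(lab).reshape(-1, 3)
    return hue_variance(np.clip(shifted, 0.0, 1.0))
```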
I’m not sure exactly how the variance is calculated; I just use a built-in feature of GMIC.
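Presumably it is just the standard (population) variance over the N hue values, which is what most image-statistics functions compute:

```latex
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\bigl(h_i - \bar{h}\bigr)^2,
\qquad
\bar{h} = \frac{1}{N}\sum_{i=1}^{N} h_i
```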
Edit: The idea comes from the fact that when the white balance is really off, as with this film negative, the image has a very narrow range of hues (in this case, orange). But we know that a correctly white-balanced image will contain a wide range of hues.