As I understand it, the auto WB method in RT tries to estimate the type of illuminant in the scene to get the correct white balance. My method, which is really a colour enhancement that works as a white balance in many cases, does not care about the Illuminati (er, I mean illuminant) but rather tries to maximise the range of hues in the image by searching for the Red and Blue channel multipliers that produce the most hue variance.
This does not try to achieve accurate colour, but many scenes reach their greatest hue variance at, or near to, the correct white balance. Some images end up with whites that look quite off-white.
The advantage of not caring about the illuminant or needing a white patch is that it can work on images with uncertain or mixed illumination and no white reference.
Of course, there are failure cases, particularly in scenes that do not have a wide range of hues.
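In code, I picture the search as something like this (a minimal sketch, not the actual implementation; I am assuming circular variance of the hue angle as the "hue variance" measure, and all the names are made up):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct RGB { float r, g, b; };

// HSV-style hue of one pixel, in radians.
static float hueOf(float r, float g, float b)
{
    const float mx = std::max({r, g, b});
    const float mn = std::min({r, g, b});
    const float d  = mx - mn;
    if (d <= 0.f) {
        return 0.f;                          // grey pixel: hue is undefined, treat as 0
    }
    float h;
    if (mx == r) {
        h = std::fmod((g - b) / d, 6.f);
    } else if (mx == g) {
        h = (b - r) / d + 2.f;
    } else {
        h = (r - g) / d + 4.f;
    }
    return h * 3.14159265f / 3.f;            // sextants -> radians
}

// Circular variance of hue after applying candidate red/blue multipliers
// (0 = every pixel has the same hue, 1 = hues spread around the whole circle).
static float hueSpread(const std::vector<RGB>& px, float rMul, float bMul)
{
    if (px.empty()) {
        return 0.f;
    }
    double c = 0.0, s = 0.0;
    for (const RGB& p : px) {
        const float h = hueOf(p.r * rMul, p.g, p.b * bMul);
        c += std::cos(h);
        s += std::sin(h);
    }
    const double R = std::sqrt(c * c + s * s) / px.size();
    return static_cast<float>(1.0 - R);
}

// Brute-force search over a plausible range of multipliers.
static void findMultipliers(const std::vector<RGB>& px, float& bestR, float& bestB)
{
    float best = -1.f;
    for (float r = 0.5f; r <= 3.0f; r += 0.05f) {
        for (float b = 0.5f; b <= 3.0f; b += 0.05f) {
            const float v = hueSpread(px, r, b);
            if (v > best) { best = v; bestR = r; bestB = b; }
        }
    }
}
```

In practice you would run this on a downscaled copy of the image, and a coarse-to-fine search would be much cheaper than a full grid like this.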
No, it does not try to estimate the type of illuminant; it assumes that the illuminant is correct (in terms of the Color Rendering Index).
Here is part of what I wrote in the other thread:
Instead of RGB channels, we use xyY, which is more relevant in terms of colorimetry.
And instead of “variance”, we compare samples taken on the one hand from the image and on the other hand from defined spectral colors; this comparison is realized by a “Student” test.
In the case of “autowb”, I compare a sufficient number of samples taken from more than 150 areas of the image against 200 reference colors.
This comparison is realized by changing “Temp”: each variation changes the xyY values of the image and the xyY values of the spectral data. The algorithm is complex and needs a lot of spectral data (200 sets) in the visible domain.
The best result is the one with the minimum Student value.
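Very roughly, the loop looks like this (a simplified sketch, not the actual RawTherapee code; `imagePatchesAt` and `spectralRefsAt` are placeholders for the real recomputation of the xyY values at each temperature, and only the x coordinate is compared here):

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct xyY { double x, y, Y; };

// Two-sample Student t statistic on one coordinate; the real comparison
// also looks at y and Y.
static double studentT(const std::vector<double>& a, const std::vector<double>& b)
{
    auto meanVar = [](const std::vector<double>& v, double& m, double& var) {
        m = 0.0;
        for (double x : v) m += x;
        m /= v.size();
        var = 0.0;
        for (double x : v) var += (x - m) * (x - m);
        var /= (v.size() - 1);
    };
    double ma, va, mb, vb;
    meanVar(a, ma, va);
    meanVar(b, mb, vb);
    return std::fabs(ma - mb) / std::sqrt(va / a.size() + vb / b.size());
}

// Scan candidate temperatures; the two callbacks stand in for the real work of
// recomputing the chromaticities of the image areas and of the spectral
// reference colors under the illuminant of that temperature.
double findBestTemperature(
    const std::function<std::vector<xyY>(double)>& imagePatchesAt,
    const std::function<std::vector<xyY>(double)>& spectralRefsAt)
{
    double bestTemp = 5000.0, bestT = 1e300;
    for (double temp = 2000.0; temp <= 12000.0; temp += 100.0) {
        std::vector<double> xi, xr;
        for (const xyY& p : imagePatchesAt(temp)) xi.push_back(p.x);
        for (const xyY& p : spectralRefsAt(temp)) xr.push_back(p.x);
        const double t = studentT(xi, xr);
        if (t < bestT) { bestT = t; bestTemp = temp; }   // best match = minimum Student value
    }
    return bestTemp;
}
```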
Unfortunately, this is all very new to me and I don’t have a firm understanding of what you have said. I need to do some homework and come back and read it again.
In fact, the algorithm works with any illuminant: the matrix product of the spectral data of the illuminant, the spectral data of the colors and the 2° observer is produced, and the calculation is iterated several times with xyY (image) and xyY (spectral data), for xy (red, blue) and Y (green).
Hence the name of the algorithm, “Itcwb” (Iterative temperature correlation white balance); I called it that because I think this acronym is closest to what the algorithm does.
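For one reference color under one illuminant, that spectral product amounts to something like this (a simplified sketch with made-up names, not the actual Itcwb code):

```cpp
#include <array>
#include <vector>

struct xyY { double x, y, Y; };

// Simplified version of the spectral product: for one reference color,
// integrate illuminant(λ) * reflectance(λ) * 2° observer over the visible
// range to get XYZ, then convert to xyY. All vectors are assumed to share
// the same wavelength grid (for example 380..730 nm in 5 nm steps).
xyY colorUnderIlluminant(const std::vector<double>& illuminant,                  // S(λ)
                         const std::vector<double>& reflectance,                 // R(λ)
                         const std::vector<std::array<double, 3>>& observer2deg) // x̄, ȳ, z̄
{
    double X = 0.0, Y = 0.0, Z = 0.0, norm = 0.0;
    for (std::size_t i = 0; i < illuminant.size(); ++i) {
        const double w = illuminant[i] * reflectance[i];
        X += w * observer2deg[i][0];
        Y += w * observer2deg[i][1];
        Z += w * observer2deg[i][2];
        norm += illuminant[i] * observer2deg[i][1];   // normalize so a perfect white gives Y = 1
    }
    X /= norm; Y /= norm; Z /= norm;

    const double sum = X + Y + Z;
    return { X / sum, Y / sum, Y };
}
```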
If the illuminant does not have a good CRI (some fluorescents, some LEDs), the calculations are done as if the CRI were good.
We can see the effectiveness of the algorithm, which of course depends on the CRI but also on the number of usable patches in the image (when the colors of the image have very little variation), by examining the “Student Itcwb” display.
This is a traditional coefficient in probability theory; values less than 0.05 indicate a high level of confidence.
I prefer to use the white balance to find a starting point with the most acceptable colors, and then selectively tone down the colors I don’t like. I typically use daylight unless the colors are too saturated, because the camera’s WB can be a bit overzealous.