I find automatic white balance useful in Darktable, especially when I can find a white/gray surface in the image. Even when I do not end up with the automatic setting, it serves as a useful starting point.
However, sometimes the algorithm gets over-enthusiastic and chooses the “custom” model, the one with the hue and chroma sliders. This happens especially with subjects under foliage. I recognize that these things need correction, but I would prefer to do them elsewhere, not in color calibration.
So what I am looking for is automatic white balance, but constrained to the daylight model, with the temperature slider. Mathematically, this is a constrained optimization problem, but I am not sure whether (or how) the GUI exposes it.
IIRC you can manually change to a different illuminant and darktable will attempt to transfer the custom settings to the new illuminant (though won’t be able to do so precisely)
Thanks, but that’s barely less UI interaction than just fiddling with the slider. Maybe it would be nice to have an option for not giving up the daylight illuminant so easily.
Incidentally, the (AI) detect from image surfaces and (AI) detect from image edges are not showing up for me in master. I am using the color picker-like icon to automate. Did those disappear, or is this something in my config?
“Just select daylight and adjust the slider” means you don’t have to select the zone for the calculation, so it can be faster than picking the illuminant, selecting the picker and then selecting the zone to use…
An automatic detection of the “best” daylight illuminant needs a good metric to decide what the “best” daylight temperature is. Not sure that’s all that evident if you deal with strong green tints (foliage, old-style fluorescent lighting…).
And it means yet another control on an already complex module (or yet another config option, which people then will forget about and “suddenly color calibration stopped working”)
I am not sure it is evident either, but at this point it is just hardcoded:
I think it gives up too easily, at least for the images I am processing.
Note that foliage is already handled in the heuristics.
This is a generic argument against any new feature. But I am not sure that hardcoded values are better — in my experience the algorithm is not so robust.
An alternative solution would be a higher (hardcoded) threshold. But maybe for some users it would be worse.
Erm… That code sample with DT_SOLVE_OPTIMIZE_FOLIAGE is from the function “_extract_color_checker()”, not from the code that determines the illuminant.
The function _check_if_close_to_daylight() checks whether the current illuminant is close enough to daylight, and it uses the result to set some UI parameters. That function does not change the chromaticity values x and y (they are const parameters!). So whatever happens in that function does not change the white balancing. (And yes, only two parameters are needed: for the white balance, we are only interested in the ratios between the three colour channels.)
Not against any new feature, but yes, it is one argument when the module concerned is already fairly complex. The more so when feasibility and usefulness haven’t been shown yet.
Those “hard-coded values” you refer to have nothing to do with setting or estimating the white balance, see the start of my post.
I am not sure if I am accidentally missing the point of the OP, but couldn’t simply disabling the color calibration module and resorting to the white balance module alone be a suitable solution? I sometimes do this because it brings up white balance presets like those found in the camera, temp and tint sliders, and even the channel coefficients.
That is one route to simplify things for sure… In any state of the CC module, I think if you hit the dropdown and select custom you see the current settings represented as hue and chroma… I find in general the CAT wb can sometimes look a bit strong, but it’s easily corrected. I think most times the hue is correct or very close, so really it’s just about more chroma for stronger correction and less for pulling back the correction… but that is just me…
Generally, that’s my experience too. The OP is not about a major issue, it is something I can easily fix, I was just curious if there is a possibility of a minor improvement.
My understanding is that demosaicing works better when performed on the “raw” sensor data, and white balance precedes it, so it is not something I would prefer (well, maybe I could reorder modules, but all other workarounds are better).
And I actually like the interface of color calibration very much, including the generic “custom” mode. It’s just sometimes it overshoots.
Incidentally, could someone explain the algorithm now used by automatic correction? If it is no longer the “AI”, how does it work? Sorry I could not find it in the docs.
Can you explain what you mean here in the context of simply using legacy wb vs wb plus the CAT added when you use the CC module? I’m not sure how disabling CC and just using legacy forces a change of module order etc… but I could for sure be missing the point being made.
Do you mean when you select the pipette or the default calculation out of the gate…
If the latter, then in the recent updates “as-shot” wb is used for some initial steps and then the reference values later to do the CAT…
In this mode I understand that DT isn’t doing a true wb but more of an equi-appearance color space transform… using by default CAM16 or it could use Bradford or a couple of other modes if I recall…
This article is a pretty decent summary of this form of perceptual preservation of scene color…
Well, the reason the white balance module is placed before demosaic is that the demosaicing algorithms need a “somewhat correct” white balance. This is explained in the manual.
The article you link mentions algorithms for estimating the capturing illuminant (sec 5.2) but does not go into details. I guess I will have to look at the source code.
No, it is my bad for being unclear. The explanation is in the manual page @rvietor linked — demosaicing (and, if I understand correctly, denoise, and raw CA) are designed to work in a reference color space, which white balance corrects first using the camera reference. Perceptual/aesthetic adjustments should come later, doing them prematurely would be incorrect.
I think you are wrong, there. There was a recent change, switching from ‘reference’ to ‘as shot to reference’ exactly to get closer to the real white balance, improving (among others) demosaicing (emphasis mine).
Some modules in the pipe would like to have “perfect white balance” correction - the rgb channels all have the same value for any greytone.
Examples:
Some highlight reconstruction algos take the other channels or surrounding data into account and modify data “towards white”.
raw chromatic aberration correction also iterates from channel differences. Good “white is white” coeffs help significantly here.
Some demosaicers also have slightly improved output on pixelpeeping level.
The math is over my head… it’s been too many years since I had to mess with any complex math, whereas you are a pretty astute mathematician…
But the default for the CC module is a CAT16 transform, and there is a fair bit of the math broken down in these articles… I think section 3.6 in the one paper is at least a version of the CAT16… it seems like there were one- and two-step versions…
Anyway there might be some useful information here if you are trying to sort out the DT code vs the theory…