Color calibration giving poor results for white balance

Oh, I see… I hadn’t heard the background to that. Interesting! :+1:

There’s at least some brief discussion here around 40-42 min… but I think he has made other references to those modes on occasion: [EN] darktable 3.4 new module : color calibration, get your Christmas lights back in gamut ! - YouTube

I don’t know what you expect those automatic modes to do, but all they can try is to get the selected area as close to neutral (gray) as possible.
On a lot of images, using the whole image to get the white balance works fine.
On images like the one above, with one dominant colour, less so, as the algo will try to get rid of the excess green. Exactly what it is supposed to do…
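
To make that concrete, here is a minimal sketch of the grey-world idea in Python/NumPy (my own illustration, not darktable’s actual code, and it ignores the chromatic adaptation machinery entirely; the function names are made up): measure the average colour of the selected area and pick per-channel gains that push that average to neutral. On a grass-dominated frame those gains suppress green everywhere, which is exactly the behaviour described above.

```python
import numpy as np

def gray_world_gains(region):
    """Per-channel gains under the grey-world assumption.

    region: float array of shape (H, W, 3) in a linear RGB working space,
            either the whole image or the area selected with the picker.
    Returns multiplicative (R, G, B) gains normalised so that green = 1.
    """
    means = region.reshape(-1, 3).mean(axis=0)   # average R, G, B of the area
    return means[1] / means                      # gains that make that average grey

def apply_gains(img, gains):
    # Clipping is only there to keep this toy example in [0, 1].
    return np.clip(img * gains, 0.0, 1.0)
```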

@n01r: The text from the manual you quoted is the “default” setting, i.e. what’s used when you do nothing but activate the module. A bit lower down (on the page you quoted from) it goes on to say:

CAT tab workflow

The default illuminant and color space used by the chromatic adaptation are initialised from the Exif metadata of the RAW file. There are four options available in the CAT tab to set these parameters manually:

Use the color picker (to the right of the color patch) to select a neutral color from the image or, if one is unavailable, select the entire image. In this case, the algorithm finds the average color within the chosen area and sets that color as the illuminant. This method relies on the “gray-world” assumption, which predicts that the average color of a natural scene will be neutral. This method will not work for artificial scenes, for example those with painted surfaces.
(…)

@priort’s method of dropping the saturation of the illuminant after using the eyedropper usually works for me as well. And trying to pick a patch you want to be neutral in your image also helps.

There’s no AI in it at all - it’s just an evaluation of the color mood of the selected area. So if the available colors are well spread over the spectrum it might give good results, and if you select neutral grey areas it gives good results. In other cases it might derive an illuminant from the most widely used colors.
The camera white balance can be better because camera manufacturers spend a lot of effort on their automatic exposure evaluation as well as on their image evaluation to optimise white balancing.

Yes, got it now. I wonder if the “AI” could be a sort of translation error…? Or just a slightly different usage.

I agree about the camera auto WB being better - although it depends on the camera, obviously. I’ve seen good and bad…!

The (AI) options are not real artificial intelligence, but it was shorter than spelling out “optimization by machine learning” in the list, and AI is all the rage now, so pretty much anything a bit advanced qualifies for that name these days. They perform an auto-detection of the best illuminant in the image.

Aurélien has said (in one of his videos, or written somewhere) that a result close to 5000 K usually means the ‘AI’ method failed to determine the illuminant (set the correct white balance).

Thanks very much for the clarifications! All makes sense :grinning:

For me, I find that with most of my shots the camera has done a good job with white balance, or selecting from the drop-down options in the white balance module provides a good white balance. I feel the color calibration module just adds extra work to my processing with little or no reward. So I use legacy rather than modern processing. I am willing to stand corrected on this if someone can explain the advantage of adding extra work to my processing.

However, I like that I can do multiple instances of color calibration, which I cannot do with white balance alone. This helps in mixed lighting. I also like the spot color mapping options.

Yes, I wonder what it does differently. I would have thought it could only do something like the eyedropper does, and only over the whole picture, but still it gets it far closer to being correct.

Starting to feel similarly. The color calibration module does all sorts of stuff, but I feel, for WB, it isn’t actually helping much.

I have not actually switched back to legacy, but that is an option. I just feel I might get left behind as darktable moves on.

As others have mentioned - the easiest approach is to have a neutral object in the frame.
I have had good success using the following:

  • very cheap gray card
  • street signs
  • clothing (can vary)
  • eyes / hair (can vary)

I am learning to treat them as a “starting point”

The CC module also offers calibration based on a color checker card (Calibrite - it used to be X-Rite). This is still on my wish list, but I do appreciate the functionality (even if I am not utilizing it yet).

In the end, WB is something that is not that strict. It is rather perceptual. I remember @s7habo mentioning in his videos (or here on the forums) that while we can use the tools (this is what they are for), in the end it is what we see with our eyes - what we personally like.

Have you tried selection based on the feathers of the bird (earlier in the post)? Maybe the beak or the foot? Maybe some of these areas would be a good starting point? I don’t think there is a magic solution that would work all the time unless there is a reliable reference (like a calibration card).

I actually knew about grey cards, neutral objects etc, but somehow the camera is getting it more correct than the color calibration module, and the camera doesn’t have the advantage of cards, or selective choices of what parts of the image to use. That’s all I meant when I said color calibration wasn’t helping much (over the “legacy” methods and using the camera settings)

But “more correct” is extremely subjective without something neutral in the scene. That’s probably why nobody is able to give you a great answer, because it’s subjective.

The better thing to do would be to learn how to get the color balance you want using the CC module.

I think the statements are drifting a bit, and at times I could be wrong, but you quoted what DT does when first opening the image in the modern workflow as being what it does when you hit the dropper… So on opening, it reads the data as you quoted and starts you off with “as shot”. When your D65 reference values are good, this result is essentially a dead match for what the WB module does, i.e. starting with as-shot WB.

There can be slight differences if the D65 values are not a good reference.

After that, the dropper is meant for a neutral reference spot. You can of course hope that the average of the whole image (if you don’t pick a spot) is going to be neutral, and so you will get a good result - sometimes you will, other times not… but again, this behaviour is the same as in the legacy WB module if you use the picker…

Are you suggesting that there is a vast difference between the modern and legacy workflows from the starting point, or after you do some manipulation with the eyedropper…

I was saying that the manual makes it seem as if it is using the camera WB by default. Indeed, looking at the results, this seems to be correct. When I talked about “automatic” I was not referring to the default settings. By “automatic” I meant the subsequent use of the eyedropper and the AI modes.

Agreed, but then, that’s what the camera is dealing with as well, only it does a better job.

I am saying that the starting point - which is based on the camera WB - is usually good, but using the eyedropper (if you don’t have a grey card etc.) or the AI modes invariably makes it worse, not better. I also find that manual adjustments seem more difficult than just altering the colour temperature in the old WB module.

So in effect, I was not seeing much benefit in using the cc module for wb (with a slight downside of being more complicated). Even when based on the camera wb, the result might be more “perceptually accurate”, but in practice, as has been pointed out, it is subjective.

Yeah, it was a collective comment on your original post and then a couple of the replies and responses that followed… You were talking about introducing a change using the “auto” WB, which was one of the AI modes or the picker, and the reply to you was using “automatic” in the sense of the way that DT initially opens the image, using the camera’s as-shot reference values…

My point was that comments like the one quoted above maybe don’t support the dialog or topic of the post and that is what I was trying to point out…

“The camera getting it right” is passing along as-shot data to the WB module. In the case of color calibration, a CAT is applied on top of the D65 WB values, and it defaults to as-shot WB as well. In most cases these are similar to identical, so not a poor result.
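
For anyone curious what “a CAT applied on top of the D65 WB values” means in practice, here is a minimal sketch of a von Kries-style chromatic adaptation using the classic Bradford matrix (darktable’s color calibration offers CAT16 and Bradford variants; this only shows the general principle, and the illuminant and patch values below are made-up examples, not anything read from a raw file):

```python
import numpy as np

# Classic Bradford cone-response matrix (one of the CAT variants the module offers).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def chromatic_adaptation(xyz, src_white, dst_white):
    """Adapt XYZ colours from the scene illuminant to a target illuminant
    by scaling the cone-like responses (von Kries adaptation)."""
    lms_src = BRADFORD @ np.asarray(src_white)
    lms_dst = BRADFORD @ np.asarray(dst_white)
    adapt = np.linalg.inv(BRADFORD) @ np.diag(lms_dst / lms_src) @ BRADFORD
    return np.asarray(xyz) @ adapt.T

# Made-up example: a grey patch that looks warm under illuminant A,
# adapted to D65 so it reads as neutral again.
ILLUMINANT_A = np.array([1.09850, 1.00000, 0.35585])
D65          = np.array([0.95047, 1.00000, 1.08883])
warm_grey    = np.array([0.22, 0.20, 0.07])   # roughly 0.2 * illuminant A
print(chromatic_adaptation(warm_grey, ILLUMINANT_A, D65))  # ~ 0.2 * D65
```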

Now the main part that I was trying to clarify is that you should see the same thing using the picker in either workflow, i.e. nothing too specific to the CC module. So your assertion, as I read it, is that you are happy with the default as-shot more often than with what you get by correcting using the picker or the AI modes. The latter you can only use in the CC module, but in the case of the picker the results should again be similar, so they should not be associated solely with the use of the CC module, i.e. picking a spot or the whole image will correct pretty much the same overall, especially for daylight images.
I am not suggesting that your observations are not valid, just that this is not related solely to CC, except in the case of the AI modes…

You feel what you see when correcting WB with the picker is not as pleasing as the as-shot WB provided by the camera… this should be common to both workflows if you select the whole image or the same spot, so again not something CC-specific.

Some will say that WB should be technical and that things which are white should be white after WB is applied. This might not leave you initially with the most pleasing image, but then you color grade to bring back color to the look you are after, either for reproduction or artistic purposes. I think most people just want it to look good out of the gate. Without a true grey reference, all that is left is to take a shot with a spot or the overall image and do as you have been doing… discard it if it doesn’t improve what comes from the camera.

Cameras aren’t more correct. Discussions about Canon colors versus Sony’s etc. indicate that there’s a lot more to it than just delivering correct colors.
The camera manufacturers spend a lot of effort on evaluating the whole image - e.g. the commonly used matrix exposure metering, the AF points used, near and far objects - and compare those results with a whole bunch of reference scenarios. So they can give a different white balance if you’re shooting in different conditions: shooting a landscape or a portrait under the same illumination won’t result in an equal white balance setting. Landscape (if the camera can detect that scene) will be optimised to give pleasant colors, while portrait will be optimised for skin tones…
All those spectacular sundown shots - optimised for perception, but never for a proper white balance that results in grey being grey :wink:

Fair enough. I never used the picker much in the WB module either. I just took what it defaulted to and adjusted the colour temp manually. Unfortunately you can’t always do that in the CC module, as it often doesn’t show a control for it. More often you get hue/chroma or gamut - just a bit harder to manipulate (for me anyway).

Mostly, I want it to look like it did at the time I took the shot - as well as memory allows.

Agree. Thanks.

Well, that’s right at the crux of it. I am finding the opposite however, i.e. the camera, much more often than not, is delivering what I perceive as an image close to what I saw when taking it.

A big part of it is probably the type of images, yes. Mostly naturally lit shots of nature: birds, animals, landscapes.

This is an aside really, but do you think cameras like my Canon 80D, and even the Panasonic FZ80 for that matter, actually do this in the normal modes like manual, aperture priority, or shutter priority? I expected they would avoid any suppositions about what was being shot and do a plain vanilla job of it.

There are numerous auto wb algorithms … None are perfect :).

I haven’t found an ‘auto wb’ that has been reliable 100% of the time.

The picker (be it in white balance or color calibration) is meant to select a neutral grey patch. If almost the whole image is selected, you are basically assuming the ‘grey world’ algorithm: that the average of all colors is grey.

Take a shot of only grass, and that assumption is already wrong :).

The other modes in color calibration do something like assuming (I may be wrong and I am probably oversimplifying it) that only the edges are grey, or only the surfaces are grey.
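
As a rough illustration of that ‘grey edge’ family of estimators (my own simplified sketch, not darktable’s actual AI modes, and the function name is made up): instead of assuming the average colour is grey, assume the average of the colour derivatives is grey, so a uniformly green lawn contributes little while colour variation around edges drives the estimate.

```python
import numpy as np

def gray_edge_gains(img):
    """Illuminant gains under a simple 'grey edge' assumption.

    img: float array (H, W, 3) in a linear RGB working space.
    Returns (R, G, B) gains normalised so that green = 1.
    """
    # Average absolute derivative per channel, horizontal + vertical.
    dx = np.abs(np.diff(img, axis=1)).mean(axis=(0, 1))
    dy = np.abs(np.diff(img, axis=0)).mean(axis=(0, 1))
    edge_avg = dx + dy
    return edge_avg[1] / edge_avg   # gains that make the edge average grey
```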

Adobe ACR always goes way too warm. I tried Capture One once; it seemed more reliable, but when it’s wrong it’s really wrong :). ON1 Photo RAW seems to agree 99% with whatever the camera has done, even if it’s really wrong… And my Sony is more often wrong than right.

Setting a white balance is one of the most frustrating things in photography for me.

If I’m really desperate, I can load a simple ‘nothing done’ edit of the file into something like gmic and try the color balance and auto temperature things in some filters there. I even scripted a bit of my own to do the grey-world assumption, but only on highlights (find the brightest pixels - above the 99th percentile - calculate their average color in Oklab, get the modifications needed to turn that average to grey, then apply those modifications to the entire image).
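
Something along those lines might look like the sketch below (a rough reconstruction of that idea from the description; the 99th-percentile cut-off, the function name and the simplification of working in linear RGB instead of Oklab are my assumptions):

```python
import numpy as np

def highlight_gray_world(img, percentile=99.0):
    """Grey-world correction computed from the highlights only.

    img: float array (H, W, 3) in a linear RGB working space.
    Only pixels above the given luminance percentile are averaged; the
    gains that make that average neutral are then applied everywhere.
    """
    luma = img.mean(axis=2)
    bright = img[luma >= np.percentile(luma, percentile)]  # brightest pixels only
    means = bright.mean(axis=0)                            # their average colour
    gains = means[1] / means                               # make that average grey
    return np.clip(img * gains, 0.0, 1.0)
```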

If I find something I’m happy with, I load it into DT and use the color calibration picker in measure mode to set my target. Then I load the real raw file and use the picker in correction mode to set the color.

Convoluted, yes. So often I don’t bother. But since I can’t always throw a grey card into the scene for every photo chance I see, WB is still a struggle for me.

If you aren’t shooting a plain grey area, they need to evaluate the scene: there are just several million red, green and blue pixel luminance values…
What’s the difference between a green landscape illuminated by daylight and grey rocks illuminated by green light? It’s just the knowledge that the latter scenario is quite uncommon…