White (dis)balance and Darktable

The journey from capturing the scene to the displayed image or exported jpg is all about controlling the loss of information.
It starts with selecting an exposure where you need to decide if you want to ditch shadows or highlights and continues through the digital process.
It’s all about control - if you use limited tools then you might not be able to keep information you want to keep. So it’s worth having algorithms that aren’t limited, in case you need to squeeze out something the limited functions can’t provide.
That’s the difference between darktable and several other tools: dt gives you the option to use them; it doesn’t expect you to be satisfied with a tool that’s just OK for the 80% case.
But it doesn’t make sense to worry about the third decimal place when rounding to an integer in the end.


My Fuji camera tends to give strangely greenish renditions in daylight if I use WB+CC. With WB alone, everything looks normal, in the sense of “similar to the JPG”, and “similar to other raw developers”.

This is easy enough to counteract with a little bit of magenta color balance, or indeed a tweak to the CC white point, but it is annoying.

Then in general it doesn’t make sense to use CC for WB in your case.
Maybe it’s useful if you need to match a ColorChecker.
(But then you‘re losing the Fuji color rendition :wink: )

But even then, a quote from AP’s introduction post:
I guess the whole question is : is 12 to 20% extra color precision worth a whole new complex module ?

The answer is yours

But remember the highest precision bonus (20%) was given in the worst lighting conditions, and that’s usually when you need it most. Make of that what you will.


You can argue whether or not it accomplishes this, but the reasoning falls along these lines… this is taken from the article I provided above…


If it’s consistent then, assuming your camera and your screen are calibrated, perhaps the D65 values for your camera are off, at least for your display.

Have you tried a preset for those, created the way AP demonstrated in his discussion about using the module? Just a thought - or stick to legacy if it works, as it might not be worth the bother.

RawTherapee now has some of the same tools… it’s just not presented as “white balance”; it’s packaged as a color appearance model.

Discussed a bit here, and I think in much greater detail in a couple of other threads…

Perhaps the implementation and examples here in RT docs will make some sense of it to you…

https://rawpedia.rawtherapee.com/CIECAM02#Color_Appearance_&_Lighting_(CIECAM02/16)_et_Color_Appearance_(Cam16_&_JzCzHz)_-_Tutorial

If I were to write a book about image processing, this would be the title of the first chapter… :laughing:

Really, any rendition you make, JPEG, TIFF, display on computer, print to paper, is the result of information loss relative to the light at the scene. Most raw processors control this for you so you don’t have to worry much about it. My software, rawproc, does no such thing; you are responsible for the presence and ordering of each and every information-losing operation from raw file load to JPEG export. A bit of a PITA to work with sometimes, but it has afforded me the opportunity to learn the aspects and implications of information loss at every step. Been using it about 5 years now, still learning…

Yep, sounds about right to me. Thing is, globally in the image, what you do about WB for one part of a scene will only be “right” for parts of the scene lit by that particular illumination.

Your snow pictures may seem tame in that regard, but think it through: the lighting of the scene is nowhere near “black-body”, that is, full-spectrum. You’ve got clouds performing as a really large bandpass filter, and the shadows are then affected by that AND by how the light was reflected into them. Forget about decent skin tones here, which require significant representation of most of the spectrum.

White balance in post is essentially a global operation, and what you do to make “white=white” for one part of the scene will not be optimal for other parts with different illumination. There are two fundamental ways to affect white balance in post, 1) a set of three RGB multipliers, numbers that are literally multiplied with every corresponding channel value in the image to skew the data around to “white=white”, and 2) chromatically, where the image colors are shifted to a particular color temperature using the same transform as that for gamut conversion. Most software does only #1; darktable apparently now can do either. In terms of information loss, #2 intuitively sounds better to me but I have no analysis to support that. I’ve actually done #2 with custom camera color profiles, seems to look better but is a real PITA to do.
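A minimal sketch of approach #1, the three RGB multipliers; the coefficients below are made up for illustration, real ones come from the camera’s as-shot metadata:

```python
# Approach #1 from above: white balance as three per-channel multipliers,
# literally multiplied with every corresponding channel value.
import numpy as np

def apply_wb_multipliers(rgb, mults):
    """Multiply each channel of an HxWx3 float image by its coefficient."""
    return rgb * np.asarray(mults, dtype=rgb.dtype)

# A neutral gray patch lit by a warm illuminant reads higher in red:
patch = np.full((2, 2, 3), [0.5, 0.4, 0.3])
# Multipliers chosen so the gray patch comes out neutral (R = G = B):
mults = [0.8, 1.0, 4.0 / 3.0]
balanced = apply_wb_multipliers(patch, mults)
print(balanced[0, 0])  # -> [0.4 0.4 0.4]
```

Note the skewing this implies: every pixel in the image gets the same three scale factors, whether or not it was lit by the illuminant the multipliers were derived from.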

The only time I get bothered by doing anything other than just accepting “as-shot” is for interior images with low-temp interior lighting which also contain windows looking out on daylit areas. There’s no winning globally with these, balance for inside and the daylit parts go cobalt-blue, balance for the outside and the interior goes fire-engine red. Time for masks…

I think the overall strategy for “global” is to pick the part of the scene you want to look “white=white” and anchor WB to that, let the rest go to where it might. That for me provides the best renders in most situations. If I want it to be good all-over, I worry the scene’s lighting when taking the captures.

Right now, I’m doing a lot of “engineering photography” supporting a vintage railcar restoration. The thing is housed in a large tent made of a coated canvas-like fabric, and the interior lighting is a garish orange-yellow. To make matters more interesting, they’ve strung a set of LED worklights in the interior, so I get shots like this:

White balance, fuggetaboutit…


As I understand it from the warning displayed by color calibration when the old WB is in use (in a mode other than “set illuminant @ 6500K”) and CC is not in pass-through (WB applied twice): in terms of interface, would it be smart to add the old white balance options as an additional “legacy” entry in the list of “adaptation modes” of CC?

That is the pass-through mode, no? There is a recent change in the workflows forcing the CAT upon module activation… this is a conscious decision, even when the workflow is set to none, so now you have to create a preset for pass-through so that you don’t get the double WB when you manually activate CC… but it would be the mode… Not sure how “legacy” would be different, unless you mean CC automatically sensing a non-D65 setting in WB? Or do you mean adding temp and tint?

Taken from the manual… I think it’s clear that this would not make sense for calculations in the CAT colorspace, especially given the statement in the last line:

"Chromatic adaptation aims to predict how all surfaces in the scene would look if they had been lit by another illuminant. What we actually want to predict, though, is how those surfaces would have looked if they had been lit by the same illuminant as your monitor, in order to make all colors in the scene match the change of illuminant. White balance, on the other hand, aims only at ensuring that whites and grays are really neutral (R = G = B) and doesn’t really care about the rest of the color range. White balance is therefore only a partial chromatic adaptation.

Chromatic adaptation is controlled within the Chromatic Adaptation Transformation (CAT) tab of the color calibration module. When used in this way the white balance module is still required as it needs to perform a basic white balance operation (connected to the input color profile values). This technical white balancing (“camera reference” mode) is a flat setting that makes grays lit by a standard D65 illuminant look achromatic, and makes the demosaicing process more accurate, but does not perform any perceptual adaptation according to the scene. The actual chromatic adaptation is then performed by the color calibration module, on top of those corrections performed by the white balance and input color profile modules. The use of custom matrices in the input color profile module is therefore discouraged. Additionally, the RGB coefficients in the white balance module need to be accurate in order for this module to work in a predictable way."


That’s a very nice example!


From what I remember from Aurélien Pierre’s explanation, it’s a “chicken and egg” problem. What it boils down to is:
you can only do correct white balancing after you have colours, so after you set the input profile, but you need some kind of white balance to do your demosaicing. And the demosaic module comes before the input profile…

So he used the original WB module to set a standard white balance so the demosaicing works, then he added the color calibration after the input profile.


Funny, I found you can apply WB before demosaic, you just need to respect the mosaic organization. I had to write a separate image walk algorithm for it, but for a few years now my default toolchain has had WB before demosaic, as-shot, and all is well. Now, I can’t do patch-selected WB, but that’s only because I haven’t written a mosaic-aware patch algorithm yet. I haven’t picked through the demosaic algorithms to see the dependency, so for now I’ll just respect the assertion…
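A rough sketch of what that mosaic-aware walk could look like, assuming an RGGB Bayer layout and illustrative multipliers (this is not rawproc’s actual code):

```python
# Applying white-balance multipliers directly to a single-plane Bayer
# mosaic, before demosaic: each photosite gets the multiplier for the
# color of its position in the CFA pattern.
import numpy as np

def wb_on_mosaic(raw, mults, pattern="RGGB"):
    """Apply per-channel multipliers to a 2D Bayer mosaic, in place of
    the usual 3-channel multiply. mults is (R, G, B)."""
    idx = {"R": 0, "G": 1, "B": 2}
    out = raw.astype(np.float64).copy()
    for i in range(2):          # row offset within the 2x2 CFA tile
        for j in range(2):      # column offset within the tile
            c = pattern[2 * i + j]
            out[i::2, j::2] *= mults[idx[c]]
    return out

raw = np.ones((4, 4))                         # flat mosaic, value 1.0
out = wb_on_mosaic(raw, [2.0, 1.0, 1.5])
print(out[0, 0], out[0, 1], out[1, 1])        # R, G, B sites -> 2.0 1.0 1.5
```

Because each multiplier only ever touches photosites of its own color, the result is the same as multiplying the demosaiced channels, which is presumably why the ordering makes no visible difference.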

What I really meant is :

  • When using any of CC’s “adaptation modes” you’re not supposed to touch WB (just leave it in its default state), so exposing its interface is kind of useless.
  • If you want to use WB you’re not supposed to use any “adaptation mode” in the CC module, so why not, then, show the WB interface in the CC module as an additional choice in “adaptation mode”?

Choosing ‘legacy’ or ‘normal’ white balance instead of another adaptation mode could, pixel-pipeline-wise, be equivalent to the current CC pass-through plus the regular WB module, but interface-wise it would make the CC/WB module the one stop for white balance and color adaptation.

It would not in any way change the actual workings of the stuff; it’s just a superficial interface idea, as I know I was puzzled at first by the two modules when trying to set WB.

The problem is that “legacy” and “modern” would need to be located at different positions in the pixelpipe, as WB and CC are, so you can’t use a single module for both (in darktable the position of the module on the screen maps to its position in the pixelpipe).


Ok, so the interface and its module concept, being representative of the actual pipeline, enable us to change module order but prevent the interface trick I suggested…

Ok, I get it!
I didn’t realise that, or rather, I did not think it through before asking. Thanks!

That’s what darktable does with the “legacy” white balance module, it comes before the demosaicing module.
But my understanding was that a correct white balance needed to come after the input color profile (and thus after the white balance module). Note the “correct” there.

Yes, doing only one white balancing step before demosaic has worked for ages, like display-referred editing has worked for ages (it’s what dt did until filmic arrived, end of 2018). But with current knowledge and current computers we can do (somewhat?) better…

The way I understand it:

  • white balance works on raw data. Each pixel location represents the sensor reading for one component. White balance multiplies ‘green’ pixels by the green multiplier, ‘red’ pixels by the red multiplier, ‘blue’ pixels by the blue multiplier. All of those are done independently, without regard to the other components. It is an approximation (that works pretty well).
  • color calibration does more than just multiplying the red, green, and blue components of the RGB pixel by independent numbers. Internally (CAT - chromatic adaptation transform) it uses a model based on human vision (LMS, see the Wikipedia article on the LMS color space). That is also an approximate model, but a more sophisticated one.
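As an illustration of the second bullet, a hedged sketch of a von Kries-style CAT using the standard Bradford matrix; the white points are rounded, and this is not darktable’s actual implementation (which offers more sophisticated variants such as CAT16):

```python
# A chromatic adaptation transform: instead of scaling camera RGB
# channels, convert to an LMS-like space, scale there to move from the
# source white to the destination white, and convert back.
import numpy as np

# Standard (linear) Bradford matrix, XYZ -> sharpened LMS:
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def cat_von_kries(xyz, src_white, dst_white):
    """Adapt XYZ colors from src_white to dst_white via LMS scaling."""
    lms_src = BRADFORD @ src_white
    lms_dst = BRADFORD @ dst_white
    scale = np.diag(lms_dst / lms_src)          # per-cone gain factors
    m = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return xyz @ m.T

# Adapting the D50 white itself from D50 to D65 lands exactly on D65:
d50 = np.array([0.9642, 1.0, 0.8249])
d65 = np.array([0.9504, 1.0, 1.0888])
print(cat_von_kries(d50, d50, d65))  # -> the D65 white point
```

The per-channel WB multipliers are the degenerate case of this where the “LMS” space is just camera RGB, which is why the two approaches agree on grays but drift apart on saturated colors.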

The difference is (without filmic/sigmoid, color balance rgb, etc):


(left: white balance = camera reference + color calibration = as shot, without gamut compression (but the default gamut compression gives pretty much the same result), right: white balance = as shot without color calibration).


Yes, but what is the nature of “correct”?

I’ve done a bit of experimentation with white balance placement in the toolchain, and I see no difference in the final renders between applying it pre-demosaic vs post. What I do see is that when I change one of the white balance parameters in the pre-demosaic placement it takes a couple of seconds longer to see the render, as more downstream tools need to do their thing in sequence.

My surmise is that it’s a UI thing. White balance is one of the tools frequently fiddled with, and doing it later takes less time to see the result. One of the reasons I’m more and more using vkdt; 18ms from file ingest to final render on my NVIDIA GeForce GTX 1660 Ti :zap:

I think, but I am not 100% sure, it’s because when it’s done using the CAT approach it needs to come after the input profile… I could be wrong, but I think this is also why there is a note that, to ensure the best results when using the CAT, custom input profiles are discouraged… Which makes you wonder how many people are using the CAT because they think it’s better, or for some other reason, but are also using a custom input profile that perhaps misguides the result :slight_smile:

Oh, indeed, the CAT needs to be done in conjunction with the input profile, which requires RGB. I’m thinking through the multiplier-based way…

The multiplier-based WB is pretty easy to understand, but the CAT is a different animal. You really need to understand color management at a technical level to know what’s going on there. But that shouldn’t prevent one from using it, once the controls are understood. And I’m a fan of less munging of the data, using a CAT inline with the color transform seems to me to be less-destructive than the per-channel flinging of values hither and yon…
