Best practice for "unexpected white balance coefficients" in darktable

Many modules need a white balance that is at least approximately correct. That should be provided by the white balance module's camera reference mode; color calibration then tries to set the correct white balance.
You can use your camera's white balance by turning off color calibration and setting white balance to 'as shot', as I wrote above.
white balance works simply by multiplying each of the camera's red, green and blue values by a fixed coefficient. That is an approximation.
color calibration is also an approximation, but a more involved one: it tries to take the properties of human vision into account. You can read more here: Introducing color calibration module (formerly known as channel mixer rgb). See also Chromatic adaptation - Wikipedia and Chromatic Adaptation (you may ignore the warning you get because of the unencrypted connection, as you won’t be submitting passwords).
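To make the difference concrete, here is a minimal sketch (Python/NumPy, with made-up multipliers; an illustration only, not darktable's actual code) of plain per-channel white balance next to a Bradford-style chromatic adaptation:

```python
import numpy as np

# --- Legacy white balance: per-channel multiplication in camera RGB ---
# Illustrative multipliers only, not real values for any camera.
wb_mults = np.array([2.1, 1.0, 1.6])        # R, G, B coefficients
cam_rgb  = np.array([0.20, 0.35, 0.15])     # some raw pixel
balanced = cam_rgb * wb_mults               # that is all 'white balance' does

# --- Chromatic adaptation (what color calibration does, conceptually) ---
# Scale in a cone-like space (Bradford) instead of the raw channels,
# which models human vision more closely.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt(xyz, src_white, dst_white):
    """Adapt an XYZ colour from one white point to another (von Kries in LMS)."""
    gain = np.diag((BRADFORD @ dst_white) / (BRADFORD @ src_white))
    return np.linalg.inv(BRADFORD) @ gain @ BRADFORD @ xyz

D65 = np.array([0.95047, 1.0, 1.08883])
D50 = np.array([0.96422, 1.0, 0.82521])
print(balanced, adapt(np.array([0.30, 0.40, 0.35]), D65, D50))
```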

If the reference (D65) multipliers are wrong for your camera, color calibration will base its work on incorrect input. I believe that is the case for many Sony cameras. When that happens, you can either revert to legacy white balance, or replace the ‘reference’ values with those you measure by taking a shot of a screen calibrated to D65, as described here: darktable 4.2 user manual - color calibration.

If your reference values are correct, the two setups give pretty close results in most cases.
Natural light: [image]

Artificial light: [image]

Okay, so is the idea that all of the pipeline stages between the white balance module and color calibration expect the image to have been chromatically adapted to a D65 illuminant? If so, would the coefficients in the white balance module have to be different depending on the lighting conditions?

Let’s say we follow the suggested workflow: white balance set from a photo of a 6502K light source, and color calibration set from a color chart shot in sunny 5000K lighting conditions, after applying the previously recorded white balance settings (from the 6502K photo).

Now let’s say I photograph a red square under 5000K lighting conditions. In this case, I would expect the color to be almost exactly perfect. After all, my color checker chart almost certainly contains a red square, and I’ve already calibrated it for 5000K lighting after applying the exact same white balance. So far so good. I could chromatically adapt this photo to 5000K to print it and get something close to the original square I photographed.

Now let’s say I photograph the exact same red square under 6502K lighting conditions, but apply the exact same white balance and color correction as for the 5000K red square photo. At this point I would expect darktable to give me incorrect results. After all, if the lighting conditions change the input but the processing pipeline stays the same, of course I will get a different output; if I were to print the new output I would presumably get a red square that does not match the color of the original square I photographed.
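To make that reasoning concrete for myself, here's a toy sketch (Python, with invented numbers, so only the qualitative point matters): freeze the white balance and channel-mixer matrix from the 5000K calibration, then feed it the same red square as captured under each of the two lights.

```python
import numpy as np

# Hypothetical camera responses of the *same* red square under two illuminants
# (invented numbers, purely to illustrate the argument).
raw_5000k = np.array([0.42, 0.18, 0.10])
raw_6502k = np.array([0.36, 0.19, 0.14])   # bluer light -> relatively more blue

# Processing frozen from the 5000K calibration session:
wb_mults = np.array([1.9, 1.0, 1.5])             # white balance preset
mixer    = np.array([[ 1.05, -0.03, -0.02],      # channel-mixer matrix from
                     [-0.04,  1.08, -0.04],      # the colour-checker run
                     [-0.01, -0.05,  1.06]])

def process(raw):
    return mixer @ (raw * wb_mults)

print(process(raw_5000k))   # the calibrated case: lands where the chart said
print(process(raw_6502k))   # same pipeline, different light: a different colour
```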

Now let’s say further that my camera is smart enough to know that the 5000K photo was taken under 5000K lighting conditions and that the 6502K photo was taken under 6502K lighting conditions. What I’d like to do, while editing the 6502K photo, is to set the color calibration illuminant to “as shot in camera” and magically get something that I can print out that will match the original square. Unfortunately, when reporting white balance, my camera has no way of knowing that I already calibrated my RGB matrix in color calibration based on a color chart shot at 5000K.

What could theoretically save me is the fact that when I created my color calibration preset, I also set the illuminant to “as shot in camera”. So maybe this allows the matrix generated from the color checker chart to be adapted to different illuminants by the white balance settings at the top of the color calibration CAT tab. Unfortunately, this doesn’t seem to be working for me so far, and the results even seem somewhat random: I prefer the results when I calibrate against a color checker photo taken with a different lens than the photo I’m editing. That said, I have not yet had the opportunity to shoot the color chart in sunny conditions, which would be the ideal experiment. But if my camera correctly reports white balance, should that matter? Shouldn’t a color checker chart shot on a cloudy day and used for calibration with an “as shot in camera” cloudy white balance also be close to okay?

Am I on the right track to understanding?

One more piece of the puzzle: If I set the illuminant in color calibration to “as shot in camera,” there is no perceptible difference between my (attempted) D65-calibrated white balance preset and the default “camera reference” settings. Also, if I’ve previously loaded a color calibration preset generated with a color checker, the color calibration module seems to forget about it if I set the illuminant to “as shot in camera.” [Actually this last point might not be true in further testing. I’m going to have to report back after more experiments.]

Other settings of white balance do make a visible difference to an “as shot in camera” illuminant, if I go crazy with the coefficients. Moreover, the difference between “camera reference” and D65-calibrated white balance does make a big difference to a color calibration set from a color checker or to the default one with a daylight illuminant, just not when I set illuminant “as shot in camera.”

From this I conclude that:

  1. If I want to make use of my camera’s white balance detection, I should use the “as shot in camera” illuminant in color calibration as a starting point, and nothing else in this thread matters; in particular, I’ll get what I get and no useful calibration can be done.
  2. I’m as confused as ever about how to do color calibration, and about the procedure in the AP video as well as the color calibration section of the darktable user manual.

The idea of an objective color correction style for my camera separate from reusable creative color styles was appealing. However, I don’t know if that’s really realistic for the example photos I posted, which involve nighttime and multiple low-CRI lighting sources. Now that I understand how crazy subjective and context-dependent color can be, maybe I just need to fiddle with each photo until I like the result.

It’s still the case that the D65-calibrated white balance looks better than the “camera reference” for a daylight illuminant in color calibration. So if I actually trusted any of my D65 light sources maybe I would still use that. But now that I’ve bought a color checker, maybe I should use that without the D65-calibrated white balance. As one data point, my average and max delta-E are 1.31 and 3.61 when I calibrate with the default “camera reference” white balance, and 1.29 and 1.354 when I calibrate with my attempted D65 adaptation white balance. Is that tiny improvement worth the risk that I might not have a good D65 light source?

If you want to hurt your head some more, have a go at this playraw:

:crazy_face:

It can go on and on… Very simply, as I understand it:

First you fix your D65 coefficients… if you believe they are better, good… save them as a wb preset… this is your base…

Then, if you have a color checker… do the calibration in the CC module from your test shot for that scene’s lighting.

This will produce a correction matrix of channel mixer coefficients and tell you the proper exposure correction to apply to have the best match.

To use this as a general profile/wb for similar lighting, change the drop-down to ‘as shot’ before saving, and then save this as a preset to be applied in similar conditions. This will save the correction coefficients…

Now, for similar shots, you will apply this as your preset for color correction.

I guess you will have to experiment and see if you need to do this for different lenses in the same light, and for sure this correction will get less useful the more the lighting differs…

I believe this is the protocol as demonstrated by AP in his video…

Otherwise, in general use, where you don’t have a color checker for calibration and any existing presets you have created are not suitable, use your base wb preset and just see what ‘as shot’ in CC gives you for a white balance, and perhaps refine further with a neutral-selection ROI if that proves off…


It is my understanding that the Color Checker features of color calibration are there to fine-tune the module’s settings for one specific lighting condition, and the results are not reusable (unless you work in a studio with a fixed lighting setup).
Having correct D65 coefficients in white balance is essential, as far as I know.

Take both of those only as my understanding, not as definitive advice. Maybe one of the developers can explain the correct way to use those modules.
@Pascal_Obry, perhaps?

@ggbutcher Could those D65 coefficients be calculated from spectral sensitivity data, by performing the integration of (the D65 illuminant’s spectral power distribution) x (the camera’s SSF)?
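Something like this sketch is what I have in mind (Python; the SSF and D65 curves here are placeholder shapes only, and the green-normalised reciprocal convention is my assumption, not necessarily how darktable derives its values):

```python
import numpy as np

# Placeholder data: wavelengths in nm, camera SSFs and the D65 SPD sampled on
# the same grid. In practice these would come from measured tables.
wl    = np.arange(380, 781, 5)
ssf_r = np.interp(wl, [380, 600, 780], [0.0, 1.0, 0.0])   # fake curves, just
ssf_g = np.interp(wl, [380, 540, 780], [0.0, 1.0, 0.0])   # for the shape
ssf_b = np.interp(wl, [380, 460, 780], [0.0, 1.0, 0.0])
d65   = np.interp(wl, [380, 560, 780], [0.8, 1.0, 0.9])   # stand-in for D65

# The integral in question: illuminant SPD x sensitivity, per channel.
resp = np.array([np.trapz(d65 * ssf, wl) for ssf in (ssf_r, ssf_g, ssf_b)])

# White-balance multipliers would then be the reciprocals, normalised so that
# green = 1, so a neutral patch under D65 comes out with R = G = B.
mults = (1.0 / resp) * resp[1]
print(mults)
```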

The discussion above slightly confuses me :slight_smile: Probably I didn’t get it all, but these are not the same???

You could, but you’d only be correcting the camera deficiency in its ability to uniformly measure the spectrum.

SSF data is measured using a “full-spectrum” illuminant that is passed through a prism or diffraction grating to split out the spectrum. That illuminant is usually some tungsten-halogen light, so the raw measurements at the lower end of the spectrum need to be scaled up so that the data describes the response to a uniform spectrum. Once that is done, the resulting data exposes the camera’s bias, and that is what you’d be correcting with SSF-derived numbers.
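In other words, roughly: divide the raw readings by the measuring lamp's spectral power distribution. A crude sketch (Python; the tungsten curve here is a stand-in, not a measured one):

```python
import numpy as np

wl = np.arange(380, 781, 5)

# Crude stand-in for a tungsten-halogen lamp: weak in the blue, strong in the
# red end. A real correction would use the lamp's measured or modelled SPD.
tungsten = np.interp(wl, [380, 780], [0.2, 1.0])

# Raw monochromator readings for one channel (placeholder numbers).
raw_reading = np.interp(wl, [380, 460, 780], [0.02, 0.15, 0.01])

# Divide the lamp back out, so the curve describes the camera's response to a
# uniform (equal-energy) spectrum rather than to this particular lamp.
ssf = raw_reading / tungsten
ssf /= ssf.max()
print(ssf[:5])
```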

But the light in the scene has its own power distribution, and that then has to be “made uniform”. Well, maybe; I personally think there are some aesthetic considerations that would argue differently, and some scenes are just a mish-mash of both direct and reflected light of sometimes wildly different color temperatures, so YMMV.

In 2018 I did a bit of experimentation with using the camera color profile to correct white balance. Easy enough: just don’t white balance the Colorchecker target shot, make the profile, and apply. Ta-da, both the gamut is compressed and the white balance corrected, what’s not to like? Here’s a link to the post of my two comparison images:

https://discuss.pixls.us/t/gimp-2-10-6-out-of-range-rgb-values-from-cr2-file/9532/192

At the time, I thought the white balance from the profile looked better, but now I don’t think so. The green hues are better-separated from their adjacent yellow ones in the separate color profile/white balance rendition.

I never took up the method in practice; it seemed to be too much trouble for the “improvement”. Still, applying white balance in a chromatic operation just seems less destructive than those rather harsh channel multipliers; it makes me want to compile my darktable master and play with the new tool…

Random question or remark, that may or may not have an answer already…

The whole business of fixing your white balance coefficients (finding a good reference for your camera) is only a cosmetic thing, right?

CC will show ‘invalid’ if the current correction can’t be expressed correctly in the current colorimetry setting. But it will not make the module perform any worse or anything, right?

"Invalid ’ is just an indication that the temperature value is an estimate , from what i understood.

I’m not sure I understand your question well.

I think it’s not just ‘a cosmetic thing’. If the module thinks its input currently has good D65 multipliers applied, and tries to bring it to the pipeline’s D50 white point after using the ‘as shot’ multipliers or after picking a point, I assume it’ll end up with wrong colours if the input multipliers do not, in fact, belong to D65.
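A very crude way to see why (Python; this uses a simplified von Kries scaling directly in XYZ rather than the Bradford/CAT16 transforms darktable actually uses, and the ‘actual’ white point is invented):

```python
import numpy as np

# Simplified von Kries adaptation done directly in XYZ (cruder than the
# Bradford/CAT16 transforms in darktable, but enough to show the issue).
def adapt_xyz(xyz, src_white, dst_white):
    return xyz * (dst_white / src_white)

D65 = np.array([0.95047, 1.0, 1.08883])
D50 = np.array([0.96422, 1.0, 0.82521])

# Suppose the stored 'D65' multipliers are off, so the image handed to color
# calibration actually sits at this white point instead of true D65:
actual_white = np.array([0.91, 1.00, 1.02])

pixel = np.array([0.30, 0.40, 0.35])             # some colour, in XYZ

wanted = adapt_xyz(pixel, actual_white, D50)     # what we would need
gotten = adapt_xyz(pixel, D65, D50)              # what CC computes, trusting D65
print(gotten - wanted)                           # residual colour cast
```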

https://docs.darktable.org/usermanual/4.2/en/module-reference/processing-modules/color-calibration/#caveats

The ability to use standard CIE illuminants and CCT-based interfaces to define the illuminant color depends on sound default values for the standard matrix in the input color profile module as well as reasonable RGB coefficients in the white balance module.

If I manually change the multipliers in a shot, I get different colours, so color calibration cannot just compensate:

Right:
https://docs.darktable.org/usermanual/4.2/en/module-reference/processing-modules/color-calibration/#cat-tab-workflow

When the CCT is followed by “(invalid)”, this means that the CCT figure is meaningless and wrong, because we are too far from either a daylight or a black body light spectrum. […] the current illuminant color is not accurately tied to the displayed CCT. This tag is nothing to be concerned about – it is merely there to tell you to stay away from the daylight and planckian illuminants because they will not behave as you might expect.

I think it gets listed as invalid when the value is too far away from the curve to be matched to one of the values on it?? I think there was a comment at one time about how far away that was, i.e. what the threshold was…



Sorry if I added to the confusion; two separate issues indeed. I was referring to the reading of Sony black & white levels from the raw file, which was added to rawspeed recently, and wanted to take that out of the equation…

Within reason: if I change the white balance to something else and then reset CC to ‘as shot’, I get the same output.
That’s basically how I have been using the CC method (when I have been using it): first correcting the white balance, or setting it ‘somewhat correct’, and then using CC and selecting a spot.

Aurelien once said, in his special way, “shit in is shit out”, regarding CC.

Anyway, from your response I read that the ‘invalid’ flag is indeed nothing to be worried about (and I often read that people see it and then start thinking ‘oh, I need the correct multipliers’).

But the correct daylight multipliers can still help in getting more accurate results, apparently.

(Still won’t help in cases where the camera matrix completely mangles the colours before CC gets a chance to fix them, which is my issue with the modern approach.)

‘invalid’ (in color calibration) has nothing to do with the multipliers (a setting of white balance). It means that color calibration found that the illuminant is not close to the ‘daylight’ or ‘black body’ models. For example, Tungsten lighting is ‘black body’, but fluorescents and LEDs have spikes in their spectra.

Compare here:
https://www.researchgate.net/figure/Emission-spectra-of-different-light-sources-a-incandescent-tungsten-light-bulb-b_fig1_312320039

The Tungsten lamp has a smooth spectrum, even though the distribution is not uniform; sunlight varies, but not abruptly; it’s the LEDs and the fluorescents that have huge spikes.
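If you want a feel for what ‘too far from the daylight or black body models’ could mean, here is a rough sketch (Python; it only checks distance from the CIE daylight locus, and the threshold is made up; darktable’s actual test and cut-off may differ):

```python
import numpy as np

def daylight_xy(cct):
    """CIE daylight-locus chromaticity for a correlated colour temperature in K."""
    t = float(cct)
    if t <= 7000:
        x = -4.6070e9/t**3 + 2.9678e6/t**2 + 0.09911e3/t + 0.244063
    else:
        x = -2.0064e9/t**3 + 1.9018e6/t**2 + 0.24748e3/t + 0.237040
    return np.array([x, -3.000*x*x + 2.870*x - 0.275])

# Sample the locus between 4000 K and 25000 K, then measure how far a given
# chromaticity sits from it.
locus = np.array([daylight_xy(t) for t in np.arange(4000, 25001, 50)])

def distance_from_daylight(xy):
    return np.min(np.linalg.norm(locus - np.asarray(xy), axis=1))

THRESHOLD = 0.02   # made-up cut-off; darktable's actual criterion may differ

for label, xy in {"noon daylight (D50-ish)": (0.3457, 0.3585),
                  "greenish fluorescent":    (0.3800, 0.4500)}.items():
    d = distance_from_daylight(xy)
    print(f"{label}: distance {d:.4f}", "(invalid)" if d > THRESHOLD else "")
```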


Thanks to everyone for their example sidecar files and all of the links to videos and other resources. One very useful tip in one of the recommended videos was to import the jpeg and take a snapshot. At this point, I’ve been able to get the colors of the reflected lights where I want them, at least for the second image I posted (building at night in Lisbon). To get what I wanted, I ended up adding a second instance of color balance rgb and used a drawn mask to apply it only to the sidewalk.

Unfortunately, now the problem I’m having is that the building façade looks terrible compared to the jpeg no matter what I seem to do in darktable. Here’s an example, with a snapshot of the jpeg on the left and my darktable edit on the right. Somehow the right side looks way less crisp, like there is color noise or something. On the jpeg, the dirt in the mortar between the stones really pops. I’ve tried a bunch of things, from local contrast to sharpening to denoising, but no matter what I do I can’t get the stone to look nearly as good as the jpeg:

I’m not posting a sidecar file, only because the image looks like this in all of my attempts to edit it; I don’t think it’s necessary, and I wouldn’t know which one to post. I will post my final edit and what I learned from this whole process. (This particular screenshot is from a version with 8 iterations of 512-pixel guided-laplacian highlight reconstruction, which does a great job of recovering what’s in the windows, but you can just open the raw file and see the same undesirable haziness on the building wall.)

Does anyone have suggestions on how to get darktable to make the wall look as crisp as the jpeg? Thanks.

P.S. I should add that I did try haze removal, and it made the mortar pop, but then gave the stone even more weird color noise.


Try the dehaze preset of diffuse or sharpen… see if you like that… if you have room to boost it then you can try extra iterations. There are other ways to tweak D&S, and other presets, but it should give you what you need… local contrast in bilateral mode with a high strength, i.e. 200-300% or even higher, with very small settings for coarse and contrast, can also raise a lot of detail…

Thanks. The problem with all of these things is that the more I make the mortar pop, the more weird color artifacts I get in the stones. Here’s what I get with the dehaze preset of diffuse or sharpen:

It looks to me almost like there are splotches of yellow or something in the pink stone, whereas the hue looks nice and even in the jpeg.

Very hard to say, but it looks like color noise… maybe if you use denoise (profiled)… adjust the protect shadows slider and see where that leaves you…