struggling with "modern" white balance in darktable

I’m using darktable 4.0 with pixel-workflow = scene-referred and chromatic adaptation = modern

Every image shot against the sun has extremely weird artefacts, which I did not notice using dt 3.8.

It’s much better with chromatic adaptation = legacy OR filmic v5 OR preserve chrominance = no

What’s the recommended way to achieve smooth transitions?

20220808_0103.CR2 (22.9 MB)
20220808_0103.CR2.xmp (5.8 KB)

This file is licensed Creative Commons, By-Attribution, Share-Alike.

The new filmic v6 behaves differently. It’s discussed here…

If you have the CC module set to as shot, the WB should be identical or very similar to legacy. So if you see a significant change, I suspect that is not the case, or the D65 values for your camera are not a good set currently. I don’t think that should change between 3.8 and 4.0, but I suppose it could…

Good evening, @quovadit!

I am by no means especially skilled with blown highlights,
but here is at least a swift step in the right direction:

20220808_0103.CR2.xmp (7.7 KB)

PS: This guy has a good tutorial on blown highlights: Fixing broken digital photography with "Filmic". Part 1. - YouTube

Have fun!
Claes in Lund, Sweden


My try. I only worked the highlights with tone equalizer and filmic. And yes, handling such extreme highlights in dt is tricky and never gets boring…

20220808_0103.CR2.xmp (9.5 KB)


This is another one of those images with the wrong WP. Looking at the EXIF data, the two Canon values are 14800 for normal white and 15300 for specular, but dt is using something like 15950. If you use the Canon values, CC behaves much better. This seems to be an issue for a lot of cameras lately…



@priort But his camera is the trusty old EOS 5D Mark II,
so hardly a new invention…?
…and setting WB to as shot
makes it a bit better.

The question still outstanding for me: in the past, as shot legacy has almost always been very close, if not visually identical, to D65 legacy + CC set to as shot. In this case it was not. The mismatch seems to be amplified by the wrong WP being used by dt with the modern WB.

Does the legacy WB somehow clip, so that the values extending beyond what should be white (if the correct WP were used) aren’t so apparent?

20220808_0103.CR2.xmp (16.1 KB)


The issue here, I think, is not whether we can fix the highlights; it’s more about why legacy WB is less impacted by filmic and the color preservation mode than color calibration is.

I have just tried this with the sigmoid tone mapper as well.

Basically, set legacy WB to as shot and check your result. Now add filmic. By default it will pull in the highlights without the massive artifacts.

Do the same with the WB module set to D65 and CC set to as shot. This is not a perfect match for legacy, but close. Now add filmic and you get strong blue artifacts. You can mitigate them by changing the norm, but with legacy WB they don’t appear even under the same norm.

Sigmoid shows the same thing to a lesser degree, as its default per-channel mode must be less aggressive than the max RGB norm used by filmic. If you change the mode in sigmoid to rgb ratio (I assume that means preserving hue by preserving the RGB ratios), you see the same blue artifacts. They are not quite as strong as with filmic, but they are there. Again, you only see this when using the CC module, not the legacy WB module. You can also use the global picker in CC rather than leaving it set to as shot, and that helps, but it is yet another step required to deal with this…
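The norm/ratio interaction described above can be sketched in a few lines of Python. This is only a toy model (the hard `min` shoulder stands in for filmic's actual curve, and the white-balance multipliers are invented, not taken from this file), but it shows why ratio-preserving tone mapping keeps the colour cast of a clipped, white-balanced highlight, while per-channel mapping pushes it back toward white:

```python
def tonemap_preserve_ratios(rgb, curve):
    """Tone-map the max-RGB norm and scale all channels by the same
    factor, preserving their ratios (and hence the hue)."""
    norm = max(rgb)
    scale = curve(norm) / norm
    return [c * scale for c in rgb]

def tonemap_per_channel(rgb, curve):
    """Tone-map each channel independently (no chrominance preservation)."""
    return [curve(c) for c in rgb]

curve = lambda x: min(x, 1.0)  # toy stand-in for a tone curve's shoulder

# A sensor-clipped pixel: all channels saturated at the same raw level,
# then pushed apart by invented white-balance multipliers [2.0, 1.0, 1.5].
clipped = [2.0, 1.0, 1.5]

print(tonemap_per_channel(clipped, curve))      # [1.0, 1.0, 1.0] -> neutral white
print(tonemap_preserve_ratios(clipped, curve))  # [1.0, 0.5, 0.75] -> cast kept
```

If the white point is also set too high, such pixels are never flagged as clipped in the first place, so their skewed ratios are carried faithfully into the tone mapper, which would be consistent with the casts discussed here.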

This also fits the advice AP gave in his video: that you could change the norm, or let the highlights clip a bit, so that you don’t see this…

I think I understand why this happens, and his advice, etc., but why does it only happen when using the CC module and not the legacy WB module?

This is only an n of 1, talking about what happens in this image, but I suspect it’s the same in others…

Someone familiar with the code and the CAT might be able to quickly say why this is?


I checked out your edit… that is quite an array of modules. Can you explain the use of the color calibration CAT on top of legacy WB as shot, using the XYZ colorspace conversion, and then the unbreak profile? It seems like a strategy… just wondering what it achieves?

Slow internet today is making it hard to download the image file, but I will. However, I must say that I am not 100% convinced about selecting the modern chromatic adaptation in the preferences. For the most part, my variety of cameras, including Canons, produces a nice white balance as shot. Using the legacy chromatic adaptation means that for most images I do not need to spend any time adjusting white balance. However, I appreciate that some images need white balance attention. Usually I can handle this in legacy mode with the temperature and tint sliders.

Now, I am not saying I have no appreciation of the color calibration module; I have great respect for it in the right circumstances in my workflow. With mixed lighting I can use multiple instances and masking to balance the color, and that is not possible through the white balance module. Another great use of the CC module is the ability to measure color in one image and correct another image to that color. Just yesterday I was processing a picture of a boy shot in a highland village in Thailand, and the white balance was off because of the challenging lighting. So I went back to another picture of a woman in Thailand with pleasing skin tone and measured the values of her skin. I returned to the image of the boy in the village and tried to correct using his face, and the result was disappointing. I then selected an area of skin on his leg and got an excellent result. This is a great new feature added by AP.

I used this same feature to colour-match the grass in three shots I had taken, and then exported the images to be stitched into a panorama by Microsoft Image Composite Editor (ICE). BTW, we Windows users have a great option in ICE for stitching panoramas: just export 16-bit TIFFs from dt before stitching, and ICE will export a 16-bit TIFF. If you get ICE to stitch the RAW files directly, the best you can get out of ICE on export is an 8-bit TIFF (bummer).


Thanks a lot for all your replies and edits, I learned a lot.

So I will use ‘preserve chrominance = no’ as my default, and fine-tune with ‘white relative exposure’ and ‘highlight reconstruction’ when using max RGB.

But I still have no clue why there is such a difference between legacy-wb and color calibration.

What does ‘wrong WP’ mean? How can I ‘use the canon values’?


If you find them better, you could just create an auto preset for that camera and set it to apply to all images. You set it in the first module in the pipeline, raw black/white point…

Edit: As for the blue, I bet it has to do with the combination of the norm and maybe some gamut mapping done by the CC module that the WB module doesn’t do. That is the only thing I can think of, but again, I only scanned the code and I glaze over pretty quickly :slight_smile:

Or maybe try luminance rather than no… one or the other

I’m still trying to decide personally if it’s worth the trouble. If I need to mask things and deal with some crazy lighting, then I will use it, but otherwise I might just stick to good old legacy and avoid the extra steps…

thanks again, always learning something new…

WP is White Point. That is, the maximum meaningful value a pixel can have in the raw file, which is basically a stream of (here, 14-bit) integers. The exact value depends on the sensor, and is provided in the EXIF data (in your case, 14800 and 15300).

I’m not sure why there are two white point values, unless the lower one gives the upper limit of the linear response and the higher one gives the actual sensor saturation (if pixels are between the two values, they hold meaningful data, but colours can be off).
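To make the "wrong WP" failure mode concrete: raw values are normalized against the black and white points, so a WP set above the level where the sensor actually saturates means truly clipped pixels never reach 1.0 and are not treated as clipped. A minimal sketch (the black level of 2048 is a typical Canon value, assumed here rather than read from this file):

```python
BLACK = 2048  # assumed typical Canon 14-bit black level, not from this file

def normalize(raw, white_point, black=BLACK):
    """Map a raw integer to [0, 1] relative to the black/white points."""
    return (raw - black) / (white_point - black)

saturated = 15300  # the EXIF specular white level: the sensor clips here

# Correct WP: the pixel lands exactly at 1.0 and can be flagged as clipped.
print(normalize(saturated, 15300))  # 1.0

# Too-high WP (the ~15950 dt reportedly uses): the same pixel normalizes
# below 1.0 and sails through as a "valid" colour.
print(round(normalize(saturated, 15950), 3))  # 0.953
```

Once white-balance multipliers are applied to such a pixel, its channel ratios no longer mean anything, which may be why downstream modules misbehave.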

In the case of this file there is also a linearity value given, at 10000. I had the same question. Perhaps the lower one is a value reliable for “white”, where no channel is yet clipped, and specular is the highest “measurable” value on the sensor, but not necessarily guaranteed not to be clipped on, say, one channel? No idea.

But remember, white point values often depend on ISO for Canon cameras.

Outside my pay grade… then I guess you just have to do lots of experiments and compare the file data with what happens in dt. This would, however, create a further deviation if dt uses a fixed WP in all cases and doesn’t account for ISO?

@priort, dt already takes ISO into account, I believe; at least in my experience. I just checked a few shots from my 6D and the raw black/white point module is showing sensible values. You don’t have to do lots of experiments in any case: RT’s camconst.json file (plain text) has all the values if anyone wants to compare with dt.

I am sure there are config files that provide the values. In the end, I am just checking the metadata against what gets used by the software…

In the RT file there are a ton of entries.
E.g. for the Canon EOS 6D:

{ // Quality A, some missing scaling factors are safely guessed - samples by sfink16 & RawConvert at RT forums
  "make_model": "Canon EOS 6D",
  "dcraw_matrix": [ 7034, -804, -1014, -4420, 12564, 2058, -851, 1994, 5758 ],
  "ranges": {
    "white": [
      { "iso": [ 50, 100, 125, 200, 250, 400, 500, 800, 1000, 1600, 2000, 3200 ], "levels": 15180 }, // typical 15283
      { "iso": [ 4000, 6400, 8000, 12800 ], "levels": 15100 }, // typical 15283
      { "iso": [ 16000, 25600 ], "levels": 14900 }, // typical 15283
      { "iso": [ 160, 320, 640, 1250, 2500 ], "levels": 13100 }, // typical 13225
      { "iso": [ 5000, 10000 ], "levels": 13000 }, // typical 13225
      { "iso": [ 20000 ], "levels": 12800 }, // typical 13225
      { "iso": [ 51200, 102400 ], "levels": 15900 } // typical 16383
    ],
    "white_max": 16383,
    "aperture_scaling": [
      // no scale factors known for f/1.0 (had no lenses to test with), but the
      // ISO 160-320… 12650 white levels maxes out at "white_max" for f/1.2 and below anyway.
      { "aperture": 1.2, "scale_factor": 1.130 }, // from histogram: 1 gap in every 7 levels
      { "aperture": 1.4, "scale_factor": 1.090 }, // histogram: 3 gaps in every 32 levels
      { "aperture": 1.6, "scale_factor": 1.060 }, // 16213/15283
      { "aperture": 1.8, "scale_factor": 1.040 }, // 16004/15283
      { "aperture": 2.0, "scale_factor": 1.030 }, // 15800/15283
      { "aperture": 2.2, "scale_factor": 1.020 }, // guessed
      { "aperture": 2.5, "scale_factor": 1.015 }, // 15541/15283
      { "aperture": 2.8, "scale_factor": 1.010 }, // 15437/15283
      { "aperture": 3.2, "scale_factor": 1.005 }, // 15361/15283
      { "aperture": 3.5, "scale_factor": 1.000 } // no sample
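The way an entry like this gets used can be sketched roughly as follows. This is my own condensed reading of the excerpt above, not RT's actual code (the function name and table layout are mine, and only two ISO groups and a few apertures are included):

```python
WHITE_MAX = 16383  # hard sensor ceiling from the entry above

# Condensed from the camconst.json excerpt (only two ISO groups shown)
WHITE_LEVELS = [
    ({50, 100, 125, 200, 250, 400, 500, 800, 1000, 1600, 2000, 3200}, 15180),
    ({160, 320, 640, 1250, 2500}, 13100),
]
APERTURE_SCALING = {1.2: 1.130, 1.4: 1.090, 1.6: 1.060, 1.8: 1.040, 2.0: 1.030}

def white_level(iso, aperture):
    """Look up the base white level for an ISO group, apply the aperture
    scale factor, and cap at the sensor's hard ceiling (white_max)."""
    base = next(level for isos, level in WHITE_LEVELS if iso in isos)
    scale = APERTURE_SCALING.get(aperture, 1.0)  # 1.0 when no factor listed
    return min(round(base * scale), WHITE_MAX)

print(white_level(100, 1.4))  # 15180 * 1.09 = 16546 -> capped at 16383
print(white_level(160, 2.0))  # 13100 * 1.03 = 13493
```

So with a fast lens wide open, the scaled white level can hit the cap, which is presumably why the comment in the file says the f/1.2 factors max out at white_max anyway.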

Then for some Nikon cameras there is a single value, but it says something like “set to x to be safe”… who decided?

Not debating any of the values; in the end, my gold standard would be comparing what is defined in the file against what gets used, and then deciding if it really is different enough to make a difference…