First, happy New Year!
Then, some people fried their brains trying to understand the new color calibration module in darktable. While this post is not meant to spare you a good read of the doc (you should try it some time, seriously, it’s simply the best. Tremendous guys working there, I called them the other day, had a great chat), here is what you need to know to use the module while the information is brewing in your brain.
What we expect
This is your usual color checker shot using hot-shoe strobes with diffusers, so the color rendition index is not great.
Since the chart has reference values, we are able to compute the error, that is, the difference between what is expected (reference) and what we get (actual color of the patches), expressed as the delta E 2000. Don’t get stuck on the meaning of the delta E: for us, it’s just a metric that expresses the perceptual color difference in a quantitative fashion.
A delta E of 0 means no error, so what we get is exactly what we expect. A delta E lower than 2.3 will be unnoticeable to the average viewer (2.3 is the Just Noticeable Difference, JND for color geeks), so it’s still an error but acceptable. The higher the delta E, the higher the error, the stranger the picture (compared to the real scene).
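To make the metric concrete, here is a sketch of the original, simpler delta E (CIE76), which is just the Euclidean distance between two colors in CIE Lab. The delta E 2000 used in this post adds perceptual weighting on top of this idea, but the interpretation of the number is the same. The patch values below are hypothetical.

```python
import math

def delta_e_76(lab1, lab2):
    """Euclidean distance between two CIE Lab colors (the original
    CIE76 delta E). The post uses delta E 2000, which adds perceptual
    weighting, but the reading is the same: 0 = identical,
    ~2.3 = just-noticeable difference (JND)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical reference vs. measured patch, in Lab coordinates:
reference = (50.0, 10.0, -10.0)
measured = (51.0, 11.5, -9.0)
print(round(delta_e_76(reference, measured), 2))  # 2.06, right at the JND
```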
The above image shows an average delta E of 2.04 and a max of 8.53 (the blue patch is off). So, we can say it’s pretty close to what it is supposed to look like.
What the camera gives us
Disable any white balance and profiling module, just demosaic the Bayer sensor pattern and print camera RGB to display RGB, and you get this:
That, ladies and gentlemen, is the bullshit data that our camera records. What very few people on the internet know is that, if we want a “natural” image, we need to work for it by means of software. “Non-retouched” does not imply a natural look: either the picture is ugly (if it’s actually non-retouched), or the retouching happened behind your back and you are not aware of it (if the picture is kind-of non-ugly). Either way, please hit anyone on social media who tags pictures “non-retouched” or “non-edited”, as if it were a virtue, in the face with a shovel.
(admin note: we know it’s in jest but please don’t hit people with shovels, hit them with the knowledge you gain from reading all of Aurélien’s posts instead)
Rant aside, your camera produces data, and we, the Humans®, need to give meaning to these data. That is, we need to match camera garbage to human colors.
Just for fun, the above image has an average delta E of 33.56 and a max of 53.32. That’s how natural your sensor is.
Let’s apply the usual white balance
As an experiment, just enable the usual white balance of darktable (the module called “white balance”) and sample the grey on the 18% grey patch. We still don’t apply any input color profile (or, rather, we apply an identity color profile, which is actually just a pass-through):
That’s a half victory, since greys are actually grey. No more green cast. Buuuut? What happened to the colors? Well, remember me saying cameras record garbage and not colors? That’s it.
The average delta E dropped to 12.61 and the max to 38.05.
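Under the hood, this kind of white balance is nothing more than a per-channel gain computed so that the sampled grey patch reads neutral. A minimal sketch, with a hypothetical green-cast patch and the common convention of normalizing to the green channel:

```python
def wb_multipliers(grey_patch_rgb):
    """Per-channel gains that make a sampled grey patch neutral in
    sensor RGB. Normalizing to the green channel is the usual raw
    processor convention; the patch values below are made up."""
    r, g, b = grey_patch_rgb
    return (g / r, 1.0, g / b)

# A grey patch as a camera might record it: strong green cast.
patch = (0.18, 0.36, 0.24)
mr, mg, mb = wb_multipliers(patch)
balanced = (patch[0] * mr, patch[1] * mg, patch[2] * mb)
print(balanced)  # all three channels now equal: grey is grey
```

Note that these gains only guarantee neutrality for greys; every other color is dragged along in a space where the scaling has no perceptual meaning, which is exactly the problem discussed next.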
Let’s apply a profile
Since sensor data are a non-meaningful kind of RGB (R, G, and B are really just conventional names here; we could as well call them A, B, C, or garbage 1, garbage 2, garbage 3 – I think the point about sensor “colors” has been made above already), we need to remap them to human colors. That’s the point of profiling.
So let’s apply the standard input matrix, taken from Adobe DNG Converter:
Now, we get an average delta E of 2.56 and a max of 6.89. Better, but still not great. Yes, even with a color profile, we still don’t have the real colors, and the average error is still above the Just Noticeable Difference.
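For the record, such an input profile is just a 3×3 matrix that remaps camera RGB into a connection space like CIE XYZ. A minimal sketch of the operation, with an illustrative matrix (NOT an actual Adobe DNG matrix):

```python
def apply_matrix(m, rgb):
    """Multiply a 3x3 color matrix by an RGB triplet: this is all an
    input matrix profile does to each pixel."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Illustrative camera-to-XYZ matrix (made up for the example):
camera_to_xyz = [
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.1, 0.9],
]
# Where a pure "camera red" lands in XYZ:
print(apply_matrix(camera_to_xyz, (1.0, 0.0, 0.0)))
```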
Let’s refine that white balance
Remember I said that white balance is applied in the sensor RGB space, which makes no sense for us whatsoever? So it can help us achieve neutral greys, but the rest of the color range is not properly corrected.
The color calibration module does the same kind of white balancing, but in a special RGB space designed to mimic retina cone cells response to light.
So, we set the old white balance to “camera neutral (D65)”, which is a flat correction linked to the input profile but independent from our current scene, and we do the white balancing in CAT16 through color calibration instead (just sample the middle-grey patch or set empirically using the visual feedback):
We now get an average delta E of 2.13 and a max of 6.63. The average is now below the JND, how great is that?
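The mechanics of this adaptation can be sketched in a few lines: convert XYZ to a cone-like LMS space with the CAT16 matrix (published with the CAM16 color appearance model), scale each channel by the ratio of destination white to source white (a von Kries adaptation), and convert back. This is the general principle, not darktable's exact code; the whites used below are the standard D50 and D65 white points.

```python
import numpy as np

# CAT16 matrix from the CAM16 color appearance model: maps CIE XYZ
# to a cone-like LMS space where per-channel scaling behaves well.
M16 = np.array([
    [ 0.401288, 0.650173, -0.051461],
    [-0.250268, 1.204414,  0.045854],
    [-0.002079, 0.048952,  0.953127],
])

def adapt(xyz, white_src, white_dst):
    """Von Kries chromatic adaptation in CAT16 LMS: scale the cone
    responses by the ratio of destination white to source white,
    then return to XYZ."""
    gain = (M16 @ white_dst) / (M16 @ white_src)
    return np.linalg.inv(M16) @ ((M16 @ xyz) * gain)

# Sanity check: adapting the source white must land on the target white.
d50 = np.array([0.9642, 1.0000, 0.8249])
d65 = np.array([0.9504, 1.0000, 1.0888])
print(np.round(adapt(d50, d50, d65), 4))  # -> the D65 white point
```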
What about refining the profile too ?
Adobe DNG matrices, used as input profiles, tend to be low-saturation, which is clever to avoid pushing colors out of gamut, but they tend to lack vividness.
Since we have a reference target here, we can use it to compute a correction of the input profile, with an extension of color calibration that was coded in early December 2020, so it is not shipped with darktable 3.4.0 but will be at some point in a future release (for those compiling darktable master, it got merged today):
The internal solver computes the matrix (shown in the Profile data section) that minimizes the delta E, and can directly set it in the R, G, and B tabs of color calibration. As seen on the screen, the average delta E drops to 2.04 but the max rises to 8.58, which is the kind of trade-off we deal with anyway… Here is the result:
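The actual solver minimizes the delta E 2000, which is a non-linear problem, but the core idea is a matrix fit over the patches: find the 3×3 matrix that best maps measured camera patches onto their references. A linear least-squares sketch, with made-up patch values generated from a known matrix so the fit can recover it:

```python
import numpy as np

# Hypothetical patches: rows are measured camera RGB values.
camera = np.array([[0.2, 0.1, 0.1],
                   [0.1, 0.3, 0.1],
                   [0.1, 0.1, 0.4],
                   [0.3, 0.3, 0.3]])

# Reference values generated from a known 3x3 mix, for the sake of the demo:
true_m = np.array([[ 1.2, -0.1, -0.1],
                   [-0.2,  1.3, -0.1],
                   [ 0.0, -0.1,  1.1]])
reference = camera @ true_m.T

# Least-squares fit of the 3x3 matrix: solve camera @ M.T ≈ reference.
m_fit, *_ = np.linalg.lstsq(camera, reference, rcond=None)
m_fit = m_fit.T
print(np.allclose(m_fit, true_m))  # True: the solver recovered the matrix
```

Real patches are noisy and the mapping is never exactly linear, which is why the real solver works in a perceptual space and has to trade average error against max error.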
Since the blues are always going to be our worst challenge, there is an option to force the solver to optimize for “sky and water colors”, so the fit will be more accurate in that region:
Also, the patches that are not crossed in the GUI overlay are the ones below the JND after calibration, so they are the most accurate. The ones crossed with one diagonal are between once and twice the JND, so they are so-so. The patches crossed with two diagonals are above 2 JND, so they are completely off. This feedback helps you check which colors are in and out, so you can optimize the fitting for the colors that matter most to your scene.
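The overlay rule above boils down to a tiny classifier on the per-patch delta E:

```python
JND = 2.3  # just-noticeable difference for the average viewer

def crosses(delta_e):
    """Number of diagonals drawn over a patch in the GUI overlay,
    as described above: 0 below 1 JND, 1 between 1 and 2 JND,
    2 above 2 JND."""
    if delta_e < JND:
        return 0
    if delta_e < 2 * JND:
        return 1
    return 2

print([crosses(d) for d in (1.1, 3.0, 8.5)])  # [0, 1, 2]
```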
The computed values from the color-matching fit will then be input in the R, G, and B tabs for you:
Such profiles, computed from a color checker, are fairly reusable for similar illuminants, so you can put them in presets and only tweak the white balance later if needed (or just set it to “as shot in camera” in the preset). But you will need to redo the profiles for each scene, under each particular illuminant, if you want maximal accuracy, given that the fits are never perfect anyway.
This whole topic is about getting a systematic workflow, as independent as possible from the peculiarities of each scene, assuming you don’t carry your own color checker everywhere with you (if you even own one). So it’s about a trade-off between reproducibility and accuracy. This way, if you create your own profiles, say one for tungsten light, one for D65 and one for your fixed studio setup, you can reuse them for other pictures.
As of darktable 3.4.0, the profile extraction from the color checker is not yet available, but the point of showing it here (besides making you drool) is to prove that the R, G, B tabs of the channel mixer are indeed just a color profile matrix in disguise, so these coefficients can be used to correctively or artistically adjust the input profile color-matching depending on situations and shots. Because standard input matrices are far from perfect (yet not too bad either), and because it’s nearly impossible to get a good fit for both neutral and saturated colors, at least now you have a choice as to where you want to invest the maximum accuracy.
We have shown that cameras don’t actually record colors, but arbitrary data that need heavy corrections to remotely look like colors after processing. Even with corrections, colors still don’t match the reality 1:1, and some trade-offs have to be made. And trade-offs imply that a human, be it a user or an engineer, somewhere, had to make an arbitrary choice to set the trade-off depending on contextual priorities.
That one might be hard to swallow for people coming from the user-friendly/intuitive photo-software world (Lightroom, Capture One and the likes), because said software puts a lot of effort into hiding all that from users. As a result, users get completely mistaken about their tools and what they actually do. No, cameras don’t record reality, or colors, or anything natural for that matter.
We presented here an example with fairly shitty strobes where the color calibration helped reduce the average delta E by 20%. Things will be a bit different with gorgeous natural daylight, especially the average delta E. Here is an example under a clear winter sky:
Average delta E dropped to 1.91 and the max to 5.66. The old white-balance with only the standard input profile yields an average delta E of 2.16 and a max of 6. The average delta E is then reduced by 12%.
I guess the whole question is: is 12 to 20% extra color precision worth a whole new complex module?
The answer is yours:
But remember that the highest precision bonus (20%) was obtained in the worst lighting conditions, and that’s usually when you need it most. Make of that what you will.
Bonus: But I don’t care if output colors look like the scene
That one was thrown at me by my wife. And I was like “me neither, but that’s beside the point”. I know I pass for a hardcore color-science geek, but the actual point is not to get a 1:1 match between scene and output.
The fact is, in your color massaging pipeline, lots of things rely on perceptual definitions of things like chroma, saturation and luminance. Each of them is a combination of RGB channels.
If you convert non-calibrated RGB to any luminance/chroma/hue space (HSL, HSV, Yxy, Yuv, LCh, Luv and whatnot), then your luminance axis is not achromatic, your chroma is not uniformly spread around the achromatic locus, and your hues are not actually hues but random angles around a tilted luminance axis. Basically, hue/luminance/etc. no longer mean what you think they mean if you haven’t properly matched your RGB against the retina cone response.
So, perceptual models break if the calibration fails. Getting a proper calibration is not just about getting high-fidelity colors; it’s about making the whole toolset work as smoothly as possible. Unless you only work in RGB and stay far away from anything that decouples luminance from chromaticity, which is kind of hard in modern apps. Notice that gamut mapping also operates in perceptual frameworks, trying to preserve hue and luminance while compressing chroma until colors fit in the destination gamut.
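A toy demonstration of the breakage, using a made-up opponent space (not a real CIE model): any luma/chroma decomposition shares the property that equal R = G = B must yield zero chroma, and a miscalibrated matrix destroys exactly that. The “bad” matrix below is hypothetical.

```python
def toy_chroma(rgb):
    """Chroma in a toy opponent space: magnitude of the two
    color-opponent axes. Zero chroma = achromatic."""
    r, g, b = rgb
    a = r - g                # red-green axis
    c = 0.5 * (r + g) - b    # yellow-blue axis
    return (a * a + c * c) ** 0.5

grey = (0.5, 0.5, 0.5)
print(toy_chroma(grey))  # 0.0: a calibrated grey is achromatic

# The same grey through a plausible-but-wrong input matrix (made up):
bad_matrix = [[1.2, -0.1, 0.0], [0.1, 1.0, -0.05], [0.0, 0.2, 0.9]]
miscalibrated = tuple(sum(row[j] * grey[j] for j in range(3)) for row in bad_matrix)
print(toy_chroma(miscalibrated) > 0)  # True: "grey" now carries chroma
```

Once greys carry chroma, every hue angle and every chroma value downstream is measured around a tilted axis, which is why perceptual tools start misbehaving.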
If you want to share styles between pictures, or copy-paste editing history, your only option is to do a clean corrective calibration early in the pipe, contextual to each picture, then apply your batch artistic grading later in the pipe. Trying to share styles between pictures that have not been “perfectly” normalized to the same ground-truth first is going to give you a lot of work to match the final looks.
So calibration is not really (or not only) about the output look; it’s more about finding the path of least pain to operate the color tools later. Then you can go crazy in color balance all you want.