Color calibration for dummies (without the maths)

First, happy New Year!

Then, some people fried their brains trying to understand the new color calibration module in darktable. While this post is not meant to spare you a good read of the doc (you should try it some time, seriously, it’s simply the best. Tremendous guys working there, I called them the other day, had a great chat), this is what you need to know to use it while the information is brewing in your brain.

What we expect

This is your usual color checker shot using hot-shoe strobes with diffusers, so the color rendering index (CRI) is not great.

Since the chart has reference values, we are able to compute the error, that is, the difference between what is expected (reference) and what we get (actual color of the patches), expressed as the delta E 2000. Don’t get stuck on the meaning of the delta E; for us, it’s just a metric that helps express the perceptual color difference in a quantitative fashion.

A delta E of 0 means no error, so what we get is exactly what we expect. A delta E lower than 2.3 will be unnoticeable to the average viewer (2.3 is the Just Noticeable Difference, JND for color geeks), so it’s still an error but acceptable. The higher the delta E, the higher the error, the stranger the picture (compared to the real scene).

The above image shows an average delta E of 2.04 and a max of 8.53 (the blue patch is off). So, we can say it’s pretty close to what it is supposed to look like.
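If you like to see these numbers as code, here is a minimal sketch of what a delta E computation looks like. It uses the much simpler CIE76 formula (a plain Euclidean distance in Lab) instead of the delta E 2000 used here, which adds several perceptual correction terms on top; the patch values are made up for illustration:

```python
import numpy as np

def delta_e_76(lab_ref, lab_measured):
    # CIE76: Euclidean distance in CIELab. Delta E 2000 refines this
    # with hue/chroma weighting, but the idea is the same.
    return np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_measured), axis=-1)

# Hypothetical patch: reference Lab value vs. what we actually got
reference = [50.0, 20.0, -10.0]
measured  = [51.0, 22.5, -9.0]

print(delta_e_76(reference, measured))  # ~2.87, just above the 2.3 JND
```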

What the camera gives us

Disable any white balance and profiling module, just demosaic the Bayer sensor pattern and send camera RGB straight to display RGB, and you get this:

That, ladies and gentlemen, is the bullshit data that our camera records. What very few people on the internet know is that if we want a “natural” image, we need to work for it by means of software. “Non-retouched” does not imply a natural look: either it implies an ugly picture (if it is actually non-retouched), or the retouching happened behind your back and you are not aware of it (if the picture is kind-of non-ugly). Either way, please hit anyone on social media who tags pictures “non-retouched” or “non-edited”, as if it were a virtue, in the face with a shovel.

(admin note—we know it’s in jest but please don’t hit people with shovels, hit them with the knowledge you gain from reading all of Aurélien’s posts instead :wink: )

Rant aside, your camera produces data and, we, the Humans®, need to give meaning to these data. That is, we need to match camera garbage to human colors.

Just for fun, the above image has an average delta E of 33.56 and a max of 53.32. That’s how natural your sensor is.

Let’s apply the usual white balance

As an experiment, just enable the usual white balance of darktable (called “white balance”) and sample the grey on the 18% grey patch. We still don’t apply any input color profile (or, rather, we apply an identity color profile, which is actually just a pass-through):

That’s a half victory, since greys are actually grey. No more green cast. Buuuut? What happened to the colors? Well, remember me saying cameras record garbage and not colors? That’s it.
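For the curious, this kind of white balance boils down to three multipliers, one per channel, chosen so that the sampled grey comes out neutral. A minimal numpy sketch, with made-up sensor values:

```python
import numpy as np

# Hypothetical camera RGB sampled on the 18% grey patch
# (green-heavy, as raw sensor data usually is)
grey_patch = np.array([0.09, 0.18, 0.11])

# One multiplier per channel so that R = G = B on the grey patch
coeffs = grey_patch[1] / grey_patch   # normalize against green

image = np.random.rand(4, 4, 3)       # stand-in for the demosaiced raw
balanced = image * coeffs             # greys are fixed; other colors only roughly
```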

The average delta E dropped to 12.61 and the max to 38.05.

Let’s apply a profile

Since sensor data are a non-meaningful kind of RGB (R, G, and B are really just conventional names here; we could just as well call them A, B, C, or garbage 1, garbage 2, garbage 3 – I think the point about sensor “colors” has been made above already), we need to remap them to human colors. That’s the point of profiling.

So let’s apply the standard input matrix, taken from Adobe DNG Converter:

Now, we get an average delta E of 2.56 and a max of 6.89. Better, but still not great. Yes, even with a color profile, we still don’t have the real colors, and the average error is still above the Just Noticeable Difference.
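In code, applying such an input profile is just a 3×3 matrix multiplication per pixel. A sketch with a made-up matrix (the real coefficients, like Adobe’s, are fitted per camera model):

```python
import numpy as np

# Hypothetical camera RGB -> XYZ matrix; actual coefficients
# depend on the camera model
camera_to_xyz = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.1, 0.9],
])

balanced = np.random.rand(4, 4, 3)    # white-balanced camera RGB
xyz = balanced @ camera_to_xyz.T      # remap sensor "garbage" to human colors
```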

Let’s refine that white balance

Remember I said that white balance is applied in the sensor RGB space, which makes no sense for us whatsoever? So it can help us achieve neutral greys, but the rest of the color range is not properly corrected.

The color calibration module does the same kind of white balancing, but in a special RGB space designed to mimic the retina cone cells’ response to light.

So, we set the old white balance to “camera neutral (D65)”, which is a flat correction linked to the input profile but independent from our current scene, and we do the white balancing in CAT16 through color calibration instead (just sample the middle-grey patch or set empirically using the visual feedback):

We now get an average delta E of 2.13 and a max of 6.63. The average is now below the JND, how great is that?
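To make that “special RGB space” concrete, here is a sketch of a von Kries-style chromatic adaptation using the CAT16 matrix from the CAM16 paper. I believe these coefficients are the published ones, but treat the whole thing as an illustration of the principle, not as darktable’s actual implementation:

```python
import numpy as np

# CAT16 matrix: XYZ -> cone-like LMS space (Li et al., CAM16)
M16 = np.array([
    [ 0.401288, 0.650173, -0.051461],
    [-0.250268, 1.204414,  0.045854],
    [-0.002079, 0.048952,  0.953127],
])

def cat16_adapt(xyz, white_src, white_dst):
    # Scale the cone responses by the ratio of the destination white
    # to the source white, then go back to XYZ
    gain = (white_dst @ M16.T) / (white_src @ M16.T)
    return ((xyz @ M16.T) * gain) @ np.linalg.inv(M16).T

# Hypothetical white points (XYZ): sampled scene white vs. D65
scene_white = np.array([0.92, 1.00, 0.85])
d65_white   = np.array([0.95047, 1.00, 1.08883])
adapted = cat16_adapt(np.random.rand(4, 3), scene_white, d65_white)
```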

What about refining the profile too ?

Adobe DNG matrices, used as input profiles, tend to be low-saturation, which is clever to avoid pushing colors out of gamut, but they lack vividness.

Since we have a reference target here, we can use it to compute a correction of the input profile, with an extension of color calibration that was coded in early December 2020. It is not shipped with darktable 3.4.0 but will be at some point in a future release (for those compiling darktable master, it got merged today):

The internal solver computes the matrix (shown in the Profile data section) that minimizes the delta E, and can directly set it in the R, G, and B tabs of color calibration. As seen on the screen, the average delta E drops to 2.04 but the max rises to 8.58, which is the kind of trade-off we deal with anyway… Here is the result:
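If you wonder what “the solver computes the matrix” means in practice, here is a toy version. darktable minimizes the delta E 2000, which needs an iterative optimizer; the sketch below uses plain linear least squares on the patch values instead, just to show the principle (the data is a random stand-in):

```python
import numpy as np

measured  = np.random.rand(24, 3)   # patch colors sampled on the chart
reference = np.random.rand(24, 3)   # ground-truth patch values

# Fit a 3x3 correction matrix M so that measured @ M ~ reference.
# Loosely, the columns of M are what ends up in the R, G, B tabs.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

corrected = measured @ M
residual = np.linalg.norm(corrected - reference, axis=1).mean()
```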

Since the blues are always going to be our worst challenge, there is an option to force the solver to optimize for “sky and water colors”, so the fit will be more accurate in that region:

Also, the patches that are not crossed in the GUI overlay are the ones below the JND after calibration, so they are the most accurate. The ones crossed with one diagonal are between once and twice the JND, so they are so-so. The patches crossed with two diagonals are above twice the JND, so they are completely off. This feedback will help you check which colors are in and out, so you can optimize the fitting for the colors that matter most to your scene.
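In pseudo-code, the overlay marking described above is just a threshold test against the JND (assuming 2.3, as mentioned earlier):

```python
def patch_marking(delta_e, jnd=2.3):
    # Reproduces the GUI overlay logic described above (sketch)
    if delta_e < jnd:
        return "no cross"        # below JND: accurate
    elif delta_e < 2 * jnd:
        return "one diagonal"    # between 1 and 2 JND: so-so
    else:
        return "two diagonals"   # above 2 JND: completely off
```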

The computed values from the color-matching fit will then be input in the R, G, and B tabs for you:

Such profiles, computed from a color checker, are fairly reusable for similar illuminants, so you can put them in presets and only tweak the white balance later if needed (or just set it to “as shot in camera” in the preset). But you will need to redo the profiles for each scene, under each particular illuminant, if you want maximum accuracy, given that the fits are never perfect anyway.

Further notes

This whole topic is about getting a systematic workflow, as independent as possible from the peculiarities of each scene, assuming you don’t carry your own color checker everywhere with you (if you even own one). So it’s about a trade-off between reproducibility and accuracy. This way, if you create your own profiles, say one for tungsten light, one for D65 and one for your fixed studio setup, you can reuse them for other pictures.

As of darktable 3.4.0, the profile extraction from the color checker is not yet available, but the point of showing it here (besides making you drool) is to prove that the R, G, B tabs from the channel mixer are indeed just a color profile matrix in disguise, so these coeffs can be used to correctively or artistically adjust the input profile color-matching depending on situations and shots. Because standard input matrices are far from perfect (yet not too bad either). Because it’s nearly impossible to get a good fit for both neutral and saturated colors, at least now you have a choice as to where you want to invest the maximum accuracy.

Conclusion

We have shown that cameras don’t actually record colors, but arbitrary data that need heavy corrections to remotely look like colors after processing. Even with corrections, colors still don’t match 1:1 the reality and some trade-offs have to be made. And trade-offs imply that a human, be it a user or an engineer, somewhere, had to make an arbitrary choice to set the trade-off depending on contextual priorities.

That one might be hard to swallow for people coming from the user-friendly/intuitive photo-software world (Lightroom, Capture One and the likes), because said software puts a lot of effort into hiding all that from users. As a result, users are completely mistaken about their tools and what they actually do. No, cameras don’t record reality, or colors, or anything natural for that matter.

We presented here an example with fairly shitty strobes where color calibration helped reduce the average delta E by 20%. Things will be a bit different with gorgeous natural daylight, especially the average delta E. Here is an example under a clear winter sky:

Average delta E dropped to 1.91 and the max to 5.66. The old white balance with only the standard input profile yields an average delta E of 2.16 and a max of 6. The average delta E is thus reduced by 12%.

I guess the whole question is: is 12 to 20% extra color precision worth a whole new complex module?

The answer is yours:

But remember, the highest precision bonus (20%) was achieved in the worst lighting conditions, and that’s usually when you need it most. Make of that what you will.

Bonus: But I don’t care if output colors look like the scene

That one was thrown at me by my wife. And I was like “me neither, but that’s beside the point”. I know I pass as a hardcore color-science geek, but the actual point is not to get a 1:1 match between scene and output.

The fact is, in your color massaging pipeline, lots of things rely on perceptual definitions of things like chroma, saturation and luminance. Each of them is a combination of RGB channels.

If you convert non-calibrated RGB to any luminance/chroma/hue space (YSL, HSL, HSV, Yxy, Yuv, LCh, Luv and whatnot), then your luminance axis is not achromatic, your chroma is not uniformly spread around the achromatic locus, and hues are actually not hues but random angles around a tilted luminance axis. Basically, hue/luminance/etc. no longer mean what you think they mean if you haven’t properly matched your RGB against the retina cone response.
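You can see the breakage numerically. Take a patch that is grey in the scene, keep the sensor’s green cast, and naively push it through a standard RGB → Lab conversion: its chroma, which should be 0 for a true grey, is way off. A sketch with made-up sensor values:

```python
import numpy as np

# Standard linear sRGB -> XYZ (D65) matrix and D65 white point
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
D65 = np.array([0.95047, 1.0, 1.08883])

def xyz_to_lab(xyz):
    t = xyz / D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,          # L
                     500 * (f[0] - f[1]),      # a
                     200 * (f[1] - f[2])])     # b

# A scene grey as the uncalibrated sensor records it (green cast)
camera_grey = np.array([0.09, 0.18, 0.11])

lab = xyz_to_lab(camera_grey @ RGB_TO_XYZ.T)
chroma = np.hypot(lab[1], lab[2])
print(chroma)   # ~20, far from 0: the "grey" sits off the achromatic axis
```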

So, perceptual models break if the calibration fails. Getting a proper calibration is not just about getting high-fidelity colors, it’s about making the whole toolset work as smoothly as possible. Unless you only work in RGB and stay far away from anything that decouples luminance from chromaticity, which is kind of hard in modern apps. Notice that gamut mapping also operates in perceptual frameworks, by trying to preserve hue and luminance while compressing chroma until colors fit in the destination gamut.

If you want to share styles between pictures, or copy-paste editing history, your only option is to do a clean corrective calibration early in the pipe, contextual to each picture, then apply your batch artistic grading later in the pipe. Trying to share styles between pictures that have not first been “perfectly” normalized to the same ground truth is going to give you a lot of work to match the final looks.

So calibration is not really/only about output look, it’s more about finding the path of least pain to operate the color tools later. Then, you can go crazy in color balance all you want.


Does this mean that I could generate this R,G,B data for my color checker in darktable master and then key those coefficients into darktable 3.4 and save a preset? I don’t use master for my day-to-day editing but would love to be able to use this sooner if possible.


Yes, absolutely. These are just values that go into the same parameters; the difference is that in 3.4.0 you need to eyeball them, while in master you can automatically optimize them if you have a chart.


The delta E differences between CAT16 and the old white balance may be subtle, but to my eye, the subtleties make a big difference in quality. I love the new kitty, you are doing amazing work.


What are the current choices for a color target in master? I don’t have master yet, but want to get a new target. My Macbeth color checker is 2500km away for the duration of covid.

Spyder Checker 24 and 48, X-Rite Passport/Classic Checker 24, pre- and post-2014.

Thanks! That helps a lot with my decision.

My full size Macbeth is 1978 vintage and my ColorChecker Passport is several years old. I’ve been considering an X-rite ColorChecker Video Passport, but if it’s not supported, I’ll go with something else.

Brilliant work and brilliant article @anon41087856


@anon41087856 Awesome tool, Aurelien… nice job on the grid layout and fitting, it looks really nice and works smoothly, and snapping the mouse pointer to the corners is a nice touch. I have one question to be sure that I understand. I usually don’t use styles much, but I do edits and then copy the history stack to similar photos in a group. So if I was on a hike, for example, and stopped and took a few photos, I would edit one, set WB etc., and then as a first pass copy and paste that to the rest of the photos. I assume that what you are saying is that the first shot could be with a color checker, and then I could edit that and proceed as I usually have done, where I assumed the WB was the same as my initial edit. Now if I walk down the trail and come to another spot, maybe the lighting is different, so really I would need to repeat the process again, and no preset created earlier, i.e. for foliage etc., would be of much use, correct? Or would there still be some merit to shooting a chart in good D65 light, saving the coefficients for each of the optimizations that you provide, and then using them down the road on a scene with, say, green foliage where I did not have the color checker? I guess the short version of my question is: if I don’t have my checker for a series of shots, am I actually just as well off using CAT16 in the color calibration module rather than a preset optimized for foliage from a previous shot with the color checker? I know this is a bit of a ramble, just trying to follow some of your comments about styles etc. in the last few paragraphs to gauge how widely usable a preset or profile created that way would be… Thanks, this is really nice…

@anon41087856 Aurelien, I am on Windows and playing with the color chart. I have noticed that if you activate the color picker after you have set the grid, it disappears and can only be retrieved by toggling the color chart arrow. When it comes back, it has lost its coordinates and must be repositioned… I will see if anyone else sees this, and if so I will submit a report…

Yup.

Well, I wouldn’t go crazy color geek for pictures shot in such conditions, because landscapes can have different illuminants along the depth of field. The parts far away will likely get sky/sun light, whereas the closest parts may get a lot of colored light bounced from nearby foliage and such. So if you calibrate for daylight + green bounced light (because, well, you can’t hold your color checker in the distance), you will make the far horizon completely off.

Just go with a generic daylight profile at the current color temperature, get a proper average look, then maybe mask zones where illuminants don’t match the average. Also, from experience, hiking with a crazy photographer is already a PITA for those going with you, so maybe keep the workflow light and fast.

This is the intended behaviour, but I will change it because I find it annoying too.


Thanks for your comments… insightful as always. As for the picker, I was just trying to do some patch checking with the different optimizations to compare when I noticed this. I suspect others would try something similar.

Please don’t follow this request!


You might provide the Lab values for the 24-patch grid – since X-Rite keeps these values constant, it’s just copy-and-adapt to support it.
Template: Color calibration : add profiling from color checker by aurelienpierre · Pull Request #7293 · darktable-org/darktable · GitHub

Great article… now it is clearer to me what color calibration does.

If I read it well, color calibration lets you adjust the linear matrix color transformation to produce the most accurate result around a target color.

I usually don’t care too much about precise color reproduction; for me it is usually enough to get natural colors.
But it seems you can get benefits from color calibration even if you don’t need precise colors, as Aurélien explains.

I suppose you can get more precise results if you use ICC profiles for your editing instead of just linear matrices.

Now I have to understand the difference between CAT16 and Bradford, and how those of us who don’t have a color checker can benefit from color calibration.

Usually there are canned input color profiles for each camera model that work more or less well under normal light conditions.

Is darktable going to move the color profiling from the input color profiles to the color calibration module?
Is it going to distribute “canned” profiles provided by its users with color checkers?

Just to clarify: Color Calibration is a “normal” color matrix in a fancy color space, right? I.e., Color Calibration can do things White Balance on its own cannot do, because WB applies its color matrix in a less fancy color space? Right?

Not exactly. WB just maps a single color to grey (adjusting the R, G, and B coefficients).

Color calibration (linear color calibration) adjusts the color matrix coefficients to transform colors, and the goal is to get the least possible deviation from the original target.
It can do white balance but also adjusts other colors.

At least, that is how I understand it.
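To make the difference concrete (a sketch with made-up coefficients): white balance is a diagonal matrix, three numbers that scale each channel independently, while the calibration matrix is a full 3×3 that mixes the channels into each other:

```python
import numpy as np

# White balance: diagonal matrix, 3 coefficients, no channel mixing
wb = np.diag([2.0, 1.0, 1.6])

# Color calibration: full 3x3 matrix, 9 coefficients, channels mix
calib = np.array([
    [ 1.10, -0.05, -0.05],
    [-0.08,  1.12, -0.04],
    [ 0.02, -0.10,  1.08],
])

rgb = np.array([0.2, 0.5, 0.3])
print(wb @ rgb)     # each channel scaled on its own
print(calib @ rgb)  # each output channel is a blend of all three inputs
```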

ICC profiles have a matrix and can also have a LUT and tone mapping, so this is not a complete replacement. By convention, ICC profiles are D50-illuminant based; however, with a color checker, this setup in color calibration could be a quick way to tweak your matrix values to better fit the scene illuminant, or that is how I interpret it… Aurelien told me that CAT16 works better in mixed and difficult lighting; the module will switch to it if it can’t find a standard illuminant when set to Bradford…


When dealing with linear operators such as “white balance” or “color” matrices, there are no more or less fancy spaces, mathematically. Just like 3×4×5 is equal to 3×5×4, one can choose to apply the operators in any sequence, or collapse them all into one (3×20) – though perhaps one way is more convenient or intuitively explained than the other.

When one introduces non-linear processing (such as in some types of demosaicing, LUTs or curves) things become more complex because of the introduced non-linearities, not because of the perceived fanciness of a space.

They don’t make any difference for the fitting, only for the white balancing. I have found that CAT16 gives a lower delta E pretty much all the time. Previously, I thought that Bradford had a slight advantage for daylight (I read it somewhere), but testing shows that CAT16 is better in all the cases I have tested (daylight, tungsten and shitty CRI LED).

ICC profiles are only a container. Input profiles are always matrices if you do it the standard way, ICC or not. ICC profiles can hold LUTs too, but LUTs are a double-edged sword (theoretically more accurate than matrices, but also less forgiving of any user mistake during calibration, and not exposure-invariant – the general advice is to stay away from them).

The calibration module does not replace input profiles, it only complements them. I tried to bypass the input color profile entirely and use only the color calibration module; it gives really weird results. Having a first rough color matching helps the calibration. There are numerical issues at play here, in the solver.

Yes.

The algo is the same, only it is applied in a different space.

Color calibration’s white balance also adjusts the R, G, B coeffs, but in a special RGB space designed just for that.

Yes and no; what you say is too broad. Matrix multiplication is not commutative, so the order of application matters. Also, if you perform a least-squares fitting, you will get a better or worse fit depending on how your values are normalized and how your matrix is conditioned, so doing WB before the matrix fitting or the matrix fitting before WB does not produce the same fitting quality. Plus, we are dealing with numeric computations, so rounding errors are to be taken into account, and they make things slightly less nice than linear algebra theory.
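A few lines of numpy illustrate the order-of-application point:

```python
import numpy as np

wb = np.diag([2.0, 1.0, 1.6])       # white balance (diagonal matrix)
M = np.random.rand(3, 3)            # some fitted profile matrix

print(np.allclose(wb @ M, M @ wb))  # False in general: order matters
```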