Color calibration for dummies (without the maths)

I agree


I just wanted to point this out, because in RawTherapee we (long ago) had summation errors of up to 10% for some algorithms, because we did not take care of this…
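For readers wondering what kind of error is meant: accumulating many small single-precision terms strictly left to right loses low-order bits once the running total is large, while pairwise summation at the same precision stays accurate. A minimal, hypothetical illustration (none of these numbers come from RawTherapee):

```python
import numpy as np

# one million small contributions, stored in single precision
values = np.full(1_000_000, 0.01, dtype=np.float32)

exact = values.astype(np.float64).sum()           # double-precision reference, ~10000
naive = np.cumsum(values, dtype=np.float32)[-1]   # strict left-to-right float32 accumulation
pairwise = values.sum(dtype=np.float32)           # NumPy's pairwise summation, still float32

print(exact, naive, pairwise)  # the naive float32 sum comes out noticeably low
```

Once the running total is in the thousands, each 0.01 increment is only a handful of float32 ulps and rounds short, so the sequential sum drifts by on the order of a percent; the same data summed pairwise at the same precision stays within a few ulps of the reference.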

For a wide dynamic range, additive device such as a camera, a “2.5D” LUT is probably the right approach. This means creating a table in which the per-channel luminance lookup is independent of (i.e. orthogonal to) the chrominance lookup. If your luminance linearization is done outside the colorspace conversion, then the table could be purely 2D. This has an advantage over a matrix in that it allows embedding spectral sample statistics, i.e. you can weight areas of the table for the best conversion for the likely real-world spectra associated with each region of the chromaticity plane.
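As an illustration of the idea (not darktable or RawTherapee code), here is a minimal sketch of such a lookup: a 1D per-channel linearization kept orthogonal to a 2D table indexed by chromaticity. The function name, the shared linearization curve, and the nearest-neighbour cell lookup are all simplifications for the example:

```python
import numpy as np

def lut_convert(rgb, curve_x, curve_y, chroma_lut):
    """Hypothetical '2.5D' lookup: a 1D per-channel luminance linearization,
    orthogonal to a 2D chroma lookup indexed by (r, g) chromaticity."""
    # 1D part: luminance linearization, applied per channel
    rgb = np.interp(rgb, curve_x, curve_y)
    s = rgb.sum()
    if s == 0:
        return rgb  # black: no chromaticity defined, nothing to correct
    r, g = rgb[0] / s, rgb[1] / s  # chromaticity coordinates in [0, 1]
    n = chroma_lut.shape[0]
    # nearest-neighbour cell lookup for brevity; a real table would interpolate
    i = min(int(r * (n - 1) + 0.5), n - 1)
    j = min(int(g * (n - 1) + 0.5), n - 1)
    return rgb * chroma_lut[i, j]  # per-channel gains from the 2D table

identity = np.array([0.0, 1.0])   # identity linearization curve
flat = np.ones((8, 8, 3))         # unit gains everywhere: no chroma correction
print(lut_convert(np.array([0.2, 0.5, 0.3]), identity, identity, flat))
```

The point is only the structure: the 1D (luminance) and 2D (chromaticity) lookups stay independent, and each cell of the 2D table can be weighted toward the real-world spectra most likely to land in that region.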


It’s just funny how the ‘without the maths’ entry is already at the point of discussing commutativity of matrix operations and error accumulation in limited precision floating point operations. :smiley:


So I would be happy if you could review my code, because my tests don't show that property. Also, what solver did you use?

Hello Aurelien,

My point is general, not specific to a particular piece of software (plus I am not a coder): long ago I took a raw capture of a CC24, white balanced the raw data by multiplying it with diag(WB_mult), converted to RGB by keeping the R and B channels as-is and averaging the greens of each quartet, then fed them to the optimization routine to obtain CCM(1); then I repeated the process but without the white balancing step, to obtain CCM(2). For practical purposes CCM(2) was equal to CCM(1)*diag(WB_mult), as theory suggests.

I don’t remember if I used Matlab’s fminsearch (Nelder-Mead) or lsqnonlin (trust-region-reflective) for a solver, with CIEDE2000 as the minimization criterion.
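The commutativity property described above can be sketched numerically. With random stand-in data (not real patch readings) and a plain linear least-squares fit instead of the CIEDE2000 criterion, the relation CCM(2) = CCM(1)·diag(WB_mult) holds exactly; with a perceptual error metric it holds only approximately:

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0.05, 1.0, size=(24, 3))    # stand-in for 24 raw patch readings
target = rng.uniform(0.0, 1.0, size=(24, 3))  # stand-in for reference patch values
wb = np.array([2.1, 1.0, 1.6])                # hypothetical WB multipliers

def fit_ccm(patches, target):
    # least-squares fit of a 3x3 matrix M so that patches @ M.T ~ target
    M, *_ = np.linalg.lstsq(patches, target, rcond=None)
    return M.T

ccm1 = fit_ccm(raw * wb, target)  # fit after white balancing
ccm2 = fit_ccm(raw, target)       # fit without white balancing

# the two fits differ exactly by the diagonal WB matrix folded in
print(np.allclose(ccm2, ccm1 @ np.diag(wb)))  # True
```

Scaling the input channels by diag(WB_mult) merely reparameterizes the same linear problem, so the optimum moves by exactly that diagonal factor.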

When theory meets c programming practice, practice wins :wink:


plus ça change, plus c’est la même chose (the more things change, the more they stay the same) :slight_smile:

Would it be possible to add a mixed white balance, 50% CAT and 50% native RGB?
This should give a more robust white balance.

That’s possible - color calibration allows masking.


Thanks aurelienpierre for this new feature. I used it together with a SpyderCheckr to replicate some paintings that I needed matching colours for. And thanks johnny-bit for recommending the SpyderCheckr to me.

Has anyone tried this since 4.0.0? The normalization values don’t move anymore, even if I increase the slider in the exposure module.

If you’re saying what I think you’re saying: it changed at some point, and the values given are now the values you should set for exposure and black level to be accurate. It doesn’t change like before, i.e. it’s not an offset anymore but the actual value for exposure.

There is no difference between the 3.8 manual and the 4.0 manual. Has this been mentioned on GitHub?

Yeah, I will try to dig it up. I think it is mentioned in one of AP’s videos… maybe the one called something like “getting the most out of color calibration”.

Here you go…took me a moment…

Thanks!


This is an old thread. I am curious how much of it holds true today.

  • As per AP, calibration was originally done in two steps: the old WB (white balance) module and the new color calibration module. I am trying to understand the darktable 4.4 user manual - color calibration, but as far as I can tell the WB module is no longer actively used (that is, not as a color picker); it is left on the reference setting. Am I understanding this correctly?

  • My understanding is that the “normalization values” are guidance for the exposure and black level corrections, and the user is told how to change them. After a few trials, I was able to drive the “black offset” to zero, but even with big offsets I was unable to make the “exposure compensation” zero. Is the user simply expected to enter the number shown (and not expect it to change to zero when the profile is recalculated)?

  • To get a fairly universal profile, the suggestion is to create a preset based on “as shot in camera” after applying the profile, so that the “matrix adaptation space” is applied on top of it.

On first thought I understand why this would be the case. But then it is going to take the WB as recorded by the camera, and that can vary: auto WB, or fixed (by measurement with a gray card, or by estimate). Suppose the user used auto WB: is the user then expected to further estimate the WB by measuring the scene?

Also, the universal profile is based on (good quality) natural light. Does it make a difference to the profile being created whether the test shot is taken in direct sunlight, cloudy, or overcast conditions?

What is the approach if the shot is taken under artificial light (fluorescent / energy savers etc., not photo grade)? Is the user expected to switch back to “as shot in camera”, or is that step not needed?

There are 3 icons at the bottom right of the color calibration module:
[screenshot: the three icons]
The recalculate and apply icons are self-explanatory, but what is “check output delta E” used for?

Very quick replies…

Point 1: yes, leave WB on reference if using the CC module for WB/illuminant selection.

Point 2: initially those values were offsets to add to the existing exposure etc.; this was later changed. They are now the values at which the profile is accurate, as reported, so you would enter them…

Point 2, continued: “as shot” is just whatever value the camera used for the shot and passed on to DT. Your matrix will be added to that. The profile will give varying results under any lighting other than the one the shot was taken in; the closest thing to a general-use profile is a daylight-shot profile saved to apply on top of the as-shot camera WB. It’s set to “as shot” in the preset so that it only applies the matrix to your image and the current WB, not some totally wrong value encoded at the time the preset was created…

That button, I believe, is a check so you can see where your current profile’s delta E is when you start, and then see from there where you move to…

I believe this is accurate… There is a tweak to this in recent code allowing some modules to access the WB coefficients, as that is a better set of reference values for them, and the D65 part is now handled a little differently, but that is all in the background…


My experience is that these values (the normalization values) do move.
However, the black offset can be driven to zero while the exposure compensation cannot.

Unless I am misunderstanding, and I should simply apply the values initially shown in the exposure module and then either not recalculate, or recalculate but ignore the second set of values (because recalculation makes the values move).

I am finding that it is possible to create a style based on “as shot” and then refine it further using the white card. So if the user has a style, they can apply the style and then use the white card. This can be useful, as the style can take a while to prepare.

https://www.datacolor.com/spyder/products/spyder-checkr-photo-sck310/
I was surprised to learn (in the info section) that the 18% card is intended for film exposure.

Another interesting detail is that these patches are expected to last about two years. My guess is that this matters more for someone who uses the checker commercially.

It is a pleasant surprise that the tool is listed in the module, so all the patches can be used together.
And on price: it is cheaper than X-Rite/Calibrite.