Color calibration for dummies (without the maths)

If you have numbers very far apart in magnitude, you can get cancellation effects and other funny things happening – but those do not apply here

I agree that in the case of this topic the differences may be irrelevant. But take this more general example in single-precision floating point:

17000000 + 1 + 1 + 1 + 1 = 17000000 != 17000004 = 17000000 + (1 + 1 + 1 + 1)
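In case you want to see it for yourself, here is a minimal C snippet reproducing that (the variable names are mine, pick any compiler):

```c
#include <stdio.h>

int main(void)
{
    float left = 17000000.0f;  /* 17000000 > 2^24, so the spacing between
                                  adjacent floats here is 2, not 1 */
    left += 1.0f;              /* each +1 rounds back down to 17000000 */
    left += 1.0f;
    left += 1.0f;
    left += 1.0f;

    float right = 17000000.0f + (1.0f + 1.0f + 1.0f + 1.0f); /* +4 is representable */

    printf("left  = %.1f\n", left);
    printf("right = %.1f\n", right);
    return 0;
}
```

With IEEE 754 single precision this prints 17000000.0 for `left` and 17000004.0 for `right`.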

Anyway I’m waving my hands a bit here, so take this with a grain of salt :slightly_smiling_face:
My point is that I would not expect rounding errors to be significant in this context


I agree


I just wanted to point this out, because in RawTherapee we had summation errors in the 10% range for some algorithms (long ago) because we did not take care of this…

For a wide-dynamic-range, additive device like a camera, a “2.5D” LUT is probably the right approach. This means creating a table in which the per-channel luminance lookup is independent of (i.e. orthogonal to) the chrominance lookup. If your luminance linearization is done outside the colorspace conversion, then the table could be purely 2D. This would have an advantage over a matrix in allowing for the embedding of spectral sample statistics, i.e. you can weight areas of the table for the best conversion for the likely real-world spectra associated with each region in the chromaticity plane.
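To make that concrete, here is a rough C sketch of such a split lookup. The struct layout, table sizes, and the nearest-neighbour grid lookup are illustrative assumptions on my part, not a description of any shipping implementation:

```c
#include <stdio.h>

#define CURVE_N 256  /* 1D luminance curve resolution (arbitrary) */
#define GRID_N   33  /* 2D chromaticity grid resolution (arbitrary) */

typedef struct {
    float curve[3][CURVE_N];        /* independent per-channel luminance lookup */
    float grid[GRID_N][GRID_N][3];  /* (r, g) chromaticity -> per-channel gain */
} lut25d_t;

/* Linear interpolation into one 1D curve; input x in [0, 1]. */
static float eval_curve(const float *c, float x)
{
    float f = x * (CURVE_N - 1);
    int i = (int)f;
    if (i >= CURVE_N - 1) return c[CURVE_N - 1];
    float t = f - (float)i;
    return c[i] * (1.0f - t) + c[i + 1] * t;
}

/* Luminance first (per channel, orthogonal), then a chrominance
   correction looked up from the (r, g) chromaticity plane. */
static void apply_lut25d(const lut25d_t *lut, const float in[3], float out[3])
{
    float lin[3];
    for (int k = 0; k < 3; k++)
        lin[k] = eval_curve(lut->curve[k], in[k]);

    float sum = lin[0] + lin[1] + lin[2] + 1e-12f;
    float r = lin[0] / sum, g = lin[1] / sum;  /* chromaticity coordinates */

    /* nearest-neighbour for brevity; real code would interpolate the grid */
    int ir = (int)(r * (GRID_N - 1) + 0.5f);
    int ig = (int)(g * (GRID_N - 1) + 0.5f);

    for (int k = 0; k < 3; k++)
        out[k] = lin[k] * lut->grid[ir][ig][k];
}

int main(void)
{
    static lut25d_t lut;  /* identity LUT, just to exercise the code paths */
    for (int k = 0; k < 3; k++)
        for (int i = 0; i < CURVE_N; i++)
            lut.curve[k][i] = (float)i / (CURVE_N - 1);
    for (int i = 0; i < GRID_N; i++)
        for (int j = 0; j < GRID_N; j++)
            for (int k = 0; k < 3; k++)
                lut.grid[i][j][k] = 1.0f;

    float in[3] = {0.4f, 0.3f, 0.2f}, out[3];
    apply_lut25d(&lut, in, out);
    printf("%f %f %f\n", out[0], out[1], out[2]);
    return 0;
}
```

Because the per-channel curves are evaluated before the chromaticity lookup, retuning the luminance linearization never disturbs the chrominance table, which is exactly the orthogonality described above.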


It’s just funny how the ‘without the maths’ entry is already at the point of discussing commutativity of matrix operations and error accumulation in limited precision floating point operations. :smiley:


So I would be happy if you could review my code, because my tests don’t show that property. Also, what solver did you use?

Hello Aurelien,

My point is general, not specific to a particular piece of software (plus I am not a coder): long ago I took a raw capture of a CC24, white balanced the raw data by multiplying it with diag(WB_mult), converted to RGB by keeping the R and B channels as-is and averaging the greens of each quartet, then fed them to the optimization routine to obtain CCM(1); then I repeated the process but without the white balancing step, to obtain CCM(2). For practical purposes CCM(2) was equal to CCM(1)*diag(WB_mult), as theory suggests.
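For completeness, the identity itself is just matrix associativity. A toy numeric check in C (the matrix, multipliers, and sample values below are arbitrary illustrations, not my measured data):

```c
#include <stdio.h>

int main(void)
{
    /* arbitrary illustrative values -- not real calibration data */
    const double ccm1[3][3] = {{ 1.8, -0.6, -0.2},
                               {-0.3,  1.5, -0.2},
                               { 0.1, -0.5,  1.4}};  /* CCM fitted on WB'd data */
    const double w[3]   = {2.1, 1.0, 1.6};           /* WB_mult */
    const double raw[3] = {0.20, 0.35, 0.15};        /* one un-balanced RGB sample */

    double a[3] = {0}, b[3] = {0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            a[i] += ccm1[i][j] * (w[j] * raw[j]);    /* CCM(1) applied after WB */
            b[i] += (ccm1[i][j] * w[j]) * raw[j];    /* CCM(1)*diag(WB_mult) on raw */
        }

    printf("after WB: %f %f %f\n", a[0], a[1], a[2]);
    printf("folded  : %f %f %f\n", b[0], b[1], b[2]);
    return 0;
}
```

Up to floating-point rounding the two printouts match, which is why a solver fed un-balanced data should recover CCM(1)*diag(WB_mult).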

I don’t remember if I used Matlab’s fminsearch (Nelder-Mead) or lsqnonlin (trust-region-reflective) as the solver, with CIEDE2000 as the minimization criterion.

When theory meets C programming practice, practice wins :wink:


plus ça change, plus c’est la même chose (the more it changes, the more it stays the same) :slight_smile:

Would it be possible to add a mixed white balance, 50% CAT and 50% native RGB?
This should give a more robust white balance.

That’s possible - color calibration allows masking.


Thanks aurelienpierre for this new feature. I used it together with Spydercheckr to replicate some paintings that I needed matching colours for. And thanks johnny-bit for recommending Spydercheckr to me.

Anyone tried this since 4.0.0? Normalization values don’t move anymore, even if I increase the slider in the exposure module.

If you’re saying what I think you’re saying, it changed at some point: the values given are now the values that you should set for exposure and black level to be accurate. It doesn’t change like before, i.e. it’s not an offset anymore but the actual value for exposure.

There is no difference between the 3.8 manual and the 4.0 manual. Has this been mentioned on GitHub?

Yeah, I will try to dig it up. I think it is mentioned in one of AP’s videos… maybe the one called something like “getting the most out of color calibration”…

Here you go…took me a moment…

Thanks!
