Color calibration for dummies (without the maths)

Hi,

You mean in floating-point, right? And if so, how do you take this into account exactly? I’m not sure I understand what you mean…

Floating point or otherwise. We take that into account by ensuring the right operation happens at the right place in the pipe. Which means that, if you least-square-fit a matrix at some point in the pipeline, it needs to always be applied at that same point in the pipe, or the coefficients in said matrix become wrong.

Matrix multiplication is associative over e.g. the reals, that’s why I found your statement confusing…

Hi again,

FWIW, LUT profiles generated by dcamprof (at least by default) are supposed to be exposure-invariant. I never really checked, though, to be honest.

With A and B two matrices, and * the matrix product, A * B ≠ B * A is what I meant. Double-checking it on Wikipedia, this is not associativity; seems like permutability? Anyway, words lie, trust equations.

How? If the LUT is ever so slightly non-linear, you lose exposure invariance by definition. But if your LUT is linear, just use a matrix…
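To make the point concrete, here is a small numpy sketch (my own illustration, not anyone's actual pipeline code): a 3×3 matrix commutes with an exposure change by construction, while even a tiny per-channel non-linearity breaks that. The matrix values and the 1.02 "gamma" are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.uniform(0.05, 0.8, size=(100, 3))   # linear camera RGB samples
k = 2.0                                        # exposure scaling factor

# Linear transform: a 3x3 matrix is exposure-invariant by construction.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])
lin = lambda x: x @ M.T
print(np.allclose(lin(k * rgb), k * lin(rgb)))   # True

# Ever-so-slightly non-linear per-channel "LUT" (tiny gamma): invariance lost.
lut = lambda x: x ** 1.02
print(np.allclose(lut(k * rgb), k * lut(rgb)))   # False
```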


My algebra is a bit rusty, but that is the commutative property, isn’t it?
As far as I can remember, matrix multiplication is not commutative, but it is associative, meaning (AB)C = A(BC)…
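This is easy to check numerically; a quick sketch with three random 3×3 matrices (associativity only holds up to floating-point rounding, hence `allclose` rather than exact equality):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((3, 3, 3))      # three random 3x3 matrices

# Matrix multiplication is generally NOT commutative...
print(np.allclose(A @ B, B @ A))              # False (for random matrices)

# ...but it IS associative (up to floating-point rounding).
print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True
```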

I thought ICC profiles using LUTs and applied at the input profile stage were the most accurate way to reproduce colors, since they can better approximate the non-linear behaviour of the sensor’s color pigments.

Cameras use linear matrices for the color transformation themselves when generating JPEGs, anyway.

But some people use detailed profiling under controlled light (in a studio) to build an ICC profile with the most accurate colors, and then use programs like Capture One that can work in the camera space (after applying that ICC profile) to get the most out of their camera’s color reproduction and the most accurate color results.
At least that is what some experts working in product photography recommend.

Anyway, I’m not after perfect color reproduction; I just need good natural colors.

I will use CAT16 then (but no color checker to calibrate colors).

Thank you for the clarifications.

Alright, that’s commutativity. Mystery solved :slight_smile:

Regarding LUTs, better to point at the docs:

http://rawtherapee.com/mirror/dcamprof/dcamprof.html#camera_model

Hi Aurelien, that’s because I am a broad guy, one has to read between the lines otherwise it’s too easy :slight_smile:

Pardon? They are linear matrices, one can combine them unless one introduces non-linear changes in between. But I explicitly was not referring to that case.

I happen to have done this, and in fact one gets exactly the same final result whether (1) WB is applied to the data before running the optimization routine, or (2) the optimization routine is run on the unbalanced raw data from the capture as-is. In the end, for all intents and purposes, CCM(2) = CCM(1) * diag(WB).
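A sketch of what I mean, with synthetic data (this is an illustration of the claim with a plain unconstrained least-squares fit, not my actual profiling code; the patch values, matrix, and WB multipliers are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.uniform(0.01, 1.0, size=(3, 24))   # unbalanced camera RGB, 24 patches
M_true = rng.uniform(-0.5, 1.5, size=(3, 3))
xyz = M_true @ raw                            # synthetic target XYZ values
wb = np.array([2.1, 1.0, 1.6])                # white-balance multipliers
D = np.diag(wb)

def fit_ccm(src, dst):
    # Least-squares M such that M @ src ≈ dst  (solve src.T @ M.T = dst.T).
    return np.linalg.lstsq(src.T, dst.T, rcond=None)[0].T

ccm1 = fit_ccm(D @ raw, xyz)   # (1) fit on white-balanced data
ccm2 = fit_ccm(raw, xyz)       # (2) fit on the unbalanced raw data

# CCM(2) = CCM(1) * diag(WB), as stated above.
print(np.allclose(ccm2, ccm1 @ D))   # True
```

This follows directly from the algebra of the least-squares solution: rescaling the input channels by an invertible diagonal matrix just folds that matrix into the fitted CCM.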

If you use floating point, as I assume you are, rounding errors are immaterial in this case.

Keep up the good work!
Jack

In floating point calculations it’s not guaranteed that:

  1. (a + b) + c = a + (b + c)
  2. (a * b) * c = a * (b * c)

On the other hand, though, if the numbers are of the same magnitude (e.g. normalised to [0, 1]), the error is not significant, especially in this context (i.e. it’s not something that you can perceive).

@agriggio Isn’t the normalization irrelevant in this case? The relative error should be the same…

Well, I guess it depends what one means for guaranteed (eps = ?). But wouldn’t they effectively be the same in practice?

I believe those are effectively equal if ‘a’ were the whiteBalancedData->XYZ CCM, ‘b’ were diag(WB_mult) and ‘c’ 3xN image data.

If you have numbers very far apart in magnitude, you can get cancellation effects and other funny things happening – but those do not apply here

I agree that for the case of this topic the differences may be irrelevant. But take this more general example in single-precision floating point:

17000000 + 1 + 1 + 1 + 1 = 17000000 ≠ 17000004 = 17000000 + (1 + 1 + 1 + 1)
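You can reproduce this with numpy’s float32 type: above 2²⁴ the spacing between representable single-precision values is 2, so each +1 rounds back down, while +4 registers.

```python
import numpy as np

x = np.float32(17_000_000)   # above 2**24, float32 spacing is 2 here
for _ in range(4):
    x = x + np.float32(1)    # each +1 rounds back down (round half to even)
print(x)                     # 17000000.0

y = np.float32(17_000_000) + np.float32(4)   # 4 is large enough to register
print(y)                     # 17000004.0
```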

Anyway I’m waving my hands a bit here, so take this with a grain of salt :slightly_smiling_face:
My point is that I would not expect rounding errors to be significant in this context


I agree


I just wanted to point this out because in RawTherapee we had summation errors of up to 10% for some algorithms (long ago), because we did not take care of this…
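One standard remedy for this kind of accumulated summation error is compensated (Kahan) summation, which carries the rounding error of each addition forward; a sketch (not the actual RawTherapee fix, just the general technique):

```python
def kahan_sum(values):
    """Compensated summation: track the low-order bits lost at each step."""
    total = 0.0
    comp = 0.0                     # running compensation for lost bits
    for v in values:
        y = v - comp               # re-inject previously lost error
        t = total + y
        comp = (t - total) - y     # what just got rounded away
        total = t
    return total

# 1.0 followed by tiny terms: naive summation drops every one of them.
vals = [1.0] + [1e-16] * 10
print(sum(vals))         # 1.0 (the 1e-16 terms all vanish)
print(kahan_sum(vals))   # ~1.000000000000001 (close to the true 1 + 1e-15)
```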

For a wide-dynamic-range, additive device like a camera, a “2.5D” LUT is probably the right approach. This means creating a table in which the per-channel luminance lookup is independent of (i.e. orthogonal to) the chrominance lookup. If your luminance linearization is done outside the colorspace conversion, then the table could be purely 2D. This would have an advantage over a matrix in allowing the embedding of spectral sample statistics, i.e. you can weight areas of the table for the best conversion for the likely real-world spectra associated with each region in the chromaticity plane.
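To illustrate the structure (a toy sketch of the idea only; the table layout, function names, and grid size are all invented, and a real profile would store measured corrections rather than the identity): chrominance is looked up in a 2D table over the (r, g) chromaticity plane, the luminance scale is factored out and reapplied, so the transform is exposure-invariant by construction.

```python
import numpy as np

N = 33  # grid resolution of the 2D chroma table (illustrative)

# 2D table over (r, g) chromaticity: each cell stores the corrected (r, g).
# An identity table maps every chromaticity to itself.
grid = np.linspace(0.0, 1.0, N)
table = np.stack(np.meshgrid(grid, grid, indexing="ij"), axis=-1)  # (N, N, 2)

def bilinear(table, r, g):
    # Bilinear interpolation into the (N, N, 2) chroma table.
    x, y = r * (N - 1), g * (N - 1)
    i, j = min(int(x), N - 2), min(int(y), N - 2)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * table[i, j] + fx * (1 - fy) * table[i + 1, j]
            + (1 - fx) * fy * table[i, j + 1] + fx * fy * table[i + 1, j + 1])

def apply_25d_lut(rgb, table):
    # Split luminance from chrominance, look up chroma only, recombine.
    s = rgb.sum()                    # luminance-ish scale (channel sum)
    r, g = rgb[0] / s, rgb[1] / s    # (r, g) chromaticity coordinates
    r2, g2 = bilinear(table, r, g)   # chroma correction, exposure-independent
    return np.array([r2, g2, 1.0 - r2 - g2]) * s

rgb = np.array([0.30, 0.45, 0.10])
out1 = apply_25d_lut(rgb, table)         # identity table: unchanged
out2 = apply_25d_lut(2.0 * rgb, table)   # doubled exposure
print(np.allclose(out2, 2.0 * out1))     # True: exposure-invariant by design
```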


It’s just funny how the ‘without the maths’ entry is already at the point of discussing commutativity of matrix operations and error accumulation in limited precision floating point operations. :smiley:


So I would be happy if you could review my code, because my tests don’t show that property. Also, which solver did you use?