Quick math questions

There is a healthy dose of probability involved, so things aren't so neat at the extremes. In general, it is advisable to take images in the ranges and conditions in which the camera is supposed to perform most reliably; the rest is not guaranteed. Remember that consumer cameras are just that. No instrument can observe and record phenomena perfectly, for a number of reasons, some natural and others by design. I won’t go into the details, mostly because I forgot my education. :blush: That said, this discussion about “0” is still valuable.

The quantum nature of the raw material of general photography (light) makes it a probabilistic process by definition; our eyes expect it, so there is nothing to worry about there. The rest can be modeled fairly accurately, see the earlier link for a simple intuition. If one understands the basics, one knows what to expect and how to get the best out of one’s kit.

1 Like

I know my comment isn’t helpful on a practical level. What I am saying is that cameras are black boxes. The data isn’t untouched so to speak. But due to you all being persistent nerds, it is easy to overcome that by analyzing the output, examining the hardware and firmware, and making profiles and other adjustments. And as you say, fairly accurate is good enough.

“Quick math questions” turns into “How does noise and 0 work in digital photography”. Apparently this is still a photography forum at the core :smiley:

Detours are fun. We substituted “quick” with “light”, which is both quick and light. :rofl:

The original purpose was for us to pose questions related to math and coding, not so much theory. My earliest questions after joining the forum were concerned with unusual values (NaN, inf, zero, imaginary).

Whenever I look into changing illuminants or colorspaces of an image my head explodes, because the tutorials out there aren’t as straightforward as I’d hope. In particular, I am interested in how we arrive at these matrices and would like an easy-to-follow, step-by-step explanation so that I can make more.

From https://github.com/Beep6581/RawTherapee/blob/dev/rtengine/iccmatrices.h; e.g., how do we get to this?

constexpr double xyz_rec2020[3][3] = {
    {0.6734241,  0.1656411,  0.1251286},
    {0.2790177,  0.6753402,  0.0456377},
    { -0.0019300,  0.0299784, 0.7973330}
};

PS: once this is clarified, I or someone else could make G’MIC more capable in this area.

1 Like

I found these links
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html

https://www.ryanjuckett.com/rgb-color-space-conversion/

https://engineering.purdue.edu/~bouman/ece637/notes/pdf/ColorSpaces.pdf

https://physics.stackexchange.com/questions/487763/how-are-the-matrices-for-the-rgb-to-from-cie-xyz-conversions-generated

https://mina86.com/2019/srgb-xyz-matrix/

EDIT: these two are the clearest to me
https://www.ryanjuckett.com/rgb-color-space-conversion/

https://engineering.purdue.edu/~bouman/ece637/notes/pdf/ColorSpaces.pdf

Some links I have found useful:

Wikipedia: CIE 1931 color space

Wikipedia: sRGB

Wikipedia: White point

Wikipedia: Standard illuminant

http://www.brucelindbloom.com/ Bruce Lindbloom. The maths behind colour science.

https://ninedegreesbelow.com/ Nine Degrees Below Photography, Elle Stone.
Articles and tutorials on ICC profile color management and free/libre image editing. Includes a large collection of profiles.

A Standard Default Color Space for the Internet - sRGB, 1996. Proposed standard.

http://color.org/chardata/rgb/sRGB.pdf How to interpret the sRGB color space (specified in IEC 61966-2-1) for ICC profiles.

http://www2.units.it/ipl/students_area/imm2/files/Colore1/sRGB.pdf IEC/4WD 61966-2-1: Colour Measurement and Management in Multimedia Systems and Equipment - Part 2-1: Default RGB Colour Space - sRGB.

Nine Degrees Below Photography: In quest of well behaved working spaces, Elle Stone.

I am aware of some of these links. I guess the best way is to learn by doing. :blush: I will start an exercises thread if need be (when I have more energy).

In the meantime (if any of you want to participate):

That is the transform matrix (I think) from xyz to rec2020 with D50 illuminant. As an exercise, I want to change the illuminant to E.

1 Like

Yes.
For me, the first exercise would be to find the sRGB (D65) to XYZ and the XYZ to sRGB (D65) matrices, given the xy chromaticity coordinates.

Here is the practical exercise:
Linear Transformation Example for sRGB Space

The second exercise would be to find the Rec.2020 (D65) to XYZ matrix, given the xy chromaticity coordinates.

The next step would be to learn how to do the Bradford chromatic adaptation transform.

For Chromatic Adaptation, a useful source is Welcome to Bruce Lindbloom's Web Site

Given the (X,Y,Z) of the reference whites of the source and destination, and cone response matrix MA and its inverse MA^-1, we calculate the linear transformation matrix M.
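
Concretely, following Lindbloom's chromatic adaptation page: if (\rho_S, \gamma_S, \beta_S) = \mathbf{M_A}(X_{WS}, Y_{WS}, Z_{WS})^T and (\rho_D, \gamma_D, \beta_D) = \mathbf{M_A}(X_{WD}, Y_{WD}, Z_{WD})^T are the cone responses of the source and destination whites, then

\mathbf{M} = \mathbf{M_A}^{-1} \begin{bmatrix} \rho_D/\rho_S & 0 & 0\\ 0 & \gamma_D/\gamma_S & 0\\ 0 & 0 & \beta_D/\beta_S \end{bmatrix} \mathbf{M_A}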

Lindbloom gives values of MA and MA^-1 for three methods: XYZ scaling, Bradford and Von Kries. Other methods include Sharp, CMCCAT2000, CAT02, BS, BS-PC and Fairchild.

For example, to convert from D65 to E, using the Bradford method, Lindbloom gives M:

 1.0502616  0.0270757 -0.0232523
 0.0390650  0.9729502 -0.0092579
-0.0024047  0.0026446  0.9180873

Consider the numbers on the diagonal. Approx 1.05 will slightly increase the value in the red channel; 0.97 will slightly decrease green, and 0.918 will decrease blue. The overall transformation will make the image slightly redder.

According to Standard illuminant - Wikipedia, illuminant E is a theoretical illuminant roughly similar to D55. We can see from File:Planckian-locus.png - Wikipedia that D55 is slightly redder than D65. So this confirms that the matrix M shifts colours in the right direction.

The matrix xyz_rec2020[3][3] can be multiplied by M to give a conversion from XYZ to REC2020 with standard illuminant E.

In this case, it is from D50 to E.

Thanks for pointing out what happens to the colour. It reinforces the learning process. Indeed, E and D55 are similar. I hope to be able to hop between arbitrary illuminants.

Sorry, I misread your post. For D50 to E, Lindbloom gives M as:

 1.0025535  0.0036238  0.0359837
 0.0096914  0.9819125  0.0105947
 0.0089181 -0.0160789  1.2208770

So this shifts slightly to the blue.

Lindbloom has calculated M for our convenience. For programmatic calculation, we don’t want to store all those values for M. We do need to store the white points of whatever illuminants we care about, e.g. from Standard illuminant - Wikipedia, and we need the cone response matrices for whatever methods we want to use.
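
As a rough sketch of that calculation (Python/numpy; the white point values and the Bradford cone response matrix are taken from Lindbloom's tables):

import numpy as np

# Bradford cone response matrix M_A, from Bruce Lindbloom's chromatic adaptation page
M_A = np.array([[ 0.8951,  0.2664, -0.1614],
                [-0.7502,  1.7135,  0.0367],
                [ 0.0389, -0.0685,  1.0296]])

# Reference whites as (X, Y, Z), with Y normalised to 1
WHITE = {"D50": np.array([0.96422, 1.0, 0.82521]),
         "D65": np.array([0.95047, 1.0, 1.08883]),
         "E":   np.array([1.0, 1.0, 1.0])}

def adaptation_matrix(src, dst):
    # Cone responses of the source and destination whites
    cone_src = M_A @ WHITE[src]
    cone_dst = M_A @ WHITE[dst]
    # M = M_A^-1 * diag(cone_dst / cone_src) * M_A
    return np.linalg.inv(M_A) @ np.diag(cone_dst / cone_src) @ M_A

print(np.round(adaptation_matrix("D65", "E"), 7))  # should match the D65 to E matrix quoted earlier
print(np.round(adaptation_matrix("D50", "E"), 7))  # and the D50 to E matrix above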

Whenever I see a 3x3 colour transformation matrix, I ask myself: Are the diagonals approx 1.0, and the other values approx 0.0? If so, then it will cause a subtle colour change. Then, which diagonals are above 1.0 and which are below 1.0? This gives the colour shift.

And we can readily see the visual effect of the transformation (Windows BAT syntax):

rem From D50 to E.

set SMAT=^
1.0025535,0.0036238,0.0359837,^
0.0096914,0.9819125,0.0105947,^
0.0089181,-0.0160789,1.2208770

magick ^
  toes.png ^
  -color-matrix %SMAT% ^
  x.jpg

[output image: x.jpg]
The toes look cold. Brrr.

@afre Following the maths on Bruce’s site I arrive at this (fairly straightforward) method to compute the example matrix you picked out. Note that for some odd reason, the variable name in RT is not what I expected. As I understand Bruce’s site, this matrix is used to convert an RGB value in Rec.2020 to an XYZ value and not the other way around. Maybe I’m wrong… :man_shrugging:

Anyway, here’s the maths. Forgive me for using slightly different notation… force of habit.

Pick your origin color space, here Rec.2020, and note its primaries and white point in xy chromaticity coordinates:

(x_R, y_R) = (0.708, 0.292)\\ (x_G, y_G) = (0.170, 0.797)\\ (x_B, y_B) = (0.131, 0.046)\\ (x_W, y_W) = (0.3127, 0.3290)

These values are taken from Wikipedia. The white point is D65 and therefore needs adaptation to D50. Adaptation is a simple matrix multiplication of (X,Y,Z) values with the respective transformation matrix \mathbf{M_A} (see Bruce’s table). For going from D65 to D50 we have,

\mathbf{M_A}=\begin{bmatrix} 1.0478112 & 0.0228866 & -0.0501270\\ 0.0295424 &0.9904844 & -0.0170491 \\ -0.0092345 & 0.0150436 & 0.7521316 \end{bmatrix}

We can now calculate Bradford-adapted (X,Y,Z) values from the chromaticity coordinates:

\begin{bmatrix} X_R'\\ Y_R'\\ Z_R' \end{bmatrix} = \mathbf{M_A} \begin{bmatrix} x_R / y_R\\ 1\\ (1 - x_R - y_R) / y_R \end{bmatrix}

Do the same for the green, blue and white coordinates.
Finally, the conversion matrix \mathbf{M} to go from RGB to XYZ is given here:

\begin{bmatrix} S_R\\ S_G\\ S_B \end{bmatrix}= \begin{bmatrix} X_R' & X_G' & X_B'\\ Y_R' & Y_G' & Y_B'\\ Z_R' & Z_G' & Z_B' \end{bmatrix}^{-1} \begin{bmatrix} X_W'\\ Y_W'\\ Z_W' \end{bmatrix}\\ \mathbf{M} = \begin{bmatrix} S_R X_R' & S_G X_G' & S_B X_B'\\ S_R Y_R' & S_G Y_G' & S_B Y_B'\\ S_R Z_R' & S_G Z_G' & S_B Z_B' \end{bmatrix}

When plugging in the above numbers, I end up with the following matrix, with all values rounded to 7 decimal places:

\mathbf{M}_\textrm{Rec. 2020 to XYZ} = \begin{bmatrix} 0.6734241 & 0.1656411 & 0.1251286\\ 0.2790177 & 0.6753402 & 0.0456377\\ -0.0019300 & 0.0299784 & 0.7973330 \end{bmatrix}

Which is identical to the values in RT’s source code.
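
For reference, here is the same derivation as a small numpy sketch of the steps above (primaries, white point and the M_A adaptation matrix as given):

import numpy as np

# Rec.2020 primaries and D65 white point (xy chromaticities, from Wikipedia)
xy = {"R": (0.708, 0.292), "G": (0.170, 0.797),
      "B": (0.131, 0.046), "W": (0.3127, 0.3290)}

# Bradford adaptation matrix M_A for D65 -> D50 (Bruce Lindbloom's table)
M_A = np.array([[ 1.0478112, 0.0228866, -0.0501270],
                [ 0.0295424, 0.9904844, -0.0170491],
                [-0.0092345, 0.0150436,  0.7521316]])

def xy_to_XYZ(x, y):
    # xy chromaticity -> (X, Y, Z) with Y = 1
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# D50-adapted primaries as columns, and the adapted white point
P = np.column_stack([M_A @ xy_to_XYZ(*xy[k]) for k in "RGB"])
W = M_A @ xy_to_XYZ(*xy["W"])

S = np.linalg.inv(P) @ W     # scaling factors S_R, S_G, S_B
M = P * S                    # scale each column: Rec.2020 (RGB) -> XYZ (D50)
print(np.round(M, 7))        # should reproduce the matrix in RT's iccmatrices.h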

1 Like

So this is how I have interpreted the Bruce Lindbloom page, RGB/XYZ Matrices.
The difference is that he starts with the XYZ value for the white point, while I start from the xy coordinates for the white point too.

rgbxyz-brucelindbloom.py (1.3 KB)

I think it should be easy to translate into the G’MIC language.

With numpy I use np.linalg.inv for matrix inversion; how would one do that in G’MIC :thinking: ?

Here, finally, is my Rec.2020 to XYZ D50.

rgb d65 to xyz d65
[[6.36958048e-01 1.44616904e-01 1.68880975e-01]
[2.62700212e-01 6.77998072e-01 5.93017165e-02]
[4.99410657e-17 2.80726930e-02 1.06098506e+00]]

xyz d65 to rgb d65
[[ 1.71665119 -0.35567078 -0.25336628]
[-0.66668435 1.61648124 0.01576855]
[ 0.01763986 -0.04277061 0.94210312]]

rgb d65 to xyz d50
[[ 0.67012458 0.16035917 0.09262633]
[ 0.29474229 0.67845094 0.01987506]
[-0.00896833 0.0437666 0.79752177]]

They are different from the RawTherapee matrices, but I’m confident that my code is correct because I could match the D65 matrices here:
https://colour.readthedocs.io/en/feature-read_the_docs/colour.models.dataset.rec_2020.html

RawTherapee’s matrices are closer to this online calculator:
http://www.russellcottrell.com/photo/matrixCalculator.htm

I believe one of us has mixed up the order of operations. In your final step you have rec2020toxyzd50=np.matmul(M,d65tod50) , so you seem to do \mathbf{M} \cdot \mathbf{M_A} while I effectively do the reverse, \mathbf{M_A} \cdot \mathbf{M}. Since matrix multiplication doesn’t commute, we get different results.
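
To make the difference concrete, a quick check (matrix values taken from the posts above):

import numpy as np

# Rec.2020 (D65) -> XYZ (D65), from the post above
M = np.array([[0.63695805, 0.14461690, 0.16888098],
              [0.26270021, 0.67799807, 0.05930172],
              [0.0,        0.02807269, 1.06098506]])

# Bradford D65 -> D50 adaptation matrix (Lindbloom)
d65tod50 = np.array([[ 1.0478112, 0.0228866, -0.0501270],
                     [ 0.0295424, 0.9904844, -0.0170491],
                     [-0.0092345, 0.0150436,  0.7521316]])

print(np.matmul(d65tod50, M))  # adapt after converting to XYZ: matches the RT matrix (up to rounding)
print(np.matmul(M, d65tod50))  # reversed order: gives something else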

2 Likes

Ah!
Thanks @Thanatomanic
Fixed
rgbxyz-brucelindbloom-2020.py (1.5 KB)

I have a challenging question (there’s probably a simple solution.)

Using pixel coordinate space: I would like to be able to generate a quadrilateral gradient that is basically a “loft of two lines”.

To expand, I’d like a solution to this:

Note that the end points do have the same colors, but as you approach the center of the collinear lines, the colors are very much different.

EDIT: Added separate channels.

How can I generate arbitrary quads without that problem in pixel-coordinate space? Basically, every point on the collinear lines should have the same color.

It should look more like this, but with the center point of the quads shifted and with C0 continuity:

EDIT: I think this will help - math - Relative position of a point within a quadrilateral - Stack Overflow
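
If it helps, here is a rough numpy sketch of one way to use that idea (recover the quad-relative (u, v) of each pixel by inverse bilinear interpolation, then blend the four corner colours with those weights). The corner ordering and function names are my own.

import numpy as np

def inv_bilinear(p, a, b, c, d):
    # Solve p = (1-u)(1-v)a + u(1-v)b + u*v*c + (1-u)v*d for (u, v).
    # a, b, c, d are 2D pixel coordinates (numpy arrays) of the corners,
    # mapping to (u,v) = (0,0), (1,0), (1,1), (0,1) respectively.
    cross = lambda s, t: s[0] * t[1] - s[1] * t[0]
    e, f, g, h = b - a, d - a, a - b + c - d, p - a
    k2, k1, k0 = cross(g, f), cross(e, f) + cross(h, g), cross(h, e)
    if np.isclose(k2, 0.0):                  # parallelogram: the equation is linear in v
        v = -k0 / k1
    else:
        disc = np.sqrt(max(k1 * k1 - 4.0 * k2 * k0, 0.0))
        v = (-k1 - disc) / (2.0 * k2)
        if not 0.0 <= v <= 1.0:              # take the root that falls inside the quad
            v = (-k1 + disc) / (2.0 * k2)
    u = (h[0] - f[0] * v) / (e[0] + g[0] * v)  # (no guard for the degenerate e.x + g.x*v == 0 case)
    return u, v

def quad_color(p, corners, colors):
    # Bilinearly blend the four corner colours (numpy RGB vectors) using the recovered (u, v)
    u, v = inv_bilinear(p, *corners)
    ca, cb, cc, cd = colors
    return (1 - u) * (1 - v) * ca + u * (1 - v) * cb + u * v * cc + (1 - u) * v * cd

With this, every point along a line of constant u (or v) interpolates the same pair of edge colours, which should avoid the discontinuity through the centre.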