Color calibration - colorfulness

I used the preset swap G and B, so adaptation is none (bypass), gamut compression is 0, clip negative RGB from gamut is off. I think I should have attached the XMP sooner, to remove any uncertainty. Here it is now:
rgb.tif.xmp (4.4 KB)
The TIFF file is in Color calibration - colorfulness - #37 by kofa

I think that's the issue. Following the color calibration module code, if adaptation is none, then in the R, G, B tabs R = X, G = Y and B = Z. Now, Y in XYZ is the luminance. If we replace the luminance by Z (~blue) in the mixer, the R letter goes to black because the Z component (and therefore Y in the mixing result) is zero for it.
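
To see why, here is a tiny standalone sketch (not darktable code; it assumes the standard linear sRGB D65 → XYZ matrix rather than whatever working profile is actually active). For pure red, Z is almost zero, so swapping the second and third channels in XYZ replaces the luminance Y with that near-zero value, and the pixel collapses towards black:

  /* Sketch only: what "swap G and B" does when the mixing happens in XYZ.
     The matrix is the usual linear sRGB (D65) -> XYZ one, just to show the
     order of magnitude; darktable's working profile may differ. */
  #include <stdio.h>

  int main(void)
  {
    const float M[3][3] = { { 0.4124f, 0.3576f, 0.1805f },   /* X row */
                            { 0.2126f, 0.7152f, 0.0722f },   /* Y row */
                            { 0.0193f, 0.1192f, 0.9505f } }; /* Z row */
    const float red[3] = { 1.f, 0.f, 0.f };                  /* pure red, linear */

    float XYZ[3] = { 0.f, 0.f, 0.f };
    for(int i = 0; i < 3; i++)
      for(int j = 0; j < 3; j++) XYZ[i] += M[i][j] * red[j];

    printf("XYZ of pure red: %.4f %.4f %.4f\n", XYZ[0], XYZ[1], XYZ[2]);

    /* "swap G and B" applied to the XYZ channels: the new Y (luminance) is the old Z */
    printf("after the swap, Y drops from %.4f to %.4f -> nearly black\n", XYZ[1], XYZ[2]);
    return 0;
  }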

Switching the adaptation to linear Bradford, the result looks much more sensible:

I wonder if the channel-swapping presets should indeed use something other than XYZ for the space.

Edit: the tooltip of the adaptation combobox says that "none" should use the pipeline working RGB instead of XYZ. Therefore I think it may be a bug here: darktable/src/iop/channelmixerrgb.c at c9e593666015b59b928b676f722446f5bc60acfa · darktable-org/darktable · GitHub
What do you think @anon41087856?


Let me summarize this discussion in my own words to see if I understand. Say you walk into a completely dark room and light a color wheel with one red, one blue, and one green light bulb, all of equal intensity. The color wheel looks like it should.

Then, you vary the intensity of, say, the red light. This actually affects all (many?) of the colors in the color wheel, not just the reds. The same is true for the green and blue light bulbs.

The RGB sliders in the Color Calibration module have the same effect as varying the intensity of the individual R, G, B lights in my hypothetical dark room.

Is this correct?

If that is the case, then the preset basic channel mixer should also be updated (it also applies no adaptation), because completely swapping channels is just a special case. But I'd argue that since all modes result in colour shift (easily demonstrated by the swap operations), none of them truly replace the legacy channel mixer.

BTW, in my original post, the starting point of instance color calibration 1 (visible in the screenshot) was also the basic channel mixer preset; so did reducing the colourfulness of G then in fact operate on the XYZ Y coordinate, i.e. luminance?

Please see my edit, it may be just a bug that choosing "adaptation = none" results in mixing happening in XYZ.


However, it is still not correct, as the blue is purple.

In another recent thread Aurelien criticizes a different gamut compression algorithm for degrading blue to purple. So if we want proper accuracy here, the blue needs to stay blue. I think the various adaptations are primarily designed for CAT white balancing, and shouldn't be used as a kind of hack to try to get the channel mixer working properly. But you may have just used that as an example to point out a flaw in the "none" (bypass) setting, not intended as a proper solution?

Oh yeah, that's a beautiful bug. Thanks! Fix incoming… Since the RGB path is used neither for WB nor for the color checker, it was the least tested.



No, you are confusing reflection and emission. RGB is an emissive model; you work in additive light. If you display a color wheel under some lighting, then your color wheel is seen in reflection, so you work in subtractive light and it's much more complicated.

No, what you describe is the RGB slope or gain in color balance: you dial the intensity of each primary color up or down independently. Channel mixing is cross-talk between channels: you define a red, green and blue boost that is equal to your settings times the original RGB intensities of each pixel. Each input channel contributes to the output boost.

Say you want to boost the red channel of pixels that are currently mostly green: you go to the R tab and set the green slider > 0. Since that adds red to green pixels, the result is actually more yellow. But it won't affect pixels that have barely any green component. That's different from the color balance R gain, which would affect all pixels proportionally to their input R channel, independently of the G one.
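
To make that concrete, here is a minimal standalone sketch (not the module code; the mixing matrix is made up) contrasting a per-channel gain with cross-talk mixing:

  /* Sketch: per-channel gain (color balance style) vs channel mixing (cross-talk).
     The 3x3 matrix is a made-up example of "green slider > 0 in the R tab". */
  #include <stdio.h>

  static void gain(const float in[3], const float g[3], float out[3])
  {
    for(int c = 0; c < 3; c++) out[c] = g[c] * in[c]; /* each channel scaled on its own */
  }

  static void mix(const float in[3], const float M[3][3], float out[3])
  {
    for(int c = 0; c < 3; c++) /* each output channel is a weighted sum of all inputs */
      out[c] = M[c][0] * in[0] + M[c][1] * in[1] + M[c][2] * in[2];
  }

  int main(void)
  {
    const float green_pixel[3] = { 0.05f, 0.80f, 0.05f };
    const float red_pixel[3]   = { 0.80f, 0.05f, 0.05f };
    const float g[3] = { 1.5f, 1.f, 1.f };          /* R gain: only scales the existing R */
    const float M[3][3] = { { 1.f, 0.4f, 0.f },     /* R out = R + 0.4 * G */
                            { 0.f, 1.f, 0.f },
                            { 0.f, 0.f, 1.f } };
    float out[3];

    gain(green_pixel, g, out);
    printf("gain, green pixel: %.2f %.2f %.2f (barely changes)\n", out[0], out[1], out[2]);
    mix(green_pixel, M, out);
    printf("mix,  green pixel: %.2f %.2f %.2f (red added -> more yellow)\n", out[0], out[1], out[2]);
    mix(red_pixel, M, out);
    printf("mix,  red pixel:   %.2f %.2f %.2f (almost untouched, little G)\n", out[0], out[1], out[2]);
    return 0;
  }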

And because RGB is not color, but light, doing this in different color spaces changes the final result, because you are mixing different primary lights (imagine that changing the RGB space changes the color of the light bulbs: the same balance doesn't give the same result).

But it's very intuitive if you think of it as a linear change of the vector basis in a 3D Euclidean space: you rotate the primary colors around. Everyone got lost over the hue diagram because it's the worst framework to think about the problem. We massage the light spectrum through its 3D decomposition, that's all we do.


Thank you, @anon41087856 and @flannelhead.
Now just one more, and I'll shut up, promise. Also in channel mixer mode, without adaptation:
Neutral:


Paler red, as expected:

Paler blue, as expected:

Paler red and blue, not as expected:

This is the issue that triggered the whole discussion, with the flower at the top, I believe.


Looks like there's a bug in this post :smiley:


I am going to have to read this 10 more times and I still don't think I will get it. If the @kofa image is pure red R, pure green G and pure blue B, I would expect only the red to be affected when you remove red. OK, so this is wrong because the changes happen in the module's colorspace and are then converted to display, maybe? I am not sure. But in any space, how can removing red (so adding cyan?) also seem to remove green to the same extent that it removed the red? In the first image presented above, the green clearly also gets as pale as the red. Why would removing "color" from blue affect pure green? I am clearly missing something. If the green is not 0,255,0 but some blend of RGB, I still don't think you would see that result. I guess I will re-read this and try to get my head around it… I am bracing for Aurelien; please be kind, I didn't use the word intuitive anywhere :wink:


After doing the whole spectral sensitivity function thing, I'm sorry to say that I understand what @anon41087856 is explaining… :laughing:

Now, disclaimer, I haven't studied the tool at all, but IMHO what the dialogue portends is a chromatic adjustment, not really a color adjustment. What you're used to is a metameric construction, where single wavelengths corresponding roughly to what we'd think of as R, G, and B (actually long, medium, and short wavelengths) are mixed at their respective intensities to produce the notion of a particular color in our heads. This tool is more about the overall spectral contributions, where both wavelength and intensity are moved around (@anon41087856, correct me if I'm wrong). That's why one sees "non-intuitive" changes…


Thanks Glen, clearly it is complicated and I have some learning to do. Nevertheless, having a module with a slider in a tab called colorfulness, with selections for R, G and B, will be a misdirection for the vast majority of DT users. If you have a module and a slider, then it should behave in a predictable way based on the naming. Maybe I will come to understand this and then it will make sense… So my simple question is: if I have an image and I remove "colorfulness" using, say, the red slider, what exactly am I doing to the image, and how does that change with the various possible CAT selections? Is that a fair question or a stupid one?


Quite fair, I think. One should be able to correlate outcome with input in a meaningful way. The notion of "meaningful" is what's vexing here, IMHO…


Thanks again. I am going to work through from the ground up to try and figure this out… if I can… no guarantee… I need to grasp this part first: Spectrum to RGB Conversion


That's the part I understand perfectly; in my work I use linear algebra every day.

What I was trying to do (and failed) was to interpret the R, G, B sliders in terms of real-world concepts. In other words, I don't understand what you mean by RGB being light (which is a real-world concept).

Thanks for the explanation: I will take the time to study this subject and keep trying to find a correlation to the "real world".

So I guess it's the spectrum <-> RGB conversion that's causing the problem.

Around us is light. Light sources emit light, and material surfaces bounce light from light sources. That light is a pack of photons having different wavelengths. Each wavelength is represented by an intensity that, once plotted, gives a spectrum shape. Example with a daylight D50 "white":

[image: spectral power distribution of D50 daylight]

Camera sensors as well as the human retina act as photon sinks. So they don't really differentiate between wavelengths; they just get excited proportionally to the number of photons they collect. The more the merrier.

To actually differentiate between wavelengths, cameras and retinas use a trick: they have not one but 3 sensors, which "sit" behind a wavelength filter. These sensors have different transmittances, meaning they let more or fewer photons go through depending on their wavelength. Here is the normalized transmittance of the average human retina's cone cells:

[image: normalized transmittance of the human cone cells]

Every RGB space is tied to the light spectrum in the same way; however, the shape of the transmittance is different for each space. To form a perception, the spectral intensity of the original light is multiplied by the transmittance of the filter and integrated along the wavelengths. So:

\begin{cases} R = \int_{380}^{750} S(\lambda) \cdot \bar{r}(\lambda) d\lambda\\ G = \int_{380}^{750} S(\lambda) \cdot \bar{g}(\lambda) d\lambda\\ B = \int_{380}^{750} S(\lambda) \cdot \bar{b}(\lambda) d\lambda \end{cases}

In the example above, you see the D50 spectrum has a hole in the blue wavelengths. However, our "blue" sensor has acute sensitivity in the same spectral range. So, in the end, these two compensate and we should get roughly R = G = B.
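
For concreteness, here is the same integration written as a discrete sum (sketch only: the spectrum and filter samples are made up, with 3 coarse wavelength bins; real sensitivity curves are tabulated every few nm over 380-750 nm):

  /* Sketch: the R/G/B integrals above as Riemann sums over sampled data. */
  #include <stdio.h>

  #define N 3 /* number of wavelength samples (toy resolution) */

  int main(void)
  {
    const float dlambda = 120.f;                 /* nm per sample */
    const float S[N]    = { 0.8f, 1.0f, 0.9f };  /* light spectrum S(lambda) */
    const float rbar[N] = { 0.0f, 0.2f, 1.0f };  /* "red" filter transmittance */
    const float gbar[N] = { 0.1f, 1.0f, 0.2f };  /* "green" filter */
    const float bbar[N] = { 1.0f, 0.1f, 0.0f };  /* "blue" filter */

    float R = 0.f, G = 0.f, B = 0.f;
    for(int i = 0; i < N; i++)
    {
      R += S[i] * rbar[i] * dlambda; /* R = sum of S(lambda) * rbar(lambda) * dlambda */
      G += S[i] * gbar[i] * dlambda;
      B += S[i] * bbar[i] * dlambda;
    }
    printf("R = %.1f  G = %.1f  B = %.1f\n", R, G, B);
    return 0;
  }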

The shit comes from the fact that:

  1. there is spectral overlap between our 3 sensors,
  2. photon binning is destructive: we lose the shape of the spectrum, and 2 different spectra can theoretically produce the same RGB intensities after integration.

For example, the "red" sensor transmittance has a bump in the blue wavelengths, which means it also captures some blue wavelengths. So neither cameras nor retina cone cells have a blue, red or green sensor, because there is no direct bidirectional link between wavelength and RGB stimulus. Sensors only capture overlapping spectral slices. Whenever you read "RGB", what you should understand is "spectral slice 1, spectral slice 2, spectral slice 3".
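
To illustrate point 2 above, here is a toy metamerism sketch, with made-up 4-sample spectra and deliberately simple, non-overlapping toy filters: two clearly different spectra integrate to exactly the same RGB triplet, so the RGB values alone cannot tell them apart.

  /* Toy metamerism demo: two different spectra, same RGB after integration.
     Filters and spectra are made up; real filters overlap, which only makes
     the loss of information less obvious, not smaller. */
  #include <stdio.h>

  #define N 4

  static void integrate(const float S[N], float rgb[3])
  {
    const float rbar[N] = { 1.f, 0.f, 0.f, 0.f };
    const float gbar[N] = { 0.f, 1.f, 0.f, 0.f };
    const float bbar[N] = { 0.f, 0.f, 1.f, 1.f }; /* the "blue" filter spans 2 bins */
    rgb[0] = rgb[1] = rgb[2] = 0.f;
    for(int i = 0; i < N; i++)
    {
      rgb[0] += S[i] * rbar[i];
      rgb[1] += S[i] * gbar[i];
      rgb[2] += S[i] * bbar[i];
    }
  }

  int main(void)
  {
    const float S1[N] = { 1.f, 1.f, 1.f, 1.f }; /* flat spectrum */
    const float S2[N] = { 1.f, 1.f, 2.f, 0.f }; /* very different shape */
    float rgb1[3], rgb2[3];
    integrate(S1, rgb1);
    integrate(S2, rgb2);
    printf("S1 -> %.0f %.0f %.0f\n", rgb1[0], rgb1[1], rgb1[2]); /* 1 1 2 */
    printf("S2 -> %.0f %.0f %.0f\n", rgb2[0], rgb2[1], rgb2[2]); /* 1 1 2 */
    return 0;
  }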

Plus, the only bidirectional wavelength <-> color link is for lasers, which are light made of a single wavelength. Natural lights are spectra, and if you want to predict their color, you have to use empirical LUTs made from psychophysics measurements, or go through color models (which actually only turn these LUTs into continuous models).

Then, as soon as you go to RGB, you integrate the spectrum and lose the spectral overlap information: all you get is 3 bins of photons tied to a filter, which at that point is merely a label put on the bin (what we call the primary color). When you change sensors, i.e. you change RGB spaces, what you actually do is change the spectral transmittance of the filter; but since you don't have the original spectrum anymore, there is no spectrum-aware way of doing it (besides spectral reconstruction with LUTs… not the topic here). So the most you can do is reweight your RGB intensities to account for the change of weight in the integral of the new RGB sensor transmittances vs. the old ones.

But… because there is overlap in the filters' transmittances, each wavelength will appear in our 3 photon bins with different statistical frequencies. So you can't just reweight R, G, B independently, à la white balance; you have to do it in 3D space as a change of vector basis, and take advantage of the overlap to drive a smooth reweighting.

And then, weird psychophysics gets convolved into the mix. For starters, the "green" sensor of the human retina (the Y channel of the CIE XYZ 1931 2° space) is actually interpreted by the brain as luminance. Also, there is some training in the brain that lets you see white where you know there should be white, even though the surface is clearly orange (just read a book in warm light…). That's 100% brain-made and light doesn't care about it, so it's one more reason not to even try fitting perceptual models into RGB.

Bottom line, TL;DR:

  1. any RGB space is a lie, in the sense that none of the R, G, B channels actually represents what we expect red, green and blue to be (hues). Each R, G, B channel is a photon bin sitting behind a spectral filter in an additive light setup. Spectral filters can have overlap or not, depending on what they represent or how they are designed.
  2. you can associate a laser wavelength with a perceived color (i.e. some hue at maximum saturation), but the color of a spectrum is much more difficult to predict, and impossible to get directly from any RGB. Even if you go to HSV, depending on your original RGB, hues will not be spaced the same and secondary colors will usually get squeezed by primaries.
  3. RGB is not color. Don't even try.
  4. channel mixing is changing our sensor filter transmittances after we have lost the spectra, so we do it indirectly in RGB, using a 3D matrix dot product.

yeah, that one is not really a bug, it's rather a limitation of the algo. You just gave me an idea for a v2, I need to test.


OK, so I'm going to make a fool of myself in public, not having done C for over 20 years, and not knowing colour science. I just don't see why green saturation, without adaptation, leaves green alone and changes red and blue.

In loop_switch, the input is copied into temp_two.
If we run with DT_ADAPTATION_RGB, we don't convert colour spaces before we perform the channel mixing from temp_two → temp_one (both RGB).
Then, RGB temp_one → XYZ temp_two, and the XYZ temp_two is copied back to temp_one.

Next, gamut_mapping goes temp_one → temp_two (XYZ).
Then temp_two (XYZ) → temp_one (RGB).

On line 707 there's a comment /* FROM HERE WE ARE IN LMS, XYZ OR PIPELINE RGB depending on user param - DATA IS IN temp_one */, but I don't think we can be in XYZ here (if adaptation is performed, we end up in LMS; in the RGB case, we end up in RGB).

Then, maybe we clip temp_one (in place).

luma_chroma processes temp_one into temp_two (in the no-adaptation case, both hold RGB data).

Then, maybe we clip temp_two (in place).

If we don't play with grey, on the RGB path we end up with temp_two (RGB) → temp_one (XYZ).

Maybe we clip temp_one (in place).

Then we convert temp_one (XYZ) → temp_two (RGB).

The final step is to copy temp_two to out, potentially with clipping.

I don't see any place in luma_chroma where the 3 channels would be handled differently. In particular, for the saturation, each channel is used the same way (along with the corresponding saturation value) to calculate coeff_ratio, and then each channel is processed the same way:

  // Compute ratios and a flat colorfulness adjustment for the whole pixel
  float coeff_ratio = 0.f;
  for(size_t c = 0; c < 3; c++)
    coeff_ratio += sqf(1.0f - output[c]) * saturation[c];
  coeff_ratio /= 3.f;

  // Adjust the RGB ratios with the pixel correction
  for(size_t c = 0; c < 3; c++)
  {
    // if the ratio was already invalid (negative), we accept the result to be invalid too
    // otherwise bright saturated blues end up solid black
    const float min_ratio = (output[c] < 0.0f) ? output[c] : 0.0f;
    const float output_inverse = 1.0f - output[c];
    output[c] = fmaxf(DT_FMA(output_inverse, coeff_ratio, output[c]), min_ratio); // output_inverse  * coeff_ratio + output
  }

There is a puzzling comment near the end, talking about LMS (in the no-adaptation case, we're actually in RGB), but even that code is applied the same way to all 3 channels:

  // Apply colorfulness adjustment channel-wise and repack with lightness to get LMS back
  norm *= fmaxf(1.f + mix / avg, 0.f);
  for(size_t c = 0; c < 3; c++) output[c] *= norm;

So why does green influence the other two channels?

Because it's not green. We massage color ratios, expressed as [r\ g\ b] = \frac{1}{\sqrt{R^2 + G^2 + B^2}} \cdot [R\ G\ B], so everything is tied together by the norm, and then the whole pixel gets the same chroma boost computed from:

  // Compute ratios and a flat colorfulness adjustment for the whole pixel
  float coeff_ratio = 0.f;
  for(size_t c = 0; c < 3; c++)
    coeff_ratio += sqf(1.0f - output[c]) * saturation[c];
  coeff_ratio /= 3.f;

in which the 3 channels contribute with a weight depending on how far they are from 1.0 (aka how far the channel is from the norm). The issue is in the algo, not in the code. Indeed, all channels are handled just the same.
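
To see the coupling numerically, here is a standalone re-implementation of just the ratio / coeff_ratio math quoted above (grey/brightness handling is left out, i.e. mix is assumed to be 0; this is a sketch for illustration, not the darktable code itself). With only the green setting non-zero, all three channels of a reddish pixel still move, because they all share the one coeff_ratio and the norm:

  /* Sketch: the ratio / coeff_ratio math from the snippets above, outside darktable. */
  #include <math.h>
  #include <stdio.h>

  static void colorfulness(const float in[3], const float saturation[3], float out[3])
  {
    /* Euclidean norm, then ratios r g b = RGB / norm */
    const float norm = sqrtf(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);
    float ratio[3];
    for(int c = 0; c < 3; c++) ratio[c] = in[c] / norm;

    /* one flat colorfulness coefficient for the whole pixel: each channel
       contributes with weight (1 - ratio)^2, i.e. the further the channel is
       below the norm, the more its saturation setting matters */
    float coeff_ratio = 0.f;
    for(int c = 0; c < 3; c++)
      coeff_ratio += (1.f - ratio[c]) * (1.f - ratio[c]) * saturation[c];
    coeff_ratio /= 3.f;

    /* apply the same coefficient to every channel, clamp as in the module code */
    for(int c = 0; c < 3; c++)
    {
      const float min_ratio = (ratio[c] < 0.f) ? ratio[c] : 0.f;
      ratio[c] = fmaxf((1.f - ratio[c]) * coeff_ratio + ratio[c], min_ratio);
    }

    /* repack with the (unchanged) norm */
    for(int c = 0; c < 3; c++) out[c] = ratio[c] * norm;
  }

  int main(void)
  {
    const float saturation[3] = { 0.f, -0.5f, 0.f }; /* only the "green" setting moved */
    const float reddish[3] = { 0.8f, 0.2f, 0.05f };
    float out[3];
    colorfulness(reddish, saturation, out);
    /* all three channels move, even though only the G setting is non-zero */
    printf("in:  %.3f %.3f %.3f\n", reddish[0], reddish[1], reddish[2]);
    printf("out: %.3f %.3f %.3f\n", out[0], out[1], out[2]);
    return 0;
  }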

But if you change the above to:

  // Compute ratios and a flat colorfulness adjustment for the whole pixel
  float coeff_ratio = 0.f;
  for(size_t c = 0; c < 3; c++)
    coeff_ratio += output[c] * saturation[c];
  coeff_ratio /= 3.f;

then it behaves more predictably, although the effect quickly overshoots (you will then notice that the R and B channels are swapped, but that's normal). But that method needs more testing before I can call it v2, because there are possible problems that could lead to some pixels being desaturated while others get resaturated.
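
For what it's worth, a quick numerical comparison of the two weightings (sketch only): for a pure green pixel (ratios 0, 1, 0) with only the green setting non-zero, the current (1 - ratio)² weighting yields a zero coefficient, i.e. no effect at all on that pixel, while the proposed ratio weighting yields a non-zero one.

  /* Sketch: compare the current and proposed coeff_ratio weightings on a
     pure green pixel, with only the green colorfulness setting non-zero. */
  #include <stdio.h>

  int main(void)
  {
    const float ratio[3] = { 0.f, 1.f, 0.f };        /* pure green, normalized */
    const float saturation[3] = { 0.f, -0.5f, 0.f }; /* only the G setting moved */

    float v1 = 0.f, v2 = 0.f;
    for(int c = 0; c < 3; c++)
    {
      v1 += (1.f - ratio[c]) * (1.f - ratio[c]) * saturation[c]; /* current weighting */
      v2 += ratio[c] * saturation[c];                            /* proposed weighting */
    }
    v1 /= 3.f;
    v2 /= 3.f;
    printf("coeff_ratio, current weighting:  %.4f\n", v1); /* 0.0000 */
    printf("coeff_ratio, proposed weighting: %.4f\n", v2); /* -0.1667 */
    return 0;
  }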