What does linear RGB mean ?

Have we, though? What @ggbutcher did, as far as I understood it, was not done with the goal of being perceptually pleasing. He implemented a different, perhaps more accurate, way to deal with highly saturated colors (a LUT for the device-space-to-PCS conversion). As a side effect, that turned out to be more perceptually pleasing (only imho). The ‘perceptually pleasing’ part may well come from the PCS-to-display conversion, which definitely tries to be perceptually pleasing but can fail when the device-to-PCS conversion is inaccurate (or at least not precise for saturated colors).

Honest question: Did I misunderstand this?

That’s not just Xenon. Thanks and I stand corrected.

Also: That i1Pro seems to be nice!

I did not see the example, but I assume that the flowers in question were out of gamut and M. Butcher brought them in. What does that mean?

Given your moniker I trust you understand space projections. The sensor sees the spectrum arriving from the scene and projects it from, say, a 33-dimensional space down to 3 (400–720 nm every 10 nm, to the 3 RGB raw channels). The result at this stage is bounded because only positive values up to clipping are allowed; let’s call this bounded set of values the input ‘camera’ space.
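A minimal numpy sketch of that 33→3 projection, with invented Gaussian SSFs standing in for the real (much messier) filter curves:

```python
import numpy as np

# Wavelength grid: 400-720 nm in 10 nm steps -> 33 samples
wl = np.arange(400, 721, 10)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical sensor SSFs; real CFA curves are far more complex
ssf = np.stack([gauss(600, 40),    # R
                gauss(540, 40),    # G
                gauss(460, 40)])   # B  -> shape (3, 33)

# A scene spectrum, here a nearly monochromatic source at 450 nm
spectrum = gauss(450, 5)

# The 33-dim -> 3-dim projection is just a dot product per channel
raw = ssf @ spectrum
raw = np.clip(raw, 0.0, 1.0)   # only positive values up to clipping survive
print(raw)
```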

Since ‘camera’ is a linear 3-dimensional space, we can visualize it as a parallelepiped and project it accurately into any other linear 3D space by matrix multiplication, where the result will also look like a parallelepiped of a different size and shape. In floating point we can do back-to-back projections ad infinitum without losing tones, but for this discussion let’s assume that we are going straight from camera to an output colorimetric color space like sRGB. sRGB also cannot have negative values or values beyond clipping.
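In code, the projection is a single matrix multiply. The matrix below is invented purely for illustration (a real one comes from profiling the specific sensor), but it shows how a saturated ‘camera’ tone can land outside the destination cube:

```python
import numpy as np

# Made-up camera->sRGB compromise matrix, for illustration only
cam_to_srgb = np.array([[ 1.7, -0.5, -0.2],
                        [-0.3,  1.6, -0.3],
                        [ 0.0, -0.4,  1.4]])

raw = np.array([0.10, 0.05, 0.90])   # a saturated 'camera' tone
linear_srgb = cam_to_srgb @ raw      # floating point: may go below 0 or above 1
print(linear_srgb)                   # negative R and G, B above 1 here
```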

In such a case, linear matrix multiplication may produce out-of-bounds values, that is, negative or clipped values in the destination space. These tones do not exist in the landing color space because they fall outside its allowed ‘cube’; they are deemed out of gamut. What to do with them? Gamut mapping to the rescue.

One way to deal with them is simply to clip them to the nearest boundary; there are many ways to do that. Another way is to say ‘I know this color does not exist in sRGB, but the closest one, geometrically or perceptually or pleasingly, is this other one’. You can do gamut mapping with a plethora of formulas or with LUTs; the results will be least bad according to your choices (the ‘intent’). Of course such results are by definition ‘incorrect’ because they are subjective and no longer linearly related to the original tones. A few simplifications, but you get the idea.
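A toy sketch of both approaches, assuming per-channel clipping versus desaturating toward the tone’s own luminance (just one of many possible ‘intents’; real gamut mapping is far more sophisticated):

```python
import numpy as np

def clip_to_gamut(rgb):
    # Crudest mapping: clamp each channel independently (hue can shift)
    return np.clip(rgb, 0.0, 1.0)

def desaturate_to_gamut(rgb):
    # Blend toward the tone's own luminance until all channels fit;
    # assumes that luminance itself lies within [0, 1]
    y = np.dot([0.2126, 0.7152, 0.0722], rgb)   # Rec. 709 luma weights
    lo, hi = rgb.min(), rgb.max()
    t = max((0.0 - lo) / (y - lo) if lo < 0 else 0.0,
            (hi - 1.0) / (hi - y) if hi > 1 else 0.0)
    return (1.0 - t) * rgb + t * y

oog = np.array([1.3, 0.2, -0.1])    # an out-of-gamut tone
print(clip_to_gamut(oog))           # [1.0, 0.2, 0.0]   -> hue/lightness shift
print(desaturate_to_gamut(oog))     # ~[1.0, 0.27, 0.07] -> luminance preserved
```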

Jack

PS Perhaps the best way to understand gamut mapping is to play with this brilliant Marc Levoy applet.


Oh, I am sorry, it was post 33 in this very thread: What does linear RGB mean ? - #33 by ggbutcher

Everything you write I at least think I understood; so far we are on the same page. The only difference I see is that there are actually two transforms taking place: one from camera to XYZ, and then one from there to display (often sRGB). I understand that there are many ways to go from XYZ to sRGB depending on the rendering intent. My specific question (or struggle, maybe! :smiley: ) is about the camera-to-XYZ step, though. I understand how to do that with a 3x3 matrix. My question is: what errors do you get for colors that are close to monochromatic in the 33-dimensional sense, in combination with a color filter array whose response is not a simple bandpass or Gaussian but a rather complex shape?

In the results of post 33 above, a monochromatic, or close to monochromatic, source is illuminating a room (but is not the only light source) and creates a gradient on the wall. The only thing that changed between the two pics was going from a 3x3 matrix to a LUT for the device-to-XYZ conversion (I assume it’s XYZ), and suddenly the gradient is handled much better. Is the assumption of a parallelepiped in a CIE plot justified for complex CFA filter shapes? You do have three scalars now, yes, but each still multiplies a 33-dimensional vector. I have the feeling that for low saturations this is not a problem, but for high saturations, i.e. more monochromatic light sources, it becomes more and more sensitive to the actual color filter spectra.

Wasn’t this the reason why Anders Torger, in https://torger.se/anders/dcamprof.html, used an additional high-saturation target to get rid of residual errors for highly saturated colors?

I am so sorry everyone if I am derailing this topic. I am happy to take this into DMs or a separate topic.

EDIT: I’ll try to simulate a bit over the weekend. I think I know where I am wrong :smiley:
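Something along these lines, maybe: a sketch with invented Gaussian stand-ins for both the observer and a ‘complex’ CFA (real curves would come from CIE tables and measured SSFs), fitting a 3x3 matrix and checking it against a near-monochromatic source:

```python
import numpy as np

wl = np.arange(400, 721, 10)   # the 33-sample spectral grid
g = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

# Invented stand-ins: 'observer' CMFs and a messier, two-lobed CFA red channel
cmf = np.stack([0.8 * g(600, 35) + 0.2 * g(445, 20),   # x-bar (two lobes)
                g(555, 40),                            # y-bar
                1.7 * g(450, 25)])                     # z-bar
cfa = np.stack([g(610, 30) + 0.15 * g(470, 25),        # complex R filter
                g(535, 45),                            # G
                g(465, 30)])                           # B

# Fit a 3x3 camera->XYZ matrix by least squares over broadband training spectra
train = np.array([g(c, 80) for c in range(420, 701, 20)]).T   # shape (33, 15)
M = (cmf @ train) @ np.linalg.pinv(cfa @ train)

# Compare exact vs matrix-predicted XYZ for a near-monochromatic source
s = g(450, 5)
print('exact XYZ :', cmf @ s)
print('via matrix:', M @ (cfa @ s))   # residual typically grows as the source narrows
```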

Yeah, many ways to skin a cat, but to make things simple pretend that the matrix projects you directly from ‘camera’ space to sRGB. They are both 3D spaces, so it is just a reversible linear transformation if we stick with floating-point math. But sRGB is not based on floating point; it is based on capped positive integers only. So now you may have some tones that land outside the ‘cube’, thus out of gamut. How do you bring them into the fold? A LUT is one way to do it. Of course you could take a detour through XYZ and apply the LUT there, but the end effect is exactly the same.
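The ‘detour makes no difference’ part is just matrix associativity. With an invented camera→XYZ matrix and the standard linear XYZ→sRGB (D65) matrix:

```python
import numpy as np

cam_to_xyz  = np.array([[0.6, 0.3, 0.1],   # invented, for illustration only
                        [0.2, 0.7, 0.1],
                        [0.0, 0.1, 0.9]])
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],   # standard sRGB/D65
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

raw = np.array([0.2, 0.5, 0.9])
two_hops = xyz_to_srgb @ (cam_to_xyz @ raw)   # camera -> XYZ -> sRGB
one_hop  = (xyz_to_srgb @ cam_to_xyz) @ raw   # pre-composed single matrix
print(np.allclose(two_hops, one_hop))         # True, up to float rounding
```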

If your output color space is big enough and all tones fit within it, the process is entirely linear and no gamut-squeezing LUT is needed.

But, you say, the CFA’s SSFs do not look like the LMS cone fundamentals, so the best we can do while staying totally linear is a compromise color matrix, which results in some ‘errors’. And you are right. Errors compared to what? Usually compared to the 1931 CIE standard observer, etc. You can try to correct those ‘errors’ via another LUT or by other means, with the limitations mentioned earlier.

If you have looked at Levoy’s applet it is easy to see that any highly saturated color is guaranteed to be out of gamut in sRGB (and Adobe RGB). If the color is the result of a linear transformation it may also be off because of the compromise color matrix. LUTs are interpolated, usually in HSV space, so if you capture a lot of saturated colors it behooves you to have some saturated samples in your test target; otherwise you are extrapolating blind.
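A toy illustration of that last point, assuming a 1-D saturation ‘LUT’ built from target patches that stop well short of full saturation (numbers invented):

```python
import numpy as np

# Corrections measured at the target's patches
sat_in  = np.array([0.0, 0.2, 0.4, 0.6])    # most charts stop short of 1.0
sat_out = np.array([0.0, 0.21, 0.44, 0.70])

# np.interp clamps past the last sample: every tone more saturated than the
# most saturated patch gets the same correction -> extrapolating blind
print(np.interp([0.5, 0.9], sat_in, sat_out))   # [0.57, 0.70]
```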

If you are curious about the type of errors that can be expected by treating a modern CFA linearly off a simple CC24 target, take a look at this article.

Jack

To be fair, the sRGB colorspace isn’t welded to 8-bit; it just usually ends up there in JPEG. I do colorspace conversions to sRGB all the time, float → float.
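A minimal sketch of that float→float encode step (the piecewise constants are from the sRGB spec):

```python
import numpy as np

def srgb_encode(linear):
    # Linear light -> sRGB-encoded values, entirely in float (no 8-bit step)
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

print(srgb_encode([0.0, 0.18, 1.0]))   # 0.18 scene grey encodes to ~0.46
```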

BTW, the Marc Levoy pages are excellent, worth the effort to re-enable Flash…


I did, and that was not my problem; that part I understood. And it is a nice visualization, actually. It tells you so much more than words describing the different rendering intents.

I found my problem through some playing around with different ‘light sources’ and ‘CFAs’ in Octave… it’s actually a bit embarrassing. More monochromatic light does not cause the problem I had, or imagined that I had.

Oooh, thanks for that one!

I found that too, you geeks:

https://www.hdm-stuttgart.de/open-film-tools/english/publications/ProjectSummary.pdf
