I think what you’re missing from Elle’s responses is that there are multiple ‘white points’ that are used in different ways, at different stages of the calculations that generate the matrix for converting between colorspaces. Specifically, the keyword you should look at more closely is ‘adapted’.
Disclaimer: I’m not an expert on the standards, I’ve just struggled with the math and figured this out after reading way too much documentation that was way too vague. I might still be misunderstanding a lot of this, so I would honestly like some feedback from experts like Elle.
So, consider for a moment that ‘white’ is [1, 1, 1] no matter what RGB colorspace we’re in. That by itself doesn’t specify a white point per se - the white point is specified in terms of the XYZ colorspace. For example, while sRGB’s white point has xy coordinates [0.3127, 0.3290], that is just saying that the exact ‘color’ for [1, 1, 1] (or ‘white’) can be measured externally as having those xy coordinates.
ICC profiles use what’s called a ‘Profile Connection Space’ (PCS). What this is will vary, but most of the time it’s either XYZ or L*a*b* - and for ICC profiles (I guess versions 2 and 4), the white point that they use for the PCS isn’t E, but instead D50 - which is roughly equal to XYZ values [0.964, 1.000, 0.825]. This means that, to stay consistent, we have to transform whatever ‘white’ is to XYZ values such that ‘pure white’ is [0.964, 1.000, 0.825], rather than [1, 1, 1] (or, if we were using D65, roughly [0.950, 1.000, 1.089]).
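For reference, both of those XYZ triplets fall straight out of the published xy chromaticities, just scaled so that Y = 1. A tiny sketch in GLSL, since that’s what I work in (the function name is just mine):

// Turn a white point's xy chromaticity into XYZ values scaled so that Y = 1.
vec3 xyToXYZ(vec2 xy) {
    return vec3(xy.x / xy.y, 1.0, (1.0 - xy.x - xy.y) / xy.y);
}

const vec2 D65_xy = vec2(0.3127, 0.3290);  // xyToXYZ(D65_xy) ~ (0.9505, 1.0000, 1.0891)
const vec2 D50_xy = vec2(0.3457, 0.3585);  // xyToXYZ(D50_xy) ~ (0.9642, 1.0000, 0.8251)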
However, because of how human eyes work, you can’t just rescale XYZ values directly to convert between white points. Instead, you have to convert XYZ values into LMS (native colorspace for the human eye), rescale those values, then convert back into XYZ.
There is some debate about which matrix is best for converting between XYZ and LMS, and it often depends on your use case, needs, and specific setup. However, the most common one when dealing with ICC profiles is the original ‘Bradford’ chromatic adaptation matrix. I specify ‘original’ because apparently there are two versions, and ICC profiles explicitly use the original one.
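To make that concrete, here’s roughly what the adaptation step looks like with the Bradford matrix (values as commonly published; treat this as a sketch rather than gospel, and note that inverse() needs GLSL ES 3.0):

// Bradford XYZ -> LMS matrix, commonly published values.
// GLSL mat3 constructors are column-major, so this is the transpose of the
// row-major form you'll see in most references.
const mat3 bradford = mat3(
     0.8951, -0.7502,  0.0389,
     0.2664,  1.7135, -0.0685,
    -0.1614,  0.0367,  1.0296);

// von Kries-style adaptation: scale in LMS by the ratio of the two white points' LMS values.
vec3 adaptXYZ(vec3 xyz, vec3 whiteSrc, vec3 whiteDst) {
    vec3 scale = (bradford * whiteDst) / (bradford * whiteSrc);
    return inverse(bradford) * (scale * (bradford * xyz));
}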
So, here’s an overview of how this looks:
Linear sRGB → XYZ → LMS → (rescale by D50/D65 white ratio) → LMS → XYZ (PCS)
And going to another RGB space (for this example, to be displayed on a monitor with a D75 white point):
XYZ (PCS) → LMS → (rescale by D75/D50 white ratio) → LMS → XYZ → RGB
It’s important to note that in both RGB colorspaces (both sRGB and the monitor’s colorspace), the RGB value for ‘white’ remains [1, 1, 1]. If the picture is a photo of a white piece of paper with a drawing on it, any part that shows the paper will have the same RGB value in both RGB colorspaces (assuming that it’s perfectly encoded as white and not slightly off-color, nor darkened to a light gray).
That’s why one of Elle’s comments carefully noted that the ICC specs assume that your eyes are 100% adapted to the white point of your display - because they’re designed to make sure that the display’s white point is always used for the actual value of white.
Now, for the math:
orig = Original RGB value.
final = Final resulting RGB value.
toXyz = RGB to XYZ matrix for the initial (or ‘source’) RGB colorspace. Uses whatever that colorspace’s actual white point is, such as D65.
toRgb = XYZ to RGB matrix for the final (or ‘destination’) RGB colorspace. Uses whatever that colorspace’s actual white point is, such as D75.
whiteSource = Source RGB colorspace’s white point.
whiteDest = Destination RGB colorspace’s white point.
toLms = XYZ to LMS matrix, such as the Bradford or Hunt matrices.
diag() = Function to turn a 3-element vector into a diagonal matrix.
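Putting those together, the whole conversion looks something like this (my sketch of it; whiteD50 is just my name for the PCS white point’s XYZ values, roughly [0.964, 1.000, 0.825]):

final = toRgb * inverse(toLms) * diag((toLms*whiteDest) / (toLms*whiteD50)) * toLms
* inverse(toLms) * diag((toLms*whiteD50) / (toLms*whiteSource)) * toLms * toXyz * orig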
I noticed that the built-in editor had decided to line-break right at the point where colors would be in the PCS (at the time I hadn’t put spaces around the asterisks), so I decided to put an actual line break in there. I put the spaces around most of the asterisks to help show where each colorspace conversion takes place. Decided not to with the ones inside ‘diag()’, to better group those together as a single ‘conversion’.
Hope this helps! While I did find this thread while googling for how to do matrix multiplication in gmic, I saw what looked like a very recent thread from someone going through some of the same issues I did.
Now, the reason I had gotten so confused while learning all this, was because I was wanting to figure this all out so that I could specifically use absolute colorimetric conversions between colorspaces; I didn’t want to use white point adaptation. Specifically, I wanted to make one image look identical on several different monitors, and make that look identical to the original object in real life. I had all displays in the same room as the object, too.
But I had in my head the idea that ‘white balance’ was meant to help adjust colors to be more or less white, going bluish or orangeish based on color temperature. So I kept trying to use white point adaptation to do the opposite of what it was intended to do, and since none of the documentation I could find was geared toward that, it was kinda frustrating!
Had to take a step back and figure out what it did first, in the context in which it was being used - and after I figured that out it was much easier to ‘undo’ the whitepoint adaptation.
Except then I learned that my phone’s camera was doing it all wrong and was assuming D50 was the actual white point for the display. Figuring out why certain shades of green lacked that slight tint of blue while everything else looked spot on was ‘fun’, alright.
… Actually it kinda was. And the whole project was just for fun anyway; can’t seem to get a job, so may as well mess around with colorspaces instead!
Not really; if something is the same, then it is the same, and if something is different, then it is different!
This is not meant as criticism of Elle, who has been very helpful!
At the end of the day my question is very simple and can be summarised as: if I have some data that is in the sRGB colorspace, what are the parameters used to describe that data? And in addition, are those parameters any different from those I would find in an sRGB ICC profile?
It seems, at least as a de facto standard, the answer to the latter part of that question is that there is no difference. Now perhaps that was not intended, maybe it’s even a mistake… but otherwise users would likely complain if they attached an sRGB ICC profile to sRGB data and the result looked different!
Elle, I think you should have pointed me to this page…
What I did not get before is that the values of the primaries in an ICC profile have (or SHOULD have) been chromatically adapted from their absolute values. This was not intuitive, but I get it now.
So in summary: sRGB data that uses the unadapted sRGB primaries plus the D65 white point should equal (or be close enough to) the same data defined using correctly adapted primaries and the D50 white point?
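One way to sanity-check that (a sketch, using commonly published numbers): take the usual D65 sRGB-to-XYZ matrix, Bradford-adapt each primary to D50, and compare against the colorant tags an sRGB ICC profile actually contains.

// Commonly published sRGB -> XYZ matrix for the D65 white point
// (written column-major for GLSL).
const mat3 srgbToXyzD65 = mat3(
    0.4124, 0.2126, 0.0193,
    0.3576, 0.7152, 0.1192,
    0.1805, 0.0722, 0.9505);

// Bradford-adapting each column from D65 to D50 should land very close to the
// rXYZ/gXYZ/bXYZ colorant tags in an sRGB ICC profile, roughly:
//   red:   (0.4361, 0.2225, 0.0139)
//   green: (0.3851, 0.7169, 0.0971)
//   blue:  (0.1431, 0.0606, 0.7141)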
Hi @LaurenceLumi - hmm, well, I actually did mention that article, back in Comment 4, where I gave a link to an article that has a downloadable spreadsheet for calculating the sRGB ICC profile from the ICC specs and the sRGB color space specs. But I’m really glad you found that article helpful - it was a ton of work to write, and I learned a lot while writing it!
@Tynach - I really like all the experimenting you’ve been doing with various displays. That sort of stuff is the 100% best way to actually learn how ICC profile color management really works. Otherwise the constant temptation is to make a lot of unconscious assumptions that inevitably seem to end up being incorrect just when you really want something to work “as expected”.
It always makes me a bit nervous when people refer to me as an expert because everything I know about ICC profile color management was learned the hard way, one experiment at a time just like you are doing, followed by trying to figure out “why” results are as they are, whether by doing further testing or doing a lot of background reading or asking questions on forums and mailing lists, or whatever it takes. So whatever expertise I have is relative to the stuff I’ve tried to figure out. I don’t have any formal university degree in color management or anything like that.
Anyway, I do have some thoughts on your descriptions and comments for your wonderful monitor experiments, but I need to clear some other “to dos” off my list before sitting down to type something up.
I should probably mention that this is all done with GLSL code, on Shadertoy and in the Android app ‘Shader Editor’. I’ve looked at ICC profiles and compared different ones to each other, but I’ve yet to write any code that actually uses any such profile.
It’s one of those things I feel I should do, but a large part of what I’m doing right now is on my phone in Shader Editor (where I have to ‘calibrate’ the display by having a massive array containing the equivalent of a VCGT tag, multiplying all color values by 255, and using that as the array index for nabbing the final value for that channel).
Same, it’s happened a few times with me… And here I am unemployed because nobody wants to hire someone without actual job experience. Often I’ll post something that really makes sense to me and seems completely true, but I’ll still get the feeling that it might only seem true to me because I “don’t live in the real world.” And that’s often the sort of thing I’m told (though in more detail and with examples) when I give opinions on topics like tab indentation, so I have a feeling they might be right.
What I more or less meant by ‘expert’ in my own post, however, was that you’re someone who has done that testing before - and thus you have built up a fairly decently sized repository of knowledge, at least when compared to others. I hesitate to say ‘professional’ because I honestly don’t know what your actual job consists of, but given the number of articles you’ve written (and both the depth and breadth of the topics in them), I think it’s safe to say you’re an expert - at least relatively.
I should have joined this community much sooner, but I didn’t really know about it. Besides that, it was only very recently that I broke down and finally just bought myself a colorimeter, as before that I was making a lot more assumptions about the quality of my own equipment (factory-calibrated monitors are, apparently, often very badly calibrated).
Mostly so far I’ve just been posting test code on Shadertoy, and occasionally asking for things like ‘where can I find copies of various official standards?’ on Reddit… Where I didn’t get any really useful leads; the subreddit I saw over there was I think /r/colorists, and 90% of the content is people saying things like, “In this program, always set these settings to those values when doing that.”
So Wikipedia has still been my number one source for things like what chromaticity coordinates belong to the primaries of which standards, and I’ve not really had anywhere to go for asking for feedback on the actual algorithms.
As for responding later, that’s no problem! I figure that’s what forum-like websites are for - group conversations that could take anywhere from minutes to weeks between responses. Wouldn’t want to rush you!
At any rate, uh… I sorta split up writing this comment - part of it in the morning and part in the evening. I’m not really sure where I was going with some of it or if I intended to modify/add to/remove from earlier parts, so I’m sorry if it’s a little bit of a rambling mess. I’ll just post it as-is for now, as I’m not sure what else to do with it.
Hi @Tynach - my apologies for taking so long to circle back around to your very thought-provoking post - I bet you thought I forgot about this post, but nope, not at all!
Edit: with my usual stellar inability to speak clearly, when I tried to rewrite my initial sentence to make it more clear, I left out the critical part of the sentence above, which is that I bet you thought I forgot about this thread, not true!
So to try again, your post was very thought-provoking, and I didn’t forget about it, in fact have been mulling over the points you’ve made. So I just edited the original sentence above to put in the missing phrase. Sigh.
I never stopped to think about what RGB values the monitor profile might have for the color white and near-white colors - thanks for mentioning that!
I used ArgyllCMS xicclu to check several monitor profiles that I made at different times, using different algorithms, and sure enough “white” defined as Lab color (100, 0, 0) was close to or exactly (1, 1, 1) in the monitor space, using relative colorimetric intent:
xicclu -ir -pL -fif file-name-for-monitor-profile
But “how close are the channel values for white and near-white” does depend on the type of monitor profile. For my LUT monitor profiles, R, G, and B are only approximately the same for grayscale values, being very close for white and near white, and progressively farther apart as the grays get darker. On the other hand, for my profile made using “-aS”, R=G=B up and down the gray axis.
I’m guessing that "how different are the channel values for white and gray" also depends on what sort of prior calibration was done using the vcgt tag before making the monitor profile.
Your goal of making one image look identical on several different monitors, and also making the image look identical to the original object in real life, of course means that at some point you took a photograph of the real life object (I’m really good at figuring out the obvious).
Recently I took a photograph of a painting and used RawTherapee’s CIECAM02 module to make the colors on the screen match the colors in the painting:
Of course your situation - multiple monitors in the same room right along with the photographed object - might have the advantage that the entire room is evenly lit with the same color and brightness of light. In which case “what color of light” to calibrate all the monitors to might depend on the color of the ambient light. But then you’d need to consider whatever compromises might be required when calibrating any given monitor to a white point that’s too far away from its native white point.
I had been thinking about your quest to make the colors look the same on all your monitors, and thinking that the CIECAM02 modules might be a way to accomplish your goal (even without first calibrating and perhaps also profiling the monitors). Making images look the same on different display devices was @jdc’s motivation for RawTherapee’s CIECAM02 module existing in the first place.
@ggbutcher - the RawTherapee CIECAM02 module is something that might also work for displaying images on your projection screen, though it might mean making a solid white image (and perhaps also an 11-step grayscale wedge at L=100 down to L=0) using editing software, projecting that image onto your screen, taking a photograph of the projected image, and seeing what color of white the projected white actually is. There are sophisticated devices for measuring such things, but probably a photograph would get you “in the ballpark”.
@gwgill - now that @Tynach has a colorimeter and can calibrate and profile his various monitors, would this colprof switch allow him to accomplish his goal of making images look the same on all the monitors using ICC profile color management? Or (as I sort of suspect) am I missing something critical in how images are displayed on different monitors?
For input profiles, this flag forces the effective intent to be Absolute Colorimetric even when used with Relative Colorimetric intent selection in a CMM, by setting a D50 white point tag. This also has the effect of preserving the conversion of colors whiter than the white patch of the test chart without clipping them (similar to the -u flag), but does not hue correct white. This flag can be useful when an input profile is needed for using a scanner as a “poor mans” colorimeter.
My apologies in return @Elle, I was not only indecisive as far as what to say was concerned, but I also had accidentally deleted part of my code. It’s not on version control (and I’m not sure if Shader Editor uses files, or Android’s per-app SQLite database), and I had set it up so that running the code auto-saved it… So when I accidentally deleted some code portions (and then tapped ‘run’ without thinking) I had to spend some time recreating what I’d written beforehand.
I really should put it into a file on my desktop, but I’d honestly rather just… Completely rewrite it instead. It’s a mess of commented out code right now, especially since I have something along the lines of half a dozen sets of chromaticity coordinates specifying my phone’s display colorspace, all but one commented out.
Also, I… Honestly don’t know what to say to you, of all people, calling my post thought-provoking. All I had intended to do was explain white point adaptation, and what it meant for the math behind colorspace conversions. I saw what looked like either a misunderstanding or some missing information, and fueled mostly by feelings of, “Hey, I had to figure this out recently, here’s a chance to ramble about it,” I typed up the post I had.
Then feelings of, “Wait what if I don’t actually understand this as well as I think I do?” kicked in and I put that disclaimer in. After all, I’m literally just some guy who still lives with his parents who has way too much free time. And since I’ve yet to 100% accurately reproduce all lighting scenarios with one set of options plugged into my phone, I’m honestly fairly sure there are things I’m definitely getting wrong.
… Aanyway, on to the interesting stuff.
It’s good to know that it checks out, but I was kinda talking abstractly, in the sort of, “If a program were told to ‘display white’ on the screen, what values would it apply to the color of the pixels?” kind of way.
I only meant to describe things like, if your program is dumb then white is just gonna be ‘set all channels to full’. And that extends to if your program is smart but presents itself as dumb (meaning it’ll convert RGB colorspaces, but for the converted values it’ll still state that white is all channels being full).
Most likely. This is how I have my desktop monitor set up, and apparently it’s good enough that instead of a LUT for conversion, Chrome at least just uses a math function as the transfer characteristics. Not that Chrome is the best when it comes to color management, but going to the bottom of chrome://gpu lets me see exactly what Chrome uses for color correction.
If I use a profile that was generated without a VCGT tag, it actually says that it’s using a LUT (and I’ve noticed some serious banding and general low accuracy when that’s the case, though only in Chrome). But currently it instead has this as the ‘Color space information’:
Actually, not quite! Shader Editor lets me use the camera as a texture source, so I’m taking already-processed RGB data and having to undo that processing as best as possible, then re-do it how I want it redone. In real-time, so it’s a good thing it’s with GLSL!
Sadly, I don’t have full control over the camera hardware with Shader Editor, but my phone does fully support Android’s newer Camera2 API. This means I can get all the necessary information about my camera hardware that I need to essentially return the RGB values into, as close as possible, the original RAW values. I used Camera2 Test to extract the data (and of course transposed the matrices for use in GLSL).
… Except for what white balance is currently in use. I have to deal with the white balance constantly changing as I aim the phone at different items, so I’ve had to use either whatever whitest item is nearby, or just set items on some paper, or just… Look at things that are on my cluttered nightstand, which has several pieces of paper on it.
Instead, to calculate the camera matrix, I had to dig through the DNG file format specification (and I’m having to assume that my phone’s camera goes through the same process as outlined in said DNG specifications) to figure out how to turn chromaticity coordinates for light sources (measured with my colorimeter) into the XYZ-to-RAW matrix. In the DNG spec, the relevant section is chapter 6 (Mapping Camera Color Space to CIE XYZ Space).
It is not a simple task, and I don’t think RawTherapee gets it quite right in the more complex case where you don’t have chromaticity coordinates and instead start off with just whatever RAW value is used for D50 white (AKA the ‘AsShotNeutral’ tag), which is the case when I use camera apps that let me capture RAW data.
There are apparently two camera matrices, and I have to calculate the CCT of the light source; if it’s between the CCT for Standard Illuminant A and the CCT for Standard Illuminant D65, I have to determine where it falls between those two and use that to linearly interpolate between the two camera matrices.
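As best I can tell from the spec, the interpolation weight is based on inverse CCT, with Illuminant A at about 2856 K and D65 at about 6504 K. Here’s a sketch of how I approach it; colorMatrixA and colorMatrixD65 are placeholders for the two ColorMatrix values pulled from the camera metadata, and the CCT estimate uses McCamy’s approximation:

// McCamy's approximation for CCT from xy chromaticity.
float cctFromXy(vec2 xy) {
    float n = (xy.x - 0.3320) / (0.1858 - xy.y);
    return ((449.0 * n + 3525.0) * n + 6823.3) * n + 5520.33;
}

// DNG-style dual-illuminant interpolation, done linearly in inverse-CCT space.
mat3 interpolateColorMatrix(mat3 colorMatrixA, mat3 colorMatrixD65, float cct) {
    const float cctA   = 2856.0;
    const float cctD65 = 6504.0;
    float c = clamp(cct, cctA, cctD65);
    float w = (1.0 / c - 1.0 / cctA) / (1.0 / cctD65 - 1.0 / cctA);
    return colorMatrixA * (1.0 - w) + colorMatrixD65 * w;
}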
I don’t know for sure if I do that correctly in my code, especially since it’s that part that I accidentally deleted. I’ve spent a few days (mostly the last 3 days, but also various days over the last few months) testing and tweaking it though, and think it’s correct now.
At any rate, I’ve made some minor changes to my code to clear things up (I originally had used ‘LMS’ to refer to both ‘human eye’ space and camera RAW space), so I’ve done some regex search/replaces to turn a few instances of the word lms into raw. I’ll go ahead and just attach the code as-is for you to pull your hair out over… It’s commented, but all of the comments are ‘for me’ and not for anyone else, including the “don’t take this section seriously” warning.
I do think I did a decent job at variable naming overall, but I’ve had numerous times where I wanted to name multiple things the same thing in the same scope, so… Well, I had to mix and match how names were organized/formatted a few times. And sometimes decisions like that carry over even when half the variables get removed, rewritten, or commented out anyway, which… Is why I want to rewrite it at some point. Make it much cleaner.
I wish. Light bleeds in through my bedroom window, and the ceiling light that’s in here has a really low CCT. Something around 3500K if I remember correctly (colorimeter readings vary between 3100K and 3700K).
The only light I have actual control over is my nightstand lamp, and for that I fairly recently bought an LED bulb that’s rated as having a 90+ CRI. Its CCT is actually at exactly 5000K, but exact chromaticity coordinates do seem to still vary a little… But since I can hold my colorimeter right up to the bulb, they’re much more consistent.
So at night, with my monitors turned off and with items on my cluttered nightstand, with only the nightstand lamp turned on and the ceiling light turned off… Then I can have full control over the color of the light.
I suppose I also have decent color readings for the CFL bulb that was in that lamp before I put the LED bulb in there, and that bulb now helps light up the bathroom (with other bulbs from the same box, even). Since there are no windows in the bathroom, that means I can also test the code on things that are known to be white, like the bath tub and toilet. The sink is kinda an off-white, I’ve noticed.
I don’t even try to calibrate my displays to ambient lighting. Right now I have them profiled with a VCGT tag that changes them from the native white point of the display to plain ol’ D65, and transfer characteristics that match sRGB.
At least, more or less. Chrome has some weird values in the equation it gleans from my profile, but it seems to work? Either way I usually just force Chrome to treat my displays as sRGB, ignoring system profile. Otherwise I get some noticeable (but relatively minor) banding in smooth images.
To this end, I had already settled on recreating the appropriate camera RAW matrices according to the DNG spec. CIECAM02 uses a color matrix (CAT02) designed around an LMS colorspace with ‘sharpened’ spectral sensitivities. This is less accurate, but overall works out better when it comes to human perception of chromatic adaptation. In other words, when our brain performs white balancing, it exaggerates some things while dulling others, and CIECAM02 helps model that.
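Specifically, the sharpened matrix I mean is CAT02, the chromatic adaptation matrix inside CIECAM02 (commonly published values, written column-major here because GLSL):

const mat3 cat02 = mat3(
     0.7328, -0.7036,  0.0030,
     0.4296,  1.6975,  0.0136,
    -0.1624,  0.0061,  0.9834);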
You’ll see some huge chunks of my code commented out that have some… Less informative comments - in particular, the majority of the convert() function. This was from my attempt to model and understand CIECAM02. I have an unpublished page on Shadertoy that has a working version of this, but… While I have tried to understand all the variables and how they work together, the documentation I can find on CIECAM02 is sparse. Very sparse. Wikipedia is where I got most of the equations I do use, but I have to guess half the time at what many variables actually mean.
At any rate, I’m trying to go for ‘absolute colorimetric’ types of things, and am in fact undoing a lot of white point adaptation and other things which try to model perception. As such, using a system that models perception is only useful when undoing its effects, and since the code (or maybe its hardware) that performs those operations to begin with resides in the camera module itself, I doubt it’s as complex as CIECAM02.
But! Sometimes I do want to adapt things to my phone display’s white point - and when I do, I basically uncomment the end of toRgb’s declaration (line 814), and comment out part of transMat’s declaration (line 821):
And here’s more inconsistent naming. aTo is the matrix applied to colors just before the absolute XYZ→RGB matrix is applied (for converting to the final RGB colorspace). It’s basically ‘adaptation matrix for the colorspace being converted To’. And it’s defined using what I label as the ‘output CAM matrix’ (CAM meaning Color Appearance Model), which is the XYZ→LMS matrix I have chosen to use.
I could use CIECAM02’s CAT02 matrix, but again it’s spectrally sharpened (and I like having overly extreme accuracy). Instead I decided to calculate such a matrix from data and calculations given by the Colour & Vision Research Laboratory (CVRL). Links are in the source code just above my declaration for primariesLms (lines 548 and 549 contain the links).
The results are very close to those produced if I were to use the Hunt-Pointer-Estevez LMS matrix, which is regarded as the ‘more accurate’ of the professionally produced LMS matrices (and is commonly used in color blindness simulation and research). However, the results of my CVRL-inspired matrix seem to be slightly ‘sharpened’, placing them somewhere between the Hunt and CAT02 matrices (though closer to Hunt).
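For comparison, the Hunt-Pointer-Estevez matrix (the D65-normalized form that’s usually published, again column-major for GLSL); my CVRL-derived matrix ends up with values somewhere between this one and CAT02:

const mat3 huntPointerEstevez = mat3(
     0.4002, -0.2263,  0.0,
     0.7076,  1.1653,  0.0,
    -0.0808,  0.0457,  0.9182);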
I fully realize, of course, that if I really wanted to seriously use the proposed 2012 XYZ functions, I’d have to use a spectrophotometer and match the color temperature of my displays and light sources to new xy chromaticity coordinates calculated from said new color matching functions. I can’t even hope to afford a spectrophotometer, so that’s… Basically not happening any time soon. I trust that the CVRL is honest when they say they matched the new CMFs to the 1931 CMFs as closely as possible, so I’ve just been using 1931-tuned chromaticities. Seems to work well enough.
Wouldn’t that just make the profile badly formed, in that the white point reported isn’t the actual, true white point? I’m a bit confused by this, and this is going way outside of the scope of stuff I’ve researched… I don’t know a lot about actual color profiles and how to make heads or tails of them, I’ve mostly just researched colorspace parameters and the math to convert to/from them.
I think over the last few weeks I’ve had a lot of things I wanted to say. I’m not sure I remembered all of them.
Anyway, you might be wondering why I do the whole RGB→YUV→rescaling + offsetting values→RGB thing. That’s because of some bug in… I think either Android itself, or some framework/library for handling cameras that most apps seem to use on Android, that causes any (most?) hardware-accelerated camera view to have RGB values rescaled - as if they were using ‘limited/tv’-ranged RGB signals (values in the 16-235 range) instead of ‘full/pc’-ranged values (0-255).
Because of this, by default brights are super bright (going over 1.0) and darks are super dark (going below 0.0). I simply correct for this, and thank goodness the initial scaling (which shouldn’t have taken place to begin with) must be done GPU-side, since I can indeed recover those negative and over-bright values (they weren’t chopped off). I’ve also noticed that just rescaling the RGB values back to full range causes some items to appear more dull than they should be, so I do the scaling in YUV instead.
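For anyone curious, the correction itself boils down to something like this. It’s a sketch: I’m assuming BT.601-style luma weights for the YUV step, which may or may not be exactly what the Android camera path uses, and the standard 8-bit limited ranges of 16-235 for luma and 16-240 for chroma:

// Expand 'limited/tv'-range levels back to full range, rescaling luma and
// chroma separately instead of rescaling R, G, B directly.
vec3 limitedToFull(vec3 rgb) {
    float y  = dot(rgb, vec3(0.299, 0.587, 0.114));
    float cb = (rgb.b - y) * 0.564;   // 0.5 / (1.0 - 0.114)
    float cr = (rgb.r - y) * 0.713;   // 0.5 / (1.0 - 0.299)

    y  = (y - 16.0 / 255.0) * (255.0 / 219.0);  // luma: 16..235 -> full range
    cb = cb * (255.0 / 224.0);                  // chroma: 16..240 -> full range
    cr = cr * (255.0 / 224.0);

    float r = y + 1.402 * cr;
    float b = y + 1.772 * cb;
    float g = (y - 0.299 * r - 0.114 * b) / 0.587;
    return vec3(r, g, b);
}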
Besides that, uh… Hm. I think the only other obvious ‘why in the world are you doing this?’ part of the code is that I use the SMPTE 170M (same as Rec. 709) transfer characteristics for the input colorspace. If I don’t, everything still seems too dark and there’s a wide range between ‘bright’ and ‘dark’ that should be a medium gray but is too dark of a medium gray to really match what I see in person.
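In case it helps to see it spelled out, this is the Rec. 709 / SMPTE 170M curve I mean (the standard piecewise form; the decode function is the one doing the linearizing):

// Rec. 709 / SMPTE 170M transfer function and its inverse.
float rec709Encode(float lin) {
    return (lin < 0.018) ? 4.5 * lin
                         : 1.099 * pow(lin, 0.45) - 0.099;
}
float rec709Decode(float v) {
    return (v < 0.081) ? v / 4.5
                       : pow((v + 0.099) / 1.099, 1.0 / 0.45);
}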
I also attempt to ‘dither’ colors at the end to make up for so many transformations being applied to the color, to reduce banding. I have no idea if it really helps or not, nor do I know if I actually do it properly. It’s around in the same part as when I ‘calibrate’ the output picture using the 1D LUT (basically quick-and-dirty VCGT).
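For what it’s worth, the dither is just a tiny bit of per-pixel noise added before the value gets quantized down to 8 bits - something along these lines (the hash is the usual shader one-liner, nothing principled):

// Cheap per-pixel noise in [0, 1).
float hash12(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

// Add up to +/- half an 8-bit step of noise to break up banding.
vec3 dither8bit(vec3 color, vec2 fragCoord) {
    return color + (hash12(fragCoord) - 0.5) / 255.0;
}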
I think that’s most of what I’d thought of to say? Either way this post feels like it’s already way too long. I’ll stop here and let you decide how much of this rambling mess is worth responding to. I should probably go through and reorganize/rewrite bits of this post, but… Almost every time I’ve thought that sort of thing in the past when responding to others, I end up not actually responding at all. You’re someone I’ve actually heard of and have a lot of respect for, so I feel I really should give a response, even if it’s poorly organized and mashed out of a keyboard just before dinner all in one sitting.