Matrix multiplication


#29

Spot on!

Yes, that is very helpful, and it is exactly in agreement with several examples I have.


#30

I am using the RT code as a reference for what I am trying to do. Ideally I would like to incorporate my work into RT at some point!

Specifically, I am trying to create a matrix that will take color values that I have computed from density readings of a film scan, which should be in a particular colorspace. Let’s call that colorspace RA4 space (i.e. the colorspace created by the CMY dyes in an RA4-processed color print).

I then want to transform that data to sRGB correctly. At least that is what I want to do initially…

That’s why I want to understand how to do it properly! Once I have got the method correct I may do things slightly differently, but my goal initially is to do it correctly.

I can make up some examples using generated solid colors, and compare the results that I get from RT.


(Elle Stone) #31

I’m totally confused as to how your inquiries regarding viewing conditions are related to "Perhaps the other reason . . . " as quoted above.

There are many web browsers that fail this test of “sRGB image looks the same with and without an embedded sRGB ICC profile” - are you talking about something you see in a web browser?


#32

Perhaps a step back is in order.

What I am trying to do should be straightforward, in that I want to use the exact same methods that are used in digital photography, and to some extent even more so those used in RT (which I hope are the same, etc…). The only difference is that my “camera” to XYZ values are unique.

This involves a lot of trial and error, trying different things, and each time I try something I like to understand the process, the values used etc.

I understand this happens because the software simply ignores the values. But in the case of software that processes ICC profiles correctly, what should happen?

Perhaps I can ask the question in a different way. Let’s take two TIFF files: File A has the correct sRGB profile attached, with D50, and File B has the exact same data inside but no profile.

When these files are displayed in RT, should they appear the same or different? And what color values do the internals of RT think each file has?


(Elle Stone) #33

Hmm, I thought I already answered this question above :slight_smile: . The files should look the same assuming the software assigns sRGB to images without embedded ICC profiles, which I’m sure RT does, leastways I can’t see any change in how a sample image looks before and after assigning an sRGB profile to an image with no embedded ICC profile. Do you see a visual difference?

I’m not sure what you mean by “ignores” but the problem is that those web browsers are flawed :slight_smile: . They should all by default assign sRGB to images without embedded ICC profiles, and then convert to the monitor profile.


#34

No, hence my point that when no profile is attached, RT assumes it is sRGB at D50, and I assume so do many other applications. If it assumed the file without the profile was D65, it would have to apply an adjustment, and the result should look different from the one with the D50 profile?

Putting aside applications that don’t do things correctly I am looking to understand what the correct behaviour should be. Does that make my question clear?


(Elle Stone) #35

If a V2 or V4 ICC profile color-managed editing application assumed an image without an embedded ICC profile was somehow “D65” and then tried to apply an adjustment to somehow compensate for the difference between D65 and D50, it would be a very confused ICC profile color-managed editing application.

If I were using software that behaved in this fashion, I’d file a bug report.


#36

Very sorry @LaurenceLumi, the confusion was all mine. @agriggio explains it perfectly. Of course it’s not A * B * C alone, because the input pixel is a vector, not a 3x3 matrix, and therefore needs to be on the rhs of each multiply! Incidentally, gmic has the mix_channels command for that. I think I’ll keep quiet now :slight_smile:
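To make the ordering concrete, here is a minimal sketch in plain Python (the matrices here are made-up placeholders, not real colorspace matrices) showing that applying B to the pixel first, then A, is the same as multiplying the pixel by the combined matrix A * B:

```python
# Why the pixel goes on the right-hand side: a 3-element pixel vector can
# only be multiplied as M * v, so a chain A * B applied to pixel v means
# A * (B * v), which equals (A * B) * v.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mat(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Placeholder example matrices (NOT real colorspace matrices):
A = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
B = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
v = [1, 2, 3]

step_by_step = mat_vec(A, mat_vec(B, v))   # B first, then A
combined = mat_vec(mat_mat(A, B), v)       # pre-multiplied A * B
assert step_by_step == combined            # both are [2, 2, 9]
```

The same right-to-left reading applies to the longer conversion chains discussed elsewhere in this thread.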


(Elle Stone) #37

I can’t help but think that I’m somehow missing the question that you are actually asking. An sRGB ICC profile already has the “D65” source white point incorporated into the profile by means of the chromatic adaptation that was used to make the sRGB ICC profile from the sRGB color space specs. It doesn’t have to be added again, in the context of using ICC profile color management.

I tried an experiment once, modifying LCMS to use D65 as the illuminant, instead of D50. I installed the modified LCMS in /usr (this is on Linux) so all ICC profile applications used this modified LCMS. And I made a set of ICC profiles that used D65 as the illuminant. With this modified LCMS, when making ICC profiles from color space specs, D65 color spaces didn’t need to be chromatically adapted to D50. But D50 color spaces such as ProPhotoRGB did need to be chromatically adapted to D65. And of course “E”, D60, and etc color spaces still needed to be chromatically adapted, but to D65 instead of D50.

When I tried editing using this modified LCMS and the modified profiles with my editing software such as GIMP, all the colors looked exactly the same. Exactly the same. The only way I could get “different colors” was to:

  • Use my “D50 illuminant” ICC profiles with my modified-to-use-D65 version of LCMS
  • Or else use my “D65” version of LCMS with my normal D50-adapted ICC profiles: Edit: What I meant to say, should have said, was “Use the D65-illuminant profiles with normal non-modified D50-illuminant LCMS”.

Sometimes you might run across an incorrectly-made sRGB ICC profile where the chromatic adaptation from D65 to D50 wasn’t done. Using such a profile makes images look blue, such as the image below on the right (the colors on the left are correct):

Back in the days of V2 workflows, you could “get different colors” - either blue or yellow or even other colors depending on your monitor’s actual white point, by using Absolute colorimetric rendering intent to the monitor.

You could also get different colors when converting from one ICC RGB working space to another ICC RGB working space with a different white point, but you had to specifically ask for Absolute colorimetric intent - all the editing software I’ve ever seen defaults to Perceptual or Relative, so nobody was likely to do this accidentally.

For example, you might convert from sRGB to BetaRGB (which has a D50 white point) or vice versa, using Absolute colorimetric intent, resulting in images such as are shown below. Notice the image on the right is “too yellow” and the image on the left is “too blue”:

But the ICC decided this sort of color change when using Absolute colorimetric was confusing to users.

So for V4 workflows, when the source and destination color spaces are both “monitor class” profiles (all the standard RGB working spaces we use in the digital darkroom are “monitor class” profiles), when you ask for Absolute colorimetric rendering intent, what you get is Relative. Which makes it decidedly more difficult to write tutorials that encourage users to experiment and thus learn for themselves first-hand the difference between relative and absolute colorimetric intents :frowning:

The images above come from my article on “Will the real sRGB profile please stand up”, which was written when I actually had access to V2 editing software: https://ninedegreesbelow.com/photography/srgb-profile-comparison.html


#38

Specifically, if I am creating data to store in a file that does not have an ICC profile attached, what parameters should I use? And what parameters should I use if I use an ICC profile with the correct primaries, D50 white point, etc.? Initially I thought I should use the sRGB primaries and D65 for the former and D50 for the latter.

But that does not seem to fit what the software I am using as an example does. It seems, rightly or wrongly, that if you want the software to work as expected, you need to create the data in the former case (the file without the ICC profile) with a D50 white point, as the software will NOT make any chromatic adaptation.


(Elle Stone) #39

Hmm, well, the only answer anyone will ever be able to give you is that for V2 and V4 ICC profile applications, use the D50 adapted matrix for sRGB. Whether the profile is actually embedded in the image or not is irrelevant. I’ve tried to give reasons why several times in this thread, but as I said, I’m not hitting the area that answers your questions, and at this point I somewhat doubt my ability to do so :slight_smile: .

If you don’t want to use a D50-adapted matrix, wait until someone adds iccMAX support to an ICC profile color managed editing application, in which scenario I don’t have much of a clue what will happen or be possible. But don’t try to mix whatever you do using iccMAX applications with what you do using V2/V4 applications.

Or else don’t use ICC profile color management at all, and instead use OCIO color management, which requires using OCIO LUTS to get from whatever color space the image is in, to whatever color space you want the image to be in. But I’m not the person to advise you on the specifics of OCIO, if that’s the direction you want to go. If you do a search on the forum, there are already some threads on the topic.

Here is a thought: Go ahead and try whatever it is that you think should be done, as you generate the matrices for whatever application you have in mind. And if it works, great! In general, trying stuff and seeing what happens, and then figuring out why, really is a nice way to learn.


(Glenn Butcher) #40

Bear-of-little-brain here, methinks that 1) if you’re going to put primaries and whitepoint information in an image file, it should represent the color gamut to which the data was last converted, and 2) if you’re not going to put that information, you need to ensure the color gamut of the data can be used as-is by whatever media the data is intended for.

You can combine #1 and #2 for the largely unmanaged wild of the web by converting and storing sRGB/D50 (D50 mainly because the ICC tells people that’s their notion of reference whitepoint) and pray someone’s not going to regard your image on a cheap projector, ask me how I know…

I think the primary consideration is to ensure the metadata properly represents the image characteristics, and in its absence you need to have particular media in mind.


#41

I think what you’re missing from Elle’s responses is that there are multiple ‘white points’ that are used in different ways, at different stages within the calculations used to generate the matrix used to convert between colorspaces. Specifically, the keyword you should look at more closely is ‘adapted’.

Disclaimer: I’m not an expert on the standards, I’ve just struggled with the math and figured this out after reading way too much documentation that was way too vague. I might still be misunderstanding a lot of this, so I would honestly like some feedback from experts like Elle.

So, consider for a moment that ‘white’ is [1, 1, 1] no matter what RGB colorspace we’re in. This doesn’t specify a white point per se; instead, we specify a white point in terms of the XYZ colorspace. For example, while sRGB’s white point has xy coordinates [0.3127, 0.3290], that is just saying that the exact ‘color’ for [1, 1, 1] (or ‘white’) can be measured externally as having those xy coordinates.

ICC profiles use what’s called a ‘Profile Connection Space’ (PCS). What this is will vary, but most of the time it’s either XYZ or L*a*b* - and for ICC profiles (I guess versions 2 and 4), the white point that they use for the PCS isn’t E, but instead D50 - which is roughly equal to XYZ values [0.964, 1.000, 0.825]. This means that, to stay consistent, we have to transform whatever ‘white’ is to XYZ values such that ‘pure white’ is [0.964, 1.000, 0.825], rather than [1, 1, 1] (or, if we were using D65, roughly [0.950, 1.000, 1.089]).

However, because of how human eyes work, you can’t just rescale XYZ values directly to convert between white points. Instead, you have to convert XYZ values into LMS (native colorspace for the human eye), rescale those values, then convert back into XYZ.

There is some debate about what the best matrix to use is for converting between XYZ and LMS, and it often depends on your use case, needs, and specific setup. However, the most common when dealing with ICC profiles is the original ‘Bradford’ color transformation matrix. I specify ‘original’ because apparently there are two versions, and ICC profiles explicitly use the original one.

So, here’s an overview of how this looks:
Linear sRGB→XYZ→LMS→D50/D65→LMS→XYZ (PCS)

And going to another RGB space (for this example, to be displayed on a monitor with a D75 white point):
XYZ (PCS)→LMS→D75/D50→LMS→XYZ→RGB

It’s important to note that in both RGB colorspaces (both sRGB and the monitor’s colorspace), the RGB value for ‘white’ remains [1, 1, 1]. If the picture is a photo of a white piece of paper with a drawing on it, any part that shows the paper will have the same RGB value in both RGB colorspaces (assuming that it’s perfectly encoded as white and not slightly off-color, nor darkened to a light gray).

That’s why one of Elle’s comments carefully noted that the ICC specs assume that your eyes are 100% adapted to the white point of your display - because they’re designed to make sure that the display’s white point is always used for the actual value of white.

Now, for the math:

  1. orig = Original RGB value.
  2. final = Final resulting RGB value.
  3. toXyz = RGB to XYZ matrix for the initial (or ‘source’) RGB colorspace. Uses whatever that colorspace’s actual white point is, such as D65.
  4. toRgb = XYZ to RGB matrix for the final (or ‘destination’) RGB colorspace. Uses whatever that colorspace’s actual white point is, such as D75.
  5. whiteSource = Source RGB colorspace’s white point.
  6. whiteDest = Destination RGB colorspace’s white point.
  7. toLms = XYZ to LMS matrix, such as the Bradford or Hunt matrices.
  8. diag() = Function to turn a 3-element vector into a diagonal matrix.

final = toRgb * (toLms^-1) * diag((toLms*whiteDest)/(toLms*D50)) * toLms *
(toLms^-1) * diag((toLms*D50)/(toLms*whiteSource)) * toLms * toXyz * orig

I noticed that the built-in editor had decided to line-break right at the point where colors would be in the PCS (at the time I hadn’t put spaces around the asterisks), so I decided to put an actual line break in there. I put the spaces around most of the asterisks to help show where each colorspace conversion takes place. Decided not to with the ones inside ‘diag()’, to better group those together as a single ‘conversion’.
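To make the diag((toLms*whiteDest)/(toLms*whiteSource)) step concrete, here is a sketch in plain Python of building one such adaptation matrix. The Bradford matrix and the D50/D65 XYZ white point values are the standard published ones; the helper functions and variable names are just illustrative:

```python
# Sketch of the chromatic adaptation step from the formula above:
#   adapted = toLms^-1 * diag((toLms*whiteDest)/(toLms*whiteSource)) * toLms

BRADFORD = [[ 0.8951,  0.2664, -0.1614],   # the 'original' Bradford matrix
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]
D65 = [0.95047, 1.00000, 1.08883]          # XYZ of the D65 white point
D50 = [0.96422, 1.00000, 0.82521]          # XYZ of the D50 (ICC PCS) white point

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mat(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse(m):
    """Invert a 3x3 matrix via the adjugate (cofactors / determinant)."""
    c = [[m[(i+1)%3][(j+1)%3]*m[(i+2)%3][(j+2)%3] -
          m[(i+1)%3][(j+2)%3]*m[(i+2)%3][(j+1)%3] for j in range(3)]
         for i in range(3)]
    det = sum(m[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

def adaptation(white_src, white_dst):
    """Bradford matrix that adapts XYZ values from white_src to white_dst."""
    cone_src = mat_vec(BRADFORD, white_src)    # toLms * whiteSource
    cone_dst = mat_vec(BRADFORD, white_dst)    # toLms * whiteDest
    scale = [[cone_dst[i] / cone_src[i] if i == j else 0.0 for j in range(3)]
             for i in range(3)]
    return mat_mat(inverse(BRADFORD), mat_mat(scale, BRADFORD))

# Sanity check: adapting the D65 white point to D50 should land on D50.
adapted = mat_vec(adaptation(D65, D50), D65)
print([round(x, 5) for x in adapted])  # ≈ [0.96422, 1.0, 0.82521]
```

Chaining adaptation(whiteSource, D50) on the way into the PCS and adaptation(D50, whiteDest) on the way out reproduces the two diag() terms in the full formula above.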

Hope this helps! While I did find this thread while googling for how to do matrix multiplication in gmic, I saw what looked like a very recent thread from someone going through some of the same issues I did.


Now, the reason I had gotten so confused while learning all this, was because I was wanting to figure this all out so that I could specifically use absolute colorimetric conversions between colorspaces; I didn’t want to use white point adaptation. Specifically, I wanted to make one image look identical on several different monitors, and make that look identical to the original object in real life. I had all displays in the same room as the object, too.

But I had in my head the idea that ‘white balance’ was meant to help adjust colors to be more or less white, going bluish or orangeish based on color temperature. So I kept trying to use white point adaptation to do the opposite of what it was intended to do, and since none of the documentation I could find was geared toward that, it was kinda frustrating!

Had to take a step back and figure out what it did first, in the context in which it was being used - and after I figured that out it was much easier to ‘undo’ the whitepoint adaptation.

Except then I learned that my phone’s camera was doing it all wrong and was assuming D50 was the actual white point for the display. Figuring out why certain shades of green lacked that slight tint of blue while everything else looked spot on was ‘fun’, alright.

… Actually it kinda was. And the whole project was just for fun anyway; can’t seem to get a job, so may as well mess around with colorspaces instead!


#42

Not really, if something is the same, then it is the same, if something is different then it is different!

This is not meant as criticism of Elle, who has been very helpful!

At the end of the day my question is very simple and can be summarised as: if I have some data that is in the sRGB colorspace, what are the parameters used to describe that data? And in addition, are those parameters any different from those I would find in an sRGB ICC profile?

It seems, at least as a de facto standard, that the answer to the latter part of that question is that there is no difference. Now perhaps that was not intended, maybe it’s even a mistake… but otherwise users would likely complain if they attached an sRGB ICC profile to sRGB data and the result looked different!


#43

Elle, I think you should have pointed me to this page… :slight_smile:

https://ninedegreesbelow.com/photography/srgb-color-space-to-profile.html

What I did not get before is that the values of the primaries in an ICC profile have (or SHOULD have) been chromatically adapted from their absolute values. This was not intuitive, but I get it now.

So in summary: sRGB data defined using the unadapted sRGB primaries plus the D65 white point should equal (or be close enough to) the same data defined using the correctly adapted primaries and the D50 white point?
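A quick numeric check of that summary, in plain Python: the same RGB data [1, 1, 1] maps to the D65 white through the unadapted sRGB matrix, and to the D50 (ICC PCS) white through the Bradford-adapted matrix. The matrix values are the commonly published ones (e.g. from Bruce Lindbloom’s reference pages), so treat them as illustrative rather than authoritative:

```python
# Unadapted sRGB -> XYZ matrix (D65-referred primaries).
SRGB_TO_XYZ_D65 = [[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]]

# Bradford-adapted sRGB -> XYZ matrix (D50-referred, as in an ICC profile).
SRGB_TO_XYZ_D50 = [[0.4360747, 0.3850649, 0.1430804],
                   [0.2225045, 0.7168786, 0.0606169],
                   [0.0139322, 0.0971045, 0.7141733]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

white = [1.0, 1.0, 1.0]
# The row sums of each matrix are the XYZ coordinates of RGB white:
print(mat_vec(SRGB_TO_XYZ_D65, white))  # ≈ [0.9505, 1.0000, 1.0888] (D65)
print(mat_vec(SRGB_TO_XYZ_D50, white))  # ≈ [0.9642, 1.0000, 0.8252] (D50)
```

So the same [1, 1, 1] data lands on the white point that matches the matrix used, which is what "unadapted primaries + D65" vs. "adapted primaries + D50" is getting at.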


(Elle Stone) #44

Hi @LaurenceLumi - hmm, well, I actually did mention that article, back in Comment 4 :slight_smile: , where I gave a link to an article that has a downloadable spreadsheet for calculating the sRGB ICC profile from the ICC specs and the sRGB color space specs. But I’m really glad you found that article helpful - it was a ton of work to write, and I learned a lot while writing it!

@Tynach - I really like all the experimenting you’ve been doing with various displays. That sort of stuff is the 100% best way to actually learn how ICC profile color management really works. Otherwise the constant temptation is to make a lot of unconscious assumptions, that seem inevitably to end up being not correct just when you really want something to work “as expected”.

It always makes me a bit nervous when people refer to me as an expert :slight_smile: because everything I know about ICC profile color management was learned the hard way, one experiment at a time just like you are doing, followed by trying to figure out “why” results are as they are, whether by doing further testing or doing a lot of background reading or asking questions on forums and mailing lists, or whatever it takes. So whatever expertise I have is relative to the stuff I’ve tried to figure out. I don’t have any formal university degree in color management or anything like that.

Anyway, I do have some thoughts on your descriptions and comments for your wonderful monitor experiments, but I need to clear off some other “to dos” on my list before sitting down to type something up.


#45

I should probably mention that this is all done with GLSL code, on Shadertoy and in the Android app ‘Shader Editor’. I’ve looked at ICC profiles and compared different ones to each other, but I’ve yet to write any code that actually uses any such profile.

It’s one of those things I feel I should do, but a large part of what I’m doing right now is on my phone in Shader Editor (where I have to ‘calibrate’ the display by having a massive array containing the equivalent to a VCGT tag, multiply all color values by 255, and use that as the array index for nabbing the final value for that channel).

Same, it’s happened a few times with me… And here I am unemployed because nobody wants to hire someone without actual job experience. Often I’ll post something that really makes sense to me and seems completely true, but I’ll still get the feeling that it might only seem true to me because I “don’t live in the real world.” And that’s often the sort of thing (though in more detail and with examples) told to me when I give opinions on topics like tab indentation, so I have a feeling they might be right.

What I more or less meant by ‘expert’ in my own post, however, was that you’re someone who has done that testing before - and thus you have built up a fairly decently sized repository of knowledge, at least when compared to others. I hesitate to say ‘professional’ because I honestly don’t know what your actual job consists of, but given the number of articles you’ve written (and both the depth and breadth of the topics in them), I think it’s safe to say you’re an expert - at least relatively.

I should have joined this community much sooner, but I didn’t really know about it. Besides that, it was only very recently that I broke down and finally just bought myself a colorimeter, as before that I was making a lot more assumptions about the quality of my own equipment (factory-calibrated monitors are, apparently, often very badly calibrated).

Mostly so far I’ve just been posting test code on Shadertoy, and occasionally asking for things like ‘where can I find copies of various official standards?’ on Reddit… Where I didn’t get any really useful leads; the subreddit I saw over there was I think /r/colorists, and 90% of the content is people saying things like, “In this program, always set these settings to those values when doing that.”

So Wikipedia has still been my number one source for things like what chromaticity coordinates belong to the primaries of which standards, and I’ve not really had anywhere to go for asking for feedback on the actual algorithms.

As for responding later, that’s no problem! I figure that’s what forum-like websites are for - group conversations that could take anywhere from minutes to weeks between responses. Wouldn’t want to rush you :slight_smile:

At any rate, uh… I sorta split up when I wrote this comment, part of it in the morning and part of it in the evening. I’m not really sure where I was going with some of it or if I intended to modify/add to/remove from earlier parts, so I’m sorry if it’s a little bit of a rambling mess. I’ll just post it as-is for now, as I’m not sure what else to do with it.


(Elle Stone) #46

Hi @Tynach - my apologies for taking so long to circle back around to your very thought-provoking post - :slight_smile: I bet you thought I forgot about this post, but nope, not at all!

Edit: with my usual stellar inability to speak clearly, when I tried to rewrite my initial sentence to make it more clear, I left out the critical part of the sentence above, which is that I bet you thought I forgot about this thread, not true!

So to try again, your post was very thought-provoking, and I didn’t forget about it, in fact have been mulling over the points you’ve made. So I just edited the original sentence above to put in the missing phrase. Sigh.

I never stopped to think about what RGB values the monitor profile might have for white and near-white colors - thanks for mentioning that!

I used ArgyllCMS xicclu to check several monitor profiles that I made at different times, using different algorithms, and sure enough “white” defined as Lab color (100, 0, 0) was close to or exactly (1, 1, 1) in the monitor space, using relative colorimetric intent:

xicclu -ir -pL -fif file-name-for-monitor-profile

But “how close are the channel values for white and near-white” does depend on the type of monitor profile. For my LUT monitor profiles, R, G, and B are only approximately the same for grayscale values, being very close for white and near white, and progressively farther apart as the grays get darker. On the other hand, for my profile made using “-aS”, R=G=B up and down the gray axis.

I’m guessing that “how different are the channel values for white and gray” also depends on what sort of prior calibration was done using the vcgt tag, before making the monitor profile.

Your goal of making one image look identical on several different monitors, and also make the image look identical to the original object in real life, of course means that at some point you took a photograph of the real life object (I’m really good at figuring out the obvious :slight_smile: ).

Recently I took a photograph of a painting and used RawTherapee’s CIECAM02 module to make the colors on the screen match the colors in the painting:

Of course your situation - multiple monitors in the same room right along with the photographed object - might have the advantage that the entire room is evenly lit with the same color and brightness of light. In which case “what color of light” to calibrate all the monitors to might depend on the color of the ambient light. But then you’d need to consider whatever compromises might be required when calibrating any given monitor to a white point that’s too far away from its native white point.

I had been thinking about your quest to make the colors look the same on all your monitors, and thinking that the CIECAM02 modules might be a way to accomplish your goal (even without first calibrating and perhaps also profiling the monitors). Making images look the same on different display devices was @jdc 's motivation for RawTherapee’s CIECAM02 module existing in the first place.

@ggbutcher - the RawTherapee CIECAM02 module is something that might also work for displaying images on your projection screen, though it might mean making a solid white image (and perhaps also an 11-step grayscale wedge at L=100 down to L=0) using editing software, projecting that image onto your screen, taking a photograph of the projected image, and seeing what color of white the projected white actually is. There are sophisticated devices for measuring such things, but probably a photograph would get you “in the ballpark”.

@gwgill - now that @Tynach has a colorimeter and can calibrate and profile his various monitors, would this colprof switch allow him to accomplish his goal of making images look the same on all the monitors using ICC profile color management? Or (as I sort of suspect) am I missing something critical in how images are displayed on different monitors?

http://argyllcms.com/doc/colprof.html#ua

For input profiles, this flag forces the effective intent to be Absolute Colorimetric even when used with Relative Colorimetric intent selection in a CMM, by setting a D50 white point tag. This also has the effect of preserving the conversion of colors whiter than the white patch of the test chart without clipping them (similar to the -u flag), but does not hue correct white. This flag can be useful when an input profile is needed for using a scanner as a “poor mans” colorimeter.


(Desmis) #47

@Elle
Thanks for the compliment, I’ll look at what Ciecam can or can not bring… with my (very) bad english :slight_smile:


#48

My apologies in return @Elle, I was not only indecisive as far as what to say was concerned, but I also had accidentally deleted part of my code. It’s not on version control (and I’m not sure if Shader Editor uses files, or Android’s per-app SQLite database), and I had set it up so that running the code auto-saved it… So when I accidentally deleted some code portions (and then tapped ‘run’ without thinking) I had to spend some time recreating what I’d written beforehand.

I really should put it into a file on my desktop, but I’d honestly rather just… Completely rewrite it instead. It’s a mess of commented out code right now, especially since I have something along the lines of half a dozen sets of chromaticity coordinates specifying my phone’s display colorspace, all but one commented out.

Also, I… Honestly don’t know what to say to you, of all people, calling my post thought-provoking. All I had intended to do was explain white point adaptation, and what it meant for the math behind colorspace conversions. I saw what looked like either a misunderstanding or some missing information, and fueled mostly by feelings of, “Hey, I had to figure this out recently, here’s a chance to ramble about it,” I typed up the post I had.

Then feelings of, “Wait what if I don’t actually understand this as well as I think I do?” kicked in and I put that disclaimer in. After all, I’m literally just some guy who still lives with his parents who has way too much free time. And since I’ve yet to 100% accurately reproduce all lighting scenarios with one set of options plugged into my phone, I’m honestly fairly sure there are things I’m definitely getting wrong.

… Aanyway, on to the interesting stuff.

It’s good to know that it checks out, but I was kinda talking abstractly, in the sort of, “If a program were told to ‘display white’ on the screen, what values would it apply to the color of the pixels?” kind of way.

I only meant to describe things like, if your program is dumb then white is just gonna be ‘set all channels to full’. And that extends to if your program is smart but presents itself as dumb (meaning it’ll convert RGB colorspaces, but for the converted values it’ll still state that white is all channels being full).

Most likely. This is how I have my desktop monitor set up, and apparently it’s good enough that instead of a LUT for conversion, Chrome at least just uses a math function as the transfer characteristics. Not that Chrome is the best when it comes to color management, but going to the bottom of chrome://gpu lets me see exactly what Chrome uses for color correction.

If I use a profile that was generated without a VCGT tag, it actually says that it’s using a LUT (and I’ve noticed some serious banding and general low accuracy when that’s the case, though only in Chrome). But currently it instead has this as the ‘Color space information’:

{primaries_d50_referred: [[0.6608, 0.3388], [0.3321, 0.5954], [0.1501, 0.0602]], transfer:0.0782*x + 0.0000 if x < 0.0431 else (0.9476*x + 0.0522)**2.4005 + 0.0000, matrix:RGB, range:FULL}

Actually, not quite! Shader Editor lets me use the camera as a texture source, so I’m taking already-processed RGB data and having to undo that processing as best as possible, then re-do it how I want it redone. In real-time, so it’s a good thing it’s with GLSL!

Sadly, I don’t have full control over the camera hardware with Shader Editor, but my phone does fully support Android’s newer Camera2 API. This means I can get all the necessary information about my camera hardware that I need to essentially return the RGB values into, as close as possible, the original RAW values. I used Camera2 Test to extract the data (and of course transposed the matrices for use in GLSL).

… Except for what white balance is currently in use. I have to deal with the white balance constantly changing as I aim the phone at different items, so I’ve had to use either whatever whitest item is nearby, or just set items on some paper, or just… Look at things that are on my cluttered nightstand, which has several pieces of paper on it.

Instead, to calculate the camera matrix, I had to dig through the DNG file format specification (and I’m having to assume that my phone’s camera goes through the same process as outlined in said DNG specifications) to figure out how to turn chromaticity coordinates for light sources (measured with my colorimeter) into the XYZ-to-RAW matrix. In the DNG spec, the relevant section is chapter 6 (Mapping Camera Color Space to CIE XYZ Space).

It is not a simple task, and I don’t think RawTherapee gets it quite right in the more complex case of not having chromaticity coordinates (instead starting off with just whatever RAW value is used for D50 white, AKA the ‘AsShotNeutral’ tag), which is the case when I use camera apps that let me capture RAW data.

There apparently are two camera matrices (ColorMatrix1 and ColorMatrix2 in DNG terms), and I have to calculate the CCT of the light source; if it’s between the CCTs of Standard Illuminant A and Standard Illuminant D65, I have to determine where it falls between the two (the spec interpolates linearly in inverse CCT) and use that to blend the two camera matrices.
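Roughly, the process looks like this in Python - the matrix values here are made up for illustration (real ones come from a camera’s ColorMatrix1/ColorMatrix2 DNG tags), and this is my understanding of the spec, not a verified implementation:

```python
import numpy as np

# Sketch of the DNG chapter-6 process. ColorMatrix1/ColorMatrix2 map CIE XYZ
# to camera RAW for illuminant A (~2856K) and D65 (~6504K) respectively.
# These particular values are invented for illustration only.
CM1 = np.array([[ 1.06, -0.33, -0.10],
                [-0.51,  1.44,  0.10],
                [-0.07,  0.23,  0.55]])
CM2 = np.array([[ 0.86, -0.22, -0.06],
                [-0.45,  1.30,  0.13],
                [-0.09,  0.23,  0.65]])

def mccamy_cct(x, y):
    # McCamy's approximation: CCT from xy chromaticity.
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0*n**3 + 3525.0*n**2 + 6823.3*n + 5520.33

def interp_weight(cct, cct1=2856.0, cct2=6504.0):
    # The DNG spec interpolates linearly in reciprocal CCT, clamped to [0, 1].
    # Weight 1 selects CM1 (illuminant A); weight 0 selects CM2 (D65).
    w = (1.0/cct - 1.0/cct2) / (1.0/cct1 - 1.0/cct2)
    return min(1.0, max(0.0, w))

def xyz_from_neutral(neutral, iters=10):
    # Self-consistent loop for the AsShotNeutral case: the XYZ->RAW matrix
    # depends on the white point's CCT, which depends on the matrix.
    cct = 5000.0  # initial guess
    for _ in range(iters):
        w = interp_weight(cct)
        cm = w*CM1 + (1.0 - w)*CM2
        xyz = np.linalg.solve(cm, neutral)  # RAW neutral -> XYZ white
        s = xyz.sum()
        cct = mccamy_cct(xyz[0]/s, xyz[1]/s)
    return xyz, cct

# Example: a neutral synthesized for D65 should come back near D65 / ~6500K.
xyz, cct = xyz_from_neutral(np.array([0.5320, 1.0139, 0.8522]))
print(xyz, cct)
```

The key subtlety is that loop at the end: with only AsShotNeutral, there’s no closed-form answer, which I suspect is where implementations diverge from each other.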

I don’t know for sure if I do that correctly in my code, especially since it’s that part that I accidentally deleted. I’ve spent a few days (mostly the last 3 days, but also various days over the last few months) testing and tweaking it though, and think it’s correct now.

At any rate, I’ve made some minor changes to my code to clear things up (I originally used ‘LMS’ to refer to both ‘human eye’ space and camera RAW space), so I’ve done some regex search/replaces to turn a few instances of the word lms into raw. I’ll go ahead and attach the code as-is for you to pull your hair out over… It’s commented, but all of the comments are ‘for me’ and not for anyone else, including the “don’t take this section seriously” warning.

I do think I did a decent job with variable naming overall, but there were numerous times where I wanted to name multiple things the same thing in the same scope, so… well, I had to mix and match how names were organized/formatted a few times. And decisions like that carry over even after half the variables get removed, rewritten, or commented out, which… is why I want to rewrite it at some point and make it much cleaner.

I wish. Light bleeds in through my bedroom window, and the ceiling light that’s in here has a really low CCT. Something around 3500K if I remember correctly (colorimeter readings vary between 3100K and 3700K).

The only light I have actual control over is my nightstand lamp, and for that I fairly recently bought an LED bulb that’s rated as having a 90+ CRI. Its CCT is actually at exactly 5000K, but exact chromaticity coordinates do seem to still vary a little… But since I can hold my colorimeter right up to the bulb, they’re much more consistent.

So at night, with my monitors turned off and with items on my cluttered nightstand, with only the nightstand lamp turned on and the ceiling light turned off… Then I can have full control over the color of the light.

I suppose I also have decent color readings for the CFL bulb that was in that lamp before I put the LED bulb in, and that bulb now helps light the bathroom (along with other bulbs from the same box, even). Since there are no windows in the bathroom, that means I can also test the code on things that are known to be white, like the bathtub and toilet. The sink is kind of an off-white, I’ve noticed.

I don’t even try to calibrate my displays to ambient lighting. Right now I have them profiled with a VCGT tag that changes them from using the native white point of the display to plain ol’ D65, with transfer characteristics that match sRGB.

At least, more or less. Chrome has some weird values in the equation it gleans from my profile, but it seems to work? Either way, I usually just force Chrome to treat my displays as sRGB, ignoring the system profile. Otherwise I get some noticeable (but relatively minor) banding in smooth images.

To this end, I had already settled on recreating the appropriate camera RAW matrices according to the DNG spec. CIE 2002 uses a color matrix designed around an LMS colorspace with ‘sharpened’ spectral sensitivities. This is less accurate colorimetrically, but overall works out better for modeling human perception of chromatic adaptation. In other words, when our brain performs white balancing, it exaggerates some things while dulling others, and CIE 2002 helps model that.
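For reference, assuming ‘CIE 2002’ here means CIECAM02’s CAT02 transform, the von Kries-style adaptation it’s built around can be sketched like this:

```python
import numpy as np

# Von Kries-style chromatic adaptation using CIECAM02's CAT02 matrix,
# which maps XYZ into the 'sharpened' LMS space.
M_CAT02 = np.array([
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

def adapt(xyz, white_src, white_dst):
    # Scale each sharpened-LMS channel by the ratio of destination white to
    # source white, then return to XYZ (full adaptation, i.e. D = 1).
    lms_src = M_CAT02 @ white_src
    lms_dst = M_CAT02 @ white_dst
    gain = np.diag(lms_dst / lms_src)
    return np.linalg.inv(M_CAT02) @ gain @ M_CAT02 @ xyz

# Example: adapting the D50 white itself from D50 to D65 conditions should
# land exactly on the D65 white.
d50 = np.array([0.9642, 1.0, 0.8249])
d65 = np.array([0.9504, 1.0, 1.0888])
print(adapt(d50, d50, d65))  # ~ [0.9504, 1.0, 1.0888]
```

The sharpening lives entirely in M_CAT02’s off-diagonal terms; swap in a different XYZ→LMS matrix and the same scaffolding gives a different flavor of adaptation.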

You’ll see some huge chunks of my code commented out that have some… less informative comments - in particular, the majority of the convert() function. That was from my attempt to model and understand CIE 2002. I have an unpublished page on ShaderToy with a working version of this, but… while I have tried to understand all the variables and how they work together, the documentation I can find on CIE 2002 is sparse. Very sparse. Wikipedia is where I got most of the equations I do use, but half the time I have to guess at what many of the variables actually mean.

At any rate, I’m trying to go for ‘absolute colorimetric’ types of things, and am in fact undoing a lot of white point adaptation and other things which try to model perception. As such, using a system that models perception is only useful when undoing its effects, and since the code (or maybe its hardware) that performs those operations to begin with resides in the camera module itself, I doubt it’s as complex as CIE 2002.

But! Sometimes I do want to adapt things to my phone display’s white point - and when I do, I basically uncomment the end of toRgb's declaration (line 814), and comment out part of transMat's declaration (line 821):

814: const mat3 toRgb = xyzToRgb(outSpace)*aTo;
...
821: const mat3 transMat = /*fromRaw*whiteBalance*toRaw**/toXyz;

And here’s more inconsistent naming: aTo is the matrix applied to colors just before the absolute XYZ→RGB matrix (for converting to the final RGB colorspace). It’s basically the ‘adaptation matrix for the colorspace being converted To’. And it’s defined using what I label the ‘output CAM matrix’ (CAM meaning Color Appearance Model), which is the XYZ→LMS matrix I have chosen to use.

I could use CIE 2002’s matrix, but again, it’s spectrally sharpened (and I like having overly extreme accuracy). Instead I decided to calculate such a matrix from data and calculations provided by the Colour & Vision Research Laboratory (CVRL). Links are in the source code just above my declaration for primariesLms (lines 548 and 549 contain the links).

The results are very close to those produced if I were to use the Hunt-Pointer-Estevez LMS matrix, which is regarded as the ‘more accurate’ of the professionally produced LMS matrices (and is commonly used in color blindness simulation and research). However, the results of my CVRL-inspired matrix seem to be slightly ‘sharpened’, placing them somewhere between the Hunt and CIE 2002 matrices (though closer to Hunt).
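For comparison, here’s the Hunt-Pointer-Estevez matrix in its common D65-normalized form - notice the much smaller negative/off-diagonal lobes compared to CAT02, which is what the ‘sharpening’ refers to:

```python
import numpy as np

# The Hunt-Pointer-Estevez XYZ->LMS matrix, in its common D65-normalized
# form. Compare its small negative lobes to CAT02's larger ones.
M_HPE = np.array([
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0000, 0.0000,  0.9182],
])

# In this normalization, D65 white should map to LMS = (1, 1, 1).
d65 = np.array([0.9504, 1.0, 1.0888])
print(M_HPE @ d65)  # ~ [1, 1, 1]
```

A matrix “between Hunt and CAT02” would show intermediate off-diagonal magnitudes, which is an easy sanity check to run on any candidate matrix.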

I fully realize, of course, that if I really wanted to seriously use the proposed 2012 XYZ functions, I’d have to use a spectrophotometer and match the color temperature of my displays and light sources to new xy chromaticity coordinates calculated from said new color matching functions. I can’t even hope to afford a spectrophotometer, so that’s… Basically not happening any time soon. I trust that the CVRL is honest when they say they matched the new CMFs to the 1931 CMFs as closely as possible, so I’ve just been using 1931-tuned chromaticities. Seems to work well enough.

Wouldn’t that just make the profile badly formed, in that the reported white point isn’t the actual, true white point? I’m a bit confused by this, and it’s going way outside the scope of stuff I’ve researched… I don’t know a lot about actual color profiles or how to make heads or tails of them; I’ve mostly just researched colorspace parameters and the math to convert to/from them.


Here’s the code as mentioned: Back-Camera.txt (36.1 KB)

I think over the last few weeks I’ve had a lot of things I wanted to say. I’m not sure I remembered all of them.

Anyway, you might be wondering why I do the whole RGB→YUV→rescale and offset→RGB thing. That’s because of a bug in either Android itself or some camera-handling framework/library that most Android apps seem to use, which causes any (most?) hardware-accelerated camera view to have its RGB values rescaled as if they were ‘limited/tv’-range signals (values in the 16-235 range) instead of ‘full/pc’-range values (0-255).

Because of this, by default brights are super bright (going over 1.0) and darks are super dark (going below 0.0). I simply correct for this - and thank goodness the initial scaling (which shouldn’t have taken place to begin with) must be done GPU-side, since I can indeed recover those negative and over-bright values (they weren’t chopped off). I’ve also noticed that just rescaling the RGB values back to full range makes some items appear duller than they should, so I do the scaling in YUV instead.
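The range expansion itself is simple; a Python sketch of the luma/chroma version (assuming standard 8-bit ‘studio swing’ conventions: 16-235 for luma, 16-240 for chroma) would be:

```python
# Undo an unwanted limited-range ('tv') scaling, assuming 8-bit BT.601/709
# studio-swing conventions: luma spans 16-235 and chroma spans 16-240.
def expand_limited_range(y, cb, cr):
    # Inputs normalized to 0..1 as read from the texture; outputs full range.
    y_full = (y * 255.0 - 16.0) / 219.0
    cb_full = (cb * 255.0 - 128.0) / 224.0 + 0.5
    cr_full = (cr * 255.0 - 128.0) / 224.0 + 0.5
    return y_full, cb_full, cr_full

# Limited-range black (16/255) and white (235/255) map back to 0 and 1;
# anything outside that span comes out below 0 or above 1 instead of clipping.
print(expand_limited_range(16/255, 128/255, 128/255))
print(expand_limited_range(235/255, 128/255, 128/255))
```

Doing this on luma/chroma rather than per-RGB-channel keeps the chroma offsets centered on 0.5, which is presumably why the per-channel version looked dull.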

Besides that, uh… hm. I think the only other obvious ‘why in the world are you doing this?’ part of the code is that I use the SMPTE 170M (same as Rec. 709) transfer characteristics for the input colorspace. If I don’t, everything seems too dark, and there’s a wide range between ‘bright’ and ‘dark’ that should be a medium gray but comes out too dark to really match what I see in person.

I also attempt to ‘dither’ colors at the end, to reduce banding from so many transformations being applied to the color. I have no idea if it really helps, nor do I know if I actually do it properly. It’s in the same part of the code as where I ‘calibrate’ the output picture using the 1D LUT (basically a quick-and-dirty VCGT).
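For what it’s worth, here’s a sketch of the kind of dither I mean - TPDF noise of about one quantization step added before rounding to 8 bits (the function name and structure are just for illustration, not from my actual shader):

```python
import random

# Pre-quantization dither: add noise of about one quantization step before
# rounding to 8 bits, so smooth gradients break up into noise, not bands.
def dither_to_8bit(value, rng=random):
    # Triangular (TPDF) dither: sum of two uniforms, spanning +/- 1 LSB.
    noise = (rng.random() + rng.random() - 1.0) / 255.0
    v = min(1.0, max(0.0, value + noise))
    return round(v * 255.0)

# Quantizing a constant mid-gray many times should spread samples across
# neighboring codes while averaging out to roughly the original level.
samples = [dither_to_8bit(0.5) for _ in range(10000)]
print(sum(samples) / len(samples))  # ~ 127.5
```

Whether this genuinely helps depends on the dither being the very last step before the display quantizes; noise added before further processing gets reshaped.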

I think that’s most of what I’d thought of to say? Either way, this post already feels way too long. I’ll stop here and let you decide how much of this rambling mess is worth responding to. I should probably go through and reorganize/rewrite bits of it, but… almost every time I’ve thought that in the past when responding to others, I’ve ended up not responding at all. You’re someone I’ve actually heard of and have a lot of respect for, so I feel I really should give a response, even if it’s poorly organized and mashed out on a keyboard just before dinner, all in one sitting.