Matrix multiplication

Spot on!

Yes, that is very helpful! And it is exactly in agreement with several examples I have.

I am using the RT code as a reference for what I am trying to do. Ideally I would like to incorporate my work into RT at some point!

Specifically, I am trying to create a matrix that will take color values that I have computed from density readings of a film scan, which should be in a particular colorspace. Let's call that colorspace RA4 space (i.e. the colorspace that is created by using the CMY dyes in an RA4-processed color print).

I then want to transform that data to sRGB correctly. At least that is what I want to do initially...

That's why I want to understand how to do it properly! Once I have got the method correct I may do things slightly differently. But my goal initially is to do it correctly.

I can make up some examples using generated solid colors, and compare the results that I get from RT.

I'm totally confused as to how your inquiries regarding viewing conditions are related to "Perhaps the other reason . . . " as quoted above.

There are many web browsers that fail this test of "sRGB image looks the same with and without an embedded sRGB ICC profile" - are you talking about something you see in a web browser?

Perhaps a step back is in order.

What I am trying to do should be straightforward, in that I want to use the exact same methods that are used in digital photography, and to some extent even more so those used in RT (which I hope are the same, etc...). The only difference is that my "camera" to XYZ values are unique.

This involves a lot of trial and error, trying different things, and each time I try something I like to understand the process, the values used etc.

I understand this happens because the software simply ignores the values. But in the case of software that processes ICC profiles correctly, what should happen?

Perhaps I can ask the question in a different way. Let's take two TIFF files: File A has the correct sRGB profile attached, with D50, and File B has the exact same data inside but no profile.

When the two files are displayed in RT, should they appear the same or different? And what color values do the internals of RT think each file has?

Hmm, I thought I already answered this question above :slight_smile: . The files should look the same assuming the software assigns sRGB to images without embedded ICC profiles, which I'm sure RT does, leastways I can't see any change in how a sample image looks before and after assigning an sRGB profile to an image with no embedded ICC profile. Do you see a visual difference?

I'm not sure what you mean by "ignores" but the problem is that those web browsers are flawed :slight_smile: . They should all by default assign sRGB to images without embedded ICC profiles, and then convert to the monitor profile.

No, hence my point that when no profile is applied RT assumes it is sRGB at D50, and I assume so do many other applications. If it assumed the file without the profile was D65, it should apply an adjustment, and the result should look different from the one with the D50 profile?

Putting aside applications that don't do things correctly, I am looking to understand what the correct behaviour should be. Does that make my question clear?

If a V2 or V4 ICC profile color-managed editing application assumed an image without an embedded ICC profile was somehow "D65" and then tried to apply an adjustment to somehow compensate for the difference between D65 and D50, it would be a very confused ICC profile color-managed editing application.

If I were using software that behaved in this fashion, I'd file a bug report.

Very sorry @LaurenceLumi, the confusion was all mine. @agriggio explains it perfectly. Of course it's not A * B * C, because the input pixel is a vector, not a 3x3 matrix, and therefore needs to be on the rhs of each multiply! Incidentally, gmic has the mix_channels command for that. I think I'll keep quiet now :slight_smile:
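
For anyone following along, the same point as a minimal numpy sketch (not gmic, just the same algebra): with the pixel as a column vector on the right-hand side, the precomputed combined matrix has to be C*B*A, not A*B*C.

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.random((3, 3)) for _ in range(3))  # three 3x3 conversion matrices
    pixel = rng.random(3)                             # an RGB pixel as a column vector

    step_by_step = C @ (B @ (A @ pixel))  # apply A first, then B, then C
    combined = (C @ B @ A) @ pixel        # one precomputed matrix, same result

    print(np.allclose(step_by_step, combined))  # True: the order is C*B*A, not A*B*C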

I can't help but think that I'm somehow missing the question that you are actually asking. An sRGB ICC profile already has the "D65" source white point incorporated into the profile by means of the chromatic adaptation that was used to make the sRGB ICC profile from the sRGB color space specs. It doesn't have to be added again, in the context of using ICC profile color management.

I tried an experiment once, modifying LCMS to use D65 as the illuminant, instead of D50. I installed the modified LCMS in /usr (this is on Linux) so all ICC profile applications used this modified LCMS. And I made a set of ICC profiles that used D65 as the illuminant. With this modified LCMS, when making ICC profiles from color space specs, D65 color spaces didn't need to be chromatically adapted to D50. But D50 color spaces such as ProPhotoRGB did need to be chromatically adapted to D65. And of course "E", D60, etc. color spaces still needed to be chromatically adapted, but to D65 instead of D50.

When I tried editing using this modified LCMS and the modified profiles with my editing software such as GIMP, all the colors looked exactly the same. Exactly the same. The only way I could get "different colors" was to:

  • Use my "D50 illuminant" ICC profiles with my modified-to-use-D65 version of LCMS
  • Or else use my "D65" version of LCMS with my normal D50-adapted ICC profiles: Edit: What I meant to say, should have said, was "Use the D65-illuminant profiles with normal non-modified D50-illuminant LCMS".

Sometimes you might run across an incorrectly-made sRGB ICC profile where the chromatic adaptation from D65 to D50 wasn't done. Using such a profile makes images look blue, such as the image below on the right (the colors on the left are correct):

Back in the days of V2 workflows, you could "get different colors" - either blue or yellow or even other colors depending on your monitor's actual white point - by using Absolute colorimetric rendering intent to the monitor.

You could also get different colors when converting from one ICC RGB working space to another ICC RGB working space with a different white point, but you had to specifically ask for Absolute colorimetric intent - all the editing software I've ever seen defaults to Perceptual or Relative, so nobody was likely to do this accidentally.

For example, you might convert from sRGB to BetaRGB (which has a D50 white point) or vice versa, using Absolute colorimetric intent, resulting in images such as are shown below. Notice the image on the right is "too yellow" and the image on the left is "too blue":

But the ICC decided this sort of color change when using Absolute colorimetric was confusing to users.

So for V4 workflows, when the source and destination color spaces are both "monitor class" profiles (all the standard RGB working spaces we use in the digital darkroom are "monitor class" profiles), when you ask for Absolute colorimetric rendering intent, what you get is Relative. Which makes it decidedly more difficult to write tutorials that encourage users to experiment and thus learn for themselves first-hand the difference between relative and absolute colorimetric intents :frowning:

The images above come from my article on "Will the real sRGB profile please stand up", which was written when I actually had access to V2 editing software: https://ninedegreesbelow.com/photography/srgb-profile-comparison.html

Specifically, if I am creating data to store in a file that does not have an ICC profile attached, what parameters should I use? And what parameters should I use if I use an ICC profile with the correct primaries, D50 white point, etc.? Initially I thought I should use the sRGB primaries and D65 for the former and D50 for the latter.

But that does not seem to fit what the software I am using as an example does. It seems, rightly or wrongly, that if you want the software to work as expected you need to create the data in the former case (the file without the ICC profile) with a D50 white point, as the software will NOT make any chromatic adaptation.

Hmm, well, the only answer anyone will ever be able to give you is that for V2 and V4 ICC profile applications, use the D50 adapted matrix for sRGB. Whether the profile is actually embedded in the image or not is irrelevant. I've tried to give reasons why several times in this thread, but as I said, I'm not hitting the area that answers your questions, and at this point I somewhat doubt my ability to do so :slight_smile: .

If you don't want to use a D50-adapted matrix, wait until someone adds iccMAX support to an ICC profile color managed editing application, in which scenario I don't have much of a clue what will happen or be possible. But don't try to mix whatever you do using iccMAX applications with what you do using V2/V4 applications.

Or else don't use ICC profile color management at all, and instead use OCIO color management, which requires using OCIO LUTs to get from whatever color space the image is in, to whatever color space you want the image to be in. But I'm not the person to advise you on the specifics of OCIO, if that's the direction you want to go. If you do a search on the forum, there are already some threads on the topic.

Here is a thought: Go ahead and try whatever it is that you think should be done, as you generate the matrices for whatever application you have in mind. And if it works, great! Experimenting with doing whatever you think should work is a great way to learn what does work. In general trying stuff and seeing what happens, and then figuring out why really is a nice way to learn stuff.

Bear-of-little-brain here, methinks that 1) if you're going to put primaries and whitepoint information in an image file, it should represent the color gamut to which the data was last converted, and 2) if you're not going to put that information, you need to ensure the color gamut of the data can be used as-is by whatever media the data is intended for.

You can combine #1 and #2 for the largely unmanaged wild of the web by converting and storing sRGB/D50 (D50 mainly because the ICC tells people that's their notion of reference whitepoint) and pray someone's not going to regard your image on a cheap projector, ask me how I know...

I think the primary consideration is to ensure the metadata properly represents the image characteristics, and in its absence you need to have particular media in mind.

I think what you're missing from Elle's responses is that there are multiple 'white points' that are used in different ways, at different stages within the calculations used to generate the matrix used to convert between colorspaces. Specifically, the keyword you should look at more closely is 'adapted'.

Disclaimer: I'm not an expert on the standards, I've just struggled with the math and figured this out after reading way too much documentation that was way too vague. I might still be misunderstanding a lot of this, so I would honestly like some feedback from experts like Elle.

So, consider for a moment that 'white' is [1, 1, 1] no matter what RGB colorspace we're in. This doesn't specify a whitepoint per se - no, we specify a white point in terms of the XYZ colorspace. For example, while sRGB's white point has xy coordinates [0.3127, 0.3290], that still is just saying that the exact 'color' for [1, 1, 1] (or 'white') can be measured externally as having those xy coordinates.

ICC profiles use what's called a 'Profile Connection Space' (PCS). What this is will vary, but most of the time it's either XYZ or L*a*b* - and for ICC profiles (I guess versions 2 and 4), the white point that they use for the PCS isn't E, but instead D50 - which is roughly equal to XYZ values [0.964, 1.000, 0.825]. This means that, to stay consistent, we have to transform whatever 'white' is to XYZ values such that 'pure white' is [0.964, 1.000, 0.825], rather than [1, 1, 1] (or, if we were using D65, roughly [0.950, 1.000, 1.089]).

However, because of how human eyes work, you can't just rescale XYZ values directly to convert between white points. Instead, you have to convert XYZ values into LMS (the native colorspace of the human eye), rescale those values, then convert back into XYZ.

There is some debate about what the best matrix to use is for converting between XYZ and LMS, and it often depends on your use case, needs, and specific setup. However, the most common when dealing with ICC profiles is the original 'Bradford' color transformation matrix. I specify 'original' because apparently there are two versions, and ICC profiles explicitly use the original one.

So, here's an overview of how this looks:
Linear sRGB → XYZ → LMS → D50/D65 → LMS → XYZ (PCS)

And going to another RGB space (for this example, to be displayed on a monitor with a D75 white point):
XYZ (PCS) → LMS → D75/D50 → LMS → XYZ → RGB
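
To make the LMS step concrete, here is a small numpy sketch (my own illustration, using slightly more precise white point values than the rounded ones above) that builds the Bradford adaptation matrix from D65 to D50 and checks that it maps the D65 white exactly onto the D50 white:

    import numpy as np

    # the 'original' Bradford XYZ-to-LMS matrix, as used with ICC profiles
    BRADFORD = np.array([
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ])

    D65 = np.array([0.9505, 1.0, 1.0890])  # XYZ of the D65 white point
    D50 = np.array([0.9642, 1.0, 0.8249])  # XYZ of the D50 white point (ICC PCS)

    # XYZ -> LMS, scale each response by (destination white / source white), LMS -> XYZ
    cat = np.linalg.inv(BRADFORD) @ np.diag((BRADFORD @ D50) / (BRADFORD @ D65)) @ BRADFORD

    print(cat @ D65)  # [0.9642, 1.0, 0.8249]: the D65 white lands exactly on D50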

It's important to note that in both RGB colorspaces (both sRGB and the monitor's colorspace), the RGB value for 'white' remains [1, 1, 1]. If the picture is a photo of a white piece of paper with a drawing on it, any part that shows the paper will have the same RGB value in both RGB colorspaces (assuming that it's perfectly encoded as white and not slightly off-color, nor darkened to a light gray).

That's why one of Elle's comments carefully noted that the ICC specs assume that your eyes are 100% adapted to the white point of your display - because they're designed to make sure that the display's white point is always used for the actual value of white.

Now, for the math:

  1. orig = Original RGB value.
  2. final = Final resulting RGB value.
  3. toXyz = RGB to XYZ matrix for the initial (or 'source') RGB colorspace. Uses whatever that colorspace's actual white point is, such as D65.
  4. toRgb = XYZ to RGB matrix for the final (or 'destination') RGB colorspace. Uses whatever that colorspace's actual white point is, such as D75.
  5. whiteSource = Source RGB colorspace's white point.
  6. whiteDest = Destination RGB colorspace's white point.
  7. toLms = XYZ to LMS matrix, such as the Bradford or Hunt matrices.
  8. diag() = Function to turn a 3-element vector into a diagonal matrix.

final = toRgb * (toLms^-1) * diag((toLms*whiteDest)/(toLms*D50)) * toLms *
(toLms^-1) * diag((toLms*D50)/(toLms*whiteSource)) * toLms * toXyz * orig

I noticed that the built-in editor had decided to line-break right at the point where colors would be in the PCS (at the time I hadn't put spaces around the asterisks), so I decided to put an actual line break in there. I put the spaces around most of the asterisks to help show where each colorspace conversion takes place. Decided not to with the ones inside 'diag()', to better group those together as a single 'conversion'.
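
In case a runnable version helps, here is the same formula sketched in numpy. The matrices are examples only: the standard linear-sRGB-to-XYZ matrix as toXyz, and, purely for the demo, a destination that is its exact inverse with the same D65 white point, so the round trip through the D50 PCS should return the original pixel (a real monitor profile would supply its own toRgb and whiteDest):

    import numpy as np

    BRADFORD = np.array([                # toLms: the 'original' Bradford matrix
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ])
    D50 = np.array([0.9642, 1.0, 0.8249])
    D65 = np.array([0.9505, 1.0, 1.0890])

    def adapt(to_lms, white_src, white_dst):
        # von Kries-style scaling in LMS: XYZ -> LMS -> rescale -> XYZ
        scale = np.diag((to_lms @ white_dst) / (to_lms @ white_src))
        return np.linalg.inv(to_lms) @ scale @ to_lms

    to_xyz = np.array([                  # linear sRGB -> XYZ, native D65 white
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])
    white_source = D65
    white_dest = D65                     # pretend the destination is also D65
    to_rgb = np.linalg.inv(to_xyz)       # stand-in for a real monitor matrix

    orig = np.array([0.2, 0.5, 0.8])                           # a linear RGB pixel
    pcs = adapt(BRADFORD, white_source, D50) @ to_xyz @ orig   # into the D50 PCS
    final = to_rgb @ adapt(BRADFORD, D50, white_dest) @ pcs    # out to the display

    print(final)  # ~[0.2, 0.5, 0.8]: the round trip through the PCS is lossless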

Hope this helps! While I did find this thread while googling for how to do matrix multiplication in gmic, I saw what looked like a very recent thread from someone going through some of the same issues I did.


Now, the reason I had gotten so confused while learning all this, was because I was wanting to figure this all out so that I could specifically use absolute colorimetric conversions between colorspaces; I didn't want to use white point adaptation. Specifically, I wanted to make one image look identical on several different monitors, and make that look identical to the original object in real life. I had all displays in the same room as the object, too.

But I had in my head the idea that 'white balance' was meant to help adjust colors to be more or less white, going bluish or orangeish based on color temperature. So I kept trying to use white point adaptation to do the opposite of what it was intended to do, and since none of the documentation I could find was geared toward that, it was kinda frustrating!

Had to take a step back and figure out what it did first, in the context in which it was being used - and after I figured that out it was much easier to 'undo' the whitepoint adaptation.

Except then I learned that my phone's camera was doing it all wrong and was assuming D50 was the actual white point for the display. Figuring out why certain shades of green lacked that slight tint of blue while everything else looked spot on was 'fun', alright.

... Actually it kinda was. And the whole project was just for fun anyway; can't seem to get a job, so may as well mess around with colorspaces instead!

Not really, if something is the same, then it is the same, if something is different then it is different!

This is not meant as criticism of Elle, who has been very helpful!

At the end of the day my question is very simple and can be summarised as: if I have some data that is in the sRGB colorspace, what are the parameters used to describe that data? And in addition, are those parameters any different from those I would find in an sRGB ICC profile?

It seems, at least as a de facto standard, that the answer to the latter part of that question is that there is no difference. Now perhaps that was not intended, maybe it's even a mistake... but otherwise users would likely complain if they attached an sRGB ICC profile to sRGB data and the result looked different!

Elle, I think you should have pointed me to this page... :slight_smile:

What I did not get before is that the values of the primaries in an ICC profile have (or SHOULD have) been chromatically adapted from their absolute values. This was not intuitive, but I get it now.

So in summary: sRGB data that uses the unadapted sRGB primaries plus the D65 white point should equal (or be close enough to) the same data defined using the correctly adapted primaries and the D50 white point?

Hi @LaurenceLumi - hmm, well, I actually did mention that article, back in Comment 4 :slight_smile: , where I gave a link to an article that has a downloadable spreadsheet for calculating the sRGB ICC profile from the ICC specs and the sRGB color space specs. But I'm really glad you found that article helpful - it was a ton of work to write, and I learned a lot while writing it!

@Tynach - I really like all the experimenting you've been doing with various displays. That sort of stuff is the 100% best way to actually learn how ICC profile color management really works. Otherwise the constant temptation is to make a lot of unconscious assumptions, which seem inevitably to end up being not correct just when you really want something to work "as expected".

It always makes me a bit nervous when people refer to me as an expert :slight_smile: because everything I know about ICC profile color management was learned the hard way, one experiment at a time just like you are doing, followed by trying to figure out "why" results are as they are, whether by doing further testing or doing a lot of background reading or asking questions on forums and mailing lists, or whatever it takes. So whatever expertise I have is relative to the stuff I've tried to figure out. I don't have any formal university degree in color management or anything like that.

Anyway, I do have some thoughts on your descriptions and comments for your wonderful monitor experiments, but I need to clear some other "to dos" off my list before sitting down to type something up.

I should probably mention that this is all done with GLSL code, on Shadertoy and in the Android app 'Shader Editor'. I've looked at ICC profiles and compared different ones to each other, but I've yet to write any code that actually uses any such profile.

It's one of those things I feel I should do, but a large part of what I'm doing right now is on my phone in Shader Editor (where I have to 'calibrate' the display by having a massive array containing the equivalent to a VCGT tag, multiply all color values by 255, and use that as the array index for nabbing the final value for that channel).

Same, it's happened a few times with me... And here I am unemployed because nobody wants to hire someone without actual job experience. Often I'll post something that really makes sense to me and seems completely true, but I'll still get the feeling that it might only seem true to me because I "don't live in the real world." And that's often the sort of thing (though in more detail and with examples) told to me when I give opinions on topics like tab indentation, so I have a feeling they might be right.

What I more or less meant by 'expert' in my own post, however, was that you're someone who has done that testing before - and thus you have built up a fairly decently sized repository of knowledge, at least when compared to others. I hesitate to say 'professional' because I honestly don't know what your actual job consists of, but given the number of articles you've written (and both the depth and breadth of the topics in them), I think it's safe to say you're an expert - at least relatively.

I should have joined this community much sooner, but I didn't really know about it. Besides that, it was only very recently that I broke down and finally just bought myself a colorimeter, as before that I was making a lot more assumptions about the quality of my own equipment (factory-calibrated monitors are, apparently, often very badly calibrated).

Mostly so far I've just been posting test code on Shadertoy, and occasionally asking for things like 'where can I find copies of various official standards?' on Reddit... where I didn't get any really useful leads; the subreddit I saw over there was I think /r/colorists, and 90% of the content is people saying things like, "In this program, always set these settings to those values when doing that."

So Wikipedia has still been my number one source for things like what chromaticity coordinates belong to the primaries of which standards, and I've not really had anywhere to go for asking for feedback on the actual algorithms.

As for responding later, that's no problem! I figure that's what forum-like websites are for - group conversations that could take anywhere from minutes to weeks between responses. Wouldn't want to rush you :smiling_face:

At any rate, uh... I sorta split up writing this comment, part of it in the morning and part of it in the evening. I'm not really sure where I was going with some of it or if I intended to modify/add to/remove from earlier parts, so I'm sorry if it's a little bit of a rambling mess. I'll just post it as-is for now, as I'm not sure what else to do with it.

Hi @Tynach - my apologies for taking so long to circle back around to your very thought-provoking post - :slight_smile: I bet you thought I forgot about this post, but nope, not at all!

Edit: with my usual stellar inability to speak clearly, when I tried to rewrite my initial sentence to make it more clear, I left out the critical part of the sentence above, which is that I bet you thought I forgot about this thread, not true!

So to try again: your post was very thought-provoking, and I didn't forget about it; in fact I have been mulling over the points you've made. So I just edited the original sentence above to put in the missing phrase. Sigh.

I never stopped to think about what RGB values the monitor profile might have for white and near-white colors - thanks! for mentioning that.

I used ArgyllCMS xicclu to check several monitor profiles that I made at different times, using different algorithms, and sure enough "white" defined as Lab color (100, 0, 0) was close to or exactly (1,1,1) in the monitor space, using relative colorimetric intent:

xicclu -ir -pL -fif file-name-for-monitor-profile

But "how close are the channel values for white and near-white" does depend on the type of monitor profile. For my LUT monitor profiles, R, G, and B are only approximately the same for grayscale values, being very close for white and near white, and progressively farther apart as the grays get darker. On the other hand, for my profile made using "-aS", R=G=B up and down the gray axis.

I'm guessing that "how different are the channel values for white and gray" also depends on what sort of prior calibration was done using the vcgt tag, before making the monitor profile.

Your goal of making one image look identical on several different monitors, and also make the image look identical to the original object in real life, of course means that at some point you took a photograph of the real life object (I'm really good at figuring out the obvious :slight_smile: ).

Recently I took a photograph of a painting and used RawTherapee's CIECAM02 module to make the colors on the screen match the colors in the painting:

Of course your situation - multiple monitors in the same room right along with the photographed object - might have the advantage that the entire room is evenly lit with the same color and brightness of light. In which case "what color of light" to calibrate all the monitors to might depend on the color of the ambient light. But then you'd need to consider whatever compromises might be required when calibrating any given monitor to a white point that's too far away from its native white point.

I had been thinking about your quest to make the colors look the same on all your monitors, and thinking that the CIECAM02 modules might be a way to accomplish your goal (even without first calibrating and perhaps also profiling the monitors). Making images look the same on different display devices was @jdc's motivation for RawTherapee's CIECAM02 module existing in the first place.

@ggbutcher - the RawTherapee CIECAM02 module is something that might also work for displaying images on your projection screen, though it might mean making a solid white image (and perhaps also an 11-step grayscale wedge at L=100 down to L=0) using editing software, projecting that image onto your screen, taking a photograph of the projected image, and seeing what color of white the projected white actually is. There are sophisticated devices for measuring such things, but probably a photograph would get you "in the ballpark".

@gwgill - now that @Tynach has a colorimeter and can calibrate and profile his various monitors, would this colprof switch allow him to accomplish his goal of making images look the same on all the monitors using ICC profile color management? Or (as I sort of suspect) am I missing something critical in how images are displayed on different monitors?

http://argyllcms.com/doc/colprof.html#ua

For input profiles, this flag forces the effective intent to be Absolute Colorimetric even when used with Relative Colorimetric intent selection in a CMM, by setting a D50 white point tag. This also has the effect of preserving the conversion of colors whiter than the white patch of the test chart without clipping them (similar to the -u flag), but does not hue correct white. This flag can be useful when an input profile is needed for using a scanner as a "poor man's" colorimeter.

@Elle
Thanks for the compliment, I'll look at what Ciecam can or can not bring... with my (very) bad english :slight_smile:

My apologies in return @Elle, I was not only indecisive as far as what to say was concerned, but I also had accidentally deleted part of my code. It's not on version control (and I'm not sure if Shader Editor uses files, or Android's per-app SQLite database), and I had set it up so that running the code auto-saved it... So when I accidentally deleted some code portions (and then tapped 'run' without thinking) I had to spend some time recreating what I'd written beforehand.

I really should put it into a file on my desktop, but I'd honestly rather just... completely rewrite it instead. It's a mess of commented out code right now, especially since I have something around the lines of half a dozen sets of chromaticity coordinates specifying my phone's display colorspace, all but one commented out.

Also, I... honestly don't know what to say to you, of all people, calling my post thought-provoking. All I had intended to do was explain white point adaptation, and what it meant for the math behind colorspace conversions. I saw what looked like either a misunderstanding or some missing information, and fueled mostly by feelings of, "Hey, I had to figure this out recently, here's a chance to ramble about it," I typed up the post I had.

Then feelings of, "Wait, what if I don't actually understand this as well as I think I do?" kicked in and I put that disclaimer in. After all, I'm literally just some guy who still lives with his parents who has way too much free time. And since I've yet to 100% accurately reproduce all lighting scenarios with one set of options plugged into my phone, I'm honestly fairly sure there are things I'm definitely getting wrong.

... Anyway, on to the interesting stuff.

It's good to know that it checks out, but I was kinda talking abstractly, in the sort of, "If a program were told to 'display white' on the screen, what values would it apply to the color of the pixels?" kind of way.

I only meant to describe things like, if your program is dumb then white is just gonna be 'set all channels to full'. And that extends to if your program is smart but presents itself as dumb (meaning it'll convert RGB colorspaces, but for the converted values it'll still state that white is all channels being full).

Most likely. This is how I have my desktop monitor set up, and apparently it's good enough that instead of a LUT for conversion, Chrome at least just uses a math function as the transfer characteristics. Not that Chrome is the best when it comes to color management, but going to the bottom of chrome://gpu lets me see exactly what Chrome uses for color correction.

If I use a profile that was generated without a VCGT tag, it actually says that it's using a LUT (and I've noticed some serious banding and general low accuracy when that's the case, though only in Chrome). But currently it instead has this as the 'Color space information':

{primaries_d50_referred: [[0.6608, 0.3388], [0.3321, 0.5954], [0.1501, 0.0602]], transfer:0.0782*x + 0.0000 if x < 0.0431 else (0.9476*x + 0.0522)**2.4005 + 0.0000, matrix:RGB, range:FULL}
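
For what it's worth, those parameters are numerically very close to the standard sRGB decode curve; a quick check of that claim (my own, plugging in exactly the numbers Chrome printed above):

    import numpy as np

    def chrome_transfer(x):
        # the transfer function exactly as reported by chrome://gpu above
        return np.where(x < 0.0431, 0.0782 * x, (0.9476 * x + 0.0522) ** 2.4005)

    def srgb_decode(x):
        # the standard sRGB decode curve, for comparison
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    x = np.linspace(0.0, 1.0, 4096)
    print(np.abs(chrome_transfer(x) - srgb_decode(x)).max())  # well under 1/255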

Actually, not quite! Shader Editor lets me use the camera as a texture source, so I'm taking already-processed RGB data and having to undo that processing as best as possible, then re-do it how I want it redone. In real-time, so it's a good thing it's with GLSL!

Sadly, I don't have full control over the camera hardware with Shader Editor, but my phone does fully support Android's newer Camera2 API. This means I can get all the necessary information about my camera hardware that I need to essentially return the RGB values into, as close as possible, the original RAW values. I used Camera2 Test to extract the data (and of course transposed the matrices for use in GLSL).

... Except for what white balance is currently in use. I have to deal with the white balance constantly changing as I aim the phone at different items, so I've had to use either whatever whitest item is nearby, or just set items on some paper, or just... look at things that are on my cluttered nightstand, which has several pieces of paper on it.

Instead, to calculate the camera matrix, I had to dig through the DNG file format specification (and I'm having to assume that my phone's camera goes through the same process as outlined in said DNG specifications) to figure out how to turn chromaticity coordinates for light sources (measured with my colorimeter) into the XYZ-to-RAW matrix. In the DNG spec, the relevant section is chapter 6 (Mapping Camera Color Space to CIE XYZ Space).

It is not a simple task, and I don't think RawTherapee gets it quite right in the more complex case of not having chromaticity coordinates (where instead you start off with just whatever RAW value is used for D50 white, AKA the 'AsShotNeutral' tag), which is the case when I use camera apps that let me capture RAW data.

There apparently are 2 camera matrices, and I have to calculate the CCT of the light source, and if it's between the CCT for Standard Illuminant A and Standard Illuminant D65, I have to determine where it is between those two and use that to perform linear interpolation between the two camera matrices.

I don't know for sure if I do that correctly in my code, especially since it's that part that I accidentally deleted. I've spent a few days (mostly the last 3 days, but also various days over the last few months) testing and tweaking it though, and think it's correct now.
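
For what it's worth, here is the shape of that interpolation as I understand it from the DNG spec (chapter 6): the blend weight between the two camera matrices is computed on an inverse-CCT (mired) scale and clamped at the two calibration illuminants. A sketch, assuming the usual pairing of ColorMatrix1 with Standard Illuminant A and ColorMatrix2 with D65:

    import numpy as np

    CCT_A, CCT_D65 = 2856.0, 6504.0  # CCTs of the two calibration illuminants

    def interp_color_matrix(cct, color_matrix_1, color_matrix_2):
        # blend the two DNG camera matrices for an in-between light source;
        # the weight is linear in 1/CCT (mireds), clamped at the endpoints
        cct = np.clip(cct, CCT_A, CCT_D65)
        w = (1.0 / cct - 1.0 / CCT_A) / (1.0 / CCT_D65 - 1.0 / CCT_A)
        return (1.0 - w) * color_matrix_1 + w * color_matrix_2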

At any rate, I've made some minor changes to my code to clear things up (I originally had used 'LMS' to refer to both 'human eye' space and camera RAW space), so I've done some regex search/replaces to turn a few instances of the word lms into raw. I'll go ahead and just attach the code as-is for you to pull your hair out over... It's commented, but all of the comments are 'for me' and not for anyone else, including the "don't take this section seriously" warning.

I do think I did a decent job at variable naming overall, but I've had numerous times where I wanted to name multiple things the same thing in the same scope, so... well, I had to mix and match how names were organized/formatted a few times. And sometimes decisions like that carry over even when half the variables get removed, rewritten, or commented out anyway, which... is why I want to rewrite it at some point. Make it much cleaner.

I wish. Light bleeds in through my bedroom window, and the ceiling light that's in here has a really low CCT. Something around 3500K if I remember correctly (colorimeter readings vary between 3100K and 3700K).

The only light I have actual control over is my nightstand lamp, and for that I fairly recently bought an LED bulb that's rated as having a 90+ CRI. Its CCT is actually at exactly 5000K, but exact chromaticity coordinates do seem to still vary a little... But since I can hold my colorimeter right up to the bulb, they're much more consistent.

So at night, with my monitors turned off and with items on my cluttered nightstand, with only the nightstand lamp turned on and the ceiling light turned off... then I can have full control over the color of the light.

I suppose I also have decent color readings for the CFL bulb that was in that lamp before I put the LED bulb in there, and that bulb now helps light up the bathroom (with other bulbs from the same box, even). Since there's no windows in the bathroom, that means I can also test the code on things that are known to be white, like the bath tub and toilet. The sink is kinda an off-white, I've noticed.

I don't even try to calibrate my displays to ambient lighting. Right now I have them profiled with a VCGT tag that changes it from using the native white point of the display, to instead using plain ol' D65, and transfer characteristics that match sRGB.

At least, more or less. Chrome has some weird values in the equation it gleans from my profile, but it seems to work? Either way I usually just force Chrome to treat my displays as sRGB, ignoring system profile. Otherwise I get some noticeable (but relatively minor) banding in smooth images.

To this end, I had already settled on recreating the appropriate camera RAW matrices according to the DNG spec. CIE 2002 uses a color matrix designed around an LMS colorspace that had 'sharpened' spectral sensitivities. This is less accurate, but overall works out better when it comes to human perception of chromatic adaptation. In other words, when our brain performs white balancing, it exaggerates some things while dulling others, and CIE 2002 helps model that.

You'll see some huge chunks of my code commented out that have some... less informative comments - in particular, the majority of the convert() function. This was from my attempt to model and understand CIE 2002. I have an unpublished page on ShaderToy that has a working version of this, but... while I have tried to understand all the variables and how they work together, the documentation I can find on CIE 2002 is sparse. Very sparse. Wikipedia is where I got most of the equations I do use, but I have to guess half the time at what many variables actually mean.

At any rate, I'm trying to go for 'absolute colorimetric' types of things, and am in fact undoing a lot of white point adaptation and other things which try to model perception. As such, using a system that models perception is only useful when undoing its effects, and since the code (or maybe its hardware) that performs those operations to begin with resides in the camera module itself, I doubt it's as complex as CIE 2002.

But! Sometimes I do want to adapt things to my phone display's white point - and when I do, I basically uncomment the end of toRgb's declaration (line 814), and comment out part of transMat's declaration (line 821):

814: const mat3 toRgb = xyzToRgb(outSpace)*aTo;
...
821: const mat3 transMat = /*fromRaw*whiteBalance*toRaw**/toXyz;

And here's more inconsistent naming. aTo is the matrix applied to colors just before the absolute XYZ→RGB matrix is applied (for converting to the final RGB colorspace). It's basically 'adaptation matrix for the colorspace being converted To'. And it's defined using what I label as the 'output CAM matrix' (CAM meaning Color Appearance Model), which is the XYZ→LMS matrix I have chosen to use.

I could use CIE 2002's matrix, but again it's spectrally sharpened (and I like having overly extreme accuracy). Instead I decided to calculate such a matrix from data and calculations given by the Colour & Vision Research Laboratory (CVRL). Links are in the source code just above my declaration for primariesLms (lines 548 and 549 contain the links).

The results are very close to those produced if I were to use the Hunt-Pointer-Estevez LMS matrix, which is regarded as the 'more accurate' of the professionally produced LMS matrices (and is commonly used in color blindness simulation and research). However, the results of my CVRL-inspired matrix seem to be slightly 'sharpened', placing them somewhere between the Hunt and CIE 2002 matrices (though closer to Hunt).

I fully realize, of course, that if I really wanted to seriously use the proposed 2012 XYZ functions, I'd have to use a spectrophotometer and match the color temperature of my displays and light sources to new xy chromaticity coordinates calculated from said new color matching functions. I can't even hope to afford a spectrophotometer, so that's... basically not happening any time soon. I trust that the CVRL is honest when they say they matched the new CMFs to the 1931 CMFs as closely as possible, so I've just been using 1931-tuned chromaticities. Seems to work well enough.

Wouldn't that just make the profile badly formed, in that the white point reported isn't the actual, true white point? I'm a bit confused by this, and this is going way outside of the scope of stuff I've researched... I don't know a lot about actual color profiles and how to make heads or tails of them; I've mostly just researched colorspace parameters and the math to convert to/from them.


Here's the code as mentioned: Back-Camera.txt (36.1 KB)

I think over the last few weeks I've had a lot of things I wanted to say. I'm not sure I remembered all of them.

Anyway, you might be wondering why I do the whole RGB→YUV→rescaling + offsetting values→RGB thing. That's because of some bug in... I think either Android itself, or some framework/library for handling cameras that most apps seem to use on Android, that causes any (most?) hardware-accelerated camera view to have RGB values rescaled - as if they were using 'limited/tv'-ranged RGB signals (values in the 16 - 235 range) instead of 'full/pc'-ranged values (0-255).

Because of this, by default brights are super bright (going over 1.0) and darks are super dark (going below 0.0). I simply correct for this, and thank goodness the initial scaling (which shouldn't have taken place to begin with) must be done GPU-side, since I can indeed recover those negative and over-bright values (they weren't chopped off). I've also noticed that just rescaling the RGB values back to full range causes some items to appear more dull than they should be, so I do the scaling in YUV instead.
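
The rescaling in question is just the standard limited/full range remap; a sketch of both directions, assuming 8-bit-style ranges (Y: 16-235, chroma: 16-240) expressed on a 0-1 scale:

    def full_to_tv(y, u, v):
        # squeeze full-range values into 'limited/tv' range, the direction that
        # undoes an unwanted limited-to-full expansion (all values on a 0-1 scale)
        return (219.0 * y + 16.0) / 255.0, (224.0 * u + 16.0) / 255.0, (224.0 * v + 16.0) / 255.0

    def tv_to_full(y, u, v):
        # the opposite direction: expand 'limited/tv' range back out to full range
        return (255.0 * y - 16.0) / 219.0, (255.0 * u - 16.0) / 224.0, (255.0 * v - 16.0) / 224.0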

Besides that, uh... hm. I think the only other obvious 'why in the world are you doing this?' part of the code is that I use the SMPTE 170M (same as Rec. 709) transfer characteristics for the input colorspace. If I don't, everything still seems too dark and there's a wide range between 'bright' and 'dark' that should be a medium gray but is too dark of a medium gray to really match what I see in person.
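
(For reference, the SMPTE 170M / Rec. 709 transfer curve being talked about, encode and decode, sketched out:)

    def rec709_encode(lin):
        # Rec. 709 / SMPTE 170M: linear scene light -> encoded signal value
        return 4.5 * lin if lin < 0.018 else 1.099 * lin ** 0.45 - 0.099

    def rec709_decode(v):
        # the inverse: encoded signal value -> linear scene light
        return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1.0 / 0.45)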

I also attempt to 'dither' colors at the end to make up for so many transformations being applied to the color, to reduce banding. I have no idea if it really helps or not, nor do I know if I actually do it properly. It's around in the same part as when I 'calibrate' the output picture using the 1D LUT (basically quick-and-dirty VCGT).
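
(The dither idea reduced to its core: add up to half an output step of noise before quantizing, so smooth gradients break up into noise rather than visible bands. A conceptual numpy sketch, not the actual GLSL:)

    import numpy as np

    def dither_quantize(img, levels=256, seed=0):
        # add +/- half an output step of uniform noise, then round to integer levels
        noise = np.random.default_rng(seed).uniform(-0.5, 0.5, size=np.shape(img))
        return np.clip(np.round(np.asarray(img) * (levels - 1) + noise), 0, levels - 1)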

I think that's most of what I'd thought of to say? Either way this post feels like it's already way too long. I'll stop here and let you decide how much of this rambling mess is worth responding to. I should probably go through and reorganize/rewrite bits of this post, but... almost every time I've thought that sort of thing in the past when responding to others, I end up not actually responding at all. You're someone I've actually heard of and have a lot of respect for, so I feel I really should give a response, even if it's poorly organized and mashed out of a keyboard just before dinner all in one sitting.