Spot on!
Yes, that is very helpful! And it is exactly in agreement with several examples I have.
I am using the RT code as a reference for what I am trying to do. Ideally I would like to incorporate my work into RT at some point!
Specifically, I am trying to create a matrix that will take color values that I have computed from density readings of a film scan, which should be in a particular colorspace. Let's call that colorspace RA4 space (i.e. the colorspace that is created by using the CMY dyes in an RA4-processed color print).
I then want to transform that data to sRGB correctly. At least that is what I want to do initially…
That's why I want to understand how to do it properly! Once I have got the method correct I may do things slightly differently. But my goal initially is to do it correctly.
I can make up some examples using generated solid colors, and compare the results that I get from RT.
I'm totally confused as to how your inquiries regarding viewing conditions are related to "Perhaps the other reason . . . " as quoted above.
There are many web browsers that fail this test of "sRGB image looks the same with and without an embedded sRGB ICC profile" - are you talking about something you see in a web browser?
Perhaps a step back is in order.
What I am trying to do should be straightforward, in that I want to use the exact same methods that are used in digital photography, and to some extent even more so those used in RT (which I hope are the same, etc…). The only difference is that my "camera"-to-XYZ values are unique.
This involves a lot of trial and error, trying different things, and each time I try something I like to understand the process, the values used etc.
I understand this because the software simply ignores the values. But in the case of software that processes ICC profiles correctly what should happen?
Perhaps I can ask the question in a different way. Let's take two TIFF files: File A has the correct sRGB profile attached, with D50, and File B has the exact same data inside but no profile.
When the two files are displayed in RT, should they appear the same or different? And what color values do RT's internals think each file has?
Hmm, I thought I already answered this question above. The files should look the same assuming the software assigns sRGB to images without embedded ICC profiles, which I'm sure RT does, leastways I can't see any change in how a sample image looks before and after assigning an sRGB profile to an image with no embedded ICC profile. Do you see a visual difference?
I'm not sure what you mean by "ignores" but the problem is that those web browsers are flawed. They should all by default assign sRGB to images without embedded ICC profiles, and then convert to the monitor profile.
No, hence my point that when no profile is attached RT assumes it is sRGB at D50, and I assume so do many other applications. If it assumed the file without the profile was D65, it should apply an adjustment, and the result should look different from the one with the D50 profile?
Putting aside applications that don't do things correctly, I am looking to understand what the correct behaviour should be. Does that make my question clear?
If a V2 or V4 ICC profile color-managed editing application assumed an image without an embedded ICC profile was somehow "D65" and then tried to apply an adjustment to somehow compensate for the difference between D65 and D50, it would be a very confused ICC profile color-managed editing application.
If I were using software that behaved in this fashion, I'd file a bug report.
Very sorry @LaurenceLumi, the confusion was all mine. @agriggio explains it perfectly. Of course it's not A * B * C, because the input pixel is a vector, not a 3x3 matrix, and therefore needs to be on the rhs of each multiply! Incidentally, gmic has the mix_channels command for that. I think I'll keep quiet now.
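For anyone following along, here's a tiny sketch of that point in plain Python (the matrices are arbitrary made-up values, nothing from RT or gmic, just to show the shapes):

```python
# The pixel is a column vector, so it sits on the right-hand side of
# every multiply: out = A * (B * pixel). Pre-multiplying the matrices
# first, (A * B) * pixel, gives the same result.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

A = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]   # a diagonal scale (made up)
B = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # a channel swap (made up)
pixel = [0.2, 0.5, 0.8]

step_by_step = matvec(A, matvec(B, pixel))   # apply B first, then A
combined     = matvec(matmul(A, B), pixel)   # same thing as one matrix
print(step_by_step)
print(combined)
```

Both print the same vector, which is the whole point: combine the matrices in any grouping you like, but the pixel always stays on the right.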
I can't help but think that I'm somehow missing the question that you are actually asking. An sRGB ICC profile already has the "D65" source white point incorporated into the profile by means of the chromatic adaptation that was used to make the sRGB ICC profile from the sRGB color space specs. It doesn't have to be added again, in the context of using ICC profile color management.
I tried an experiment once, modifying LCMS to use D65 as the illuminant, instead of D50. I installed the modified LCMS in /usr (this is on Linux) so all ICC profile applications used this modified LCMS. And I made a set of ICC profiles that used D65 as the illuminant. With this modified LCMS, when making ICC profiles from color space specs, D65 color spaces didn't need to be chromatically adapted to D50. But D50 color spaces such as ProPhotoRGB did need to be chromatically adapted to D65. And of course "E", D60, etc. color spaces still needed to be chromatically adapted, but to D65 instead of D50.
When I tried editing using this modified LCMS and the modified profiles with my editing software such as GIMP, all the colors looked exactly the same. Exactly the same. The only way I could get "different colors" was to:
Sometimes you might run across an incorrectly-made sRGB ICC profile where the chromatic adaptation from D65 to D50 wasn't done. Using such a profile makes images look blue, such as the image below on the right (the colors on the left are correct):
Back in the days of V2 workflows, you could "get different colors" - either blue or yellow or even other colors depending on your monitor's actual white point - by using Absolute colorimetric rendering intent to the monitor.
You could also get different colors when converting from one ICC RGB working space to another ICC RGB working space with a different white point, but you had to specifically ask for Absolute colorimetric intent - all the editing software I've ever seen defaults to Perceptual or Relative, so nobody was likely to do this accidentally.
For example, you might convert from sRGB to BetaRGB (which has a D50 white point) or vice versa, using Absolute colorimetric intent, resulting in images such as are shown below. Notice the image on the right is "too yellow" and the image on the left is "too blue":
But the ICC decided this sort of color change when using Absolute colorimetric was confusing to users.
So for V4 workflows, when the source and destination color spaces are both "monitor class" profiles (all the standard RGB working spaces we use in the digital darkroom are "monitor class" profiles), when you ask for Absolute colorimetric rendering intent, what you get is Relative. Which makes it decidedly more difficult to write tutorials that encourage users to experiment and thus learn for themselves first-hand the difference between relative and absolute colorimetric intents.
The images above come from my article "Will the real sRGB profile please stand up?", which was written when I actually had access to V2 editing software: https://ninedegreesbelow.com/photography/srgb-profile-comparison.html
Specifically, if I am creating data to store in a file that does not have an ICC profile attached, what parameters should I use? And what parameters should I use if I use an ICC profile with the correct primaries, D50 white point, etc.? Initially I thought I should use the sRGB primaries and D65 for the former, and D50 for the latter.
But that does not seem to fit what the software I am using as an example does. It seems, rightly or wrongly, that if you want the software to work as expected you need to create the data in the former case (the file without the ICC profile) with a D50 white point, as the software will NOT make any chromatic adaptation.
Hmm, well, the only answer anyone will ever be able to give you is that for V2 and V4 ICC profile applications, use the D50 adapted matrix for sRGB. Whether the profile is actually embedded in the image or not is irrelevant. I've tried to give reasons why several times in this thread, but as I said, I'm not hitting the area that answers your questions, and at this point I somewhat doubt my ability to do so.
If you don't want to use a D50-adapted matrix, wait until someone adds iccMAX support to an ICC profile color managed editing application, in which scenario I don't have much of a clue what will happen or be possible. But don't try to mix whatever you do using iccMAX applications with what you do using V2/V4 applications.
Or else don't use ICC profile color management at all, and instead use OCIO color management, which requires using OCIO LUTs to get from whatever color space the image is in, to whatever color space you want the image to be in. But I'm not the person to advise you on the specifics of OCIO, if that's the direction you want to go. If you do a search on the forum, there are already some threads on the topic.
Here is a thought: Go ahead and try whatever it is that you think should be done, as you generate the matrices for whatever application you have in mind. And if it works, great! Experimenting with doing whatever you think should work is a great way to learn what does work. In general trying stuff and seeing what happens, and then figuring out why really is a nice way to learn stuff.
Bear-of-little-brain here, methinks that 1) if you're going to put primaries and whitepoint information in an image file, it should represent the color gamut to which the data was last converted, and 2) if you're not going to put that information, you need to ensure the color gamut of the data can be used as-is by whatever media the data is intended for.
You can combine #1 and #2 for the largely unmanaged wild of the web by converting and storing sRGB/D50 (D50 mainly because the ICC tells people that's their notion of reference whitepoint) and pray someone's not going to regard your image on a cheap projector, ask me how I know…
I think the primary consideration is to ensure the metadata properly represents the image characteristics, and in its absence you need to have particular media in mind.
I think what you're missing from Elle's responses is that there are multiple "white points" that are used in different ways, at different stages within the calculations used to generate the matrix used to convert between colorspaces. Specifically, the keyword you should look at more closely is "adapted".
Disclaimer: I'm not an expert on the standards, I've just struggled with the math and figured this out after reading way too much documentation that was way too vague. I might still be misunderstanding a lot of this, so I would honestly like some feedback from experts like Elle.
So, consider for a moment that we consider "white" to be [1, 1, 1] no matter what RGB colorspace we're in. This doesn't specify a white point per se; instead, we specify a white point in terms of the XYZ colorspace. For example, while sRGB's white point has xy coordinates [0.3127, 0.3290], that is just saying that the exact "color" of [1, 1, 1] (or "white") can be measured externally as having those xy coordinates.
ICC profiles use what's called a "Profile Connection Space" (PCS). What this is will vary, but most of the time it's either XYZ or L*a*b* - and for ICC profiles (I guess versions 2 and 4), the white point that they use for the PCS isn't E, but instead D50 - which is roughly equal to XYZ values [0.964, 1.000, 0.825]. This means that, to stay consistent, we have to transform whatever "white" is to XYZ values such that "pure white" is [0.964, 1.000, 0.825], rather than [1, 1, 1] (or, if we were using D65, roughly [0.950, 1.000, 1.089]).
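As a quick aside for anyone checking the numbers: those XYZ values come straight from the xy chromaticities, with Y normalized to 1.0. A minimal Python sketch, using the commonly quoted 4-digit xy values for D50 and D65:

```python
# Convert xy chromaticity coordinates to XYZ with Y = 1:
#   X = x/y,  Y = 1,  Z = (1 - x - y)/y

def xy_to_XYZ(x, y):
    return [x / y, 1.0, (1.0 - x - y) / y]

D50 = xy_to_XYZ(0.3457, 0.3585)   # ICC PCS white
D65 = xy_to_XYZ(0.3127, 0.3290)   # sRGB source white
print([round(v, 3) for v in D50])  # [0.964, 1.0, 0.825]
print([round(v, 3) for v in D65])  # [0.95, 1.0, 1.089]
```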
However, because of how human eyes work, you can't just rescale XYZ values directly to convert between white points. Instead, you have to convert XYZ values into LMS (native colorspace for the human eye), rescale those values, then convert back into XYZ.
There is some debate about what the best matrix to use is for converting between XYZ and LMS, and it often depends on your use case, needs, and specific setup. However, the most common when dealing with ICC profiles is the original "Bradford" color transformation matrix. I specify "original" because apparently there are two versions, and ICC profiles explicitly use the original one.
So, here's an overview of how this looks:
Linear sRGB → XYZ → LMS → D50/D65 → LMS → XYZ (PCS)
And going to another RGB space (for this example, to be displayed on a monitor with a D75 white point):
XYZ (PCS) → LMS → D75/D50 → LMS → XYZ → RGB
It's important to note that in both RGB colorspaces (both sRGB and the monitor's colorspace), the RGB value for "white" remains [1, 1, 1]. If the picture is a photo of a white piece of paper with a drawing on it, any part that shows the paper will have the same RGB value in both RGB colorspaces (assuming that it's perfectly encoded as white and not slightly off-color, nor darkened to a light gray).
That's why one of Elle's comments carefully noted that the ICC specs assume that your eyes are 100% adapted to the white point of your display - because they're designed to make sure that the display's white point is always used for the actual value of white.
Now, for the math:
final = toRgb * (toLms^-1) * diag((toLms*whiteDest)/(toLms*D50)) * toLms *
(toLms^-1) * diag((toLms*D50)/(toLms*whiteSource)) * toLms * toXyz * orig
I noticed that the built-in editor had decided to line-break right at the point where colors would be in the PCS (at the time I hadn't put spaces around the asterisks), so I decided to put an actual line break in there. I put the spaces around most of the asterisks to help show where each colorspace conversion takes place. Decided not to with the ones inside "diag()", to better group those together as a single "conversion".
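In case a concrete version helps anyone, here's the source-side half of that chain (orig through to the PCS) sketched in plain Python. The matrices are the usual published values (the original Bradford matrix, and the D65 linear sRGB→XYZ matrix); the function names are mine, not from any library:

```python
# Source half of the chain above: orig (linear sRGB) -> XYZ (D65)
# -> Bradford LMS -> scale by (toLms*D50)/(toLms*whiteSource) -> XYZ (PCS).

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def inv3(m):
    # 3x3 inverse via the adjugate
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

TO_LMS = [[ 0.8951,  0.2664, -0.1614],   # original Bradford matrix
          [-0.7502,  1.7135,  0.0367],
          [ 0.0389, -0.0685,  1.0296]]

TO_XYZ = [[0.4124, 0.3576, 0.1805],      # linear sRGB -> XYZ, D65
          [0.2126, 0.7152, 0.0722],
          [0.0193, 0.1192, 0.9505]]

D65 = [0.95047, 1.0, 1.08883]            # whiteSource for sRGB
D50 = [0.96422, 1.0, 0.82521]            # the ICC PCS white

def to_pcs(orig):
    lms = matvec(TO_LMS, matvec(TO_XYZ, orig))        # toLms * toXyz * orig
    s, d = matvec(TO_LMS, D65), matvec(TO_LMS, D50)
    scaled = [lms[i] * d[i] / s[i] for i in range(3)] # the diag(...) step
    return matvec(inv3(TO_LMS), scaled)               # toLms^-1 back to XYZ

white_pcs = to_pcs([1.0, 1.0, 1.0])
print([round(v, 3) for v in white_pcs])   # sRGB white lands on D50: [0.964, 1.0, 0.825]
```

The destination half is the same machinery run with whiteDest in place of D50 and the monitor's toRgb matrix at the end.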
Hope this helps! While I did find this thread while googling for how to do matrix multiplication in gmic, I saw what looked like a very recent thread from someone going through some of the same issues I did.
Now, the reason I had gotten so confused while learning all this, was because I was wanting to figure this all out so that I could specifically use absolute colorimetric conversions between colorspaces; I didn't want to use white point adaptation. Specifically, I wanted to make one image look identical on several different monitors, and make that look identical to the original object in real life. I had all displays in the same room as the object, too.
But I had in my head the idea that "white balance" was meant to help adjust colors to be more or less white, going bluish or orangeish based on color temperature. So I kept trying to use white point adaptation to do the opposite of what it was intended to do, and since none of the documentation I could find was geared toward that, it was kinda frustrating!
Had to take a step back and figure out what it did first, in the context in which it was being used - and after I figured that out it was much easier to "undo" the whitepoint adaptation.
Except then I learned that my phone's camera was doing it all wrong and was assuming D50 was the actual white point for the display. Figuring out why certain shades of green lacked that slight tint of blue while everything else looked spot on was "fun", alright.
… Actually it kinda was. And the whole project was just for fun anyway; can't seem to get a job, so may as well mess around with colorspaces instead!
Not really, if something is the same, then it is the same, if something is different then it is different!
This is not meant as criticism of Elle, who has been very helpful!
At the end of the day my question is very simple and can be summarised as: if I have some data that is in the sRGB colorspace, what are the parameters used to describe that data? And in addition, are those parameters any different from those I would find in an sRGB ICC profile?
It seems, at least as a de facto standard, the answer to the latter part of that question is that there is no difference. Now perhaps that was not intended, maybe it's even a mistake… but otherwise users would likely complain if they attached an sRGB ICC profile to sRGB data and the result looked different!
Elle, I think you should have pointed me to this page…
What I did not get before is that the values of the primaries in an ICC profile have (or SHOULD have) been chromatically adapted from their absolute values. This was not intuitive, but I get it now.
So in summary: sRGB data that uses the unadapted sRGB primaries plus the D65 white point should equal (or be close enough to) the same data defined using correctly adapted primaries and the D50 white point?
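One quick sanity check on that, using the Bradford-adapted sRGB→XYZ matrix as published in Bruce Lindbloom's tables (an actual profile stores these s15Fixed16-quantized, so the last digit can wobble): feeding RGB white [1, 1, 1] through it should land exactly on the D50 PCS white.

```python
# D50-adapted sRGB -> XYZ matrix (Bradford adaptation already baked in),
# values from Bruce Lindbloom's published tables:
SRGB_TO_XYZ_D50 = [[0.4360747, 0.3850649, 0.1430804],
                   [0.2225045, 0.7168786, 0.0606169],
                   [0.0139322, 0.0971045, 0.7141733]]

# RGB white [1,1,1] maps to the row sums, which should be D50 XYZ:
white = [round(sum(row), 4) for row in SRGB_TO_XYZ_D50]
print(white)  # [0.9642, 1.0, 0.8252]
```

So yes: unadapted primaries referenced to D65, and these adapted primaries referenced to D50, describe the same colors, to within rounding.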
Hi @LaurenceLumi - hmm, well, I actually did mention that article, back in Comment 4, where I gave a link to an article that has a downloadable spreadsheet for calculating the sRGB ICC profile from the ICC specs and the sRGB color space specs. But I'm really glad you found that article helpful - it was a ton of work to write, and I learned a lot while writing it!
@Tynach - I really like all the experimenting you've been doing with various displays. That sort of stuff is the 100% best way to actually learn how ICC profile color management really works. Otherwise the constant temptation is to make a lot of unconscious assumptions, that seem inevitably to end up being not correct just when you really want something to work "as expected".
It always makes me a bit nervous when people refer to me as an expert because everything I know about ICC profile color management was learned the hard way, one experiment at a time just like you are doing, followed by trying to figure out "why" results are as they are, whether by doing further testing or doing a lot of background reading or asking questions on forums and mailing lists, or whatever it takes. So whatever expertise I have is relative to the stuff I've tried to figure out. I don't have any formal university degree in color management or anything like that.
Anyway, I do have some thoughts on your descriptions and comments for your wonderful monitor experiments, but I need to clear off some other "to dos" from my list before sitting down to type something up.
I should probably mention that this is all done with GLSL code, on Shadertoy and in the Android app "Shader Editor". I've looked at ICC profiles and compared different ones to each other, but I've yet to write any code that actually uses any such profile.
It's one of those things I feel I should do, but a large part of what I'm doing right now is on my phone in Shader Editor (where I have to "calibrate" the display by having a massive array containing the equivalent to a VCGT tag, multiply all color values by 255, and use that as the array index for nabbing the final value for that channel).
Same, it's happened a few times with me… And here I am unemployed because nobody wants to hire someone without actual job experience. Often I'll post something that really makes sense to me and seems completely true, but I'll still get the feeling that it might only seem true to me because I "don't live in the real world." And that's often the sort of thing (though in more detail and with examples) told to me when I give opinions on topics like tab indentation, so I have a feeling they might be right.
What I more or less meant by "expert" in my own post, however, was that you're someone who has done that testing before - and thus you have built up a fairly decently sized repository of knowledge, at least when compared to others. I hesitate to say "professional" because I honestly don't know what your actual job consists of, but given the number of articles you've written (and both the depth and breadth of the topics in them), I think it's safe to say you're an expert - at least relatively.
I should have joined this community much sooner, but I didn't really know about it. Besides that, it was only very recently that I broke down and finally just bought myself a colorimeter, as before that I was making a lot more assumptions about the quality of my own equipment (factory-calibrated monitors are, apparently, often very badly calibrated).
Mostly so far I've just been posting test code on Shadertoy, and occasionally asking for things like "where can I find copies of various official standards?" on Reddit… Where I didn't get any really useful leads; the subreddit I saw over there was I think /r/colorists, and 90% of the content is people saying things like, "In this program, always set these settings to those values when doing that."
So Wikipedia has still been my number one source for things like what chromaticity coordinates belong to the primaries of which standards, and I've not really had anywhere to go for asking for feedback on the actual algorithms.
As for responding later, that's no problem! I figure that's what forum-like websites are for - group conversations that could take anywhere from minutes to weeks between responses. Wouldn't want to rush you.
At any rate, uh… I sorta split up when I wrote this comment, part of it in the morning and part of it in the evening. I'm not really sure where I was going with some of it or if I intended to modify/add to/remove from earlier parts, so I'm sorry if it's a little bit of a rambling mess. I'll just post it as-is for now, as I'm not sure what else to do with it.
Hi @Tynach - my apologies for taking so long to circle back around to your very thought-provoking post - I bet you thought I forgot about this post, but nope, not at all!
Edit: with my usual stellar inability to speak clearly, when I tried to rewrite my initial sentence to make it more clear, I left out the critical part of the sentence above, which is that I bet you thought I forgot about this thread, not true!
So to try again: your post was very thought-provoking, and I didn't forget about it; in fact I have been mulling over the points you've made. So I just edited the original sentence above to put in the missing phrase. Sigh.
I never stopped to think about what RGB values the monitor profile might have for white and near-white colors - thanks for mentioning that!
I used ArgyllCMS xicclu to check several monitor profiles that I made at different times, using different algorithms, and sure enough "white" defined as Lab color (100, 0, 0) was close to or exactly (1,1,1) in the monitor space, using relative colorimetric intent:
xicclu -ir -pL -fif file-name-for-monitor-profile
But "how close are the channel values for white and near-white" does depend on the type of monitor profile. For my LUT monitor profiles, R, G, and B are only approximately the same for grayscale values, being very close for white and near white, and progressively farther apart as the grays get darker. On the other hand, for my profile made using "-aS", R=G=B up and down the gray axis.
I'm guessing that "how different are the channel values for white and gray" also depends on what sort of prior calibration was done using the vcgt tag, before making the monitor profile.
Your goal of making one image look identical on several different monitors, and also make the image look identical to the original object in real life, of course means that at some point you took a photograph of the real life object (I'm really good at figuring out the obvious).
Recently I took a photograph of a painting and used RawTherapee's CIECAM02 module to make the colors on the screen match the colors in the painting:
Of course your situation - multiple monitors in the same room right along with the photographed object - might have the advantage that the entire room is evenly lit with the same color and brightness of light. In which case "what color of light" to calibrate all the monitors to might depend on the color of the ambient light. But then you'd need to consider whatever compromises might be required when calibrating any given monitor to a white point that's too far away from its native white point.
I had been thinking about your quest to make the colors look the same on all your monitors, and thinking that the CIECAM02 modules might be a way to accomplish your goal (even without first calibrating and perhaps also profiling the monitors). Making images look the same on different display devices was @jdc's motivation for RawTherapee's CIECAM02 module existing in the first place.
@ggbutcher - the RawTherapee CIECAM02 module is something that might also work for displaying images on your projection screen, though it might mean making a solid white image (and perhaps also an 11-step grayscale wedge at L=100 down to L=0) using editing software, projecting that image onto your screen, taking a photograph of the projected image, and seeing what color of white the projected white actually is. There are sophisticated devices for measuring such things, but probably a photograph would get you "in the ballpark".
@gwgill - now that @Tynach has a colorimeter and can calibrate and profile his various monitors, would this colprof switch allow him to accomplish his goal of making images look the same on all the monitors using ICC profile color management? Or (as I sort of suspect) am I missing something critical in how images are displayed on different monitors?
http://argyllcms.com/doc/colprof.html#ua
For input profiles, this flag forces the effective intent to be Absolute Colorimetric even when used with Relative Colorimetric intent selection in a CMM, by setting a D50 white point tag. This also has the effect of preserving the conversion of colors whiter than the white patch of the test chart without clipping them (similar to the -u flag), but does not hue correct white. This flag can be useful when an input profile is needed for using a scanner as a "poor man's" colorimeter.
@Elle
Thanks for the compliment, I'll look at what Ciecam can or cannot bring… with my (very) bad English
My apologies in return @Elle, I was not only indecisive as far as what to say was concerned, but I also had accidentally deleted part of my code. It's not on version control (and I'm not sure if Shader Editor uses files, or Android's per-app SQLite database), and I had set it up so that running the code auto-saved it… So when I accidentally deleted some code portions (and then tapped "run" without thinking) I had to spend some time recreating what I'd written beforehand.
I really should put it into a file on my desktop, but I'd honestly rather just… Completely rewrite it instead. It's a mess of commented out code right now, especially since I have something around the lines of half a dozen sets of chromaticity coordinates specifying my phone's display colorspace, all but one commented out.
Also, I… Honestly don't know what to say to you, of all people, calling my post thought-provoking. All I had intended to do was explain white point adaptation, and what it meant for the math behind colorspace conversions. I saw what looked like either a misunderstanding or some missing information, and fueled mostly by feelings of, "Hey, I had to figure this out recently, here's a chance to ramble about it," I typed up the post I had.
Then feelings of, "Wait, what if I don't actually understand this as well as I think I do?" kicked in and I put that disclaimer in. After all, I'm literally just some guy who still lives with his parents who has way too much free time. And since I've yet to 100% accurately reproduce all lighting scenarios with one set of options plugged into my phone, I'm honestly fairly sure there are things I'm definitely getting wrong.
… Aanyway, on to the interesting stuff.
It's good to know that it checks out, but I was kinda talking abstractly, in the sort of, "If a program were told to 'display white' on the screen, what values would it apply to the color of the pixels?" kind of way.
I only meant to describe things like, if your program is dumb then white is just gonna be "set all channels to full". And that extends to if your program is smart but presents itself as dumb (meaning it'll convert RGB colorspaces, but for the converted values it'll still state that white is all channels being full).
Most likely. This is how I have my desktop monitor set up, and apparently it's good enough that instead of a LUT for conversion, Chrome at least just uses a math function as the transfer characteristics. Not that Chrome is the best when it comes to color management, but going to the bottom of chrome://gpu lets me see exactly what Chrome uses for color correction.
If I use a profile that was generated without a VCGT tag, it actually says that it's using a LUT (and I've noticed some serious banding and general low accuracy when that's the case, though only in Chrome). But currently it instead has this as the "Color space information":
{primaries_d50_referred: [[0.6608, 0.3388], [0.3321, 0.5954], [0.1501, 0.0602]], transfer:0.0782*x + 0.0000 if x < 0.0431 else (0.9476*x + 0.0522)**2.4005 + 0.0000, matrix:RGB, range:FULL}
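Interestingly, that reported transfer function is (to within rounding) the standard sRGB decode curve: 1/1.055 ≈ 0.9479 and 0.055/1.055 ≈ 0.0521, which line up with the 0.9476 and 0.0522 above. A quick comparison, treating both as standalone functions:

```python
# The piecewise transfer function Chrome reports, vs. the standard
# sRGB electro-optical (decode) curve.

def chrome_decode(x):
    return 0.0782 * x if x < 0.0431 else (0.9476 * x + 0.0522) ** 2.4005

def srgb_decode(x):
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

# Sample a few points across the range; the two curves agree to ~3
# decimal places:
for v in (0.02, 0.2, 0.5, 0.9):
    print(v, round(chrome_decode(v), 4), round(srgb_decode(v), 4))
```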
Actually, not quite! Shader Editor lets me use the camera as a texture source, so I'm taking already-processed RGB data and having to undo that processing as best as possible, then re-do it how I want it redone. In real-time, so it's a good thing it's with GLSL!
Sadly, I don't have full control over the camera hardware with Shader Editor, but my phone does fully support Android's newer Camera2 API. This means I can get all the necessary information about my camera hardware that I need to essentially return the RGB values into, as close as possible, the original RAW values. I used Camera2 Test to extract the data (and of course transposed the matrices for use in GLSL).
… Except for what white balance is currently in use. I have to deal with the white balance constantly changing as I aim the phone at different items, so I've had to use either whatever whitest item is nearby, or just set items on some paper, or just… Look at things that are on my cluttered nightstand, which has several pieces of paper on it.
Instead, to calculate the camera matrix, I had to dig through the DNG file format specification (and I'm having to assume that my phone's camera goes through the same process as outlined in said DNG specifications) to figure out how to turn chromaticity coordinates for light sources (measured with my colorimeter) into the XYZ-to-RAW matrix. In the DNG spec, the relevant section is chapter 6 (Mapping Camera Color Space to CIE XYZ Space).
It is not a simple task, and I don't think RawTherapee gets it quite right in the more complex case of not having chromaticity coordinates (where instead you start off with just whatever RAW value is used for D50 white, AKA the "AsShotNeutral" tag), which is the case when I use camera apps that let me capture RAW data.
There apparently are 2 camera matrices, and I have to calculate the CCT of the light source, and if it's between the CCT for Standard Illuminant A and Standard Illuminant D65, I have to determine where it is between those two and use that to perform linear interpolation between the two camera matrices.
I don't know for sure if I do that correctly in my code, especially since it's that part that I accidentally deleted. I've spent a few days (mostly the last 3 days, but also various days over the last few months) testing and tweaking it though, and think it's correct now.
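For reference, a sketch of that dual-matrix interpolation, following the DNG spec's rule that the weight is linear in *inverse* CCT (1/K) between the two calibration illuminants (Illuminant A ≈ 2856K, D65 ≈ 6504K), clamping outside that range. The matrices here are made-up placeholders, not real camera data:

```python
# Interpolate between ColorMatrix1 (low-CCT illuminant) and
# ColorMatrix2 (high-CCT illuminant) by inverse CCT, per the DNG spec.

def interp_weight(cct, cct1=2856.0, cct2=6504.0):
    """Weight of matrix 1 (the low-CCT one); clamps outside [cct1, cct2]."""
    if cct <= cct1:
        return 1.0
    if cct >= cct2:
        return 0.0
    return (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2)

def interp_matrix(m1, m2, cct):
    w = interp_weight(cct)
    return [[w * m1[i][j] + (1 - w) * m2[i][j] for j in range(3)]
            for i in range(3)]

# Placeholder ColorMatrix1/ColorMatrix2 (XYZ -> RAW), illustration only:
CM1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
CM2 = [[0.8, 0.1, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 1.1]]

print(interp_weight(2856))           # 1.0 -> use CM1 alone
print(interp_weight(6504))           # 0.0 -> use CM2 alone
print(round(interp_weight(5000), 3)) # partway between, weighted by 1/K
m = interp_matrix(CM1, CM2, 5000)
```

Note the weight is deliberately not linear in kelvin: 5000K is roughly the midpoint in mired (1e6/K) terms, not in plain CCT terms.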
At any rate, I've made some minor changes to my code to clear things up (I originally had used "LMS" to refer to both "human eye" space and camera RAW space), so I've done some regex search/replaces to turn a few instances of the word lms into raw. I'll go ahead and just attach the code as-is for you to pull your hair out over… It's commented, but all of the comments are "for me" and not for anyone else, including the "don't take this section seriously" warning.
I do think I did a decent job at variable naming overall, but I've had numerous times where I wanted to name multiple things the same thing in the same scope, so… Well, I had to mix and match how names were organized/formatted a few times. And sometimes decisions like that carry over even when half the variables get removed, rewritten, or commented out anyway, which… Is why I want to rewrite it at some point. Make it much cleaner.
I wish. Light bleeds in through my bedroom window, and the ceiling light that's in here has a really low CCT. Something around 3500K if I remember correctly (colorimeter readings vary between 3100K and 3700K).
The only light I have actual control over is my nightstand lamp, and for that I fairly recently bought an LED bulb that's rated as having a 90+ CRI. Its CCT is actually at exactly 5000K, though its exact chromaticity coordinates do still seem to vary a little… But since I can hold my colorimeter right up to the bulb, the readings are much more consistent.
So at night, with my monitors turned off, with only the nightstand lamp turned on and the ceiling light turned off, I can have full control over the color of the light hitting the items on my cluttered nightstand.
I suppose I also have decent color readings for the CFL bulb that was in that lamp before I put the LED bulb in there; that bulb now helps light up the bathroom (alongside other bulbs from the same box, even). Since there are no windows in the bathroom, that means I can also test the code on things that are known to be white, like the bathtub and toilet. The sink is kind of an off-white, I've noticed.
I don't even try to calibrate my displays to ambient lighting. Right now I have them profiled with a VCGT tag that changes them from using the native white point of the display to instead using plain ol' D65, with transfer characteristics that match sRGB.
At least, more or less. Chrome has some weird values in the equation it gleans from my profile, but it seems to work? Either way I usually just force Chrome to treat my displays as sRGB, ignoring the system profile. Otherwise I get some noticeable (but relatively minor) banding in smooth images.
To this end, I had already settled on recreating the appropriate camera RAW matrices according to the DNG spec. CIE 2002 uses a color matrix designed around an LMS colorspace with "sharpened" spectral sensitivities. This is less accurate, but overall works out better when it comes to human perception of chromatic adaptation. In other words, when our brain performs white balancing, it exaggerates some things while dulling others, and CIE 2002 helps model that.
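If it helps to see the core idea concretely: the adaptation step itself is just von Kries scaling done in that sharpened LMS space. Here's a simplified Python sketch using the published CAT02 matrix (the matrix values are the standard ones; everything else here is my simplification, not the full CIE 2002 model, which also has degree-of-adaptation terms and more):

```python
# The published CAT02 matrix (XYZ -> sharpened LMS).
CAT02 = [[ 0.7328, 0.4296, -0.1624],
         [-0.7036, 1.6975,  0.0061],
         [ 0.0030, 0.0136,  0.9834]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inv3(m):
    # Inverse of a 3x3 matrix via the adjugate.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def adapt(xyz, src_white_xyz, dst_white_xyz):
    """Adapt an XYZ color from one white point to another:
    go to sharpened LMS, scale each channel by the ratio of the
    destination white to the source white, come back (von Kries)."""
    lms = mat_vec(CAT02, xyz)
    lms_src = mat_vec(CAT02, src_white_xyz)
    lms_dst = mat_vec(CAT02, dst_white_xyz)
    scaled = [c * (wd / ws) for c, ws, wd in zip(lms, lms_src, lms_dst)]
    return mat_vec(inv3(CAT02), scaled)
```

The key property is that the source white maps exactly onto the destination white, and the sharpening in the matrix is what exaggerates/dulls channels the way perception apparently does.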
You'll see some huge chunks of my code commented out that have some… less informative comments - in particular, the majority of the convert() function. This was from my attempt to model and understand CIE 2002. I have an unpublished page on ShaderToy that has a working version of this, but… while I have tried to understand all the variables and how they work together, the documentation I can find on CIE 2002 is sparse. Very sparse. Wikipedia is where I got most of the equations I do use, but half the time I have to guess at what many variables actually mean.
At any rate, I'm trying to go for "absolute colorimetric" types of things, and am in fact undoing a lot of white point adaptation and other things which try to model perception. As such, using a system that models perception is only useful when undoing its effects, and since the code (or maybe the hardware) that performs those operations in the first place resides in the camera module itself, I doubt it's as complex as CIE 2002.
But! Sometimes I do want to adapt things to my phone display's white point - and when I do, I basically uncomment the end of toRgb's declaration (line 814), and comment out part of transMat's declaration (line 821):

814: const mat3 toRgb = xyzToRgb(outSpace)*aTo;
...
821: const mat3 transMat = /*fromRaw*whiteBalance*toRaw**/toXyz;
And here's more inconsistent naming. aTo is the matrix applied to colors just before the absolute XYZ→RGB matrix is applied (for converting to the final RGB colorspace). It's basically the "adaptation matrix for the colorspace being converted To". And it's defined using what I label as the "output CAM matrix" (CAM meaning Color Appearance Model), which is the XYZ→LMS matrix I have chosen to use.
I could use CIE 2002's matrix, but again it's spectrally sharpened (and I like having overly extreme accuracy). Instead I decided to calculate such a matrix from data and calculations given by the Colour & Vision Research Laboratory (CVRL). Links are in the source code just above my declaration for primariesLms (lines 548 and 549 contain the links).
The results are very close to those produced if I were to use the Hunt-Pointer-Estevez LMS matrix, which is regarded as the "more accurate" of the professionally produced LMS matrices (and is commonly used in color blindness simulation and research). However, the results of my CVRL-inspired matrix seem to be slightly "sharpened", placing them somewhere between the Hunt and CIE 2002 matrices (though closer to Hunt).
I fully realize, of course, that if I really wanted to seriously use the proposed 2012 XYZ functions, I'd have to use a spectrophotometer and match the color temperature of my displays and light sources to new xy chromaticity coordinates calculated from said new color matching functions. I can't even hope to afford a spectrophotometer, so that's… basically not happening any time soon. I trust that the CVRL is honest when they say they matched the new CMFs to the 1931 CMFs as closely as possible, so I've just been using 1931-tuned chromaticities. Seems to work well enough.
Wouldn't that just make the profile badly formed, in that the white point reported isn't the actual, true white point? I'm a bit confused by this, and this is going way outside the scope of stuff I've researched… I don't know a lot about actual color profiles or how to make heads or tails of them; I've mostly just researched colorspace parameters and the math to convert to/from them.
Here's the code as mentioned: Back-Camera.txt (36.1 KB)
I think over the last few weeks I've had a lot of things I wanted to say. I'm not sure I remembered all of them.
Anyway, you might be wondering why I do the whole RGB→YUV→rescaling + offsetting values→RGB thing. That's because of some bug in… I think either Android itself, or some framework/library for handling cameras that most apps on Android seem to use, that causes any (most?) hardware-accelerated camera view to have its RGB values rescaled - as if they were "limited/tv"-range RGB signals (values in the 16-235 range) instead of "full/pc"-range values (0-255).
Because of this, by default brights are super bright (going over 1.0) and darks are super dark (going below 0.0). I simply correct for this, and thank goodness the initial scaling (which shouldn't have taken place to begin with) must be done GPU-side, since I can indeed recover those negative and over-bright values (they weren't chopped off). I've also noticed that just rescaling the RGB values back to full range causes some items to appear duller than they should, so I do the scaling in YUV instead.
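In case it helps, here's roughly what that correction amounts to (a Python sketch, not my shader code; I'm assuming BT.709 luma weights here, which may well not be what the camera path actually uses - BT.601 would just swap in different coefficients):

```python
# BT.709 luma weights (an assumption; BT.601 uses 0.299/0.114 instead).
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def rgb_to_yuv(r, g, b):
    y = KR * r + KG * g + KB * b
    u = (b - y) / (2.0 * (1.0 - KB))
    v = (r - y) / (2.0 * (1.0 - KR))
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 2.0 * (1.0 - KR) * v
    b = y + 2.0 * (1.0 - KB) * u
    g = (y - KR * r - KB * b) / KG
    return r, g, b

def unsquash(r, g, b):
    """Undo a limited-range squeeze, but in YUV space:
    luma was squeezed into 16-235 (range 219), chroma into 16-240
    (range 224), out of the full 0-255."""
    y, u, v = rgb_to_yuv(r, g, b)
    y = (y - 16.0 / 255.0) * 255.0 / 219.0
    u = u * 255.0 / 224.0
    v = v * 255.0 / 224.0
    return yuv_to_rgb(y, u, v)
```

The nice property is that unsquashing (16/255, 16/255, 16/255) gives back pure black and (235/255, 235/255, 235/255) gives back pure white, while the separate chroma scale is what keeps colors from going dull the way a plain per-channel RGB rescale did for me.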
Besides that, uh… hm. I think the only other obvious "why in the world are you doing this?" part of the code is that I use the SMPTE 170M (same as Rec. 709) transfer characteristics for the input colorspace. If I don't, everything still seems too dark, and there's a wide range between "bright" and "dark" that should be a medium gray but comes out too dark a medium gray to really match what I see in person.
I also attempt to "dither" colors at the end, to make up for so many transformations being applied to the color and to reduce banding. I have no idea if it really helps or not, nor do I know if I actually do it properly. It's in the same part of the code as where I "calibrate" the output picture using the 1D LUT (basically a quick-and-dirty VCGT).
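Conceptually what I'm going for is nothing fancy - add up to half an LSB of uniform noise before quantizing, so banding gets traded for noise. Something like this (a simplified Python sketch, not my actual shader code, which uses a per-pixel noise function instead of random()):

```python
import random

def dither_quantize(value, levels=256):
    """Quantize a 0-1 value to `levels` steps, adding half an LSB of
    uniform noise first so smooth gradients don't band."""
    noise = (random.random() - 0.5) / (levels - 1)
    q = round((value + noise) * (levels - 1))
    return max(0, min(levels - 1, q)) / (levels - 1)
```

The result never lands more than one step away from the true value, but averaged over an area it preserves the in-between shades that plain rounding would flatten out.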
I think that's most of what I'd thought of to say? Either way this post already feels way too long. I'll stop here and let you decide how much of this rambling mess is worth responding to. I should probably go through and reorganize/rewrite bits of this post, but… almost every time I've thought that sort of thing in the past when responding to others, I end up not actually responding at all. You're someone I've actually heard of and have a lot of respect for, so I feel I really should give a response, even if it's poorly organized and mashed out on a keyboard just before dinner, all in one sitting.