Feature request: save as floating-point

If I compare LUT212 of “RT_sRGB-V2-srgbtrc212.icc” with @saucecontrol ’s “212-point curve”, the values are the same :slight_smile:

I need a little help to follow these calculations …

  • shouldn’t 0x0000ffff = 1.000000000 decimal? … So 0x00026666 = 2.400030518044?
  • what are these a, b, c? OK, a = 1 - b … but where is the a = 0.055 of the standard?

Very cool :slight_smile:

I’m not seeing that. The curve from RT_sRGB-V2-srgbtrc212.icc has the following points:

0,24,48,72,96,120,144,168,192,216,242,270,300,331,365,400,437,476,517,560,605,652,701,752,805,860,918,978,1040,1104,1170,1239,1310,1383,1459,1536,1617,1700,1785,1872,1962,2055,2150,2247,2347,2450,2555,2663,2773,2886,3002,3120,3241,3365,3491,3621,3752,3887,4024,4165,4308,4453,4602,4753,4908,5065,5225,5388,5554,5723,5895,6070,6248,6429,6613,6800,6989,7182,7379,7578,7780,7985,8194,8406,8620,8838,9060,9284,9512,9742,9976,10214,10454,10698,10945,11196,11449,11706,11967,12230,12498,12768,13042,13319,13600,13884,14171,14462,14756,15054,15356,15660,15969,16281,16596,16915,17237,17563,17893,18226,18563,18903,19247,19594,19946,20300,20659,21021,21387,21756,22130,22507,22887,23272,23660,24052,24447,24847,25250,25657,26067,26482,26900,27323,27749,28179,28612,29050,29492,29937,30386,30840,31297,31758,32223,32692,33165,33642,34122,34607,35096,35589,36086,36587,37092,37601,38114,38631,39152,39677,40206,40740,41277,41819,42364,42914,43468,44027,44589,45155,45726,46301,46880,47463,48051,48642,49238,49838,50443,51051,51664,52281,52903,53529,54159,54793,55432,56075,56722,57373,58029,58690,59355,60024,60697,61375,62057,62744,63435,64130,64830,65535

The first 9 points match the curve I listed above, but there are quite a few differences after that. And the error stats are quite a bit worse:

Points | Max Error | Mean Error | RMS Error | Max DeltaL | Mean DeltaL | RMS DeltaL | Max RT Error
   212 |  0.003217 |   0.000153 |  0.000471 |   0.010683 |    0.000861 |   0.006460 | 0
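
For anyone who wants to reproduce numbers of this general shape, here is a rough Python sketch of how the Max/Mean/RMS error columns could be computed for a V2 point curve, assuming linear interpolation between the 16-bit entries and the IEC sRGB decode function as the reference. The sampling grid and the DeltaL and Max RT Error columns are my own assumptions/omissions, so it won’t necessarily reproduce the table above exactly.

```python
import numpy as np

def srgb_decode(v):
    """IEC 61966-2-1 sRGB signal -> linear light (the direction these LUT values appear to store)."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def point_curve_error(lut_16bit, n_samples=65536):
    """Max / mean / RMS error of a linearly interpolated V2 point curve
    against the reference sRGB decode function."""
    lut = np.asarray(lut_16bit, dtype=np.float64) / 65535.0
    xs = np.linspace(0.0, 1.0, len(lut))    # evenly spaced LUT entry positions
    x = np.linspace(0.0, 1.0, n_samples)    # evaluation grid (an assumption)
    err = np.abs(np.interp(x, xs, lut) - srgb_decode(x))
    return err.max(), err.mean(), np.sqrt(np.mean(err ** 2))
```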

Sorry, I should have listed the nominal values in fractional form instead of decimal. Basically, the parametric curve version has all the division refactored into multiplication: (x + 0.055)/1.055 is rewritten as x * (1/1.055) + 0.055/1.055, and x/12.92 becomes x * (1/12.92).

a should be 1/1.055
b should be 0.055/1.055
c should be 1/12.92
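
For reference, here is a minimal sketch of how an ICC parametricCurveType with function type 3 evaluates those parameters; the profile stores them quantized to s15Fixed16Number, so the stored values are rounded versions of these exact fractions.

```python
def icc_para_type3(x, g=2.4, a=1/1.055, b=0.055/1.055, c=1/12.92, d=0.04045):
    """ICC parametricCurveType, function type 3:
       Y = (a*X + b)**g  for X >= d
       Y = c*X           for X <  d
    With these parameters it reproduces the sRGB decode curve."""
    return (a * x + b) ** g if x >= d else c * x

# sanity check against the textbook sRGB form
x = 0.5
assert abs(icc_para_type3(x) - ((x + 0.055) / 1.055) ** 2.4) < 1e-12
```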

Those numbers are stored in the profile as ICC s15Fixed16Number format, in which the first 16 bits are the signed integer portion of the number and the second 16 bits are the fractional part multiplied by 2^16.

In that format, 1 is expressed as 0x00010000. So 0x00026666 is 2 + (26214/65536) ≈ 2.3999939, not quite 2.4.
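
As a small sketch (assuming the raw value has been read from the profile as an unsigned big-endian 32-bit integer):

```python
def s15f16_decode(raw):
    """Decode an ICC s15Fixed16Number read as an unsigned 32-bit integer."""
    if raw & 0x80000000:          # sign bit set -> negative value (two's complement)
        raw -= 1 << 32
    return raw / 65536.0

def s15f16_encode(value):
    """Encode to the nearest representable s15Fixed16Number."""
    return round(value * 65536) & 0xFFFFFFFF

print(s15f16_decode(0x00010000))   # 1.0
print(s15f16_decode(0x00026666))   # 2 + 26214/65536 = 2.3999938964...
print(hex(s15f16_encode(2.4)))     # 0x26666
```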

I pushed a commit. I changed the values that differ in LUT212, about 20 of them, each with a delta of 1, for example 331 instead of 332 or 12498 instead of 12497.
I will look into the RT code to see where these small differences come from.

jacques


I didn’t mean to completely disappear from this discussion, but I’ve been having (still having) some “not so easy to type” days from discomfort in my hands. Anyway, I’m trying to sort out practical conclusions and recommendations.

It’s reassuring to see that standard V4 parametric curves - using the actual parameters specified in the sRGB color space specs - produce TRCs that are more accurate than the V2 1024-point curves.

It’s not surprising to see that the V2 4096-point curves are less accurate in the shadows than the V2 1024-point curves.

The “nudging” of the s15Fixed16Numbers stored in ICC profiles for the parametric curve parameters, done to produce more accurate parametric curves, is similar to the “nudging” of this same type of number for the RGB XYZ matrix values that ArgyllCMS code does when making profiles that are “well-behaved” (and that I adapted for use with LCMS by back-calculating from well-behaved XYZ values produced by ArgyllCMS code to get xy values to feed to LCMS). “Well-behaved” in this context means LAB a=b=0 when R=G=B, and for R=G=B=1.0 floating point the corresponding LAB value is (100,0,0).

Some practical “what to do” questions:

This discussion has focused on the sRGB point and parametric TRCs. But we also have the Rec.709 and LAB-companding-curve TRCs, which presumably might also be made more accurate by nudging the parameters to get a best fit.

Is the increase in accuracy worth nudging the parameters when making V4 profiles? The reason I ask is that I’m fairly sure that if I change any of those parameters in my profile-making code, I’m going to get a few emails saying “gee Elle, your TRC parameters used to be correct and now they are wrong - why did you change them?” Also GIMP-2.10’s V4 internal sRGB profile would then be out of step with my V4 sRGB profile, and similarly for @Carmelo_DrRaw 's PhotoFlow V4 profiles. Similar considerations apply to making V2 profiles with 212-point instead of 1024-point V2 point curves - such revised point TRCs would be out of step with ArgyllCMS and other older V2 ICC profile point curves.

Thanks, now it’s clear even for (us) lazy guys …

What’s the reference TRC you use? The one from the IEC 61966-2-1 standard (K0=0.04045, Φ=12.92), which has a discontinuous slope at the transition?
My (still building) opinion is that:
For color-managed use cases, use point TRCs matching the “exact/continuous” one (K0=0.0392857, Φ=12.9232102), and the same for parametric TRCs, mentioning that it is not the IEC 61966-2-1 standard but the exact/continuous version of it. Do the precision evaluations accordingly …

For RT’s “no_ICM_sRGB” and generally non-color-managed use cases, we should use the standard IEC 61966-2-1 as used by the industry (browsers, Adobe, ArgyllCMS, etc.) to ensure correct rendering.

For precision metrics … isn’t DE2000 closer to human perception than DE94?
As I understand it, in your evaluation the differences in the darks take a strong weighting, which may not be exactly correct because human perception is softer in the darks (see Barten JNDs).

A question … a lot of the 212-point curve’s success is based on matching the linear part’s slope …
What about a denser TRC (say around 424 points) which again matches the slope of the linear part and also gives more precision in the exponential part (due to the denser points)?

Maybe I misunderstand what you are saying, but see this thread, and especially the post by Graham Gill: http://gimp.1065349.n5.nabble.com/unusual-babl-babl-util-h-sRGB-tone-curve-values-td35116.html

Personally I’m not planning to change the sRGB TRC used in my sRGB profiles from what is in the current sRGB color space standard. If someone else wants to make and release such a profile, that’s another story, though hopefully that “someone else” isn’t any of the devs that make the color-managed software that I use for image editing :slight_smile: .

Some really good points there…

I had intended to set my curve-fitting solver loose on the ProPhoto and Rec709 curves as well to see what it finds. I’ve never encountered the LAB-companding-curve, but I can look into it as well. I can report back on those when I have time to play with them.

My personal opinion is that the tweak to the V4 sRGB g parameter is worth it, and that people would be generally accepting of the change given that it has a mathematical basis (same as Argyll’s XYZ tweaks). There is the issue of compatibility with other profiles, though, and there’s certainly no guarantee everyone decides to change.

My general stance is that the profiles are meant to mimic the standards, so the closer they can get to those standards, the better. If that means some non-obvious tweaks to overcome limitations in the profile technology, so be it.

I have some developing thoughts on the XYZ tweaks made to make the profiles ‘well-behaved’ as well. I need to do some more testing to make sure I understand the mathematical basis for those, but ultimately there are two conflicting definitions of ‘well-behaved’ in my mind. The one you introduced me to is the idea that a conversion to L*a*b* using the profile should produce neutral shades of grey. The second one I’m considering is round-trip accuracy. That is, if I have two profiles that theoretically describe sRGB, and I convert from one to the other, do I get the same values out as went in?

Changing the primaries involves adapting them to the rounded D50 values stored in the profile rather than the actual specified D50 values from the ICC spec. That makes the math work in L*a*b* conversions, which take that stored whitepoint value into account. In a matrix->matrix relative colorimetric profile conversion, however, the whitepoint is never used or considered, so it doesn’t matter that the primaries are adapted to the profile whitepoint. What really matters is that the profiles were adapted to the same whitepoint. If there were general agreement on those whitepoint values and the adaptation method, round-trip accuracy would be maintained, but there is no such agreement. I’m going to do some experimentation and see if I can figure out which is the lesser of two evils.

Oh, and I’m sorry to hear about your hand issues. I hope you feel better :slight_smile:

I used the IEC61966-2-1 standard values for my reference curve. For me, the tweaks that are worth making come down to a question of intentional imprecision vs imposed imprecision. The sRGB standard intentionally used rounded values and then compensated for them, intentionally creating the slope discontinuity. Any software implementing the spec will implement the same discontinuity. The error imposed by the ICC profile format is, however, unintentional, and I believe it should be corrected or compensated for if possible.

A perfect example of that would be the D50 whitepoint given in the ICC’s own spec. They give explicit values of X=0.9642, Y=1.0000, Z=0.8249 in the spec, but then the value stored in every profile header is 0x0000f6d6, 0x00010000, 0x0000d32d, which works out to 0.9642028809, 1.0, 0.8249053955. Which is it? And what does that mean when one piece of software calculates with the value in the spec and another calculates with the value in the profile?
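
A quick sketch that decodes those header values (all positive, so no sign handling needed) and compares them with the spec’s stated numbers; as it happens, the stored values are exactly the spec values rounded to s15Fixed16:

```python
header_d50 = (0x0000F6D6, 0x00010000, 0x0000D32D)   # X, Y, Z as stored in the header
spec_d50   = (0.9642, 1.0000, 0.8249)               # values written out in the spec text

for raw, spec in zip(header_d50, spec_d50):
    stored = raw / 65536.0
    print(f"stored {stored:.10f}   spec {spec:.4f}   diff {stored - spec:+.3e}")
# stored 0.9642028809   spec 0.9642   diff +2.881e-06
# stored 1.0000000000   spec 1.0000   diff +0.000e+00
# stored 0.8249053955   spec 0.8249   diff +5.396e-06
```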

The basis I’m working from is this:
Consider you have a piece of software (App A) that implements the sRGB spec and the Adobe RGB spec natively as defined. Then you have a second piece of software (App B) that knows nothing of the colorspace specs but knows how to do profile-based conversion according to the ICC rules.

I create an image in App A in the (real) sRGB colorspace and save a copy with an sRGB profile assigned. I then convert my working copy to (real) Adobe RGB and save a copy of that. If I were to load the sRGB copy in App B and convert it to the Adobe RGB ICC profile, I should get the same result as the copy saved from App A.

That is, the processing with the profiles should mimic processing with the actual specs. If we can tweak the profiles from their obvious values to get closer to that goal, I believe that’s the right thing to do. Getting more clever than that and trying to second-guess the specs themselves isn’t for me.

Oops, forgot about the second part of your comment…

The revisions to ΔE had solely to do with compensating for differences in perception at different hues. Yes, they make it more accurate, but there was never a problem in the grey part of the L*a*b* space. And L* already takes care of the human perception of light vs dark colors; it was specifically designed to be perceptually uniform.

I did try curves with more points but was unable to find a better fit than 212. My first issue was that if you tune for 8-bit input and give the solver more than 256 points, it basically uses throw-away points to change the slope of the lines that make up the curve to make the used points give better interpolated results. At one point, it identified a 506-point curve that had absolutely ridiculous stats with 8-bit input, but then if you used it with more samples, it was a mess. It basically cheated, which was cool and unexpected but not particularly useful.

In fitting to 1024 samples, it picked out a 424-point curve that was very good but not a real improvement over 212. I kept that one in for reference, so I can give you its stats next to the 212:

8-bit input

Points | Max Error | Mean Error | RMS Error | Max DeltaL | Mean DeltaL | RMS DeltaL | Max RT Error
   212 |  0.001650 |   0.000119 |  0.000361 |   0.005960 |    0.000675 |   0.004825 | 0
   424 |  0.001530 |   0.000079 |  0.000200 |   0.006067 |    0.000568 |   0.004868 | 0

16-bit input

Points | Max Error | Mean Error | RMS Error | Max DeltaL | Mean DeltaL | RMS DeltaL | Max RT Error
   212 |  0.001905 |   0.000130 |  0.000381 |   0.006330 |    0.000715 |   0.000311 | 5
   424 |  0.001695 |   0.000083 |  0.000205 |   0.006649 |    0.000598 |   0.000319 | 5

Graeme compares the rounded values 0.03928, 12.92, which give worse continuity than the standard … but the proposed non-rounded 0.0392857, 12.9232102 give perfect continuity … no?

Here is what Graeme Gill said in his post:

Current IEC specification values:
0.04045 / 12.92 = 0.003130804954
((0.04045 + 0.055)/1.055)^2.4 = 0.003130807229
continuity error of 1 part in 1.3e6

Draft IEC sRGB & util.h values
0.03928 / 12.92 = 0.003040247678
((0.03928 + 0.055)/1.055)^2.4 = 0.003039492412
continuity error of 1 part in 4e3

That’s a very small “discontinuity” in the current IEC specs, and it comes from the seemingly widespread practice of using 4-decimal-place values in specs. I don’t see how an sRGB profile variant made using non-standard TRC parameters based on an old draft of the sRGB specs - a draft that has long since been replaced by the current sRGB specs - would be of any practical benefit to people who use ICC profiles and need an sRGB profile.
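
For anyone who wants to check those figures themselves, a short Python sketch that just reproduces the arithmetic quoted above:

```python
def continuity_gap(threshold, slope):
    """Relative mismatch between the linear and power segments of the
    sRGB decode curve at the given threshold."""
    linear = threshold / slope
    power = ((threshold + 0.055) / 1.055) ** 2.4
    return linear, power, abs(linear - power) / linear

print(continuity_gap(0.04045, 12.92))   # current IEC values: ~1 part in 1.3e6
print(continuity_gap(0.03928, 12.92))   # old draft values:   ~1 part in 4e3
```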

Yes, awhile back I chased that issue of “what’s in the specs” vs “what happens when the specs are used to make an ICC profile” around and around:

Of course it means that results are different. And not just for the profile illuminant, but also for the RGB XYZ matrix values, which also are rounded when stored in an ICC profile, vs “not rounded” before being stored in an ICC profile.

“Results are different” can lead to very unhappy editing results, as per the following GIMP bug report, where a profile conversion from a profile “on disk” (rounded) to nominally “the same” profile made and held in memory (unrounded) changed what used to be “white” into “not white”:

https://bugzilla.gnome.org/show_bug.cgi?id=727185#c6

In GIMP, to avoid these “unhappy” editing results (white turning into some other color), there is now code that takes the built-in profile “in memory” and effectively (though not actually) saves it to disk, to get a profile that matches “the same” profile once it has been saved to disk.

Your proposed solution is to tweak the XYZ values in the actual profile - illuminant and also the RGB-XYZ values - to minimize the differences between “in memory” and “saved to disk”.

AdobeRGB is an interesting case, because the actual AdobeRGB specs give not just the 4-digit spec values, but in Appendix A also the expected ICC profile values, which, if you check an actual profile made according to the specs, are already “nudged” to produce a well-behaved profile in the sense of a “neutral gray axis”. I checked this using the ArgyllCMS xicclu utility; I’ve never made a spreadsheet that produces more accuracy than the xicclu calculations.

https://www.adobe.com/digitalimag/pdfs/AdobeRGB1998.pdf

Hmm, unless your proposed definition of “well-behaved” also produces ICC profiles that are well-behaved in the sense that the gray axis is neutral, with white and black respectively at LAB=(100,0,0) and LAB=(0,0,0), it might be better to come up with a different term. A different term would also make the discussion simpler, as people wouldn’t have to guess which definition of “well-behaved” is being used.

The topic of Elle Stone’s “made up” terminology came up in another post recently, regarding the word “unbounded” :slight_smile: , for which I provided links to ICC documents showing that Elle Stone didn’t make up this terminology at all. Along these same lines, for prior use of “well-behaved” in the sense of “neutral gray axis”, see:

“The role of working spaces in Adobe applications”
https://www.adobe.com/digitalimag/pdfs/phscs2ip_colspace.pdf

Quoting from the above PDF:

These RGB working spaces are mathematically constructed to provide a color space that provides useful and flexible editing qualities. For example, one benefit of synthetically constructed RGB working spaces is that it is easy to define a neutral color. When each value of red, green and blue is equal anywhere within the entire color space, that color is neutral. . . . In a synthetic RGB working space, you are assured that a color is neutral gray when all three values are equal. For example, R5/G5/B5 as well as R200/G200/B200, or any identical set of RGB numbers defines a neutral color. This behavior is one reason RGB working spaces are often referred to as “well behaved” . . .

“Well-behaved” in the sense of “neutral grayscale” is critically important when editing images in the digital darkroom. It’s not super important at 8-bit integer precision, because small deviations in the “neutral gray axis” are masked at 8-bit precision. But at 32-bit floating point one really doesn’t want a relative colorimetric ICC profile conversion between RGB matrix working spaces to produce a tint in what the user intended to be “white”, R=G=B=1.0f.

OK, turning to the topic of “round-trip accuracy” as you defined it above (“if I have two profiles that theoretically describe sRGB, and I convert from one to the other, do I get the same values out as went in?”):

Only if the two profiles have the same RGB-to-XYZ matrix (and TRC, of course), and same D50 illuminant values, and in V2 profiles also the same source “white point” tag, or in V4 profiles the same chromatic adaptation matrix, which are relevant when making absolute colorimetric conversions.

I think you are interested in comparing “precision of values before being saved to the ICC profile” to “precision of the values after being saved to the ICC profile” - yes? Such a comparison is made more complicated by the need to decide between “rounding the RGB-XYZ values” and “rounding and then nudging the RGB-XYZ values to make the resulting ICC profile well-behaved”. ArgyllCMS does this “rounding and nudging” for all ICC profiles. AdobeRGB has the “nudged to be well-behaved” profile values right in the spec. LCMS doesn’t do the nudging; it only does the rounding.

I think it’s less confusing to consider this as a two-step process: first chromatically adapt the sRGB primaries from the sRGB spec D65 values to the ICC spec D50 values, and then consider the rounding that happens when the values (the D50 ICC profile illuminant and also the RGB-XYZ tags) are stored as an ICC profile. If you keep the profile “in memory”, there is no rounding. As soon as you write the profile to disk, there is rounding.
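
To make the two steps concrete, here is a rough Python sketch. The Bradford matrix and the D65/D50 values are the commonly quoted ones (Lindbloom-style) and are my assumptions; actual profile builders may use slightly different white points, a different adaptation, or nudge the results afterwards, which is exactly where the differences discussed here creep in.

```python
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])
D65 = np.array([0.95047, 1.00000, 1.08883])
D50 = np.array([0.9642, 1.0000, 0.8249])            # the ICC spec's stated D50

def rgb_to_xyz_matrix(xy_primaries, white_xyz):
    """RGB->XYZ matrix from xy chromaticities, scaled so RGB=(1,1,1) maps to the white."""
    M = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in xy_primaries]).T
    return M * np.linalg.solve(M, white_xyz)

def bradford_adapt(M, src_white, dst_white):
    """Chromatically adapt an RGB->XYZ matrix from src_white to dst_white."""
    src, dst = BRADFORD @ src_white, BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD @ M

srgb_xy = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # sRGB spec primaries
M_d65 = rgb_to_xyz_matrix(srgb_xy, D65)     # matrix at the sRGB spec's D65 white
M_d50 = bradford_adapt(M_d65, D65, D50)     # step 1: adapt the primaries to D50

# step 2: the rounding that happens when the colorant tags are written to disk
M_disk = np.round(M_d50 * 65536) / 65536
print(M_d50)
print(M_disk - M_d50)    # per-entry rounding error, at most ~7.6e-6
```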

As you say, if people agree on the source and destination white point values when making the profile, and also on the adaptation method (the ICC-recommended but not required method is Bradford adaptation for V2 and V4 profiles), and also on the values of the actual primaries (eg for WideGamut there are different primaries used by different people), and also on whether to “nudge to get a neutral gray axis”, then the resulting profiles will match. They’ll match before being saved to disk in the form of an actual ICC profile, and they’ll match after being saved to disk. But the “before saved to disk” and “after saved to disk” versions won’t match because of the way ICC profiles store XYZ values.

I’m confused as to how “round-trip accuracy” is relevant. Two cases:

  1. When converting from source RGB matrix ICC profile A to destination RGB matrix ICC profile B using relative colorimetric conversion, and assuming an appropriate image precision (8-bit precision quantizes to the point where small differences disappear), the resulting RGB channel values in the converted image won’t match the unconverted image channel values unless the RGB matrix (and the TRC) are the same for profile A and profile B.

  2. At 32-bit floating point using unbounded ICC profile conversions to do a round-trip conversion from A to B and back to A, and assuming the TRC in profile B is the sort that doesn’t clip channel values below 0, then round-trip accuracy really isn’t a problem. This article is where Marti Maria introduced the idea of unbounded ICC profile conversions: http://www.littlecms.com/CIC18_UnboundedCMM.pdf . And this article shows the procedure I used to see whether these conversions actually do work: https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html
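
To illustrate case 2 with bare arithmetic (ignoring TRCs entirely, i.e. pretending both profiles have linear TRCs): a relative colorimetric matrix-to-matrix conversion is just inv(M_B) * M_A, and if nothing clamps along the way, the round trip is lossless even for out-of-range values. The matrix values below are example numbers along the lines of published D50-adapted sRGB and ProPhoto matrices, not taken from any particular profile.

```python
import numpy as np

# Hypothetical linear-TRC matrix profiles A and B (example values only).
M_A = np.array([[0.4360747, 0.3850649, 0.1430804],
                [0.2225045, 0.7168786, 0.0606169],
                [0.0139322, 0.0971045, 0.7141733]])
M_B = np.array([[0.7976749, 0.1351917, 0.0313534],
                [0.2880402, 0.7118741, 0.0000857],
                [0.0000000, 0.0000000, 0.8252100]])

a_to_b = np.linalg.inv(M_B) @ M_A        # relative colorimetric A -> B
b_to_a = np.linalg.inv(M_A) @ M_B        # and back again

rgb = np.array([1.2, -0.05, 0.3])        # deliberately outside [0, 1]
print(b_to_a @ (a_to_b @ rgb))           # ~[1.2, -0.05, 0.3]: nothing was clipped, so nothing is lost
```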

Hey @Elle, thanks so much for all the references.

What precipitated my current round of research was that I noticed the sRGBz profile uses a different set of primaries than Argyll’s and your profiles use, but those primaries are also well-behaved according to xicclu. That sent me off looking to see if I could figure out which is ‘more correct’ if there is such a thing.

I also ran across this document

which has a very specific D65-based XYZ->RGB matrix, presumably calculated from the numbers given in the standard, although I haven’t been able to replicate them. It also has a specific recommended D65->D50 Bradford adaptation matrix and the resulting D50-adapted XYZ->RGB matrix, with lots of decimal places of precision. I’m doing some math to try to figure out where their numbers came from, whether they’re correct according to any standard numbers, and whether they are or can be made to be well-behaved.

I didn’t mean to hijack this thread with my search for the One True sRGB Profile, so I’ll start up a new thread when/if I figure anything interesting out and tag you.

@ilias_giarimis , I went back and reviewed the ΔE-CIE2000 specs again after I replied to your last comment, and I saw that they had made a small adjustment to the ΔL* scaling factor in that version that gives more weight to the midtones and less at the light and dark ends.

I did some re-evaluation using the adjusted ΔL*, and there were a few interesting differences. Basically, the curves over 100 points optimized the same, and 212 was still the best overall fit, but the smaller curves had some other variants that performed better according to the new measure. I’ll be updating my blog post with that info when I have time, so thanks for making me look at that again :slight_smile:

Hmm, maybe check the difference between the “calculated floating point” values and the values in ArgyllCMS sRGB.icm profile, and repeat with the “calculated floating point” values and the values in sRGBz profile?

Yes, keep in mind that the ICC does “update specs” to bring old specs (the specs that are/were actually used in V2 workflow) in closer alignment with new V4 specs. I’ll add some links later (still having a “bad typing day”).

Again, as soon as I have the energy I’ll send along some other links and such (profiles and such) published by the ICC, as I never was able to figure out how they got their figures. I’m also curious as to how they get their Bradford adaptation matrix values. Maybe with your mathematical expertise I’ll finally know the answers!
But it seems to me that to the extent that the color.org matrix sRGB profiles deviate from a plain-jane spreadsheet calculation of the Bradford-adapted sRGB XYZ values, I just don’t see how their matrix profiles can be considered “by the specs”.

My spreadsheet calculations are good, at least until I get to the hex stuff as you pointed out in your email - if I understand you correctly, if the hex math is corrected, my spreadsheet calculations will produce exactly the same values as the XYZ values in the ArgyllCMS sRGB.icm profile:

https://ninedegreesbelow.com/files/spreadsheets/sRGB_specifications_to_ICC_profile.ods

To anyone following this thread this far - the spreadsheet I linked to above can easily be modified to accomplish all kinds of RGB/XYZ calculations. Well, I’ve done so and I just totally dislike using spreadsheets, so it can’t be that difficult :slight_smile: - just input the right values for the RGB color space in question, along with the appropriate source and destination white points. And if you are willing to add the extra equations (also not difficult, see Bruce Lindbloom’s website), you can also extend the calculations to go from RGB to XYZ to LAB, and so on.
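
And for anyone who wants to extend the spreadsheet toward LAB, here is a small Python sketch of the standard XYZ to L*a*b* formulas (assuming the ICC D50 white; swap in whatever white point your calculation uses), which can be ported straight back into spreadsheet cells:

```python
def xyz_to_lab(X, Y, Z, white=(0.9642, 1.0, 0.8249)):
    """Standard CIE XYZ -> L*a*b*, relative to the given white point
    (here the ICC D50 values; substitute another white if needed)."""
    eps, kappa = 216 / 24389, 24389 / 27            # CIE constants

    def f(t):
        return t ** (1 / 3) if t > eps else (kappa * t + 16) / 116

    fx, fy, fz = (f(c / w) for c, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# a "well-behaved" profile should put its white at exactly L*=100, a*=b*=0
print(xyz_to_lab(0.9642, 1.0, 0.8249))   # (100.0, 0.0, 0.0)
```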

The really interesting threads on pixls.us forum inevitably flow over the boundaries of the original topic into other areas :slight_smile: - but for this particular thread I think the remaining practical question is what profiles should RawTherapee use.

Personally I’m not all that excited about modifying the parametric curves from the standard input values. But I do think a V2 profile is better off using 1024 points than 4096 points, and @saucecontrol 's very careful analysis seems to back up this point.

Whether it makes sense to “go further” - all the way to the 212-point curve at least for sRGB, this would depend on whether “smaller and/or closer to the actual sRGB TRC” is more important than “same profile TRC as ArygllCMS/etc is using, and older V2 software also did use”.

I would recommend that the current RT profiles be replaced with V4 parametric curve profiles that have primaries that match the primaries in my github profiles, possibly also including V2 variants if there are still people using RT who also use “only V2” software. But these aren’t my decisions, of course.

As to “which” of my profiles to include with RT (if any), I’d recommend the standard TRC, plus the corresponding linear gamma profile, but only for the profiles that RT devs think are important enough to ship with RT. If anyone wants “all of elle’s profiles” they have the option to download directly from github. But again, this isn’t my decision :slight_smile: and RT will continue being an absolutely awesome raw processor regardless of what ICC profiles are supplied.

Threads do drift. As the originator of this thread, I’m happy with the drift. It has helped me understand that standards such as sRGB are fuzzy things – they leave room for interpretation, hence the never-ending search for the one true sRGB.

If a number is stored as sign bit plus 15 bits for the whole number plus 16 bits for the fraction, we can expect an error in the fraction of plus or minus 1/(2^17), which is 7.6e-6, which will cause obvious differences in small numbers. (EDIT: I mean, numbers close to zero.)
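
A tiny illustration of why that bites mostly near zero: the absolute rounding error is bounded by about 7.6e-6, but the relative error grows without bound as the value shrinks.

```python
def s15f16_roundtrip(value):
    """Quantize a (non-negative) value to s15Fixed16 and back."""
    return round(value * 65536) / 65536

for v in (0.8249, 0.01, 0.0001, 0.00001):
    q = s15f16_roundtrip(v)
    print(f"{v:>8}: abs err {abs(q - v):.1e}, rel err {abs(q - v) / v:.1e}")
# the absolute error stays below ~7.6e-6, but the relative error explodes near zero
```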

My original request for RT to save to floating point with no clipping may have been resolved, but I’ve had no chance to test it yet.

That would be fixed point, not floating point.

Yes. I was thinking of the s15Fixed16Number type in ICC files: http://color.org/specification/ICC1v43_2010-12.pdf

OK, I misunderstood. I thought you were referring to the topic.

My mistake, for lack of clarity. The RT issue is probably okay. I was trying to understand where imprecision in sRGB ICC profiles comes from, and fixed-point 16-bit fractions seem an obvious candidate.

It’s still WIP. Save as floating point was implemented by @agriggio, but there’s still some work to do on unbounded processing.