# Did I correctly implement the math for converting from CIE L*a*b* to sRGB (or other RGB colorspaces) in this shader?

I’ve been trying to wrap my head around L*a*b*, so I figured the best way to do so would be to write a GLSL shader (both because I know that language, and because it’s ideal for per-pixel vector/matrix operations) which maps x/y coordinates to a*/b* coordinates and then provides an L* adjustment. I then set it to hide any colors outside of sRGB’s gamut.

However, I’m not entirely convinced that my implementation of the algorithms in question is accurate, especially since I don’t really understand why ‘(color + 16)/116’ and ‘color*116 - 16’ seem to be necessary; if I take them out, the output just gets a bit larger.

This does indeed cause individual colors to be converted such that the numbers don’t match what existing tools indicate, but so does changing the whitepoint adaptation matrix and other parameters. The difference can even be accounted for simply by dividing by 116 to begin with, then multiplying by 1.16 before applying the transfer curve.

I feel like it wouldn’t have been included in the specifications (or at least, Wikipedia’s interpretation of the specifications) if it weren’t important, and the fact that I can get identical results without it means it’s quite possible that I’m doing it wrong.

I’ve also made a 3D variant, which basically uses the 2D variant to create a bunch of ‘slices’, each drawn on top of the other. That was partly inspired by another shader on Shadertoy (the website I’m posting these on) which does the same thing. I originally wanted my 2D shader to act in a way similar to my xyY chromaticity diagram shader, but I couldn’t figure out how to show the whole sRGB gamut that way. So instead I made it render all the slices so I could see the shape, then later added perspective (you can see some discussion about that in the comments).

So, anyway, I hope I can get some feedback on this from you guys! I don’t have a copy of the actual CIE L*a*b* specifications or anything, so it’s all just put together from information on Wikipedia and other free resources I could find. I also apologize if any of the comments are flat-out wrong; sometimes I’d change how something worked and forget to edit the comment to match.

I’d guess @hanatos, @Elle, @anon11264400, and @David_Tschumperle might be the ones who could answer…

Did you check Bruce Lindbloom’s pages?

Hermann-Josef

Don’t forget @pippin the master of babl and gegl.

I did check Bruce Lindbloom’s site, and his math is a mess. He seems stuck in the thought process of treating each channel as independent of the others, which is a huge mistake to make inside GLSL (and causes duplicate code).

While it was nice to see some proof that I did the math for the L* trc correctly (his `κ` and `ϵ` variables match my `slope` and `cutoffToGamma` variables, except his `κ` is multiplied by 100), the fact that he implements it differently for each channel caused a ton of issues for me early on. (Wikipedia doesn’t do that, but it has its own issues, which are what forced me to do my own math for those variables’ values in the first place.)

On top of that, he has an extra step in his RGB↔XYZ matrix calculation math that is unnecessary and can simply be left out (dividing the chromaticity coordinates for each primary by their ‘y’ component). I’m seriously glad I learned that math from Ryan Juckett’s page on the subject instead of from Bruce Lindbloom’s page, since it gives a thorough and detailed explanation and reasoning for each step and doesn’t result in any unnecessary steps.
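To illustrate the derivation I mean, here’s a sketch in Python with numpy (using sRGB’s published primaries and D65 rather than my shader’s constants; the variable names are mine, purely for illustration). Building the matrix straight from the xyz chromaticities and solving for the per-primary scales, with no division by y anywhere, reproduces the familiar sRGB matrix:

```python
import numpy as np

# xy chromaticities of the sRGB primaries and the D65 white point
primaries = np.array([[0.64, 0.33],   # R
                      [0.30, 0.60],   # G
                      [0.15, 0.06]])  # B
white_xy = np.array([0.3127, 0.3290])

# Extend each xy pair to an xyz vector (z = 1 - x - y) and use them as columns
M = np.array([[x, y, 1.0 - x - y] for x, y in primaries]).T

# White point XYZ with Y normalized to 1
W = np.array([white_xy[0] / white_xy[1],
              1.0,
              (1.0 - white_xy[0] - white_xy[1]) / white_xy[1]])

# Solve for the per-primary scale factors, then scale each column by its factor
S = np.linalg.solve(M, W)
rgb_to_xyz = M * S  # broadcasting scales column j by S[j]

print(np.round(rgb_to_xyz, 4))  # first row ~ [0.4124, 0.3576, 0.1805]
```

The rows necessarily sum to the white point’s XYZ, which is a handy built-in sanity check for the derivation.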

I have no doubt that Bruce Lindbloom is very intelligent and probably understands all of this better than most, but his pages are severely lacking in detail and explanation, and in at least one case that’s probably led to some unnecessary work for some people. After all, the math he shows isn’t wrong; it’s just got an unnecessary step in it.

All that said, part of my uncertainty over L*a*b* conversion is that I don’t know what is and isn’t unnecessary duplicate work in this context. It almost looks like the ‘+16’ thing is meant to be part of the standard L* trc, if one were doing it in the 0-100 range instead of 0-1. Since my code applies the trc on a 0-1 scale, it adds 0.16 as the ‘offset’ parameter, so it might be taken care of there.

But it’s really hard to determine whether that’s the case, since Wikipedia also has ‘+16’ bits in various parts of the per-channel equations, even though it (like my code) abstracts the trc part as a single equation applied to all 3 channels identically. So I still feel like I might not be fully understanding the math, and might be doing it incorrectly.
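To check that hunch with actual numbers, here’s a quick stdlib-Python sketch (my own naming; `lstar_0_100` follows the textbook CIE formula, `lstar_0_1` follows my 0-1 transfer-struct version). The 0-1 form times 100 reproduces the 0-100 form exactly, so the ‘+16’ really does get absorbed into the 0.16 offset:

```python
EPS = 216.0 / 24389.0      # CIE epsilon, (6/29)**3
KAPPA = 24389.0 / 27.0     # CIE kappa, (29/3)**3

def lstar_0_100(t):
    # Textbook CIE L* from a Y/Yn ratio, on the usual 0..100 scale
    f = t ** (1.0 / 3.0) if t > EPS else (KAPPA * t + 16.0) / 116.0
    return 116.0 * f - 16.0

def lstar_0_1(t):
    # The same curve as a 0..1 transfer function: offset 0.16, slope 243.89/27
    if t >= EPS:
        return (1.0 + 0.16) * t ** (1.0 / 3.0) - 0.16
    return t * (KAPPA / 100.0)

# The two agree (after a factor of 100) across the whole range,
# including at the cutoff itself
for t in [0.0, 0.001, EPS, 0.01, 0.18, 0.5, 1.0]:
    assert abs(lstar_0_100(t) - 100.0 * lstar_0_1(t)) < 1e-12
```

This works because 100·(1.16·t^(1/3) − 0.16) = 116·t^(1/3) − 16 algebraically; the two formulations only differ in scale.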

I don’t have a clue what most of the above posts in this thread are actually talking about. My only programming language is C, and I’m not all that good at reading or writing C code. But Lindbloom’s equations are correct. I used his equations to rewrite babl’s XYZ↔LAB equations quite a while ago, using double-precision floating point. See this file:

And specifically look at these two functions:

`static inline void XYZ_to_LAB`
`static inline void LAB_to_XYZ`

The same file has some floating-point functions that are written more efficiently, though less transparently.

The babl CIE.c code produces the same LAB values as ArgyllCMS’s xicclu utility, so I’m confident that it’s correct. But it’s correct for ICC profiles, which are not the same as the color spaces they were made from, because ICC profiles are chromatically adapted to D50 from whatever white point is given in the color space’s specs (D65 for sRGB). Please note that Lindbloom’s equations are correct, but his values for D65 and D50 are not: he uses ASTM values instead of the values given in the sRGB and ICC specs.


For the “16 thing” you might want to consult Charles Poynton’s book. He writes, “the scale factor of 116 and the offset of -16 modify the pure power law function such that the best-fit pure power function has an exponent of 0.42, not 1/3,” if that is what you are talking about.
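That claim is easy to spot-check numerically (stdlib Python; the 0.42 is Poynton’s figure, everything else here is just an illustrative check). With the scale and offset folded in, the curve hugs a pure x^0.42 far more closely than a plain cube root does:

```python
def lab_curve(x):
    # The 116/-16-modified cube root, rescaled to the 0..1 range
    return 1.16 * x ** (1.0 / 3.0) - 0.16

xs = [i / 100.0 for i in range(10, 100)]  # 0.10..0.99, above the linear segment

# Worst-case deviation from a pure x**0.42 versus from a pure cube root
worst_042 = max(abs(lab_curve(x) - x ** 0.42) for x in xs)
worst_cbrt = max(abs(lab_curve(x) - x ** (1.0 / 3.0)) for x in xs)

print(worst_042, worst_cbrt)  # the 0.42 fit stays within ~0.015; cbrt does not
```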

Hermann-Josef


Ok, I think I should probably explain some of my code, since GLSL is weird in a few ways, but also since I’m doing things in ways that seem significantly different from what other code (such as your own) and formulas suggest. I’ve been studying the code you linked to, especially `XYZ_to_LAB`, as it applies the gamma/transfer curve identically to all channels (apparently it’s like that on Bruce Lindbloom’s XYZ→L*a*b* page as well; I wish I’d looked at that one and not just the page for the other direction). It looks like we might be doing the same thing, but I’m still unsure, so I’ll try converting what my code does into something more similar.

Firstly, GLSL runs on GPUs, and as such, it’s considered bad practice to have code that branches much. So within my `toGamma` and `toLinear` functions, instead of `if` and ternary statements, I simply compute both ‘branches’ and then ‘mix’ the results based on a per-channel condition. So, let’s convert this GLSL code to C:

```glsl
vec4 toGamma(vec4 color, const transfer trc)
{
    bvec4 cutoff = lessThan(color, vec4(trc.cutoffToGamma));
    vec4 higher = (1.0 + trc.off)*pow(color, vec4(1.0/trc.power)) - trc.off;
    vec4 lower = color*trc.slope;

    color = mix(higher, lower, cutoff);

    return color;
}
```

In C, the definition of the `transfer` struct (very easy to find, as it’s the first thing I define in my shaders) would remain the same, except I might change `float` to `double` (the GLSL Shadertoy uses has no separate double type; you just change the float precision, which is literally set in the first few lines of code in my shader).

So when we rewrite `toGamma` in such a way that the C version is similar to `XYZ_to_LAB`, we’d end up with something like this:

```c
void toGamma(double r, double g, double b, transfer trc, double *to_r, double *to_g, double *to_b)
{
    *to_r = r >= trc.cutoffToGamma ? (1.0 + trc.off) * pow(r, 1.0 / trc.power) - trc.off : r * trc.slope;
    *to_g = g >= trc.cutoffToGamma ? (1.0 + trc.off) * pow(g, 1.0 / trc.power) - trc.off : g * trc.slope;
    *to_b = b >= trc.cutoffToGamma ? (1.0 + trc.off) * pow(b, 1.0 / trc.power) - trc.off : b * trc.slope;
}
```

So, prior to receiving these responses, I had made a ‘private’ shader on Shadertoy that applied my L*a*b*→XYZ code backwards. I first converted each pixel in an image from RGB to XYZ, then worked through my code backwards to go from XYZ to L*a*b*. After all that, I slapped in my original L*a*b*→XYZ code and checked whether it gave me the original image back.

I admit that at first it didn’t, but that was because I’d swapped the order of two operations (or rather, I kept the original order instead of having them occur in the opposite order). After fixing that, I indeed got the original image back. The darkest, closest-to-black pixels were then for some reason labeled as ‘out of gamut’ and thus turned into that gray color from the original shader, but getting rid of the ‘out of gamut’ check fixed that.
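For anyone who wants to try the same round-trip idea without a shader, here’s a minimal stdlib-Python sketch using the textbook CIE formulas (not my shader code; the D50 values are rounded, purely illustrative). The conversion inverts cleanly on both sides of the linear/power cutoff:

```python
EPS = 216.0 / 24389.0
KAPPA = 24389.0 / 27.0
WHITE = (0.9642, 1.0, 0.8249)  # rounded ICC D50, just for this sketch

def xyz_to_lab(X, Y, Z):
    def f(t):
        return t ** (1.0 / 3.0) if t > EPS else (KAPPA * t + 16.0) / 116.0
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), WHITE))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def lab_to_xyz(L, a, b):
    def finv(t):
        # t**3 > EPS exactly mirrors the forward branch condition
        return t ** 3 if t ** 3 > EPS else (116.0 * t - 16.0) / KAPPA
    fy = (L + 16.0) / 116.0
    fx, fz = fy + a / 500.0, fy - b / 200.0
    return tuple(finv(t) * w for t, w in zip((fx, fy, fz), WHITE))

# Round-trip a few colors, including a near-black one on the linear segment
for xyz in [(0.2, 0.3, 0.4), (0.001, 0.001, 0.001), (0.9, 1.0, 0.8)]:
    back = lab_to_xyz(*xyz_to_lab(*xyz))
    assert all(abs(u - v) < 1e-9 for u, v in zip(xyz, back))
```

Because the forward function is continuous and monotone, the inverse’s branch test lands on the matching branch automatically, which is the property the round trip depends on.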

So, since it’s easier to compare XYZ→L*a*b*, ever since your response I’ve been working to compare your version with mine. Here’s the basic GLSL code for just the XYZ→L*a*b* conversion (originally just inlined inside `mainImage`, but here I’ve split it into a separate function and included the definitions (and their comments) of a few things for convenience):

```glsl
// Parameters for transfer characteristics (gamma curves)
struct transfer {
    // Exponent used to linearize the signal
    float power;

    // Offset from 0.0 for the exponential curve
    float off;

    // Slope of linear segment near 0
    float slope;

    // Values below this are divided by slope during linearization
    float cutoffToLinear;

    // Values below this are multiplied by slope during gamma correction
    float cutoffToGamma;
};

// Creates a whitepoint's xyz chromaticity coordinates from the given xy coordinates
#define white(x, y)\
    vec3(x, y, (1.0 - x - y))

// Scales a white point's xyz chromaticities to XYZ with Y normalized to 1
#define Bright(w)\
    ((w)/w.y)

// Standard illuminant D50 according to ICC standards (they specify a hex value
// for the 16-bit integer representation, as well as a specific way to decode it,
// so I did some math to figure out exactly the values they expect)
const vec3 whiteD50I = white(31595.0/91393.0, 32768.0/91393.0);

// Gamma for the CIE L*a*b* Lightness scale
const transfer gamLab = transfer(3.0, 0.16, 243.89/27.0, 0.08, 216.0/24389.0);

// Color Appearance Model (or 'reference') white point
const vec3 whiteCam = whiteD50I;

vec4 XYZ_to_LAB(vec4 color)
{
    color.xyz /= Bright(whiteCam);
    color = toGamma(color, gamLab);
    color.xyz = (color.xyz*100.0 + 16.0)/116.0;
    color.xz = (color.xz - color.y)*vec2(500, -200);
    color.y = color.y*116.0 - 16.0;

    return color;
}
```

If I convert it to C (like earlier), I get something like this:

```c
#define D50_WHITE_REF_x (31595.0 / 91393.0)
#define D50_WHITE_REF_y (32768.0 / 91393.0)

#define D50_WHITE_REF_X (D50_WHITE_REF_x / D50_WHITE_REF_y)
#define D50_WHITE_REF_Y (1.0)
#define D50_WHITE_REF_Z ((1.0 - D50_WHITE_REF_x - D50_WHITE_REF_y) / D50_WHITE_REF_y)

#define LAB_GAMMA       (3.0)
#define LAB_OFFSET      (0.16)
#define LAB_SLOPE       (243.89 / 27.0)
#define LAB_CUT_GAMMA   (216.0 / 24389.0)

void XYZ_to_LAB(double X, double Y, double Z, double *to_L, double *to_a, double *to_b)
{
    double xr = X / D50_WHITE_REF_X;
    double yr = Y / D50_WHITE_REF_Y;
    double zr = Z / D50_WHITE_REF_Z;

    // Doing this inline, but I could instead pack LAB_* into a 'transfer' struct and reuse 'to_gamma()'
    double fx = xr >= LAB_CUT_GAMMA ? (1.0 + LAB_OFFSET) * pow(xr, 1.0 / LAB_GAMMA) - LAB_OFFSET : xr * LAB_SLOPE;
    double fy = yr >= LAB_CUT_GAMMA ? (1.0 + LAB_OFFSET) * pow(yr, 1.0 / LAB_GAMMA) - LAB_OFFSET : yr * LAB_SLOPE;
    double fz = zr >= LAB_CUT_GAMMA ? (1.0 + LAB_OFFSET) * pow(zr, 1.0 / LAB_GAMMA) - LAB_OFFSET : zr * LAB_SLOPE;

    fx = (fx * 100.0 + 16.0) / 116.0;
    fy = (fy * 100.0 + 16.0) / 116.0;
    fz = (fz * 100.0 + 16.0) / 116.0;

    // I could instead set '*to_L' before modifying 'fy' above by simply multiplying by 100
    *to_L = fy * 116.0 - 16.0;
    *to_a = (fx - fy) * 500.0;
    *to_b = (fz - fy) * -200.0;
}
```

I think it’s safe to say that your code has fewer instructions than mine, so it would run much faster! However, I’d have to write a special ‘gamma’ function specifically for L*a*b*, rather than reusing the one I use for sRGB, Rec. 709, etc. At this point I suppose it depends on how much faster it is overall; I know mobile devices choke on the 3D version of my gamut shader, so if it ends up being enough of a speed boost to raise my phone’s framerate above 15fps (which is after setting `steps` to 30, down from 120), that’d be a pretty good win.

I feel it’s worth pointing out that my `toGamma` function takes values in the same range as it outputs (0 to 1), and so the offset and multiplier are scaled accordingly, to `0.16` and `1.16` (the latter calculated as `1.0 + trc.off`). Also, I could shorten the gamma/transfer curve bit by just using the cube root function (`cbrt()`, which your code uses and which I had to look up because I wasn’t familiar with it) instead of raising the value to the 1/3rd power. However, while I did inline that part of the code instead of reusing it, I decided to keep its content intact; it shows what the other code would likely turn into if inlined, at the very least.
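On the `cbrt()` note: assuming I’m reading the C library right, `cbrt(x)` and `pow(x, 1.0/3.0)` agree to floating-point noise for the non-negative inputs this curve ever sees (cbrt additionally handles negative inputs, which pow can’t). A quick check in Python, whose `math.cbrt` (3.11+) mirrors C’s:

```python
import math

# For the non-negative inputs the L* curve sees, cbrt and pow(x, 1/3)
# agree to within floating-point noise, so either spelling works
for x in [0.0, 216.0 / 24389.0, 0.18, 0.5, 1.0]:
    via_pow = x ** (1.0 / 3.0)
    if hasattr(math, "cbrt"):  # guard for Python versions before 3.11
        assert abs(math.cbrt(x) - via_pow) < 1e-12
```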

So, it seems my code does at least take the +16 stuff into account via that function; now to figure out whether it does so in a way that leads to correct results!

Note: I have not tested any of the C code I wrote above. I’m assuming it’s correct, as I basically performed a literal translation from GLSL to C (roughly in the style your code’s in), but I may have changed something while editing it and broken the rest by forgetting to update everything else to match. I’ll test things more properly after I go eat lunch.
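As an aside, the constants in that `gamLab` definition are mutually consistent, which is worth double-checking since I derived them myself: both branches of `toGamma` meet at `cutoffToGamma`, and `cutoffToLinear` is just that meeting point on the encoded side. A quick stdlib-Python check of the same numbers:

```python
# The same constants as transfer(3.0, 0.16, 243.89/27.0, 0.08, 216.0/24389.0)
power, off = 3.0, 0.16
slope = 243.89 / 27.0
cutoff_to_linear = 0.08
cutoff_to_gamma = 216.0 / 24389.0

# Both branches of toGamma agree at the cutoff...
linear_side = cutoff_to_gamma * slope
power_side = (1.0 + off) * cutoff_to_gamma ** (1.0 / power) - off
assert abs(linear_side - power_side) < 1e-9

# ...and that shared value is exactly cutoffToLinear, so toLinear's
# branch test picks the matching branch when decoding
assert abs(linear_side - cutoff_to_linear) < 1e-12
```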

Yeah, in my code that’s taken care of by the `toGamma` and `toLinear` functions, with an offset of 0.16 and a multiplier calculated from that (which ends up being `1.16`). I give more details about that in my response to Elle, along with a translation of what my GLSL code does to C.

I sadly don’t have a copy of that book, and don’t currently have any source of income, but it’s really neat to hear that it gives all the explanations and details for things like that! I have read Poynton’s Color and Gamma FAQs on his website, though. I’ll keep my eyes out for it, see if it’s on sale or anything.

Lindbloom has errors in the functions he uses for the XYZ → Lab conversions (or the other way around, I don’t remember). When reviewing darktable’s color space conversion lib, at first I thought the lib was wrong, but after checking other sources, it turns out there are some mistakes on Lindbloom’s website.

It’s also easy to simply misinterpret his equations. There’s a reason why I described them as ‘a mess’. For example, he defines everything backwards: constants are usually at the bottom, variables directly using those constants are above them, and variables using those variables are above that.

At some point I think some of it was mixed around and not strictly backwards, but I really can’t remember for sure. I have trouble with spatial relations anyway, so it could be that I thought they weren’t in strictly reverse order even though they were.

Can you give some specifics? Even an example set of, say, sRGB channel values that don’t convert properly using the Lindbloom equations? Just saying “oh, there are errors in Lindbloom’s equations” doesn’t really help; it doesn’t say anything at all, to be honest.

@Tynach, you might try the colour-science Python code to check the accuracy of your code. Example commands are found scattered through this thread:


There are several mathematically equivalent ways of doing XYZ to L*a*b*, some more convoluted than others!

For XYZ to L*a*b*, the non-linear x’ y’ z’ values are offset by 16 (i.e. they range from 16/116 to 116/116), and the result is then scaled and shifted to the range 0 … 100 for the L* value, but not for the computation of the a* and b* values. Note that this offset is also implicit in the result of the power curve applied to values above 0.008856451586; i.e., the power curve is shifted to make it meet the linear segment used for small values.

From argyllcms/icc/icc.c icmXYZ2Lab():

```c
void icmXYZ2Lab(icmXYZNumber *w, double *out, double *in) {
    double X = in[0], Y = in[1], Z = in[2];
    double x,y,z,fx,fy,fz;

    x = X/w->X;
    y = Y/w->Y;
    z = Z/w->Z;

    if (x > 0.008856451586)
        fx = pow(x,1.0/3.0);
    else
        fx = 7.787036979 * x + 16.0/116.0;

    if (y > 0.008856451586)
        fy = pow(y,1.0/3.0);
    else
        fy = 7.787036979 * y + 16.0/116.0;

    if (z > 0.008856451586)
        fz = pow(z,1.0/3.0);
    else
        fz = 7.787036979 * z + 16.0/116.0;

    out[0] = 116.0 * fy - 16.0;
    out[1] = 500.0 * (fx - fy);
    out[2] = 200.0 * (fy - fz);
}
```

Since we are already on a pinging spree, I will also ping @jdc in case he would like to comment.

It took me a bit of time to learn the API, but using this script I managed to mostly confirm my XYZ→L*a*b* code. (I manually specify everything except the Bradford transformation, which I leave out because I checked that it’s identical to the transformation matrix I use in my code.)

```python
import colour

whiteD65S = [0.3127, 0.329]
whiteD50I = [31595.0/91393.0, 32768.0/91393.0]
toXyz = [[506752.0/1228815.0, 87881.0/245763.0, 12673.0/70218.0],
         [87098.0/409605.0, 175762.0/245763.0, 12673.0/175545.0],
         [7918.0/409605.0, 87881.0/737289.0, 1001167.0/1053270.0]]

rgb = [0, 0, 1]
xyz = colour.RGB_to_XYZ(rgb, whiteD65S, whiteD50I, toXyz, 'Bradford')
lab = colour.XYZ_to_Lab(xyz, illuminant=whiteD50I)

print("XYZ:")
print("X: {:.15f}".format(xyz[0]))
print("Y: {:.15f}".format(xyz[1]))
print("Z: {:.15f}".format(xyz[2]))
print()
print("L*a*b*:")
print("L*: {:.15f}".format(lab[0]))
print("a*: {:.15f}".format(lab[1]))
print("b*: {:.15f}".format(lab[2]))
```

This gives the following output:

```
XYZ:
X: 0.143044519701569
Y: 0.060609958905422
Z: 0.713903816952745

L*a*b*:
L*: 29.565833519650816
a*: 68.285587891529943
b*: -112.033082056164830
```

Feeding my C code from the other post some XYZ values that were precisely calculated as fractions (using Qalculate, which also was used to calculate the fractional form for the RGB→XYZ matrix used in the Python code), I got this output:

```
XYZ:
X: 0.143044519701569
Y: 0.060609958905422
Z: 0.713903816952745

L*a*b*:
L*: 29.565833519650823
a*: 68.285587891529858
b*: -112.033082056164801
```

Seems quite a few significant digits survived! I had to specify so much information for Colour-Science, though; otherwise each value along the way would be slightly different than expected, and by the time it got to L*a*b*, things were off after the first digit following the decimal point.
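For what it’s worth, the agreement can be reproduced without a GPU or the colour library. Here’s a stdlib-only Python port of the textbook XYZ→L*a*b* step using the same D50 fractions (my own restating of the standard formulas, not the shader code verbatim):

```python
# The D50 white point from the same ICC-derived fractions used above
xw, yw = 31595.0 / 91393.0, 32768.0 / 91393.0
WX, WY, WZ = xw / yw, 1.0, (1.0 - xw - yw) / yw

EPS = 216.0 / 24389.0      # CIE epsilon
KAPPA = 24389.0 / 27.0     # CIE kappa

def f(t):
    return t ** (1.0 / 3.0) if t > EPS else (KAPPA * t + 16.0) / 116.0

def xyz_to_lab(X, Y, Z):
    fx, fy, fz = f(X / WX), f(Y / WY), f(Z / WZ)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

# The XYZ triple for sRGB blue quoted above
L, a, b = xyz_to_lab(0.143044519701569, 0.060609958905422, 0.713903816952745)
print(L, a, b)  # should agree with both outputs above to many digits
```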

If I used the same RGB→XYZ matrix most people use (rather than calculating my own from the chromaticities of the primaries and white point), it probably would have matched from the start.

Not sure what you are doing exactly with the fractional nuttery here, but Colour has plenty of defines for the values you are manually putting in. Also note that there are two methods to calculate the NPMs (normalised primary matrices) for RGB to XYZ, and Colour provides both the specification and the derived versions as required.

You’ve wasted a good deal of effort manually entering values that Colour already provides with tremendous accuracy.

Well, my L*a*b* code seems to agree with at least Colour-Science (not so sure about xicclu, which gives yet different results compared to either, even when Colour-Science is not using my pre-calculated matrix).

But it’s interesting that the offset is applied to L*, but not a* and b*. That might explain a lot; it’s likely that some of the extra work I’m doing in my own code is adjusting for that or something. Or it might not be; I really am not sure.
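That observation is easy to pin down numerically: since a* and b* are built from differences of the per-channel f values, a constant offset added to all three channels cancels out of a* and b* entirely, while L* sees it at full strength. A trivial sketch (the f values are arbitrary, purely illustrative):

```python
OFF = 16.0 / 116.0

# Arbitrary f(X/Xn), f(Y/Yn), f(Z/Zn) values, just for illustration
fx, fy, fz = 0.53, 0.39, 0.95

# a* and b* are differences, so a shared offset cancels...
a_plain, a_shift = 500.0 * (fx - fy), 500.0 * ((fx + OFF) - (fy + OFF))
b_plain, b_shift = 200.0 * (fy - fz), 200.0 * ((fy + OFF) - (fz + OFF))
assert abs(a_plain - a_shift) < 1e-12 and abs(b_plain - b_shift) < 1e-12

# ...but L* = 116*fy - 16 moves by exactly 116*OFF = 16
assert abs((116.0 * (fy + OFF) - 16.0) - (116.0 * fy - 16.0) - 16.0) < 1e-12
```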

But I’m still somewhat confused: your post states, “For L*a*b* to XYZ, …”, yet your code goes the other way around. And that paragraph of your post starts with talking about x’, y’, and z’, then goes into L*, a*, and b*.

Your code snippet also seems to apply the offset only in the case that values are below the 0.008… cutoff, but not to the gamma-corrected value, which I think might cause continuity issues if it’s not corrected for elsewhere.

… Wait a minute, that’s not the normal ‘slope’ value. That’s… a close approximation of the slope of the cube root curve at `216.0/24389.0`? If I’m doing my math right (which I probably am not; I’m just messing around with offsets and division instead of dealing with derivatives), a more accurate slope would be `841.0/108.0`, which gives `7.78703703703703...`

I’ve not seen that figure given in any of the equation variants, but it’s interesting nonetheless. It does remove the discontinuity between the power function and the linear segment, though, so scratch that part of my post up there.
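For the record, that guess checks out: `841.0/108.0` is exactly the tangent slope of the cube root at the cutoff, and using it together with the 16/116 offset makes the linear segment the tangent line, so the two branches meet exactly. A quick stdlib check:

```python
EPS = 216.0 / 24389.0            # (6/29)**3, the cutoff
SLOPE = 841.0 / 108.0            # candidate exact slope, vs Argyll's 7.787036979

# Derivative of t**(1/3) at EPS is (1/3) * EPS**(-2/3) = (1/3) * (29/6)**2
tangent = (1.0 / 3.0) * (29.0 / 6.0) ** 2
assert abs(tangent - SLOPE) < 1e-12

# With that slope plus the 16/116 offset, the linear piece meets the cube
# root exactly at the cutoff (both sides equal 6/29)
power_side = EPS ** (1.0 / 3.0)
linear_side = SLOPE * EPS + 16.0 / 116.0
assert abs(power_side - linear_side) < 1e-12

print(SLOPE)  # 7.7870370370..., vs the rounded 7.787036979 in the Argyll code
```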

Yeah, I saw some mentions in the API docs about that, but couldn’t figure out how to use that part of the API. Meanwhile, I already knew how to calculate the matrices in Qalculate, so I just did that.

If you could show me how to compute them in Colour, or point me to the proper part of the documentation (which I probably skipped right over without realizing it), that’d be really helpful! The library is so far pretty fantastic; I just don’t really know how to use it!

In particular, I could not find where the chromaticities for different white points (like D50 or D65) are kept; I only found their spectral power distributions (which might yield more precise chromaticity coordinates, but would likely cause mismatches when comparing to how standards like sRGB and ICC v4 define white points like D65 and D50).

The y_r equation is wrong; it’s supposed to have the same form as x_r and z_r: