I’ve got away from the daily job, and now have some time to play.
The outcome is a bit of (Java) code (I’m a business dev, sorry). It’s not something useful; it’s just a learning tool for me. Since I’m a silly old man, it’s hard for me to keep track of the type of data an array of numbers represents, so I wrapped everything into objects. Yep, I know it’s baroque. Anyway, Java arrays are also not too efficient, being full-fledged objects with all the overhead that comes with that.
The code is here:
What works now:
load the linear Rec2020 input
perform one of the following mappings (the two chroma-based ones are sketched in code after this list):
– null mapping: simply apply the Rec2020->XYZ->(linear) sRGB matrix. Will probably die, see below.
– B&W using L: Rec2020->XYZ->Lab; null out a and b; Lab->XYZ->sRGB. So a B&W version. Not a gamut mapper, but I wanted something simple to start with.
– global saturation reduction:
— Rec2020->XYZ->Lab->LCh(ab) (alternatively, Luv → LCh(uv))
— for each pixel, take L and h;
---- find the maximum value of C such that (L, Cmax, h) is at the boundary of the sRGB gamut
---- find the compression ratio needed to bring the pixel inside sRGB
---- find the maximum of such compression ratios
---- apply it uniformly to the whole image
— convert back from LCh(ab/uv) to Lab/Luv → XYZ → sRGB
– clip chroma (C) of LCh(ab/uv):
— Rec2020->XYZ->Lab/Luv->LCh(ab/uv)
— for each pixel, take L and h;
---- find the maximum value of C such that (L, Cmax, h) is at the boundary of the sRGB gamut
---- if the pixel’s C is higher than Cmax, clip it
— LCh(ab/uv) → Lab/Luv->XYZ->sRGB
check for out-of-gamut colours; this is why the null mapper will most probably die
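For anyone curious, the two chroma-based mappings boil down to something like the sketch below. This is not a literal copy of my code: pixels are assumed to be plain {L, C, h} triplets, and maxChromaInSrgb() is only a placeholder for the boundary search (a sketch of that appears further down in the thread).

```java
/**
 * Sketch of the two chroma-based mappings above. Pixels are {L, C, h} triplets;
 * maxChromaInSrgb() is a placeholder for the gamut-boundary search.
 */
final class ChromaMappers {

    /** Clip chroma: per pixel, limit C to the sRGB boundary at that (L, h). */
    static double[][] clipChroma(double[][] lch) {
        double[][] out = new double[lch.length][];
        for (int i = 0; i < lch.length; i++) {
            double L = lch[i][0], C = lch[i][1], h = lch[i][2];
            out[i] = new double[] { L, Math.min(C, maxChromaInSrgb(L, h)), h };
        }
        return out;
    }

    /** Global desaturation: find the worst C/Cmax ratio and compress every C by that factor. */
    static double[][] desaturateGlobally(double[][] lch) {
        double worstRatio = 1.0;
        for (double[] p : lch) {
            double maxC = maxChromaInSrgb(p[0], p[2]);
            if (maxC > 0.0 && p[1] > maxC) {
                worstRatio = Math.max(worstRatio, p[1] / maxC);
            }
        }
        double scale = 1.0 / worstRatio;   // applied uniformly to the whole image
        double[][] out = new double[lch.length][];
        for (int i = 0; i < lch.length; i++) {
            out[i] = new double[] { lch[i][0], lch[i][1] * scale, lch[i][2] };
        }
        return out;
    }

    /** Placeholder: the largest C such that (L, C, h) still converts to in-range sRGB. */
    static double maxChromaInSrgb(double L, double h) {
        throw new UnsupportedOperationException("stand-in for the boundary search");
    }
}
```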
I think there are a couple of things to learn from these:
As CIELUV is chromaticity-linear (as is filmic), it will show the “salmon” color in the yellow-to-red range when the highlights get desaturated.
CIELAB’s nonlinear equations manage to dodge this.
Even the CIELAB version shows some quite pale highlights in the brightest parts. This was to be expected, because the mapping was done at constant lightness L, which also means constant luminance Y. Earlier experiments with constant luminance have shown similar results. Filmic also currently handles the “upper boundary” at constant luminance.
Number 3 tells me that it would be necessary to let the user control the tradeoff between luminance and chrominance. I have done some experiments related to this and I hope to publish them at some point. Need to check a couple of things first.
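Just to make the idea concrete (purely an illustration, not the experiments I mentioned above): the control could be a single 0…1 parameter that blends between compressing only C at constant L and also letting L drop by the same factor.

```java
/**
 * Illustration only: a user parameter in 0..1 blending between constant-luminance
 * mapping (compress C only) and one that also darkens L by the chroma compression
 * factor. Simplified: it ignores that the boundary maxC itself depends on L.
 */
static double[] mapWithTradeoff(double L, double C, double h,
                                double maxC, double luminanceGive) {
    if (maxC <= 0.0 || C <= maxC) {
        return new double[] { L, C, h };           // already inside sRGB
    }
    double compression = maxC / C;                 // < 1: brings C to the boundary
    // luminanceGive = 0: keep L (constant luminance); 1: scale L like C.
    double newL = L * (1.0 - luminanceGive + luminanceGive * compression);
    return new double[] { newL, C * compression, h };
}
```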
Thanks for sharing again, @kofa! Btw, it would be pretty interesting to see this file rolled through the same process: first interpreting the data as Rec.709 primaries, then as Rec.2020.
A few years ago, I encountered a paper tackling gamut compression and linear chromaticity. I could not find it again later that same year, when I tried to retrieve it for a closer reading. From what I can remember, the authors created a new space solely for linear gamut mapping, compression and expansion. Oh well. It is good to see people exploring it here.
That’s an EXR file, so I’ll have to process it in darktable first. My code can only deal with SDR (0…1) images, and the simple Java loader I used supports only the following formats:
BMP, GIF, JPEG, PNG and TIFF.
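In case anyone wants to reproduce this: the loader is nothing more exotic than javax.imageio (TIFF needs Java 9+ or an ImageIO TIFF plugin). A minimal sketch follows; note that getRGB() hands back 8-bit-per-channel values, so this particular version loses precision on 16-bit TIFFs.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

/** Minimal SDR loader: reads an image via ImageIO and returns 0…1 RGB triplets. */
final class SimpleLoader {
    static double[][] loadRgb01(File file) throws IOException {
        BufferedImage img = ImageIO.read(file);   // BMP, GIF, JPEG, PNG; TIFF from Java 9 on
        if (img == null) {
            throw new IOException("Unsupported format: " + file);
        }
        int w = img.getWidth(), h = img.getHeight();
        double[][] pixels = new double[w * h][3];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = img.getRGB(x, y);      // packed 8-bit channels
                double[] p = pixels[y * w + x];
                p[0] = ((argb >> 16) & 0xFF) / 255.0;
                p[1] = ((argb >> 8) & 0xFF) / 255.0;
                p[2] = (argb & 0xFF) / 255.0;
            }
        }
        return pixels;
    }
}
```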
It is my understanding that Rec709 primaries are the same as those used by sRGB, so if I export as Rec709, and modify the tool to assume the input is in Rec709, the gamut mapping won’t be triggered (only the B&W ‘mapper’ would change anything).
How do you want me to pre-process this in darktable? I could set filmic like this:
Heh… binary search is not the right solution. For example, using LCH(ab):
L = 97.65954289907357, h = 1.841743803484896 rad (105.524147 degrees)
R is out of gamut with > 1 for C = 38…54, peaking at C = 46; it also goes out of gamut with < 0 for C = 405…
Then, we are in gamut until C reaches 65, because from then on:
G is out of gamut with > 1 for C >= 65.
B is out of gamut with < 0 for C >= 98.
My search finds maxC = 64. I’ve now changed this to search again between 0 and the first found solution.
The search, as I implemented it, only works for monotonically increasing functions.
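For completeness, the search looks roughly like the sketch below. Here inGamut() stands for the LCh → Lab → XYZ → linear sRGB conversion plus a 0…1 range check, and the second pass is the workaround mentioned above (still best-effort: bisection can miss an out-of-gamut pocket entirely if no probe lands in it).

```java
/** Bisection for the sRGB chroma boundary, with the second pass described above. */
final class ChromaBoundary {

    /** Approximate largest C for which (L, C, h) still converts to in-range sRGB. */
    static double findMaxChroma(double L, double h, double upperLimit, double epsilon) {
        double firstHit = bisect(L, h, 0.0, upperLimit, epsilon);
        // Search again between 0 and the first solution, in case the in-gamut
        // range is not a single interval starting at C = 0.
        return bisect(L, h, 0.0, firstHit, epsilon);
    }

    /** Plain bisection; assumes in-gamut at lo and (roughly) monotonic behaviour. */
    private static double bisect(double L, double h, double lo, double hi, double epsilon) {
        while (hi - lo > epsilon) {
            double mid = 0.5 * (lo + hi);
            if (inGamut(L, mid, h)) {
                lo = mid;        // still inside sRGB, move up
            } else {
                hi = mid;        // outside, move down
            }
        }
        return lo;
    }

    /** Placeholder: LCh -> Lab -> XYZ -> linear sRGB, then check all channels are in 0…1. */
    private static boolean inGamut(double L, double C, double h) {
        throw new UnsupportedOperationException("stand-in for the conversion chain");
    }
}
```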
Added optional darkening. The following are from the HDR inputs SonyF35.StillLife.exr and t004100.tif (for both: no processing; exported as a linear floating-point TIFF).
L is treated using the same curve as C; the only difference is the starting point of the shoulder. Obviously, this is not a substitute for a real tone mapper.
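The curve is nothing fancy; something along these lines (illustrative, not a literal copy of my code): identity up to the start of the shoulder, then a smooth roll-off towards the maximum. C and L get the same function, just with different shoulder start points.

```java
/**
 * Illustrative shoulder compression: identity below shoulderStart, then a smooth
 * roll-off that approaches maxValue asymptotically (continuous in value and slope).
 * E.g. for C, maxValue could be the pixel's Cmax; for L, something like 100 in Lab terms.
 */
static double shoulderCompress(double x, double shoulderStart, double maxValue) {
    if (x <= shoulderStart) {
        return x;                                    // linear part, untouched
    }
    double headroom = maxValue - shoulderStart;      // output range left above the shoulder
    double overshoot = x - shoulderStart;            // how far past the shoulder we are
    // Rational roll-off: maps [shoulderStart, +inf) onto [shoulderStart, maxValue).
    return shoulderStart + headroom * overshoot / (overshoot + headroom);
}
```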
If you have a TIFF, please send it. No need for the Rec709 one, I think, since the gamut of R709 is the same as that of sRGB, isn’t it?
Anyway, I’ve added a simple Lab-based tone mapper (if I understand correctly, that’s completely wrong, since Lab is no good for HDR). This is the output:
The more I think about it, the more I feel that converting that one to a TIFF with a bounded range doesn’t really convey what I was going for. But your tone mapping experiment showed the same thing quite well: I was expecting to see the skew towards purple that CIELAB produces, and that is clearly visible in the sweep images you posted.
If you are going to do more tests, Oklab certainly would be worth testing. No added complexity over CIELAB, but better results.
A value of OKLCh(0.999933, 0, 0), or OKLab(0.999933, 0, 0), using the matrices provided on the page, is translated into XYZ(0.950279, 0.999799, 1.088081) in double maths, which is sRGB(1.000222, 0.999754, 0.998999). So, starting from a neutral colour below white (L < 1), I end up outside sRGB.
In 16-bit linear terms, that’s about 65550, so my safety checks kill the processing.
Your result seems to be consistent with this table (which contains rounded values): XYZ - OKLab pairs
But yeah, it’s odd that it doesn’t convert to an achromatic, in-gamut value. Do the matrices that go from OKLab to linear sRGB do the same thing? lin-sRGB OKLab
EDIT:
This may be due to the fact that OKLab is defined via the LMS cone fundamentals and not in terms of the old XYZ1931 definition of the color-matching functions. In this regard, OKLab is a “new” design just like Yrg is, if I recall correctly.
This would mean that if you convert from XYZ1931 to sRGB, you can get out-of-gamut colors, as the “newer” XYZ you are in technically needs a different matrix to convert to sRGB. I assume your matrix for converting from XYZ to sRGB is based on an XYZ1931 definition.
Also: I could still be missing more technicalities here.
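If you want to check that directly, the oklab_to_linear_srgb listing from the page translates to Java roughly as below (I copied the constants by hand, so verify them against the page before relying on them). With these values, every row of the final matrix sums to 1, so an achromatic OKLab value should come out exactly neutral in linear sRGB, at L³ per channel.

```java
/**
 * OKLab -> linear sRGB, translated from the oklab_to_linear_srgb listing on the
 * page. Constants copied by hand; please double-check them against the source.
 */
static double[] oklabToLinearSrgb(double L, double a, double b) {
    double l_ = L + 0.3963377774 * a + 0.2158037573 * b;
    double m_ = L - 0.1055613458 * a - 0.0638541728 * b;
    double s_ = L - 0.0894841775 * a - 1.2914855480 * b;

    double l = l_ * l_ * l_;
    double m = m_ * m_ * m_;
    double s = s_ * s_ * s_;

    return new double[] {
        +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,   // R
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,   // G
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s    // B
    };
}

// oklabToLinearSrgb(0.999933, 0, 0) gives all three channels equal to
// 0.999933³ ≈ 0.999799: a neutral grey just below white, with no channel above 1.
```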
The matrices were updated 2021-01-25. The new matrices have been derived using a higher precision sRGB matrix and with exactly matching D65 values. The old matrices are available here for reference. The values only differ after the first three decimals.
Depending on use case you might want to use the sRGB matrices your application uses directly instead of the ones provided here.
It could be factors such as precision, choice of transforms, illuminants and constants. G’MIC is a good resource for experimentation.
Using the matrices from A perceptual color space for image processing causes a failure:
Comparison failed for actual = Xyz[0.950470, 1.000000, 1.088300] and expected = Xyz[0.950456, 1.000000, 1.089058]
However, Bruce Lindbloom’s page Welcome to Bruce Lindbloom's Web Site lists values from ASTM E308-01, and with those, the test passes: 0.95047, 1, 1.08883
That white point breaks some other tests, where I took the test data from Online Color Converter - Colormath, which apparently uses the other D65 definition.
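For what it’s worth, the check involved is presumably just a per-channel comparison with a tolerance (this is an illustration, not my actual test code). With the values from the failure message above, the Z difference is about 7.6e-4, which fails a tight epsilon but would pass a loose one.

```java
/** Tolerance-based comparison of XYZ triplets (illustrative, not the actual test code). */
final class XyzCompare {
    static boolean closeEnough(double[] actual, double[] expected, double epsilon) {
        for (int i = 0; i < 3; i++) {
            if (Math.abs(actual[i] - expected[i]) > epsilon) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Values from the failure message quoted above.
        double[] actual   = { 0.950470, 1.000000, 1.088300 };
        double[] expected = { 0.950456, 1.000000, 1.089058 };
        System.out.println(closeEnough(actual, expected, 1e-6));  // false: Z differs by ~7.6e-4
        System.out.println(closeEnough(actual, expected, 1e-3));  // true
    }
}
```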