Mapping to output gamut

Hi,

I’ve gotten away from the daily job for a bit, and now have some time to play.
The outcome is a bit of (Java) code (I’m a business dev, sorry). It’s not something useful, just a learning tool for me. Since I’m a silly old man, it’s hard for me to keep track of what kind of data an array of numbers represents, so I wrapped everything into objects. Yep, I know it’s baroque. Anyway, Java arrays are not too efficient either, being full-fledged objects with all the overhead that comes with them.

The code is here:

What works now:

  • load the linear Rec2020 input
  • perform one of the following mappings:
    – null mapping: simply apply the Rec2020->XYZ->(linear) sRGB matrix. Will probably die, see below.
    – B&W using L: Rec2020->XYZ->Lab; null out a and b; Lab->XYZ->sRGB. So a B&W version. Not a gamut mapper, but I wanted something simple to start with. :slight_smile:
    – global saturation reduction:
    — Rec2020->XYZ->Lab->LCh(ab) (alternatively, Luv → LCh(uv))
    — for each pixel, take L and h;
    ---- find the maximum value of C such that (L, Cmax, h) is at the boundary of the sRGB gamut
    ---- find the compression ratio needed to bring the pixel inside sRGB
    — take the maximum of those compression ratios over all pixels
    — apply it uniformly to the whole image
    — convert back from LCh(ab/uv) to Lab/Luv → XYZ → sRGB
    – clip chroma (C) of LCh(ab/uv) (a sketch of this step follows the list):
    — Rec2020->XYZ->Lab/Luv->LCh(ab/uv)
    — for each pixel, take L and h;
    ---- find the maximum value of C such that (L, Cmax, h) is at the boundary of the sRGB gamut
    ---- if the pixel’s C is higher than Cmax, clip it
    — LCh(ab/uv) → Lab/Luv->XYZ->sRGB
  • check for out-of-gamut colours; this is why the null mapper will most probably die
  • write the sRGB output with TRC applied.
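For illustration, the clip-chroma step mentioned above boils down to something like the following. This is only a sketch, not the actual code; the method name and signature are made up, and maxC stands for the gamut-boundary chroma found for the pixel’s L and h (how to find it is discussed further down the thread).

```java
// Sketch of the "clip chroma" mapper: clamp C to the sRGB boundary, keep L and h.
// maxC is the largest chroma for which (L, maxC, h) still converts to in-range sRGB.
static double[] clipChroma(double L, double a, double b, double maxC) {
    double C = Math.hypot(a, b);   // chroma of the Lab/Luv pixel
    double h = Math.atan2(b, a);   // hue angle in radians
    C = Math.min(C, maxC);         // clip chroma; lightness and hue stay untouched
    return new double[] { L, C * Math.cos(h), C * Math.sin(h) }; // back to Lab/Luv
}
```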

Plan: add a smooth roll-off as C approaches the in-gamut maximum. I’ve already coded the curve but have not hooked it up. (Tone Curve Math Question - #6 by Thanatomanic)

LCh(uv) seems to produce the ‘salmon’ reds (see the fire, the sunflower) that so many dislike about filmic.

HDR image from: Comparing filmic color science v5/v6 - #72 by age
File from darktable:
t004100_01.tif (11.9 MB)

sRGB clipping:


Clipping C in LCh(uv):

Clipping C in LCh(ab):

Raw from: Possible case of filmic's preserve chrominance not applied in exporting
File from darktable:
_DSC8850.tif (10.1 MB)
sRGB clipping:


Clipping C in LCh(uv):

Clipping C in LCh(ab):

Raw from: Sunflower Sagas and Solutions
File from darktable:
0L0A3314.tif (10.1 MB)
sRGB clipping:


Clipping C in LCh(uv):

Clipping C in LCh(ab):

Thanks to those members who originally shared them! Have to run now.

7 Likes

Of course this has been solved by others:

I’ve also considered reducing exposure for those saturated colours, but did not dare to touch the tonality. Maybe later.

1 Like

Thanks for sharing this. Nice experiment indeed.

I think there are a couple of things to learn from these:

  1. As CIELUV is chromaticity linear (as is filmic), it will show the “salmon” color in the yellow-to-red range when the highlights get desaturated (see the note after this list).
  2. CIELAB’s nonlinear equations manage to dodge this.
  3. Even the CIELAB version shows quite pale highlights in the brightest parts. This was expected, because the mapping was done at constant lightness L, which also means constant luminance Y. Earlier experiments with constant luminance have shown similar results. Filmic also currently handles the “upper boundary” at constant luminance.
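To spell out what “chromaticity linear” means here, the standard CIELUV definitions make it explicit:

```latex
u^* = 13\,L^*\,(u' - u'_n), \quad
v^* = 13\,L^*\,(v' - v'_n), \quad
C^*_{uv} = \sqrt{{u^*}^2 + {v^*}^2} = 13\,L^*\,\sqrt{(u'-u'_n)^2 + (v'-v'_n)^2}
```

At constant L* and hue, scaling C*uv scales the offsets (u′ − u′ₙ, v′ − v′ₙ) by the same factor, so the colour moves along a straight line in the chromaticity diagram towards the white point; desaturating highlights therefore cannot bend the hue path. CIELAB’s cube-root a*, b* axes are not linear in chromaticity, which is why it dodges this.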

Number 3 tells me that it would be necessary to let the user control the trade-off between luminance and chrominance. I have made some experiments related to this and hope to publish them at some point. I need to check a couple of things first.

Thanks again for sharing, @kofa! Btw, it would be pretty interesting to see this file rolled through the same process - first interpreting the data as Rec.709 primaries, then as Rec.2020.

A few years ago, I encountered a paper tackling gamut compression and linear chromaticity, but I could not find it again when I later tried to retrieve it for a closer reading. From what I remember, the authors created a new space solely for linear gamut mapping, compression and expansion. Oh well. It is good to see people exploring this here.

1 Like

That’s an EXR file, so I’ll have to process it in darktable first. My code can only deal with SDR (0…1) images, and the simple Java loader I used supports only the following formats:
BMP, GIF, JPEG, PNG and TIFF.
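
For reference, that format list matches what the stock javax.imageio plugins cover out of the box (TIFF since Java 9); here is a minimal sketch of such a loader - whether the actual code uses ImageIO this way is my assumption:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import javax.imageio.ImageIO;

public class LoadSdrImage {
    public static void main(String[] args) throws IOException {
        // Formats the local ImageIO installation can read (BMP, GIF, JPEG, PNG, TIFF, ...).
        System.out.println(Arrays.toString(ImageIO.getReaderFormatNames()));

        // Load an SDR image; channel values would then be normalised to 0..1 downstream.
        BufferedImage img = ImageIO.read(new File(args[0]));
        System.out.println(img.getWidth() + " x " + img.getHeight());
    }
}
```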

It is my understanding that Rec709 primaries are the same as those used by sRGB, so if I export as Rec709, and modify the tool to assume the input is in Rec709, the gamut mapping won’t be triggered (only the B&W ‘mapper’ would change anything).

How do you want me to pre-process this in darktable? I could set filmic like this:

(three screenshots of the filmic module settings)

That results in the following TIFF:
Sweep_sRGB_Linear_Half_Zip.tif (11.9 MB)

Sidecar:
Sweep_sRGB_Linear_Half_Zip.exr.xmp (5.1 KB)

RGB clipping:

Clipping chroma in LCh(ab):

Clipping chroma in LCh(uv):

To me, the top and bottom red → orange → yellow rows look obviously different.

Switching filmic to normal blending & v6 + power norm:
Sweep_sRGB_Linear_Half_Zip_01.tif (11.9 MB)

Sidecar:
Sweep_sRGB_Linear_Half_Zip_01.exr.xmp (5.9 KB)

RGB clipping:

C in LCh(ab):

C in LCh(uv):

1 Like

I will send you a pair of TIFFs tomorrow. The idea would be to apply no processing to the image beforehand.

OK.
Meanwhile, I’ve added the curve, so now I can gradually dampen C (a sketch follows the list):

  • maxC is determined from L and h;
  • C/maxC ratio is calculated;
  • if that ratio is below a threshold, nothing is done;
  • otherwise, the ratio is passed through the curve: https://discuss.pixls.us/t/tone-curve-math-question/28978/6; the gradient is set to 1, the shoulder is set to the threshold
  • C is set to the output of the curve (between threshold and 1) * maxC.
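
The sketch mentioned above. shoulderCurve is a stand-in for the curve from the linked thread: any curve that equals the identity up to the shoulder, has gradient 1 there and approaches 1 asymptotically will do for illustration; the actual implementation may differ.

```java
// Sketch of the chroma roll-off described above (not the actual implementation).
static double dampenChroma(double C, double maxC, double threshold) {
    double ratio = C / maxC;
    if (ratio <= threshold) {
        return C;                                   // far enough inside the gamut: leave it
    }
    return shoulderCurve(ratio, threshold) * maxC;  // result lies between threshold*maxC and maxC
}

// Example curve with the stated properties: value = shoulder and gradient = 1 at the
// shoulder, approaching 1 asymptotically. The curve in the linked thread may differ.
static double shoulderCurve(double x, double shoulder) {
    double span = 1.0 - shoulder;
    return 1.0 - span * Math.exp(-(x - shoulder) / span);
}
```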

Ratio of original C to maxC vs ratio of dampened C to maxC:

Zoomed in to 0…1:

Some examples:

Shoulder(threshold) = 0%:

50%:

70%:

90%:

0%:

50%:

70%:

90%:

0%:

50%:

70%:

90%:

2 Likes

I feel like I am back in biochemistry learning Michaelis-Menten kinetics… :slight_smile:

3 Likes

Heh… binary search is not the right solution. For example, using LCh(ab) with
L = 97.65954289907357, h = 1.841743803484896 rad (105.524147 degrees):
R goes above 1 for C = 38…54, peaking at C = 46, and drops below 0 from C = 405 on.
Apart from that first excursion, we are in gamut until C reaches 65, because from then on:
G is above 1 for C >= 65;
B is below 0 for C >= 98.

My search finds maxC = 64. I’ve now changed this to search again between 0 and the first found solution.
The search, as I implemented it, only works for monotonically increasing functions.
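
For comparison, a boundary search that makes no monotonicity assumption at all is a plain upward scan that stops at the first out-of-gamut chroma; a sketch, with labLchToLinearSrgb standing in for the actual LCh → Lab → XYZ → sRGB chain:

```java
// Sketch: find the largest C that is in gamut *and* has no out-of-gamut excursion
// below it, by scanning upward from 0. Slower than binary search, but needs no
// monotonicity. labLchToLinearSrgb(L, C, h) stands in for the actual conversion chain.
static double maxInGamutChroma(double L, double h, double step, double limit) {
    double lastGood = 0.0;
    for (double C = 0.0; C <= limit; C += step) {
        double[] rgb = labLchToLinearSrgb(L, C, h);
        if (rgb[0] < 0 || rgb[0] > 1 ||
            rgb[1] < 0 || rgb[1] > 1 ||
            rgb[2] < 0 || rgb[2] > 1) {
            break;             // first excursion outside [0, 1]: stop here
        }
        lastGood = C;          // this C was still in gamut
    }
    return lastGood;
}
```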

Added optional darkening. The following are from the HDR inputs SonyF35.StillLife.exr and t004100.tif (for both: no processing; exported as linear floating-point TIFFs).
L is treated with the same curve as C; the only difference is the starting point of the shoulder (see the sketch below). Obviously, this is not a substitute for a real tone mapper.
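
As sketched, the darkening is just the chroma roll-off from earlier applied to L with its own shoulder; something like this (again only a sketch, not the actual code; shoulderCurve is the example curve from the previous sketch and maxL would be 100 for CIELAB):

```java
// Sketch: reuse the same shoulder curve for L, with a separate shoulder position.
// This only eases the very brightest pixels down; it is not a tone mapper.
static double dampenLightness(double L, double maxL, double lightnessShoulder) {
    double ratio = L / maxL;
    if (ratio <= lightnessShoulder) {
        return L;
    }
    return shoulderCurve(ratio, lightnessShoulder) * maxL;
}
```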


1 Like

Hi,

If you have a TIFF, please send it. No need for the Rec709 one, I think, since the gamut of R709 is the same as that of sRGB, isn’t it?

Anyway, I’ve added a simple Lab-based tone mapper (if I understand correctly, that’s completely wrong, since Lab is no good for HDR). This is the output:

LCh(ab):

LCh(uv):

The SonyF35 scene with the Lab version:

1 Like

In Björn Ottosson’s writeup of his OKLab design, he touches on the problems that CIE-Lab has (I’m sure there are more and better sources for this, but his writeup is very nice). It’s more than just being no good for HDR: in CIE-Lab, the hue skews in the blues are readily visible when white is mixed in, reds skew very orange, lightness prediction is so-so, etc.
https://bottosson.github.io/posts/oklab/
And more specific
https://bottosson.github.io/posts/oklab/#blending-colors
and, for completeness, OKLCh, which needed a different lightness estimate that aligns better with LCh(ab):
https://bottosson.github.io/posts/colorpicker/#intermission—a-new-lightness-estimate-for-oklab

Sorry if you knew all this already and I sound like a broken record.

1 Like

A few papers address this topic if you want to explore CIELAB and HDR. :wink:

The more I think about it, converting that one to a TIFF with a bounded range doesn’t really convey what I was going for. But your tone-mapping experiment showed the same thing quite well - I was expecting to see the skew to purple that CIELAB produces, and that is clearly visible in the sweep images you posted.

If you are going to do more tests, Oklab certainly would be worth testing. No added complexity over CIELAB, but better results.

(Re)read all the blog posts while you are at it. :wink: (@PhotoPhysicsGuy already linked to specific sections.)

OKLab is giving me a real headache.

A value of OKLCh(0.999933, 0, 0), or OKLab(0.999933, 0, 0), using the matrices provided on the page, is translated into XYZ(0.950279, 0.999799, 1.088081) in double-precision maths, which is sRGB(1.000222, 0.999754, 0.998999). So, starting from a neutral colour just below white (L < 1), I end up outside sRGB. :frowning:
In 16-bit linear terms, that red value is about 65550, so my safety checks kill the processing.

Your result seems to be consistent with this table (which contains rounded values)
XYZ - OKLab pairs
But yeah, it’s odd that it doesn’t convert to an achromatic, in-gamut value. Do the matrices that go from OKLab to linear sRGB do the same thing?
lin-sRGB OKLab
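
One way to check is to bypass XYZ entirely and use the OKLab → linear sRGB conversion published on the page; a sketch in Java, with the coefficients copied from the post (worth verifying against the source):

```java
// OKLab -> linear sRGB, following the reference code on the oklab page
// (coefficients copied from there; verify against the source before relying on them).
static double[] oklabToLinearSrgb(double L, double a, double b) {
    double l_ = L + 0.3963377774 * a + 0.2158037573 * b;
    double m_ = L - 0.1055613458 * a - 0.0638541728 * b;
    double s_ = L - 0.0894841775 * a - 1.2914855480 * b;

    double l = l_ * l_ * l_;
    double m = m_ * m_ * m_;
    double s = s_ * s_ * s_;

    return new double[] {
        +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s
    };
}
```

Feeding OKLab(0.999933, 0, 0) into this shows whether the direct path also lands a hair outside [0, 1] for that near-white neutral.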

EDIT:
This may be due to the fact that OKLab is defined via the LMS cone fundamentals and not in terms of the old XYZ1931 color-matching functions. In this regard OKLab is a "new" design, just like Yrg is, if I recall correctly.

This would mean that when you convert from XYZ1931 to sRGB you can get out-of-gamut colors, as the "newer" XYZ you are technically in needs a different matrix to convert to sRGB. I assume your matrix for converting from XYZ to sRGB leans on the XYZ1931 definition.

Also: I could still be missing more technicalities here.

The matrices were updated 2021-01-25. The new matrices have been derived using a higher precision sRGB matrix and with exactly matching D65 values. The old matrices are available here for reference. The values only differ after the first three decimals.

Depending on use case you might want to use the sRGB matrices your application uses directly instead of the ones provided here.

It could be factors such as precision, choice of transforms, illuminants and constants. G’MIC is a good resource for experimentation.

I took my D65 XYZ values from IEC 61966-2-1, Illuminant D65 - Wikipedia
0.9504559, 1, 1.0890578

Using the matrices from A perceptual color space for image processing, that causes a failure:
Comparison failed for actual = Xyz[0.950470, 1.000000, 1.088300] and expected = Xyz[0.950456, 1.000000, 1.089058]

However, Bruce Lindbloom’s page Welcome to Bruce Lindbloom's Web Site lists values from ASTM E308-01, and with those, the test passes:
0.95047, 1, 1.08883

That white point breaks some other tests, for which I took the test data from Online Color Converter - Colormath, which apparently uses the other D65 definition. :stuck_out_tongue:
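
One way to keep such tests meaningful despite the two slightly different D65 definitions is to compare with an explicit tolerance rather than exact equality; a sketch (not from the actual test code):

```java
// Sketch: tolerance-based comparison of XYZ triplets, so a test states explicitly
// how much disagreement (e.g. from differing D65 definitions) it is willing to accept.
static void assertXyzClose(double[] expected, double[] actual, double tolerance) {
    for (int i = 0; i < 3; i++) {
        double diff = Math.abs(expected[i] - actual[i]);
        if (diff > tolerance) {
            throw new AssertionError("XYZ component " + i + " differs by " + diff
                    + " (expected " + expected[i] + ", actual " + actual[i] + ")");
        }
    }
}
```

The IEC and ASTM D65 values differ by roughly 2e-4 in Z, so a tolerance around 1e-3 absorbs that difference, while a much tighter one will flag it.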

See also https://www.colour-science.org/; I think it has all of these color spaces and transforms defined, and more: GitHub - colour-science/colour: Colour Science for Python