So, I’ve tried the Uchimura curve on a few images (applied per-channel). I loaded them into darktable, exported them as 32-bit linear Rec2020 floating-point TIFFs, tone-mapped them, and then gamut-compressed them to the sRGB gamut (using gradual compression to bring out-of-gamut values back inside the gamut).
OK. Some gamut compression charts.
We rotate in 30-degree steps around the D65 white point’s xy coordinates, so each chart contains 12 groups of bands.
For each angle and Y luminance value:
We take the xy coordinates at the edge of the Rec2020 gamut (determined via a simple search) and note that point’s distance from the white point (maxDistanceRec2020).
Then we shorten that distance by a fixed multiplier; each chart has its own, starting from 40% and going up to 100% in steps of 10%. We treat this as a kind of ‘saturation’:
distance = saturation * maxDistanceRec2020
Then we calculate the x and y of the unmapped pixel:
x = cos(alpha) * distance + D65_WP_x
y = sin(alpha) * distance + D65_WP_y
We convert this xyY into XYZ, then to Rec709RGB, and clamp the components to [0, 1], to demonstrate the naive approach. These values form the first line in the band.
Then we take maxDistanceRec709, the maximum distance for the given angle in the Rec709 space (again determined via a search).
We calculate ratioToMax709Distance = distance / maxDistanceRec709
We pass it through a curve that’s linear until 0.8, and smoothly converges to 1 above that. This gives us compressedRec709RatioFromMax.
We calculate compressedDistance = maxDistanceRec709 * compressedRec709RatioFromMax. If the original distance was less than 80% of the maximum, it remains unchanged; otherwise it gets compressed more and more, but will never exceed the maximum distance.
Then we calculate the x and y of the mapped pixel:
x = cos(alpha) * compressedDistance + D65_WP_x
y = sin(alpha) * compressedDistance + D65_WP_y
We convert this xyY into XYZ, then to Rec709RGB, and clamp the components to [0, 1]. These values form the second line in the band.
The third line in the band is simply Rec2020 at 30% saturation, as that saturation never needs compression. It’s included as a hue reference.
The fourth, monochromatic line indicates the extent of the compression: 1 - compressedDistance / distance, where distance is the uncompressed value. Black means no compression.
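The steps above can be sketched roughly like this. Note that the exact shape of the shoulder curve isn’t specified in the post, so the rational knee below is an assumption, and the gamut-edge searches that produce the two max distances are stubbed out as parameters:

```python
import math

D65_WP = (0.3127, 0.3290)  # D65 white point chromaticity (xy)

def xyY_to_rec709(x, y, Y):
    """xyY -> XYZ -> linear Rec709 RGB, clamped to [0, 1] (the naive step)."""
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    r = 3.2404542 * X - 1.5371385 * Y - 0.4985314 * Z
    g = -0.9692660 * X + 1.8760108 * Y + 0.0415560 * Z
    b = 0.0556434 * X - 0.2040259 * Y + 1.0572252 * Z
    return tuple(min(1.0, max(0.0, c)) for c in (r, g, b))

def compress_ratio(r, threshold=0.8):
    """Linear up to `threshold`, then smoothly approaches (never exceeds) 1.
    The post does not give the exact curve; this rational knee is an assumption."""
    if r <= threshold:
        return r
    t = (r - threshold) / (1.0 - threshold)
    return threshold + (1.0 - threshold) * t / (1.0 + t)

def map_pixel(alpha, saturation, Y, maxDistanceRec2020, maxDistanceRec709):
    """Map one (hue angle, saturation, Y) sample into clamped Rec709 RGB.
    The two max distances come from the gamut-edge searches (not shown here)."""
    distance = saturation * maxDistanceRec2020
    ratio = distance / maxDistanceRec709
    compressedDistance = maxDistanceRec709 * compress_ratio(ratio)
    x = math.cos(alpha) * compressedDistance + D65_WP[0]
    y = math.sin(alpha) * compressedDistance + D65_WP[1]
    return xyY_to_rec709(x, y, Y)
```

With this shape, any distance below 80% of the Rec709 maximum passes through unchanged, and everything above is squeezed into the remaining 20% without ever crossing the gamut boundary.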
Right. So, here are the charts (from 40%, as 30% needs no compression):
Rec2020 at 40% (only one of the greens needs some compression):
If you have good wide-gamut (but not HDR, so Y should not exceed 1) input files, I’d appreciate them, to check how this behaves with real-world images. Flowers (maybe @ggbutcher ?), hummingbirds, whatever.
And here is a hue angle / saturation series with increasing Y (from 0.1 to 0.9, in increments of 0.1), again mapping Rec 2020 to Rec 709. Saturation is meant as above: the WP → pixel xy distance relative to the Rec2020 gamut boundary, smoothly compressed into Rec 709 as described above:
Flowers etc. may have vibrant colours we cannot display on a monitor or represent in an sRGB JPG; that’s when this could be useful.
I have versions of this algorithm that try to figure out the required amount of compression based on image content. This series was done with a fixed level, as it’s based on synthetic test images.
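For reference, a sketch of the parameter grid behind the synthetic series, as I read the descriptions above (12 hue angles, Y from 0.1 to 0.9, one chart per saturation level; the grouping into bands is my interpretation, not stated explicitly):

```python
import math

# Values taken from the chart descriptions; the grouping is my reading.
angles = [math.radians(a) for a in range(0, 360, 30)]  # 12 hue groups per chart
luminances = [i / 10 for i in range(1, 10)]            # Y = 0.1 .. 0.9
saturations = [i / 10 for i in range(4, 11)]           # one chart per level: 40%..100%

bands_per_chart = [(alpha, Y) for alpha in angles for Y in luminances]
print(len(saturations), "charts,", len(bands_per_chart), "bands each")
# prints: 7 charts, 108 bands each
```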
So, I’ve found a few shots. All of these were exported in Rec 2020 floating point, with no contrast or curve adjustments, no tone mapping (filmic, sigmoid), and no saturation changes in darktable (I did leave color calibration’s default gamut compression at 1).
_DSC0488.NEF from Chrysanthemum flowers
This one had bright highlights. If I clip Y to 1, I get this:
there is a language problem, and that causes the communication issue;
you are ‘artistic’;
you really want to know what those numbers mean;
you are a troll (sorry for my bluntness, I cannot rule out this possibility).
If there is a language barrier: please use an online translator service, or write in a language you know well and that can be translated online.
If you are ‘artistic’: yes, when working with computers, numbers are everything. You don’t always need to understand their meaning in order to use software (I understand far less than I would like to). Only you can decide whether you are interested or not. If you aren’t, please do not take part in technical discussions, since the outcome does not really interest you, and you are wasting the time of others. If you are interested, please ask more concrete questions; if possible, first check other sources online, as there is a lot written about the basics of colour in blogs, on Wikipedia, etc.
If you are a troll, then there’s nothing I can say. If you do not choose one of the options above, this is the only one remaining.
Ah, I remember the nut; hard to keep most “normal” processing from going cyan. This was one of troy_s’s major concerns about my SSF LUT work, and I never found time to dig into it.
Nice work; the color conversion part of raw processing is probably the least understood. You give a nice step-thru of your approach that lays flat a lot of the generic stuff.
Now, you’ve got me re-interested in studying the overall march from camera response to rendition gamut. Unless one wants to go all-out on ACES, the ICC gonkulator is the context for that…