During the past few days I have finally managed to implement a feature that I had had in the back of my mind for quite a while, but for which I did not have a clear implementation strategy at hand. The feature is automated gamut compression from a wide-gamut colorspace to a smaller one (for example, from ACEScg to sRGB).
Some of you might at this point say that this is redundant, as ICC transforms provide the perceptual and saturation rendering intents that are meant exactly for solving such a problem. This is true in theory, but not always in practice. In fact, such rendering intents are not implemented in matrix-type ICC profiles like the standard sRGB one. If you select the perceptual or saturation intent with a matrix-type destination profile, the color-management engine will silently fall back to relative colorimetric.
Why is this important?
It is often recommended to edit images in a wide-gamut colorspace (like Rec.2020 or ACEScg), because color adjustments are then less likely to generate negative RGB channel values, which are hard to deal with (I am over-simplifying a bit here, but this gives the basic idea).
However, in most cases the image is then finally saved to sRGB for web display, and some of the saturated colors that you generated with your edits might get lost in the conversion.
The solution is to bring such colors back into the sRGB gamut before the sRGB conversion, so that the final image retains as much as possible of the color nuances of the edited output.
There are basically two ways to bring the colors back into gamut:
- keep the luminance constant, and reduce the saturation, or
- scale the luminance so that the saturation can be at least partly preserved.
This can of course be done by hand, but it is a rather complicated and tedious job. Moreover, the size of the destination gamut is often a function of the hue, and therefore the amount of saturation/luminance adjustment needed to bring a given color back into the destination gamut depends on the specific hue of the color being considered.
While this can be achieved to some extent via a combination of saturation and hue masks, the whole procedure can become quite time-consuming.
Hence the idea of automating the whole process, so that each color is adjusted by the right amount depending on its hue and on the gamut of the destination colorspace.
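To make the two options above more concrete, here is a minimal Python sketch (not PhotoFlow's actual code, which works in a cylindrical color model as described in the digression below). It operates directly on linear sRGB triplets, where an out-of-gamut color shows up as channel values outside [0, 1] after the wide-gamut-to-sRGB matrix conversion; the `compress_to_gamut` blend and its `slider` parameter are hypothetical simplifications of the tool's saturation slider.

```python
REC709_LUMA = (0.2126, 0.7152, 0.0722)  # Rec.709/sRGB luminance weights

def luminance(rgb):
    return sum(w * c for w, c in zip(REC709_LUMA, rgb))

def desaturate_to_gamut(rgb):
    """Option 1: keep the luminance constant and scale the chromatic part
    (rgb - Y) toward the achromatic axis until every channel fits [0, 1]."""
    y = min(max(luminance(rgb), 0.0), 1.0)  # the achromatic axis is always in gamut
    s = 1.0  # largest admissible chroma scale factor
    for c in rgb:
        if c > 1.0:
            s = min(s, (1.0 - y) / (c - y))
        elif c < 0.0:
            s = min(s, (0.0 - y) / (c - y))
    return tuple(y + s * (c - y) for c in rgb)

def scale_luminance_to_gamut(rgb):
    """Option 2: preserve the ratios between the channels (and hence the
    saturation) by scaling the whole triplet until the maximum channel is
    <= 1; negative channels cannot be fixed this way and are clipped."""
    k = 1.0 / max(rgb) if max(rgb) > 1.0 else 1.0
    return tuple(max(0.0, c * k) for c in rgb)

def compress_to_gamut(rgb, slider=0.5):
    """Blend of the two strategies: slider = 0 gives pure desaturation,
    slider = 1 gives pure luminance scaling (hypothetical parameter)."""
    a = desaturate_to_gamut(rgb)
    b = scale_luminance_to_gamut(rgb)
    return tuple((1.0 - slider) * ca + slider * cb for ca, cb in zip(a, b))
```

Since the blended result is a per-channel convex combination of two in-gamut colors, it is guaranteed to stay inside [0, 1] for any slider value between 0 and 1.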
Here is an example of what I am talking about. The screenshot shows one of the ACES test images (you can get it from here) being converted to sRGB through a straight relative colorimetric transform:
The input image is an EXR file in the ACES-P1 colorspace, and contains highly saturated yellow and red colors that fall outside of the destination sRGB gamut. Some RGB channels are therefore clipped, resulting in shifted colors and rather ugly bright areas.
Next is the result of the first version of gamut mapping, in which the luminance is preserved and the saturation is reduced as needed:
The details are preserved in this case, but the colors in the bright areas are washed out too much…
One can do the opposite, and scale the luminance instead of the saturation to bring the colors back into gamut:
In this case the luminance is scaled far too much, and while the hue of the colors is preserved, the details are almost completely lost…
In this case, a half-way solution seems quite appropriate, with the saturation slider set to 50%:
The gamut mapping is actually a complement to, and not a replacement for, the tone mapping tool, which should still be used to compress the highlights and re-adjust the overall tone of the image. However, the tone mapping should be applied to image data in a wide-gamut colorspace like ACEScg, while the gamut mapping can be used to further compress the gamut to the colorspace being targeted by the edit. Here is the result of both tools applied together:
The new code is already committed to GitHub, and updated packages are available for download if anyone wants to play with this new feature…
Digression: what is “Hue”?
The tool described here, and the tone mapping tool when the “preserve hue” checkbox is activated, try to preserve the hue from input to output, so that hue shifts are avoided during processing.
To do that, they rely on a color representation in cylindrical coordinates, where the cylinder axis is represented by the luminance, the radius by the chroma, and the angle around the luminance axis by the hue. However, it is important to keep in mind that such a representation is not unique: in particular, the hue angle depends on the actual color model used to compute the coordinates (HSL, HSV, Lab, Jzazbz, IPT, etc.).
Each model has a different degree of accuracy, which can be roughly quantified in terms of how constant the perceived hue stays when moving along the luminance and/or chroma axes at a fixed value of H. In other words, some models introduce perceived hue shifts when, for example, the chroma value is changed at fixed luminance and H. The smaller the shift, the better the model.
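To make the model-dependence of the hue angle concrete, here is a small self-contained Python sketch (again, not PhotoFlow's code) that computes the hue of the same linear-sRGB color in two different cylindrical decompositions. Pure sRGB red sits at 0° by definition in HSV, but at roughly 40° in the Lab-based LCh model; the matrix and white-point constants are the standard sRGB/D65 values.

```python
import math

def srgb_linear_to_xyz(r, g, b):
    # Standard linear sRGB -> XYZ matrix (D65 white point)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    return x, y, z

def xyz_to_lab(x, y, z):
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        if t > (6.0 / 29.0) ** 3:
            return t ** (1.0 / 3.0)
        return t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def lch_hue(r, g, b):
    """Hue angle in degrees in the Lab-based LCh cylindrical model."""
    _L, a, b_ = xyz_to_lab(*srgb_linear_to_xyz(r, g, b))
    return math.degrees(math.atan2(b_, a)) % 360.0

def hsv_hue(r, g, b):
    """Hue angle in degrees in the HSV model, for comparison."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0  # achromatic: hue undefined, return 0 by convention
    d = mx - mn
    if mx == r:
        h = (g - b) / d % 6.0
    elif mx == g:
        h = (b - r) / d + 2.0
    else:
        h = (r - g) / d + 4.0
    return 60.0 * h
```

The two angles cannot be compared directly, of course; the point is simply that a "constant hue" edit means something different in each model, and that the perceptual quality of the result depends on which model is used.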
The code in PhF is currently using Lab/LCh for the calculations, but I am already testing an alternative implementation based on the Jzazbz model, which, according to the literature, should provide better hue linearity. While one should not expect dramatic changes when going from Lab to Jzazbz, it might further improve the color accuracy of the output, so why not…