New feature: automated gamut mapping

During the past few days I have finally managed to implement a feature that I have had in the back of my mind for quite a while, but for which I didn’t have a clear implementation at hand. The feature is automated gamut compression from a wide-gamut colorspace to a smaller one (for example, from ACEScg to sRGB).

Some of you might at this point say that this is redundant, as ICC transforms provide the perceptual and saturation rendering intents that are meant exactly for solving such a problem. This is true in theory, but not always in practice. In fact, such rendering intents are not implemented in matrix-type ICC profiles like the standard sRGB one. If you select the perceptual or saturation intent with a matrix-type destination profile, the color management system will silently fall back to relative colorimetric.

Why is this important?

It is often recommended to edit images in a wide-gamut colorspace (like Rec.2020 or ACEScg), because it is less likely that color adjustments will generate negative RGB channel values, which are hard to deal with (I am over-simplifying a bit here, but this gives the basic idea).
However, in most cases the image is then finally saved to sRGB for web display, and some of the saturated colors that you generated with your edits might get lost in the conversion.
The solution is to bring such colors back into the sRGB gamut before the sRGB conversion, so that the final image retains as much as possible of the color nuances of the edited output.
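To make the problem concrete, here is a minimal Python sketch of the kind of conversion involved. It uses the standard BT.2020 → BT.709 primaries matrix (both linear); this is my own illustration, not PhotoFlow's code, which goes through ICC transforms:

```python
# Converting a saturated Rec.2020 color to linear sRGB can produce
# channel values outside [0, 1]. The matrix below is the standard
# BT.2020 -> BT.709 primary conversion for linear RGB.

M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def rec2020_to_srgb_linear(rgb):
    """Apply the 3x3 primaries matrix to a linear RGB triplet."""
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

# A pure Rec.2020 red lies outside the sRGB gamut:
r, g, b = rec2020_to_srgb_linear((1.0, 0.0, 0.0))
print(r, g, b)  # G and B come out negative -> out of gamut
```

A display cannot reproduce negative channel values, so something has to be done with them before the final conversion.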

There are basically two ways to bring the colors back into gamut:

  • keep the luminance constant, and reduce the saturation, or
  • scale the luminance so that the saturation can be at least partly preserved.
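For illustration, here is a hedged sketch of the two strategies. This is my own simplified take, working on linear sRGB values with Rec.709 luminance weights; PhotoFlow's actual implementation may differ:

```python
# Two simple gamut-compression strategies for linear sRGB values.
# Assumes the luminance of the input color is itself within [0, 1].

def luminance(rgb):
    """Rec.709 luminance of a linear RGB triplet."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def desaturate_into_gamut(rgb):
    """Keep luminance fixed; mix toward gray until all channels fit [0, 1]."""
    y = luminance(rgb)
    t = 0.0  # smallest amount of gray that brings every channel in range
    for c in rgb:
        if c > 1.0 and c > y:
            t = max(t, (c - 1.0) / (c - y))
        elif c < 0.0 and c < y:
            t = max(t, -c / (y - c))
    return tuple((1.0 - t) * c + t * y for c in rgb)

def scale_luminance_into_gamut(rgb):
    """Keep channel ratios (hue/saturation) fixed; darken until max = 1.
    Note: this only handles channels above 1, not negative ones."""
    m = max(rgb)
    return tuple(c / m for c in rgb) if m > 1.0 else rgb

rgb = (1.4, 0.9, -0.1)
print(desaturate_into_gamut(rgb))
print(scale_luminance_into_gamut(rgb))
```

A half-way setting of the saturation slider mentioned below would then correspond to blending the two results.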

This can of course be done by hand, but it is a rather complicated and tedious job. Moreover, the size of the destination gamut is often a function of the hue, and therefore the amount of saturation/luminance adjustment needed to bring a given color back into the destination gamut depends on the specific hue of the color being considered.
While this can to some extent be achieved via a combination of saturation and hue masks, the whole procedure can become quite time-consuming.

Hence the idea to automate the whole process, so that each color is adjusted by the right amount depending on its hue and on the gamut of the destination colorspace.

Here is an example of what I am talking about. The screenshot shows one of the ACES test images (you can get it from here) being converted to sRGB through a straight relative colorimetric transform:

The input image is an EXR file in the ACES AP1 colorspace, and contains highly saturated yellow and red colors that fall outside of the destination sRGB gamut. Some RGB channels are therefore clipped, which results in shifted colors and rather ugly bright areas.

Next, is the result of the first version of gamut mapping, in which the luminance is preserved and the saturation is reduced as needed:

The details are in this case preserved, but the colors in the bright areas are washed out too much…

One can do the opposite, and scale the luminance instead of the saturation to bring the colors back into gamut:

The luminance is in this case scaled far too much, and while the hue of the colors is preserved, details are almost completely lost…

In this case, a half-way solution seems quite appropriate, with the saturation slider set to 50%:

The gamut mapping is actually a complement to, and not a replacement for, the tone mapping tool, which should still be used to compress the highlights and re-adjust the overall tone of the image. However, the tone mapping should be applied to image data in a wide-gamut colorspace like ACEScg, while the gamut mapping can be used to further compress the gamut into the colorspace for which the edit is being targeted. Here is the result of both tools applied together:

The new code is already committed to GitHub, and updated packages are available for download if anyone wants to play with this new feature…


Digression: what is “Hue”?

The tool described here, and the tone mapping tool when the “preserve hue” checkbox is activated, try to preserve the hue from input to output, so that color shifts are avoided in the processing.

To do that, they rely on a color representation in cylindrical coordinates, where the cylinder axis is represented by the luminance, the radius by the chroma, and the angle around the luminance axis by the hue. However, it is important to keep in mind that such a representation is not unique, and that in particular the hue angle depends on the actual color model used to decompose the coordinates (HSL, HSV, Lab, Jzazbz, IPT, etc…).

Each model has a different degree of accuracy, which can to some extent be quantified in terms of how constant the perceived hue remains when moving along the luminance and/or chroma axis at a fixed value of H. In other words, some models introduce perceived hue shifts when, for example, the chroma value is changed at fixed luminance and H. The smaller the shift, the better the model.
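As a concrete illustration, here is the cylindrical decomposition for the Lab case (LCh) in a few lines of Python; the same idea applies to the other models, only with different underlying coordinates:

```python
import math

# LCh decomposition of CIE Lab coordinates: L is the cylinder axis,
# C (chroma) the radius, h (hue) the angle around the axis.

def lab_to_lch(L, a, b):
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, h

def lch_to_lab(L, C, h):
    return L, C * math.cos(math.radians(h)), C * math.sin(math.radians(h))

print(lab_to_lch(50.0, 30.0, 30.0))  # hue comes out as 45 degrees
```

Replacing Lab with Jzazbz or IPT changes the numbers (and the perceived-hue constancy), but not the structure of the decomposition.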

The code in PhF is currently using Lab/LCh for the calculations, but I am already testing an alternative implementation based on the Jzazbz model, which, according to the literature, should provide better hue linearity. While one should not expect dramatic changes when going from Lab to Jzazbz, this might further improve the color accuracy of the output, so why not…


Oh gee, you struck a nerve here. In the shooting I’ve done this past year, my nemesis has been extreme blues in stage lighting. In fact, I’d just decided this morning to dig into the dcamprof gamut compression switches to find a decent camera profile to specifically address that part of the spectrum, but I’m going to digest your missive along with that to incorporate such into my “trade space”. From a workflow standpoint I think I’d rather deal with such on a per-image basis with a rawproc tool than to mess with it earlier in a camera profile… You’re also pulling me to start using PhotoFlow in my workflow…


@Carmelo_DrRaw

if one were to create an sRGB profile based on LUTs (instead of matrices), which should in theory be possible, this problem would be solved within the context of ICC profiles, wouldn’t it?

Hermann-Josef

I tend to agree with that… however, some unnaturally saturated colors are generated by the camera profile, and do not exist in the original scene.

In principle yes, however the gamut mapping tables in ICC profiles are some sort of “black box” that you have to trust, while an analytical approach as a processing tool is 100% transparent.
Moreover, there would be no way to integrate any intermediate color model other than Lab for the actual mapping. Several knowledgeable people already suggested to switch to Jzazbz as a better alternative…


Where do preserve hue, clipping and black point compensation fit into all of this? It seems like there are more and more things that we need to keep track of…

Seems like it is time for a detailed blog post about all that :wink:

  • preserve hue: when you apply a non-linear curve to the RGB channels, you always distort the colors. That is, if you start from RGB = 0.9, 0.2, 0.1 and apply a film-like curve, the R channel will be compressed while the other two stay the same, because they are on the linear section of the curve.
    Let’s say you end up with RGBout = 0.7, 0.2, 0.1
    This is a different color, with a different hue.
    The “preserve hue” option rotates the three RGB coordinates so that the output luminance does not change, but the input hue is restored. The final color will have the same hue as the input one, but the same luminance and chroma as RGB = 0.7, 0.2, 0.1.

Here is an example from the same image as above. The tone mapping output is compared to the underexposed original image, to better see the hue differences.

“preserve hue” disabled:

“preserve hue” enabled:
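For the curious, here is a hedged sketch of this hue-restoration step using Lab/LCh, with the example numbers from above. The sRGB/D65 matrices are the standard published ones; the exact PhF implementation may differ in detail:

```python
import math

# Sketch of "preserve hue": keep the luminance and chroma of the
# curve output, but restore the hue angle of the input color.
# Linear sRGB <-> XYZ (D65) <-> Lab <-> LCh round trip.

M_FWD = [[0.4124, 0.3576, 0.1805],
         [0.2126, 0.7152, 0.0722],
         [0.0193, 0.1192, 0.9505]]
M_INV = [[3.2406, -1.5372, -0.4986],
         [-0.9689, 1.8758, 0.0415],
         [0.0557, -0.2040, 1.0570]]
WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def mat(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def rgb_to_lch(rgb):
    X, Y, Z = mat(M_FWD, rgb)
    def f(t):
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = f(X / WHITE[0]), f(Y / WHITE[1]), f(Z / WHITE[2])
    L, a, b = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360.0

def lch_to_rgb(L, C, h):
    a = C * math.cos(math.radians(h))
    b = C * math.sin(math.radians(h))
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6/29 else 3 * (6/29) ** 2 * (t - 4/29)
    XYZ = (finv(fx) * WHITE[0], finv(fy) * WHITE[1], finv(fz) * WHITE[2])
    return mat(M_INV, XYZ)

def preserve_hue(rgb_in, rgb_out):
    """Keep L and C of the curve output, restore the input hue angle."""
    _, _, h_in = rgb_to_lch(rgb_in)
    L, C, _ = rgb_to_lch(rgb_out)
    return lch_to_rgb(L, C, h_in)

rgb_in = (0.9, 0.2, 0.1)   # original color
rgb_out = (0.7, 0.2, 0.1)  # after a curve compressed the R channel
print(preserve_hue(rgb_in, rgb_out))
```

The corrected color keeps the brightness of the curve output while its hue angle matches the original color again.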

  • clipping: in PhF all colorspace conversions are “unbounded” whenever possible. That is, if you start from a wide-gamut image with all RGB channels within the [0, 1] limits, you might end up with negative or >1 channel values after the conversion to a smaller-gamut colorspace, if some source colors don’t fit.
    The clipping option ensures that <0 and >1 values get clipped to 0 and 1 respectively, as if they were sent to a physical display device (you cannot have a negative light intensity, and you cannot exceed the maximum intensity).

  • black point compensation: this ensures that the black level of the source colorspace is mapped to the black level in the output. That is, RGB=(0,0,0) in the source image is always mapped to RGB=(0,0,0) in the output. This should be kept activated for all normal situations, and is usually disabled only in special cases, like soft-proofing.
    This option could probably be removed from the UI, but since it is part of the options in the ICC transforms I opted to make it available for whoever might need it…
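The clipping behavior described above amounts to a simple per-channel clamp; a minimal sketch:

```python
def clip_channels(rgb):
    """Clamp each channel to [0, 1], as a physical display would."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

print(clip_channels((1.2, 0.5, -0.1)))  # -> (1.0, 0.5, 0.0)
```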

To summarize, here is what I suggest doing if you have a high-contrast image and you want to prepare it for web display:

  1. convert the image to a suitable wide-gamut linear colorspace. Rec.2020 and ACEScg are two good choices.
  2. adjust the exposure so that mid-gray is at the right brightness level. This will probably push the highlights above the display limit, but we will recover them in the next step.
  3. apply your preferred tone mapping method. Blender’s Filmic has no option for hue protection, and its output is directly in the sRGB colorspace. My own tone mapping generates an output image in the same colorspace as the input, and provides the “hue preservation” option for better color matching.
  4. add a colorspace conversion and select your target colorspace (for example, sRGB). Activate the channel clipping and use the “gamut warning” option to see how large the out-of-gamut areas are. If they are significant, de-activate the gamut warning and activate the “gamut mapping” option. Move the saturation slider to the right if the highlights become too washed out.
  5. export your image in sRGB colorspace

In the future I will add the possibility to apply step 4 directly from the image export dialog, with a real-time preview of the colorspace conversion result. For the moment, this has to be done as part of the image editing process…

Hope this helps!


Yes, I was wondering about their roles in a standard workflow.