This post is a collective answer to https://discuss.pixls.us/t/color-calibration-test-and-some-thoughts/ and a few other threads on the same topic.
What is a color space?
https://darktable-org.github.io/dtdocs/special-topics/color-management/color-spaces/
TL;DR: a color space is a shortcut to represent a light spectrum as a linear combination of 3 reference lights: the primaries. That representation was chosen because it is also what the 3 types of cone cells in the retina do. It means we can create color spaces whose primaries match the perceptual ones: the LMS spaces, used for chromatic adaptation.
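To make the LMS idea concrete, here is a minimal numpy sketch of a chromatic adaptation transform built on the published Bradford matrix, one classic choice of "sharpened" LMS-like primaries (darktable's color calibration offers this kind of CAT, among others). The white point coordinates are the standard D65 and D50 values:

```python
import numpy as np

# Bradford matrix: XYZ -> sharpened LMS-like cone space
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

# Standard white points as XYZ (Y normalized to 1)
WP_D65 = np.array([0.95047, 1.00000, 1.08883])
WP_D50 = np.array([0.96422, 1.00000, 0.82521])

def bradford_adapt(XYZ, wp_src=WP_D65, wp_dst=WP_D50):
    """Chromatic adaptation: scale each cone response by the ratio of
    destination to source white point, expressed in the LMS-like space."""
    lms_src = M_BRADFORD @ wp_src
    lms_dst = M_BRADFORD @ wp_dst
    gain = np.diag(lms_dst / lms_src)
    M = np.linalg.inv(M_BRADFORD) @ gain @ M_BRADFORD
    return M @ np.asarray(XYZ)

print(bradford_adapt(WP_D65))  # ~= WP_D50: the D65 white maps onto the D50 white
```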
But not all RGB spaces are created equal, and their primaries, once seen by a human observer, may look more or less saturated. Some primaries may even allow combinations that don’t actually exist in human vision: we call them imaginary colors, and they are tricky because they use valid code values.
This graph shows the visible locus (the colored horseshoe); the triangle overlaid on it is the locus of the ProPhoto RGB space (each corner of the triangle is a primary). Wherever the triangle escapes the horseshoe, the color space can encode imaginary colors with perfectly valid RGB values.
From the triangular geometry of an RGB gamut, you can deduce that no RGB color space will ever perfectly match the visible locus: you either let imaginary colors in or shut visible colors out. Same for Rec 2020:
For reference, the sRGB gamut:
And one camera RGB gamut with a standard color profile (from the Adobe DNG converter), later chromatically adapted to D50 using various methods:
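To make the triangle mismatch concrete, here is a quick numpy check using the published Rec 2020 and sRGB conversion matrices (both spaces share the D65 white point, so no chromatic adaptation is needed between them). A perfectly valid Rec 2020 green simply has no legal sRGB encoding:

```python
import numpy as np

REC2020_TO_XYZ = np.array([
    [0.6369580, 0.1446169, 0.1688810],
    [0.2627002, 0.6779981, 0.0593017],
    [0.0000000, 0.0280727, 1.0609851],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

green_2020 = np.array([0.0, 1.0, 0.0])  # pure Rec 2020 green, valid code values
green_srgb = XYZ_TO_SRGB @ REC2020_TO_XYZ @ green_2020
print(green_srgb)  # ~[-0.59, 1.13, -0.10]: negatives = outside the sRGB triangle
```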
The largest is not the bestest
There is this belief that the largest space is the best. Well, it depends: the best for what? For archival purposes (that is, saving end products or sharing renderings between apps), maybe. But for any kind of color grading (that is, artistic and corrective color modifications), any space that allows imaginary colors invites trouble by letting users push colors out of the gamut. Which is a bit sad if your original colors were in fact in gamut, and will also make gamut mapping at export time much more challenging.
Also, the primaries matter for perceptual consistency. For example, here is a hue gradient derived from sRGB through HSL:
And a hue gradient derived from a special RGB space, designed for color grading, through Yuv:
See how the secondary colors (yellow and magenta) get almost no range in sRGB, while green sucks up more than 1/4th of the range for itself? Also, see how blue and red look much more saturated than the rest of the gradient in sRGB, while yellow feels brighter? In the color-grading RGB, the saturation is more even and each of the 3 primary and 3 secondary colors gets roughly 1/6th of the range, which is what we want for a perceptually even space. Grading in such a space behaves much more uniformly. (Notice that in both cases the actual gradient is interpolated in linear Rec 709; only the color steps are defined from HSL or Yuv.)
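If you want to verify that unevenness yourself, here is a small Python sketch. It sweeps the HSL hue wheel in even steps and measures the resulting hue angles in Oklab, used here only as a convenient perceptual yardstick (the gradient above uses a Yuv model instead, but any perceptual hue metric exposes the same bunching):

```python
import colorsys, math

def srgb_to_linear(u):
    """Invert the sRGB OETF (linear toe below 0.04045, power above)."""
    return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

def oklab_hue(r, g, b):
    """Hue angle in Oklab (Björn Ottosson's published matrices), in degrees."""
    r, g, b = map(srgb_to_linear, (r, g, b))
    l = 0.4122214708*r + 0.5363325363*g + 0.0514459929*b
    m = 0.2119034982*r + 0.6806995451*g + 0.1073969566*b
    s = 0.0883024619*r + 0.2817188376*g + 0.6299787005*b
    l, m, s = l ** (1/3), m ** (1/3), s ** (1/3)
    a  = 1.9779984951*l - 2.4285922050*m + 0.4505937099*s
    b_ = 0.0259040371*l + 0.7827717662*m - 0.8086757660*s
    return math.degrees(math.atan2(b_, a)) % 360

# Even 30° HSL steps land on very unevenly spaced perceptual hues;
# note how the spacing collapses in the greens.
for h in range(0, 360, 30):
    r, g, b = colorsys.hls_to_rgb(h / 360, 0.5, 1.0)  # max-saturation HSL
    print(f"HSL hue {h:3d}° -> Oklab hue {oklab_hue(r, g, b):6.1f}°")
```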
To close the largest-space sausage party, remember that the vast majority of reflective colors (meaning material colored surfaces reflecting light produced by something else) lie well within sRGB. The very-high-saturation colors are usually colored light sources, that is, directly emitted light pushed through colored filters, or even lasers. That’s why HDR goes hand-in-hand with large gamuts: while SDR aimed at representing reflective materials (the bare minimum to do photography), HDR now tries to represent light sources too.
Gamut clipping is (kind of) a myth
Any properly managed application converts from color space to color space through a Color Management System (CMS), which performs gamut mapping using various strategies: the intents. When converting to a smaller RGB space, one has to make concessions on color accuracy to fit in the destination space. The least bad concession among the computationally reasonable methods, aka the perceptual intent, is to decrease the chroma at the same hue and luminance, meaning we compress all colors toward the white point. This preserves the smoothness of gradients and avoids creating solid color blocks where progressive transitions are expected, which would look fake and would not match the rest of the image.
Since a gamut is actually a volume (so the graphs above are projections onto a plane), here is what it looks like in 3D:
So what we do when gamut mapping is push colors inside the double cone, toward the vertical grey axis, while staying on a horizontal plane (same luminance) and a vertical plane (same hue). The way the gamut compression of darktable’s color calibration does it, all colors get pushed by a step, but the step grows as colors get further away from achromatic. That lets us preserve the saturation gradients while barely affecting the low-chroma colors, which were valid in the first place.
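For the curious, here is the shape of that idea in Python. It is a generic soft-compression sketch, not darktable’s actual formula: chroma (the distance from the achromatic axis) is remapped at constant hue and luminance, leaving low-chroma colors untouched and pulling far-out colors in hardest:

```python
import numpy as np

def compress_chroma(C, C_max, threshold=0.8):
    """Soft-compress chroma toward the achromatic axis at constant hue and
    luminance. Colors below threshold * C_max pass through unchanged; the
    push grows smoothly with the distance from achromatic, and everything
    lands inside C_max. (Illustrative soft-clip, not darktable's curve.)"""
    C = np.asarray(C, dtype=float)
    t = threshold * C_max
    out = C.copy()
    over = C > t
    # Map [t, inf) onto [t, C_max) with a smooth, monotonic rational curve
    x = (C[over] - t) / (C_max - t)
    out[over] = t + (C_max - t) * x / (1.0 + x)
    return out

print(compress_chroma([0.1, 0.5, 0.9, 1.5, 3.0], C_max=1.0))
# -> [0.1, 0.5, 0.867, 0.956, 0.983]: valid colors untouched, outliers pulled in
```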
But… most of you are using sRGB-able screens, and few have Adobe RGB-able ones. In any case, both are much smaller than the camera space. Which means whatever you see on your screen is already the product of some gamut shrinking. If that looks good, there is no reason to get alarmed about gamut at all, since everything seems to hold up nicely. So the gamut-clipping anxiety is mostly a geeky concern from people who understand some of the problem but don’t really know about the solutions already built in to solve it.
2 kinds of color spaces
We have the color spaces tied to a medium, which represent the range of colors physically rendered by that medium. Those spaces are necessarily bounded, meaning the white luminance is defined by the medium itself (backlighting power for a screen, or whiteness and gloss of the paper), so no RGB value can exceed the display peak luminance.
But the black luminance is also defined by the medium (remanent brightness of the LED panel, or ink density for paper), and that luminance is never zero. So RGB values should not fall below that black threshold either, which is handled by black point compensation in some properly managed apps, but is less well known and inconsistently implemented.
Any RGB value below the medium black luminance, without black point adaptation, will be part of a solid black blob on the print. Notice that black point adaptation rescales the whole luminance range uniformly, so people who complain about darker-than-screen prints should blame the lack of black point adaptation in their printer driver first.
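As a sketch of the principle (luminance-only; a real CMS applies this kind of linear rescaling on XYZ, per the ICC black point compensation method), assuming a hypothetical print whose deepest black still reflects 2% of the white luminance:

```python
def black_point_compensate(Y, y_black_dst, y_white_dst=1.0, y_black_src=0.0):
    """Linearly rescale luminance so that source black lands on the medium's
    actual black instead of below it. Luminance-only sketch of the idea
    behind ICC black point compensation."""
    scale = (y_white_dst - y_black_dst) / (y_white_dst - y_black_src)
    return y_black_dst + (Y - y_black_src) * scale

# Deep shadows are lifted above the medium black; white stays put.
for Y in (0.0, 0.01, 0.18, 1.0):
    print(f"{Y:4.2f} -> {black_point_compensate(Y, y_black_dst=0.02):.4f}")
```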
Then we have the reference color spaces: Rec 2020, Adobe RGB, ProPhoto RGB. These are not tied to a particular medium and can be used unbounded. They are only data encodings. Black will usually be encoded at 0, but be careful with that, because it holds no information about the original luminance of “what we call black” in the original scene; it is to be taken as “the darkest value recorded”, to which the retoucher will arbitrarily assign a luminance (as in filmic’s “black relative exposure”) until it looks good®.
Your usual raw processing pipeline goes from camera space (medium-tied, super large) to working space (reference, large but smaller than the camera’s), to output/export space (reference or medium; as large as needed for reference, but usually super small for media). Every conversion needs to gamut-map properly, but the assumptions and the methods differ a bit depending on whether we convert a final product to a medium space or an intermediate working material to a reference space.
Bottom line: gamut clipping exists as a thing, but it will be avoided 90% of the time if you are using a serious application, through gamut mapping. Which means, as a user, you don’t need to have nightmares about it. The only problem that may arise is that colors far away from the gamut will be handled in sub-optimal ways by the gamut mapping, because they push it too far. But such cases should pop up in your face at editing time.
What does “out of gamut” mean anyway?
From a medium space perspective, where white and black are bounded, out of gamut can mean one of three things:
- brighter than the medium white (display peak emission)
- darker than the medium black (display remanence or print maximum density)
- too high chroma at the current luminance (luminance is valid, but the color is too far from achromatic).
The problem being, most gamut alerts don’t differentiate between the 3, so you don’t know whether you need to fix the exposure, the black level, the chroma/saturation, or any combination of the 3.
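Here is an illustrative Python sketch of the distinction a more useful gamut alert would make, assuming a bounded, display-referred linear RGB encoding with Rec 709 luminance weights (hypothetical code, not any existing app’s alert):

```python
import numpy as np

def why_out_of_gamut(rgb_linear, y_black=0.0):
    """Split 'out of gamut' into its three possible causes, for a bounded
    medium space. Illustrative only: real gamut alerts rarely make this
    distinction, which is exactly the problem."""
    rgb = np.asarray(rgb_linear, dtype=float)
    Y = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec 709 luminance
    if Y > 1.0:
        return "brighter than the medium white: fix exposure/highlights"
    if Y < y_black:
        return "darker than the medium black: fix the black level"
    if rgb.min() < y_black or rgb.max() > 1.0:
        return "luminance is valid but chroma is too high: fix saturation"
    return "in gamut"

print(why_out_of_gamut([1.4, 1.2, 1.1]))   # too bright overall
print(why_out_of_gamut([1.2, 0.4, -0.1]))  # valid luminance, excess chroma
print(why_out_of_gamut([0.2, 0.5, 0.3]))   # fine
```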
From a reference space perspective, out of gamut only means too high chroma, since we don’t have bounds.
Why gamut-map early?
This merely enforces what we just laid out: ensuring our working color space contains only visible colors. No UV, no IR, no imaginary colors. Because if you manually push saturation/chroma on imaginary colors, you will create problems for later, at export time. Also, we don’t know what saturated UV light looks like in real life compared to desaturated UV, but we know what it looks like in pictures: flat blue blobs at the display’s peak blue emission. Or worse: blue gradients that drift toward cyan through indigo.
But… we don’t need to gamut-map at the beginning of the pipe using the export color space as our gamut volume reference. That’s way too conservative. Also, we would have to sacrifice too many valid colors to salvage 100% of the image in sRGB. Which brings us to the next point…
Don’t turn into a gamut-alert freak
Having 2% of your picture out of gamut (whatever gamut you compute against) is no concern at all. Having 10% of your picture out of gamut isn’t either, as long as your sRGB control monitor shows decent gradients. (I’m just throwing made-up percentages here, please don’t take them literally.)
Having more than 25% of your picture out of gamut, or some colors really far away from achromatic (very high chroma), or color banding/posterization or flat color blobs where there should be gradients is a problem.
But the truth is people have a hard time assessing the qualia of an image and will not spot the visual issues in the frame. So your average social-media sunset degrades from amber-orange to rat-piss yellow and nobody takes offense:
https://www.instagram.com/p/CJWG56VLyBi/ (Just a random example picked from the latest results to a #sunset query on Instagram)
Meanwhile, people crank up the gamut compression like crazy, staring at gamut alerts that don’t say whether it’s luminance or chroma clipping, to fix issues that don’t exist, since the export CMS should nicely take care of 90% of them.
Also, gamut alerts don’t show gamut clipping. They show out-of-gamut pixels, which may or may not get clipped at export, depending on how clever your app is at managing color.
In what space to show histograms?
Any. It doesn’t matter. What we look for, when looking at histograms, is the spread. If you really want to see middle grey in the middle of the graph, then choose a space whose OETF is close to a 1/2.4 power (“gamma 2.4”), like sRGB. Although the sRGB OETF has a linear slope in the low-lights and a power above, so it messes up the spread a bit.
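You can check that claim in a few lines of Python: encoding 18% middle grey with the sRGB OETF lands it close to the middle of the range:

```python
def srgb_oetf(u):
    """sRGB encoding: linear toe below 0.0031308, a 1/2.4 power above."""
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

print(srgb_oetf(0.18))  # ~0.46: middle grey sits near mid-histogram
```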
But the truth is, if you wonder in what space to show a histogram, you most likely don’t know how to read it, so don’t bother at all: hide it and look at the picture instead.
Especially on open-source forums, where the crowd tends to be geeky by nature, people overthink scopes and numbers so much that they end up retouching numbers instead of photos.
Conclusion
Stay analytical, assess what you see and criticize it. But don’t overthink gamut issues, or make them up based on theoretical concerns you half-understand.
Ensure your apps are properly color-managed, meaning they do gamut mapping (perceptual or relative colorimetric intent) and black point compensation when converting to other spaces (BTW, darktable doesn’t do black point compensation).
Check your apps’ assumptions: some drivers have built-in LUTs for the perceptual intent that expect input images saved as sRGB or Adobe RGB, so exporting directly from your raw editor to printer space may result in undefined states (please harass your printer/driver vendor to provide that kind of information, and tell them black boxes are not cool until they get it).