Color calibration test and some thoughts

Just wait until consumer HDR screens become ubiquitous, and you will thank me. Not having to re-edit all your images for HDR will save you a lot of time. But right now it feels like an unnecessary complication, I get it.


I won’t wait for that and will thank you right now :wink:

This is where I kind of halt and get confused: if the first goal is to have a scene-referred edit, not tied to any output device, what determines how far I should bring colors back into gamut? In other words, how could a “wrong” trade-off made in the scene reconstruction phase (the one aiming to produce the master) negatively affect later edits, later in the pipe and/or later in time, as in the example you mentioned? By this reasoning, I could be tempted to push gamut compression all the way up, to save a master as free as possible from gamut issues, so that further creative/output modules, now or in the future, won’t suffer from a bad decision made upstream.
It’s still difficult for me to think of editing an image without the immediate feedback of a medium, or, in other words, without introducing some kind of output bias into the scene-referred settings, as seems to have happened in my edit above.

RawTherapee, exported as RTv4 Rec2020, and then darktable
DSC_7355.nef.pp3 (14.4 KB) DSC_7355.nef-1.tif.xmp (19.5 KB)



Little late to the party, waiting for the batch process of today’s Christmas photos to conclude.

With rawproc:

Used my Nikon D7000 SSF profile, made with the rawtoaces data; it seems to hold the hues true in the blue shadows.

My D7000 is in the corner, staring at the image, and muttering, “Pick me up, you fool!!!”

Edit: almost forgot to point out - what you’re looking at here is just one color transform, camera profile -> sRGB at output. No intermediate working space…


Holy smokes! Making those SSF profiles really paid off.

Gamut compression in color calibration aims at ensuring all RGB values are contained within the visible spectrum, hence within the whole surface of the horseshoe:

That’s because cameras tend to record a bit of UV and make it pose as blue, plus color profiles and chromatic adaptation may push colors out of the visible spectrum. So we clean up after the CAT. And, as proxies for the visible spectrum, the closest we have are Rec2020 and ProPhoto RGB.

If every pixel lies in the visible spectrum, then we can work. If not, slider-pushing on things like saturation will only make it worse, and the gamut-mapping at output becomes a fiesta of randomness.


This is unexpected…
unbreak color profile, rgb levels and rgb curve without any gamut compression

DSC_7355.nef.xmp (8.7 KB)


Yes, it’s expected. You apply a channel-wise non-linear brightening, which is known to desaturate. Change the concavity of the curve, and now you darken and saturate. Hence the color preservation modes in the non-linear modules.
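A toy numeric illustration of that effect (hypothetical values and helper names, not darktable’s code):

```python
def saturation(rgb):
    """HSV-style saturation: (max - min) / max."""
    mx = max(rgb)
    return (mx - min(rgb)) / mx if mx > 0 else 0.0

def apply_curve(rgb, gamma):
    """Channel-wise power curve: gamma < 1 brightens (concave),
    gamma > 1 darkens (convex)."""
    return [c ** gamma for c in rgb]

color = [0.8, 0.4, 0.2]  # a linear RGB triplet
s0 = saturation(color)
s_bright = saturation(apply_curve(color, 0.5))  # brightening -> lower saturation
s_dark = saturation(apply_curve(color, 2.0))    # darkening -> higher saturation
assert s_bright < s0 < s_dark
```

The concave curve lifts the small channels proportionally more than the large one, pulling the pixel toward grey; the convex curve does the opposite.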

Problem is the saturation changes don’t happen at constant hue.


@aurelienpierre, I think an article or video about using gamut compression and soft-proofing, gamut check would be really helpful.
E.g. your video seems to suggest that one should push gamut compression to bring back all colours into the gamut. Also, one can see at 35:45 that the soft-proof profile is linear Rec2020, and the histogram is AdobeRGB (which, if I remember correctly, you mention is a close match for your display). Above, in one of the comments, you advise to

Change softproof profile and histogram profile to Rec2020 PQ or HLG.

I’m sure each has its uses, but when to use which is a real mystery to me.

It is my understanding that at output colours are pretty much ‘truncated to the gamut’, meaning that if I have two different areas in the photo with the same ‘colour hue’ but different ‘saturation’, they may well be mapped to the same sRGB values (on the boundary of the sRGB gamut) in the output, leading to a loss of detail. So shouldn’t we care about that?
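That collapse can be sketched with a naive hard clip (hypothetical values):

```python
def clip(rgb):
    """Naive per-channel gamut clipping to [0, 1]."""
    return [min(max(c, 0.0), 1.0) for c in rgb]

# two out-of-gamut colors with different saturation...
a = [1.2, 0.5, 0.3]
b = [1.05, 0.5, 0.3]
# ...collapse to the same clipped value, losing the distinction
assert clip(a) == clip(b) == [1.0, 0.5, 0.3]
```

A perceptual gamut *mapping* would instead compress the two values to different in-gamut points, which is the scenario discussed below.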

I’m just taking family snapshots, so pinpoint colour is not important for me, really, but I still find the question itself interesting.


Me too. Eg this idea of editing freely to begin with, to future-proof your work, then tailoring to the actual output medium at the end. Suppose in a few years I have an HDR screen and I pull up an old edit which needed a fair bit of work to get the sky ok. I can’t see it being anywhere near optimal for the new screen. When I originally edited it, I could only see what my sRGB (or AdobeRGB) screen allowed, so it could never have been subtle enough for HDR surely?

This doesn’t necessarily happen and there is “perceptual” and “relative colorimetric” processing which gives choices over how the output is fitted to the gamut. The Cambridge colour site and others explain this (and a third process).


The pipeline color space is the human visible spectrum. Always. The closest RGB spaces to that are Rec2020 (which is a bit smaller) and ProPhoto RGB (which is a bit wider, aka has some imaginary colors).

That’s all we care about when I talk about pipeline gamut. The visible gamut is the goal/principle, Rec2020 is the closest technological tool to that goal. Forget about Adobe RGB, sRGB and the like. These are output media; that’s for export, not for retouching.

Then, the histogram space doesn’t really matter. Just choose something that puts middle grey reasonably in the middle for legibility (that is, a non-linear color space). Anyway, it’s only a scope.

No, we don’t care. The only concern is if a color gradient loses its gradient and becomes a flat blob; that’s ugly. But this happens when gamut is clipped, aka the whole surface gets “rounded” to the same color. With gamut mapping, sure, we will have to make some sacrifices and lose saturation, but gradients should remain gradients, although less saturated, so the image will still look believable. And 2 pixels with the same hue but different saturation will still have different saturation in sRGB if they are mapped.

Just cool down. For the past 5 years or so that I have been on open-source photography forums, people are way too concerned about gamut on a theoretical level while still producing the infamous rat-piss yellow sunsets (aka unable to see the actual gamut escapes that are right in their face while fantasizing about manual corrections they should apply to take care of gamut issues that don’t happen). Truth is, gamut is handled through intents in a semi-clever way at output, and anyway your screen most likely displays sRGB, so what you see on screen is already gamut-clipped and/or gamut-mapped. If it looks good, then don’t worry.

Again, we handle gamut mapping in color calibration only to clean up after the chromatic adaptation, because we know it will push colors out of the visible gamut. It’s only intended to not make things worse, and to start retouching with legitimate colors in the pipe.


I’m a hobby photographer annoyed by prints which look different from what I see on the screen.
That’s the reason I started recently to dig into color management.
Up to now I understand what input and output profiles are, what a working profile does, and I have an idea of what softproof is used for. I even calibrated my display.
What’s still a mystery to me is the thing about all the different clipping indications. We have gamut check and softproof, for which I can choose different profiles for softproof, display, preview display and histogram.

When I switch on the clipping indication (preview mode: full gamut) and then toggle between different profiles for the histogram, the indication differs, as it also does for different softproof and display profiles.
To be honest I’m a bit lost.
If it doesn’t matter which settings lead to proper clipping indication, why does changing them alter the indication itself?

Don’t get me wrong, I think it’s great to have all these options.
I would like to understand what I’m doing, and darktable, as difficult as it is, helps with this. But until the tipping point of understanding, it’s a pain…


exposure compensation -1EV
highlight reconstruction off
tone curve film-like
local contrast


Can you recommend settings for dummies like me? I don’t work in a lab, but in our living room; sometimes, I develop my photos at night, beside a warm white desk light; sometimes during the day, by the window, and outside it may be sunny, or it may be raining. Therefore, simply trusting my eyes may not lead to consistent results.

  • I’d think that at least the ‘clipping indication’ should be useful. The defaults are, I think, ‘full gamut’, -12.69 EV (8-bit sRGB), 99.99%.
  • For the histogram (as I never print and just post online), should sRGB be good choice?
  • Or should it be one of Rec2020 PQ and HLG? And which one? They are not even remotely similar:

Based on those screenshots, will my highlights be clipped in output? Will my blacks be grey, or have large black spots? Do I make full use of sRGB, which looks like this (not too far off from HLG Rec2020):

  • Should I just ignore gamut checking?
    – Or only use it when I adjust OOG colours in CAT?
  • Which setting (softproof or histogram) does gamut checking use? (Update: experimenting shows it uses softproof.)
  • Does the clipping indicator use the same one? (It seems to use a weird combination of display + histogram, see the different sets of pixels highlighted by the clipping indicator.)


You seem to be hitting this issue:
i.e. the display profile affects the clipping indications and histogram even if it shouldn’t.

I’ve found the ISO 12646 colour assessment conditions (the lightbulb icon in the same toolbar as the clipping and softproofing toggles) pretty useful in this kind of a situation for better judging the middle grey exposure and the highlights. Also I usually adjust the (laptop) display brightness to match the environment.


Thanks for the explanation. I’ve realized only today that it is possible to brighten the image with the levels tool before the profile conversion, and it works too.

DSC_7355.nef.xmp (9.4 KB)


But why are you hurting yourself like that? It does not “work”: you are merely exploiting a drawback of some algorithm to desaturate in an uncontrolled way, and the profile is designed to work on linear RGB input, while you are now feeding it non-linear RGB, so it’s basically wrong.

People at the ACES Central forum have developed a fast RGB gamut compression
Really great stuff :blush:

I’ve implemented a simplified version for G’mic.
There are only two parameters: gamut, the strength of the compression, and threshold; below this value the distance (“saturation”) is untouched.

the steps are:
1) find the achromatic value, same as Value in HSV
2) find the distance, similar to Saturation in HSV
3) apply tone mapping to the “distance”
4) convert back to RGB

Here it’s explained very well

This is the G’mic code; however, it works in PhotoFlow but not in GIMP :thinking:

-fill compressed_distance = (distance*(1 + distance/(gamut*gamut)))/(1 + distance);
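For readers who don’t speak G’mic, here is a rough Python sketch of the same four steps (helper and parameter names are mine, and the details differ from the actual ACES implementation, which compresses each axis separately):

```python
def compress(rgb, gamut=1.2, threshold=0.8):
    """Simplified distance-based gamut compression (a sketch, not the ACES code)."""
    # 1) achromatic value, same as Value in HSV
    ach = max(rgb)
    if ach <= 0:
        return list(rgb)
    # 2) distance from the achromatic axis, similar to Saturation in HSV;
    #    it exceeds 1 only when a channel is negative (out of gamut)
    dist = (ach - min(rgb)) / ach
    if dist <= threshold:
        return list(rgb)
    # 3) tone-map the distance (formula from the G'mic snippet above);
    #    a distance equal to `gamut` maps exactly onto 1
    new_dist = (dist * (1 + dist / (gamut * gamut))) / (1 + dist)
    # 4) back to RGB: shrink every channel's offset from the achromatic value
    scale = new_dist / dist
    return [ach - (ach - c) * scale for c in rgb]
```

For example, with gamut = 1.2, a pixel like [1.0, -0.2, 0.5] (distance 1.2) lands back on the gamut boundary, while in-gamut pixels below the threshold pass through untouched.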

DSC_7355.pfi (22.7 KB)


simplified rgb gamut compression


How is blue degrading to magenta “great”?


A lot; at least, with the idea behind it we could:

a) use the formula sat = (max - min)/max as a mask for detecting OOG colors in the current RGB working space; only values higher than 1 are outside the gamut.
We could even clamp this mask to 0-1, and it would be useful to avoid global desaturation where it’s not needed.

b) restore luminance and hue from the original, non-desaturated image
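One way to read idea a) in code (hypothetical helper names, and reading the clamped mask as the amount past the gamut boundary):

```python
def sat(rgb):
    """(max - min) / max: with non-negative channels this never exceeds 1;
    a negative channel (out of the working gamut) pushes it above 1."""
    mx = max(rgb)
    return (mx - min(rgb)) / mx if mx > 0 else 0.0

def oog_amount(rgb):
    """How far past the gamut boundary a pixel is, clamped to 0-1, so a
    desaturation could be masked to act only where it is needed."""
    return min(max(sat(rgb) - 1.0, 0.0), 1.0)
```

In-gamut pixels get a mask value of 0 and are left alone; only pixels with negative channels receive any desaturation.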

It’s easy to keep the “RGB hue” locked: just use the same strategy as the film-like curve in RawTherapee and Lightroom.
Basically, apply the tone-mapping function to the max channel (in this special case the minimum channel is always normalized to 0) and linearly interpolate the middle value with the formula: middle = (middle / oldmax) * newmax

In this way the new middle value keeps the same relative distance from the max and min channels (same hue).
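A minimal sketch of that strategy, assuming the minimum channel has already been normalized to 0 (names and the Reinhard stand-in curve are mine, not RawTherapee’s code):

```python
def tonemap(x):
    """Any tone-mapping curve would do; a simple Reinhard stand-in."""
    return x / (1.0 + x)

def hue_locked_tonemap(rgb):
    """Tone-map the max channel only, then rescale the other channels so each
    keeps its ratio to the max, i.e. middle = (middle / oldmax) * newmax."""
    mx = max(rgb)
    if mx <= 0:
        return list(rgb)
    new_mx = tonemap(mx)
    return [(c / mx) * new_mx for c in rgb]
```

Since all channel ratios are preserved and the minimum stays at 0, the relative positions of the three channels, and hence the RGB “hue”, are unchanged.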