ACES AP0: out-of-gamut colors

For some years now I’ve been trying to understand more about color management and color profiles…
Yesterday, while working on a flower image with a neutral profile, I changed the advised working profile to the other options, ACES AP1 and ACES AP0.
I set the highlight clipping threshold to 255, and with ACES AP0, which should comprise all colors, I still get a clipping indicator on strong reds.

If ACES AP0 contains all human-visible colors …

what does that mean??
Are these imaginary colors captured by the camera?

You click on a yellow spot and get a blue sample point on your chromaticity diagram?

you can get crazy values when using a 3x3 input colour matrix profile. input colour transforms cannot be expressed exactly as a 3x3 transform; these are always only fitted to the most convenient/important/moderately saturated colours. this may result in weird artifacts closer to the gamut boundaries, including pushing perfectly valid points out into the nirvana.
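To illustrate the point, here is a minimal sketch with a made-up 3x3 camera-RGB-to-XYZ matrix (the numbers are purely illustrative, not from any real camera): a saturated input can land on negative XYZ components and hence outside the CIE horseshoe.

```python
# Hypothetical camera-RGB -> XYZ 3x3 matrix (made-up numbers, loosely
# shaped like a real fitted matrix). Because such a fit is only good for
# moderate colours, a saturated input can land outside the CIE horseshoe.
M = [  # rows: X, Y, Z responses to camera R, G, B (illustrative only)
    [0.70,  0.15,  0.10],
    [0.30,  0.90, -0.20],
    [-0.05, -0.10, 1.20],
]

def camera_to_xyz(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

X, Y, Z = camera_to_xyz([0.0, 0.0, 1.0])  # fully saturated camera blue
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
print(X, Y, Z)  # Y is negative: no real colour has negative luminance
print(x, y)     # y < 0: the point sits below the CIE horseshoe
```

Any pixel whose matrix result has a negative component like this is "non-data": the transform invented it, it was never a visible colour.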

not sure i understand exactly the colour pipeline you’re using there though.

I don’t know if the auto-matched profile is a 3x3 matrix


What do you need to know?? :neutral_face:

The fact that the working profile is able to show all the visible colors doesn’t mean an image won’t ever be overexposed.

It happens that yellow flowers are easily overexposed in the red channel. I think this is mostly because the camera histogram leads you to believe that the exposure is perfect, when in fact it’s OK for the green channels but overexposed for the red, the blue, or both. That is, with the exposure set on camera, the photosites below the red filters of the color filter array receive the highest amount of light they are built to register, and can’t record any higher level no matter how much more light they receive. Thus, the red channel gets clipped. There’s no profile that can correct that.
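A minimal simulation of that situation (all numbers made up): a yellow flower is bright in red and green but dark in blue, and each photosite records only up to a fixed full-well level.

```python
# Minimal sketch (made-up numbers): the sensor records each photosite up
# to a fixed full-scale level; anything above is clipped, and no profile
# applied later can recover the lost data.
FULL_SCALE = 65535  # 16-bit raw ceiling (illustrative)

scene_exposure = {"R": 1.25, "G": 0.95, "B": 0.30}  # relative to full scale

recorded = {ch: min(round(v * FULL_SCALE), FULL_SCALE)
            for ch, v in scene_exposure.items()}

clipped = [ch for ch, v in recorded.items() if v >= FULL_SCALE]
print(recorded)  # green and blue look fine...
print(clipped)   # ...but the red channel hit the ceiling: data is lost
```

A camera histogram built mainly from the green channels would look perfectly exposed here, even though red is already at the ceiling.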

On the other hand, you can get different results with a different demosaicing algorithm (instead of AMaZE, try RCD or any other to see if you still get clipped reds). Or you can try reducing the exposure in RT; maybe the red channel is only clipped a little bit, so you won’t lose much information.

:pensive: I got it wrong again … I confused overexposed with out of gamut.

What I meant was: how can I view out-of-gamut colors with respect to the working profile in RT? Is it possible?
I see you can view OOG with respect to the printer, output, and monitor profiles, but not the working profile.

If you use ACES AP0 as a working profile and can see out of gamut colors, just please, please tell me how you can do that! :sweat_smile: :wink:

P.S.: taking into account that ACES AP0 can show all the visible colors :smiley:

On the other hand, if you set a smaller-gamut working profile (e.g., Rec. 2020), to my knowledge what you will see are clipped channels, but I don’t know of any setting that will blink when out of gamut colors are present.

The only way I’ve found is lowering the exposure: if the histogram has an abrupt ending, then the sensor had clipped channels. If you get a darker image but a nice, smooth right-hand ending on the histogram, then you had out of gamut colors.

Anyway, take those explanations with a grain of salt, as I’m not really sure about them.

@dafrasaga Curious how you have arrived at the chromaticity diagram and the placement of the sample point. To my understanding AP0 is for transport and AP1 is for editing, but people have disputed that. This is mainly because there is no point introducing false colours or non-data, which you would end up discarding.

Nobody will prevent you from using AP0 as your working profile: AFAIK, the main problem has always been quantization errors. If you have a large number of possible colors and try to encode them into a small, fixed set of digital numbers (such as 16-bit integers, or worse, 8-bit integers), there’s a chance of missing the correct conversion to a digital value. And if you do this lots of times, as in developing an image, you will end up with lots of artifacts.
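The quantization problem can be sketched like this: the same edit (gain up, then back down) applied with an 8-bit integer buffer between the steps versus a pure floating-point pipeline. Each integer round-trip can shift a value by up to half a code, and repeated edits accumulate such shifts into visible artifacts.

```python
# Sketch of accumulated quantization error: store intermediate results
# in an 8-bit code between editing steps vs. keeping them as floats.
def to_8bit_and_back(v):
    """Store a [0,1] value in an 8-bit code, then decode it again."""
    code = round(min(max(v, 0.0), 1.0) * 255)
    return code / 255

v_int = v_float = 0.3
for gain in (2.0, 0.5):                 # +1 EV, then -1 EV
    v_float *= gain                     # float engine: exact round trip here
    v_int = to_8bit_and_back(v_int * gain)

print(v_float)  # back to 0.3
print(v_int)    # not quite 0.3: the 8-bit pipeline lost a little
```

In a huge space like AP0 the same 256 (or 65536) codes are stretched over a much larger range of colors, so each of these quantization steps is correspondingly bigger.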

The fact is that nowadays at least the main raw developers work with 32-bit floating point engines, so the chance of missing the conversion is negligible.

A good amount of the possible coded colors simply won’t be visible colors (they are also called non-real, imaginary, …). Those are the values that fall outside the CIE horseshoe of colors.

On the other hand, while editing the image you won’t have to worry about out of gamut colors, because AP0 contains them all (there won’t be any colors outside the visible gamut).

And of course, it would be nonsense to save the final image in any of those huge color spaces, simply because there’s no device capable of outputting such a range of colors.

About the chromaticity diagram shown, it’s most probably just one of those images you find on the Internet.

Not quite. The larger the space, the less accurate the colours would be at the fringes. This applies to colours within the chromaticity horseshoe, so anything outside would only be worse.

I find that out of gamut and out of range results happen during colour space conversions not mainly because of quantization but because of what @hanatos implied. Math will inevitably cause them to happen and the common way of dealing with it is to clip those colours. Of course, a few of our devs here have explored recovering them, which is no easy task because we can only guess what the colours could be.

I would love to read some references that make you say that. My maths doesn’t work that way, so most probably I will have to update it.

I will defer to the pros. My mind has been cloudy lately. Feel free to start a deep dive thread or at least ask people who are in the know. :stuck_out_tongue_closed_eyes:


xyY is a projection of XYZ, which models human vision. The three channels of XYZ have spectral responses (how much light energy at each frequency creates the same stimulus). The camera has (I suppose) three coloured (RGB) filters over the sensels, and these also have spectral responses, which are not the same as those of XYZ.

An accurate mapping from camera RGB to XYZ would be complex [*]. So we use a simple 3x3 matrix mapping that is not accurate, but is optimised to give reasonably accurate results for common (unsaturated) colours, at the expense of greater inaccuracy in rarer colours, and these tend to be the saturated colours.

They can be so inaccurate that they are outside the CIE horseshoe, and even outside AP0 (for example, because some results have x<0).

In the short term, we can clip values, or do some other transformation to pull colours inside the horseshoe.
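One such transformation can be sketched as follows. This is a toy approach, not what any particular program actually does: a hypothetical `pull_toward_white` helper that blends an out-of-range XYZ value toward the white point just far enough that no component stays negative.

```python
# Toy "pull inside" transformation: desaturate toward white until no
# XYZ component is negative any more. Illustrative only.
WHITE = (0.9505, 1.0, 1.0891)  # approximate D65 white point

def pull_toward_white(xyz, white=WHITE):
    t = 0.0
    for c, w in zip(xyz, white):
        if c < 0:                      # fraction of white needed to lift c to 0
            t = max(t, -c / (w - c))
    return tuple((1 - t) * c + t * w for c, w in zip(xyz, white))

bad = (0.10, -0.20, 1.20)              # negative Y: not a real colour
fixed = pull_toward_white(bad)
print(fixed)                           # all components now >= 0
```

Clipping the negative component to zero would be even simpler, but desaturating like this keeps the hue shift smaller; real gamut-mapping methods are considerably more sophisticated.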

In the long term, we need better cameraRGB -> XYZ transformations.

[*] EDIT to add: it may not even be possible, because two different spectrums may look the same colour to humans but different colours to a camera, or different to humans but the same to a camera. But even if a perfect mapping is not possible, I’m sure we can do better than we do currently.

1 Like

This might be worth a read:

To date, I’ve only played with matrix profiles in dcamprof, but the matrix shaper LUT and gamut compression pique my interest…

re: anything to read. i always point people to pascal’s thesis: he laid out the problem quite clearly i think. see for instance p26 what the kodak gamut looks like if fitting a 3x3 to the IT8 patches.

re: impossible: it is always possible and well defined to go from a spectrum -> XYZ coordinate. the other way around not so much; luckily that’s not what we’ll need every day for photography (but there are many, also fast, methods to do this by now).

google use a 2D radial basis function for this purpose, see appendix C.3 in the supplemental material to [Liba et al. 2019]. this is similar in spirit to the colour checker lut module we use in darktable, only that they normalise out brightness and explicitly use it for ground truth calibration, not to emulate jpg styles.

Hi All,
:cry: now we are going beyond my knowledge :grin:

RT doesn’t have a direct OOG indicator for the working profile, but we can create our own ACES AP0 profile with the ICC Profile Creator, or choose a built-in one such as RTv2,4_ACES-AP0 as the output profile, leave the printer profile unset, and enable soft-proofing and the out-of-gamut warning.


I think it works… doesn’t it?

with ACES AP0, with “Clip out-of-gamut colors” enabled in the Exposure tab

with ACES AP0, with “Clip out-of-gamut colors” NOT enabled in the Exposure tab

with ACES AP1

therefore there are some colors outside the AP0 gamut, which should comprise all visible colors :neutral_face:

It was from the Internet, for a gamut comparison between the visible colors and AP0 :wink:

This statement raises a question in my mind…

Out there, around us, there are a lot of colors, visible and not (think of the visible spectrum versus infrared or ultraviolet)… these are filtered by the RGB filters on the Bayer sensor, so the captured photons belong to the visible spectrum… am I right? :neutral_face: :roll_eyes:
Hence, if afterwards I end up with colors outside the CIE chromaticity diagram, it’s because of the conversions and nothing else… OK??

I think you have found your workaround :smiley:

To my knowledge that’s partly correct.

I was always talking about working profiles, with no conversions in the middle, but just working with color spaces and pixel values.

As I said my maths are pretty simple, and each pixel is just a set of 3 numbers, one for each primary color. If some algorithm changes those values, I end up with a pixel with 3 new values, that’s it.

Then it happens that those 3 values should represent a visible color, and here the problems start, because if those values are outside the boundaries of the chosen color space, then that color doesn’t exist in that color space. But what happens if one tool throws a color outside the gamut and the next one brings it back inside? If the pixel gets clipped after the first tool, then the next one can’t bring back information (details).

To me, the fact that a pixel is out of gamut while in the working space is not a problem. That’s just maths. And when the image gets converted into the output color space, then all the unreal or clipped colors will be discarded, but not before the full processing has been finished.

In this sense, if I choose ACES AP0 as my working profile, then while I’m within the working profile, even the part of the color space holding colors that are not real to human eyes is still useful, because inside the gamut of the working profile I can work with pixel values that are mathematically correct, and only when exporting the final image do I need to worry about unreal or out of gamut colors. In the end, if I export an image to sRGB, there will be plenty of ACES AP0 colors that fall outside the sRGB gamut, but that is another problem: it’s the output problem, not a working profile gamut problem.
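That sRGB export problem is easy to demonstrate with the published matrices. Below, the AP0-to-XYZ matrix is from the ACES specification and the XYZ-to-linear-sRGB matrix is the standard D65 one; the D60/D65 white point adaptation between them is deliberately ignored to keep the sketch short, which does not change the conclusion.

```python
# Converting a pure ACES AP0 green to linear sRGB: the result has
# negative channels, i.e. it is outside the sRGB gamut.
def mat_vec(M, v):
    return [sum(m * c for m, c in zip(row, v)) for row in M]

# ACES AP0 RGB -> CIE XYZ (from the ACES specification)
AP0_TO_XYZ = [
    [0.9525523959, 0.0,           0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0,          0.0,           1.0088251844],
]

# CIE XYZ -> linear sRGB (D65; chromatic adaptation omitted for brevity)
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

ap0_green = [0.0, 1.0, 0.0]            # pure AP0 green primary
srgb = mat_vec(XYZ_TO_SRGB, mat_vec(AP0_TO_XYZ, ap0_green))
print(srgb)                            # red and blue channels are negative
```

Those negative channels are exactly the out-of-gamut colors the soft-proofing warning is there to catch.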

But as suggested before, I’m most likely wrong, and have an urgent need to update my maths and fully understand what happens with those 3x3 matrix ICC profiles, with highly saturated colors, and so on. So I have some reading to do…