About choosing the right working profile

I’m answering here a question raised in another thread, where it was off-topic:


Short answer: Not too much. :wink:

You could choose a large working profile like ProPhoto, use software with a 32-bit floating-point rendering engine (RT, dt, …), set the Relative Colorimetric rendering intent, and be gentle while editing your images. Most probably you will get what you need without worrying about out-of-gamut (OOG) colors.

(Really) long answer:

Let’s start by acknowledging I’m just a plain user, and I’m reaching my grey zones, where I think I know the answers but I may be wrong.

A lot has been said about color management, even on this very forum, but I don’t think it hurts to explain it again from a different perspective: what a user expects, and what the user gets from software.

Let’s start by saying that working with digital colors is just working with numbers, no more, no less. Numbers don’t have limits, unless you tag them (real numbers, decimal numbers, negative numbers, …), so a user would expect that multiplying color values by a ridiculous number will simply produce another color. Maths allow this, but sadly the truth is that you won’t get what you expect.

The root of the whole color management problem is that we (users) expect to easily get what we think is a straightforward transformation. Far from the hard truth: our eye-brain system performs complex psychophysical transformations that help us relate to the world we live in, altering the raw data our eyes receive so that we get the information we need. For example, we have higher sensitivity to lightness than to color, so we can still see at dusk. And we have higher sensitivity to yellowish-green (although I really don’t know why).

On top of that, we can only see a certain range of colors: nothing beyond violet (ultraviolet), nothing below red (infrared). Scientists have plotted the colors we see onto a 2D graph to ease the comprehension of color theory and the mathematical transformation of real colors: the CIE xy chromaticity diagram (as you can guess, this quick explanation is really much more scientifically complex).

So we have a chart where colors are located with 2 values (x and y), and we have maths: where’s the problem, then? The problem is that to our eyes sometimes 2+2 is not 4, so the algorithms have to cope with that. The problem is now a bit more complex.
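To get a feel for where that chart comes from: a color’s xy chromaticity coordinates are just its CIE XYZ tristimulus values with the luminance divided out. A minimal sketch (the D65 white point values below are standard reference numbers, not from this thread):

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the xy chromaticity plane.

    Chromaticity discards luminance: only the proportions between X, Y
    and Z matter, which is why the diagram can be drawn in 2D."""
    s = X + Y + Z
    return X / s, Y / s

# The D65 white point (X, Y, Z ~ 0.9505, 1.0, 1.0890) lands near
# x = 0.3127, y = 0.3290, the familiar white dot on the diagram.
x, y = xyz_to_xy(0.9505, 1.0, 1.0890)
```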

Is that all? Well, no. Now we start dealing with devices: sensors, displays and printers.

In an ideal world camera sensors would react to light exactly as our eye-brain complex does. Far from it: the different photosites aren’t sensitive to the same range of colors as our retinal cells, nor with the same response to lightness. And the sensor captures light in a linear fashion, while the eye-brain system transforms the captured light through a sort of gamma curve, mainly to enhance shadows without losing much information from highlights.
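That linear-versus-gamma difference is easy to see in numbers. Here is a minimal sketch using a plain power-law curve (a deliberate simplification; real transfer functions like sRGB’s also have a linear toe):

```python
def encode_gamma(linear, gamma=2.2):
    """Map a linear-light value in [0, 1] through a simple power-law
    'gamma' curve, a rough stand-in for perceptual encoding.

    Small linear values get a much larger share of the encoded range,
    mimicking the eye's higher sensitivity in the shadows."""
    return linear ** (1.0 / gamma)

# A deep shadow at 1% of full linear brightness ends up at about 12%
# of the encoded range: codes are spent where the eye needs them.
shadow = encode_gamma(0.01)
```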

And what our eyes really do is scan small areas of our visual field and send the partial snapshots to our brain, which joins them all together into a sort of HDR image. So matching these behaviors in devices is really complicated.

Now, we have camera sensors that are able to capture a certain range of colors, usually represented in a CIE diagram as a triangle. Then we have displays that have to show images so we can see them, and they too are only able to show a certain range of colors (another triangle, or gamut). Finally, when we are happy with our images we want to print them on a device that inherently has a very different gamut than those of sensors and displays.

(image taken from the EIZO website)


With modern cameras, we can capture a range of colors that exceeds the gamut of even the best displays, but only in certain color areas, while lacking sensitivity in others (e.g. a sensor may be more sensitive to blues, or oranges, while not being able to capture some of the highly saturated greens). On the other hand, with printers we will be able to get much more saturated cyans and magentas than on displays, because those are primary colors for printers, but saturated greens will be a problem.

Ideally we would use a working profile (a color space) that is much bigger than any sensor, display or printer gamut. Even one that holds the entire range of colors our eyes can see (ACESp0). But then we face the problem of the conversion between profiles and the fidelity of colors after each transformation: maths again. There’s a problem called quantization error, which generally speaking means that when a value in one range is converted into another range, the resulting value is not exactly equivalent to the starting value. In other words: when you convert a color from the sensor profile to the working profile, the resulting color is not the same (to our eyes). When you convert it to the output profile, it’s again not the same.
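A toy model of that quantization error, assuming integer storage at a fixed bit depth (the 0.42 scale factor is a made-up stand-in for a real profile conversion, not a real colorimetric number):

```python
def requantize(value, scale=0.42, bits=8):
    """Round-trip an integer code value through a 'wider' space where
    it occupies only a fraction (scale) of the range, quantizing to
    the same integer bit depth in both directions."""
    assert 0 <= value < (1 << bits)
    wide = round(value * scale)   # into the wide space, then quantize
    return round(wide / scale)    # back again, quantizing once more

# Not every 8-bit value survives the round trip unchanged:
lost = [v for v in range(256) if requantize(v) != v]
```

The wider the mismatch between the ranges (and the fewer the bits), the more values land on a neighboring code instead of their original one; floating-point engines shrink this error enormously but, as discussed below, don’t eliminate it entirely.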

The problem comes from the way colors are encoded within profiles (as binary values of limited precision): the bigger color spaces (and profiles) hold many more colors than the smaller ones (sRGB being the most common), so when you convert a color from a bigger color space to a smaller one, there’s a chance that the resulting color will have a shift in its hue. Some say that’s not a problem with modern raw developers, because they work with 32-bit floating-point numbers. To a plain user that sounds like infinite decimals, so infinite values, but sadly that’s not true. It really means a lot more numbers than 32-bit integers offer, but not what we would expect from real decimal numbers: there’s a limit to the possible values in a 32-bit floating-point number.
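You can check that limit with nothing but the standard library: a 32-bit float has a 24-bit significand, so above 2^24 it can no longer distinguish consecutive integers. A quick sketch:

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float by
    packing and unpacking it as single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A 32-bit float has a 24-bit significand, so integers are exact only
# up to 2**24 = 16,777,216; the very next integer cannot be stored.
exact = to_float32(16_777_216.0)    # stays 16777216.0
rounded = to_float32(16_777_217.0)  # rounds back down to 16777216.0
```

Plenty of precision for photo editing, but "a lot of values" rather than "all values".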

Anyway, for our images as amateurs, all of this means that current raw processing software is absolutely capable of making us forget quantization errors, but if you are seeking absolutely perfect color precision, you have to tread with care. And I’m not ready (I have no idea) to tell you how to get what you need, so you’re on your own.

For the rest of us, we have awesome raw processors, so we just choose a big color space and we’re done. Aren’t we? Not really. There’s still the problem of colors that fall outside the gamut of the chosen color space. You may say that’s not possible at all, but indeed it is: if, for example, you process an image as if you wanted to get the quality of a professional camera out of a disposable one, well, you will be pushing values (and colors) a bit too much. In the end, remember that it’s all about maths, and you may end up with math operations that push a value outside the allowed range of colors. And here is where it matters whether your processor clips colors or not. Clipping colors means that when a certain operation produces a value outside the allowed limits, the value is simply set to the maximum allowed, no matter what its calculated value was.

The problem is: if you afterwards perform another operation that brings that exceeding value back into range, and you didn’t clip it, you will get the proper color. If on the other hand you clipped the value, when you bring it back the resulting color will not be the one you expected.

Example: you have a pixel with the RGB color (100,200,240), in a range that doesn’t allow values over 255. You push that pixel to (127,225,266), and the raw processor clips it, so as a result you get (127,225,255). If further down the pipeline you push that pixel back, in the unclipped scenario you would get (112,211,251), while in the clipped scenario you will end up with (112,211,240). A noticeably different hue.
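The worked example can be reproduced with a couple of hypothetical helpers (`push` and `clip` are illustrative names, not functions from any raw processor):

```python
def push(rgb, delta):
    """Add per-channel offsets with no range limit (unbounded maths)."""
    return tuple(c + d for c, d in zip(rgb, delta))

def clip(rgb, limit=255):
    """Hard-clip every channel to the allowed maximum."""
    return tuple(min(c, limit) for c in rgb)

start = (100, 200, 240)
up = (27, 25, 26)        # the push that takes blue to 266
down = (-15, -14, -15)   # a later edit that pulls values back down

unclipped = push(push(start, up), down)       # (112, 211, 251)
clipped = push(clip(push(start, up)), down)   # (112, 211, 240)
```

The intermediate clip destroys information that the later operation can never recover, which is exactly why unbounded floating-point pipelines are so valuable.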

Finally, if you’re not seeking precisely perfect colors, there’s a feature that comes to our rescue (at last!): the Relative Colorimetric rendering intent. Whenever there are colors that fall outside the color space (Out Of Gamut colors), it makes a proportional transformation of the clipped colors and those that are near clipping, so they can all be seen without full saturation. The resulting color gradients will not be faithful to the original colors (there will be a little shift), but luckily our eyes are not as sensitive to differences between really saturated colors as we are to pastel tones.

So, summing up: unless you are facing a really demanding task, with color precision being a must, you will be fine with:

  • a large working color space (ACES, ProPhoto, even REC2020)
  • processing your images in a reasonable way (not crazy settings)
  • always using Relative Colorimetric rendering intent. If you ever find some odd colors while editing, you may check the out of gamut colors with the appropriate button, and perform the appropriate editing changes to put everything where it belongs

Hi XavAL,
I wish to thank you for your time and the good explanation of the matter… but :slightly_smiling_face: I have more questions

I know that when we had raw developers with 16 or, worse, 8 bits, it was better to have the most precise working profile possible… and with 32-bit floating point that’s no longer necessary, but I had my doubts too… we still need to avoid wasting levels in our encoding, don’t we??

So is it better to disable “Clip out-of-gamut colors” in the Exposure tab?

Now I understand that option in the Exposure tab, if that’s the one.

Excuse me… wasn’t Perceptual the intent that performs that?? Isn’t Relative colorimetric like Absolute, which pushes OOG values to the boundary, but adapted to the white point??

And one more thing… where do I set the intent for the working space? In the Color tab > Color Management options I can select only the working profile, not the intent.


Somehow I expected it :wink:

This is a hell of a complicated matter, even though it seems simple to our eyes: we just look and that’s it…, we see.

Let’s start by agreeing that no matter what you set in any modern program, it will convert your raw image into a 32-bit floating-point image, and will work with that data from start to end, until it’s exported. So you won’t ever directly see that internal image, and it will always be encoded in 32-bit floating point.

Now, by levels I understand the possible colors within the range of the chosen color space. That is, if we choose ACESp0 we won’t ever be able to capture, display or print every possible color inside that gamut.

I have to admit this is one of my grey areas: to me, with current computers and available bit depths, it shouldn’t be a problem. We should just set ACESp0 and forget about it, but people keep saying that’s a bad idea, and that it’s much better to choose the smallest possible color space (working profile) able to hold all the colors you will be working with (from camera to printer).

I can’t fully understand why that is, if we have quite a lot of possible values between colors (thanks to the 32-bit floating-point encoding): to my understanding, even using half the possible values of a color space, there are far more in-between values (decimals) than needed between each level. But as there are people much wiser than me who suggest it’s better to use a not-so-big color space, so be it.

One definitive solution would be to save our images as 32-bit floating-point data (a possibility in RT), but that would lead to huge file sizes, so generally speaking it’s currently not an option.

I always leave it disabled.

For us, if we are not looking for perfectly rendered/printed colors, the options are Perceptual and Relative colorimetric. In both cases we lose color fidelity, but in different ways: with the Perceptual intent the whole set of colors is modified in saturation, so the relationships between colors are maintained while compressing them into a smaller gamut. This should result in smoother transitions between colors.

On the other hand, with the Relative colorimetric intent, pastel tones are better preserved (they are more faithful) and colors are scaled to the output profile’s white point, while the OOG saturated tones are compressed to the nearest in-gamut color. So in that saturated area the relationship/gradation between tones is mostly lost. To me all of this means that while I will lose the most saturated colors, everything will look better, because pastel colors look the way I expect them to, and everything is adjusted to the new white point, so as a whole the image looks as my eyes expect.
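The difference can be caricatured on a single saturation axis (a deliberately crude 1D model of my own; real intents work in three dimensions, and the 1.3 input maximum is invented):

```python
def perceptual(sat, max_in=1.3, max_out=1.0):
    """Scale every saturation by the same ratio: relationships between
    colors survive, but even in-gamut pastels shift a little."""
    return sat * max_out / max_in

def relative_colorimetric(sat, max_out=1.0):
    """Leave in-gamut saturations untouched; pull only out-of-gamut
    ones to the boundary, where they collapse together."""
    return min(sat, max_out)

# Pastel tone 0.2: shifted by perceptual, preserved by relative.
# OOG tones 1.1 and 1.3: kept distinct by perceptual, but both end
# up at the 1.0 boundary under relative colorimetric.
```

That is the trade-off in a nutshell: Perceptual protects the gradation of the saturated tones, Relative colorimetric protects the fidelity of everything already in gamut.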

As I explained, my images don’t have lots of highly saturated colors and I’m not as worried about those as about pastel tones, so I’m fine with Relative colorimetric. If that’s not your case, if you need accurate saturated gradations, you may well be better off using the Perceptual intent. That’s something you have to test with your own images.

In RT, you set the default rendering intent in:

  • for your display: Preferences > Color Management > Monitor > Default rendering intent:
  • for your printer: Preferences > Color Management > Printer (Soft-Proofing) > Rendering intent:

And again in RT, while editing an image, you can change the rendering intent used per image with this button:

(next to the soft-proofing and oog buttons)

The icon shown in the screenshot is the Relative colorimetric rendering intent.

@XavAL - Merci Xavier for your long explanation.

I’m in the process of learning this color management part of photography and I try to really understand exactly why one would use this or that working profile (and whether that should always be the same or be related to the photo in question, etc.). As you said, this is complicated stuff, but articles like yours (there are others as well on this forum) shine some more light on the topic. So thanks!


OK, hence in my situation I can choose ProPhoto as the working profile, which holds all my camera’s colors quite precisely


I think that’s down to floating-point mathematics (which I don’t really know)

OK, I misunderstood… I agree

I meant whether there was an intent setting for the camera-to-working-profile conversion too… but logically there isn’t. All colors OOG of the working profile are pushed onto its boundary, because ProPhoto, ACES or sRGB don’t have intents, as they are matrix profiles :thinking:

If you have the mettle to re-enable Flash, this is the best illustration of rendering intent behavior I’ve yet encountered:


Someone else posted it recently here in another thread…

Someone correct me if wrong, but the two colorimetric intents are called that because they maintain the original color encoding for all colors that are in both the input and output colorspaces of the transform. Only the colors that are in the input space but not in the output space are messed with.

To the intent of the thread, recently I’ve not been using an intermediate working profile in my processing; I work the camera data in the original space all the way to output, and only then transform it to sRGB for the JPEG rendition using the matrix primaries for the camera. Seems to come out okay, but my images typically do not stress the extremes of color. And, it is my preference to know what the camera thinks should be the color, white balance notwithstanding…

The “Stanford-Levoy Parrot” (the subject of the link I posted) visually illustrates the translation of in-input/not-in-output colors per the various intents; relative colorimetric essentially draws a line from the color to the white point and drags the color along that line until it is in the output space. Thus, the hue of the color is maintained. But the cost is in the precision of the hue, as all those shades are now lost in mapping to a single point along the line.
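That “drag along the line” behavior can be sketched numerically. The toy below desaturates toward the pixel’s own neutral gray (an assumption of the sketch; a real CMM works against the actual white point in a connection space like Lab) just far enough to fit in range, keeping the direction away from gray, i.e. roughly the hue:

```python
def compress_toward_neutral(rgb, limit=1.0):
    """Desaturate an out-of-range color toward its own neutral (the
    gray at its average level) just far enough that every channel
    fits in [0, limit]; the direction away from gray -- roughly the
    hue -- is preserved. Assumes the neutral itself is in range."""
    n = sum(rgb) / 3.0
    s = 1.0  # 1.0 means 'leave the color alone'
    for c in rgb:
        if c > limit:
            s = min(s, (limit - n) / (c - n))
        elif c < 0.0:
            s = min(s, (0.0 - n) / (c - n))
    return tuple(n + s * (c - n) for c in rgb)

# An OOG blue: the blue channel is brought to exactly 1.0 while the
# other channels are pulled proportionally toward gray.
softened = compress_toward_neutral((0.2, 0.3, 1.3))
```

Note how every distinct out-of-range blue is pulled to the gamut boundary along its own line; blues that differed only in how far out they were end up at the same place, which is exactly the posterization visible in the screenshot.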

I have one image that does vex blues, a shot of a theater stage that is lit predominantly with tungsten-level floods but is accented at the walls and ceiling with those nasty blue LED spotlights. I use this image to play with gamut compression tactics, here’s the result of the recent musing. First screenshot is of a segment of the image as a result of transform from camera (matrix) to display space (which on this tablet computer is close to sRGB):

Blotchy, posterized, the result of that relative colorimetric drag along the line to the white point to a single place in the destination space.

So, bear-of-little-brain here thought that, maybe, just dialing back on the saturation would pull the extreme colors into the output gamut before I did the display transform. Here’s that result using a plain old HSL saturation tool dialed down halfway to total desaturation:

Better, the blues now have finer gradation, but this is at the expense of the rest of the image. There are ways to mask saturation tools to specific hue ranges, but rawproc doesn’t do that, yet…

But the best result has been with using a LUT profile to describe the camera space. I don’t have a target with enough colors to make such a thing, but I was fortunate to find a spectral sensitivity dataset for my Nikon D7000. From that I made a camera profile using dcamprof that, instead of the 3x3 matrix of primaries, has a LUT that does the relative colorimetric transform (A0toB0 is the LUT name, in ICC-speak). Here’s the result of replacing the camera matrix with that LUT in the transform to display space:

Nicer yet; kept a lot of the gradation as well as a richer hue. Also, the rest of the image didn’t suffer…

So that’s a bit of illustration regarding rendering intent. As for using a working profile, I think there’s probably value in that intermediate transform by providing an intermediate step in the journey from broad camera space to tiny display/rendition space, but I haven’t tested that yet…


Well, as I understand it, a rendering intent is only applied when you convert from one profile to another, or more precisely, from one color space to another color space.

If I’m not wrong, a profile doesn’t have a rendering intent. Instead, as illustrated by @ggbutcher, it has a way to convert from color space A to color space B, using a particular rendering intent. In his example the profile has a LUT to convert from camera space to display space using a Relative colorimetric transformation.

A way to easily understand what a rendering intent does is asking: How the heck are we going to put all the boxes from this drawer into that matchbox?

@ggbutcher : really useful link :smiley: Thanks!

If I’m not wrong, Absolute colorimetric preserves the original color encoding, but Relative colorimetric adjusts colors to the white point of the target space (so they still look natural)

On the other hand, even though I was advised not to do so, I am curious to see what benefit I would get from using my input color profile as a working profile (I know it’s well behaved and can be used as such), all within RT. To my understanding, the fewer conversions from color space to color space, the better. I will still be working in 32-bit floating point, with unclipped values.

Will see…


Actually, the rendering intents a profile can support are determined by how the color transform information is specified. The colorimetric intents are supported by matrix primaries, while the perceptual and saturation intents are supported by LUTs. A good thread about all this is here:

in which the discussion is carried by heads bigger than mine, @elle, @gwgill, and @Jossie


Hi XavAL,
I’m playing with the clip option in the Exposure tab…
With this flower I set the working profile to ProPhoto and highlighted the OOG colors with respect to RT_Large (ProPhoto)…; the printer profile is not set. Therefore I see which pixels of my image are OOG for ProPhoto.

with Clip OOG disabled


This means those cyan colors are the pixels that are OOG with respect to ProPhoto… OK??


When I enable the Clip option the cyan pixels disappear


Why does the spot show the same value with and without the Clip OOG option?



What am I missing??


My guess (and it’s just a guess, as I’m not a programmer and I can’t read C code) is that either way RT displays the resulting values taking into account the output color space and the rendering intent.

Then, if you have Clip out-of-gamut colors on, the color has been clipped so it’s not OOG anymore, and no cyan shows. On the other hand, if Clip out-of-gamut colors is off, the preview still shows the same color, but warns you that it has been clipped.
