About choosing the right working profile

Somehow I expected it :wink:

This is a hell of a complicated matter, even though it seems simple to our eyes: we just look and that’s it… we see.

Let’s start by agreeing that no matter what you set in any modern program, it will convert your raw image into a 32-bit floating-point image and work with that data from start to end, until it’s exported. So you won’t ever be able to see that internal image, and it will always be encoded in 32-bit floating point.

Now, by levels I mean the possible colors within the range of the chosen color space. That is, if we choose ACESp0 we won’t ever be able to capture, display or print every possible color inside that gamut.

I have to admit this is one of my grey areas: to me, with current computers and the bit depths available, it shouldn’t be a problem. We should just set ACESp0 and forget about it, but people keep saying that it’s a bad idea, that it’s much better to choose the smallest color space (working profile) able to hold all the colors you will be working with (from camera to printer).

I can’t fully understand why that is, given that we have quite a lot of possible values between colors (thanks to the 32-bit floating-point encoding). To my understanding, even using only half of the values available in a color space, there are far more intermediate values (decimals) between each level than we need. But as there are people much wiser than me who suggest it’s better to use a color space that isn’t so big, so be it.
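
To put rough numbers on that intuition, here is a small Python sketch (my own back-of-the-envelope illustration, nothing to do with RT’s actual code) comparing the step size of 8- and 16-bit encodings with the spacing of 32-bit floats around a mid-grey value:

```python
import numpy as np

# Smallest representable difference around a mid-grey value of 0.5
# for a 32-bit float, versus the fixed step of integer encodings.
float_step = np.spacing(np.float32(0.5))   # ~6e-8
step_8bit = 1.0 / 255.0                    # ~3.9e-3
step_16bit = 1.0 / 65535.0                 # ~1.5e-5

print(f"float32 step near 0.5 : {float_step:.2e}")
print(f"8-bit step            : {step_8bit:.2e}")
print(f"16-bit step           : {step_16bit:.2e}")

# Even if a wide working space "wastes" half of its range on colors the
# camera can never produce, the float steps stay orders of magnitude finer
# than anything an 8- or 16-bit export can represent.
```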

One definitive solution would be to save our images as 32-bit floating-point data (a possibility in RT), but that would lead to huge file sizes, so generally speaking it’s currently not an option.

I always leave it disabled.

To us, if we are not looking for perfectly rendered/printed colors, the options are Perceptual and Relative colorimetric. In both cases we lose color fidelity, but in different ways: with the Perceptual intent the saturation of the whole set of colors is modified, so the relationship between colors is maintained while compressing them into a smaller gamut. This should result in smoother transitions between colors.

On the other hand, with the Relative colorimetric intent, pastel tones are better preserved (they are more faithful) and colors are scaled to the output profile white point, while the OOG saturated tones are compressed to the nearest color within gamut. So in that saturated area the relationship/gradation between tones is mostly lost. All of this means to me that while I will lose the most saturated colors, everything will look better, because pastel colors look as I expect they should and everything is adjusted to the new white point, so as a whole the image looks as my eyes expect.

As I explained, in my images there aren’t lots of highly saturated colors and I’m not as worried about those as I am about pastel tones, so I’m fine with Relative colorimetric. If that’s not your case, if you need accurate saturated gradations, you may well be better off using the Perceptual intent. That’s something you have to test with your own images.
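
A toy way to picture the two behaviours (my own one-dimensional simplification, not what a real CMM does): treat saturation as a single number in [0, 1] and imagine the destination gamut only reaches 0.8.

```python
# Toy 1-D illustration of the two intents (hypothetical numbers, not a real CMM).
saturations = [0.10, 0.40, 0.70, 0.95, 1.00]   # source saturations
gamut_limit = 0.80                              # destination gamut "edge"

# Perceptual-like: compress everything proportionally, relationships preserved.
perceptual = [s * gamut_limit for s in saturations]

# Relative-colorimetric-like: leave in-gamut values alone, clip the rest.
relative = [min(s, gamut_limit) for s in saturations]

print([round(s, 2) for s in perceptual])  # [0.08, 0.32, 0.56, 0.76, 0.8]
print([round(s, 2) for s in relative])    # [0.1, 0.4, 0.7, 0.8, 0.8]
```

The pastel values shift in the first list, while the saturated extremes merge in the second, which is exactly the trade-off described above.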

In RT, you set the default rendering intent in:

  • for your display: Preferences > Color Management > Monitor > Default rendering intent:
  • for your printer: Preferences > Color Management > Printer (Soft-Proofing) > Rendering intent:

And again in RT, while editing an image, you can change the rendering intent used per image with this button:

[screenshot: rendering-intent button]
(next to the soft-proofing and OOG buttons)

The icon shown in the screenshot is the Relative colorimetric rendering intent.

@XavAL - Thank you, Xavier, for your long explanation.

I’m in the process of learning the color management part of photography and I’m trying to really understand why one should use this or that working profile (and whether it should always be the same or depend on the photo in question, etc.). As you said, this is complicated stuff, but articles like yours (there are others on this forum as well) shine some more light on this topic. So thanks!


OK, so in my situation I can choose ProPhoto as the working profile, which holds all my camera colors quite precisely.

[image]

I think that is handled inside the floating-point mathematics (which I do not know).

OK, I misunderstood… I agree.

I meant whether there was an intent setting from camera to working profile too… but logically there isn’t. All colors OOG of the working profile are pushed onto the working profile boundary, because ProPhoto, ACES or sRGB don’t have intents, as they are matrix profiles :thinking:

If you have the mettle to re-enable Flash, this is the best illustration of rendering intent behavior I’ve yet encountered:

https://graphics.stanford.edu/courses/cs178/applets/gamutmapping.html

Someone else posted it recently here in another thread…

Someone correct me if wrong, but the two colorimetric intents are called that because they maintain the original color encoding for all colors that are in both the input and output colorspaces of the transform. Only the colors that are in the input space but not in the output space are messed with.

To the intent of the thread, recently I’ve not been using an intermediate working profile in my processing; I work the camera data in the original space all the way to output, and only then transform it to sRGB for the JPEG rendition using the camera’s matrix primaries. Seems to come out okay, but my images typically do not stress the extremes of color. And, it is my preference to know what the camera thinks the color should be, white balance notwithstanding…

The “Stanford-Levoy Parrot” (the subject of the link I posted) visually illustrates the translation of in-input/not-in-output colors per the various intents; relative colorimetric essentially draws a line from the color to the white point and drags the color along that line until it is in the output space. Thus, the hue of the color is maintained. But the cost is in the precision of the hue, as all those shades are now lost in mapping to a single point along the line.
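
That “drag along the line to the white point” can be sketched in a few lines of Python (my own illustration of the geometry; real CMMs do this in a profile connection space like Lab rather than on RGB channels, and RT’s code is something else entirely):

```python
import numpy as np

def pull_toward_white(rgb, white=np.array([1.0, 1.0, 1.0])):
    """Move an RGB triplet along the straight line to the white point just far
    enough that no channel stays negative. Purely illustrative."""
    rgb = np.asarray(rgb, dtype=float)
    t = 0.0
    for c, w in zip(rgb, white):
        if c < 0.0:                    # channel falls outside the destination gamut
            t = max(t, -c / (w - c))   # blend factor that lands it exactly on 0
    return (1.0 - t) * rgb + t * white

# A deep "LED blue" the destination space cannot encode (negative red channel):
print(pull_toward_white([-0.2, 0.1, 0.9]))   # -> [0.0, 0.25, 0.9167] (approx.)
```

Every shade that starts beyond the gamut edge along that same line ends up at (or very near) the same point, which is where the loss of gradation comes from.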

I have one image that does vex blues, a shot of a theater stage that is lit predominantly with tungsten-level floods but is accented at the walls and ceiling with those nasty blue LED spotlights. I use this image to play with gamut compression tactics, here’s the result of the recent musing. First screenshot is of a segment of the image as a result of transform from camera (matrix) to display space (which on this tablet computer is close to sRGB):

Blotchy, posterized, the result of that relative colorimetric drag along the line to the white point to a single place in the destination space.

So, bear-of-little-brain here thought that, maybe, just dialing back the saturation would pull the extreme colors into the output gamut before I did the display transform. Here’s that result using a plain old HSL saturation tool dialed down to halfway to total desaturation:

Better, the blues now have finer gradation, but this is at the expense of the rest of the image. There are ways to mask saturation tools to specific hue ranges, but rawproc doesn’t do that, yet…

But the best result has been with using a LUT profile to describe the camera space. I don’t have a target with enough colors to make such a thing, but I was fortunate to find a spectral sensitivity dataset for my Nikon D7000. From that I made a camera profile using dcamprof that, instead of the 3x3 matrix of primaries, has a LUT that does the relative colorimetric transform (AToB0 is the LUT tag name, in ICC-speak). Here’s the result of replacing the camera matrix with that LUT in the transform to display space:

Nicer yet; kept a lot of the gradation as well as a richer hue. Also, the rest of the image didn’t suffer…

So that’s a bit of illustration regarding rendering intent. As for using a working profile, I think there’s probably value in that intermediate transform by providing an intermediate step in the journey from broad camera space to tiny display/rendition space, but I haven’t tested that yet…


Well, as I understand it, a rendering intent is only applied when you convert from one profile to another, or more precisely, from one color space to another color space.

If I’m not wrong, a profile doesn’t have a rendering intent. Instead, as illustrated by @ggbutcher, it has a way to convert from color space A to color space B, using a particular rendering intent. In his example the profile has a LUT to convert from camera space to display space using a Relative colorimetric transformation.

A way to easily understand what a rendering intent does is to ask: how the heck are we going to put all the boxes from this drawer into that matchbox?

@ggbutcher : really useful link :smiley: Thanks!

If I’m not wrong, Absolute colorimetric preserves the original color encoding, but Relative colorimetric adjusts colors to the white point of the target space (so they still look natural).
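
A crude numeric sketch of that difference (using simple channel-wise scaling of XYZ just to show the idea; real profiles normally use a Bradford adaptation, so the exact numbers differ):

```python
import numpy as np

# Media white points in XYZ (roughly D65 for the source, D50 for the target).
src_white = np.array([0.9505, 1.0000, 1.0890])   # ~D65
dst_white = np.array([0.9642, 1.0000, 0.8249])   # ~D50 (ICC PCS white)

color_xyz = np.array([0.30, 0.40, 0.60])         # some colour, expressed in XYZ

# Absolute colorimetric: XYZ values are carried over unchanged,
# so the whites of the two media do NOT line up.
absolute = color_xyz

# Relative colorimetric: scale so the source white lands on the target white
# (crude channel-wise adaptation, enough to illustrate the point).
relative = color_xyz * (dst_white / src_white)

print(absolute)               # [0.3 0.4 0.6]
print(np.round(relative, 3))  # [0.304 0.4   0.454] -> pulled toward the new white
```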

On the other hand, even though I was advised not to do so, I am curious to see what benefit I would get from using my input color profile as the working profile (I know it’s well behaved and can be used as such), all within RT. To my understanding, the fewer conversions from color space to color space, the better. I will still be working in 32-bit floating point, with unclipped values.

Will see…


Actually, the rendering intent a profile can support is determined by how the color transform information is specified. The colorimetric intents are supported by matrix primaries, while the perceptual and saturation intents are supported by LUTs. A good thread about all this is here:

in which the discussion is carried by heads bigger than mine, @elle, @gwgill, and @Jossie
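
If you want to poke at a concrete profile yourself, LittleCMS (here through Pillow’s ImageCms wrapper) can report which intents a given profile claims to support; a matrix-only profile will typically only honour the colorimetric ones. A small sketch, with the .icc path as a placeholder:

```python
from PIL import ImageCms

# Placeholder path: point it at any display or printer .icc file you have at hand.
profile = ImageCms.getOpenProfile("some_profile.icc")

intents = {
    "perceptual": ImageCms.INTENT_PERCEPTUAL,
    "relative colorimetric": ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    "saturation": ImageCms.INTENT_SATURATION,
    "absolute colorimetric": ImageCms.INTENT_ABSOLUTE_COLORIMETRIC,
}

for name, intent in intents.items():
    # isIntentSupported() returns 1 when supported, -1 when not.
    ok = ImageCms.isIntentSupported(profile, intent, ImageCms.DIRECTION_OUTPUT)
    print(f"{name:>22}: {'yes' if ok == 1 else 'no'}")
```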


Hi XavAL,
I’m playing with the Clip option in the Exposure tab…
With this flower I set the working profile to ProPhoto and highlighted the OOG colors with respect to RT_Large (ProPhoto)…; the printer profile is not set. Therefore I see which pixels of my image are OOG with respect to ProPhoto.

With Clip OOG disabled:

[image]

This means those cyan colors are the pixels that are OOG with respect to ProPhoto… OK??

OK!!

When I enable the Clip option, the cyan pixels disappear.

Question:

Why is the spot value the same with and without the Clip OOG option?

[screenshots of the spot values]

What am I missing??

Cheers
Gabriele

My guess is (and it’s just my guess, as I’m not a programmer and I can’t read C code) that either way RT displays the resulting values taking into account the output color space and the rendering intent.

Then if you have Clip out-of-gamut colors on, as the color has been clipped it’s not OOG anymore, so no cyan shows. On the other hand, if Clip out-of-gamut colors is off, the preview image still shows the same color, but warns you that it has been clipped.


OK, I suppose there is a certain process for setting up the colour spaces before starting to work on an image.

From my point of view that means first setting the Input colour profile (to that of your camera sensor), leaving the Working Profile at its ProPhoto default, and then comes the Output Profile… which I cannot yet determine what it should be set to.

One might respond that it depends, and I will go along with that (e.g. if you continue working on the same image in Photoshop you need to set it to ProPhoto), but I am particularly interested in printing the image.

@XavAL mentioned, for the printer, going to Preferences and setting up colour management, but I am not sure whether this is the same as setting up the output colour space/profile, i.e. going to the Colour tab and selecting the appropriate profile for our printer in the Output Profile.

Setting a different Output Profile there (Colour tab) impacts the histogram, though, and that is my bigger concern. Do we work with RTv4_sRGB to have a relevant histogram close to sRGB and then, before exporting to .jpg, switch to the printer profile (or ProPhoto, I do not know), or do we work with our printer profile set in the Output Profile section from the beginning?

And finally, would it be wise to set the Output Profile to something big such as ProPhoto if we are to print the image (supposing that we have the best printer on the market today), or should we look for our printer’s profile and set it in the Output Profile list?

The Output Profile should always be set with the capabilities of the rendition medium in mind. If you’re printing, you at least have an idea of what the destination device is; ideally you’d specify a custom profile for the particular printer, but if the printing service specifies JPEG or some other profile, using that would be prudent.

If you’re outputting a JPEG for the web, that’s somewhat problematic: you don’t really know what devices will be used. sRGB is still a safe assumption; even though more folks are buying wide-gamut monitors, it turns out browsers (Chrome, at least) default to converting your precious image to sRGB for display. What’s really important, then, is to make sure a profile corresponding to the image data is embedded in the file, or else the browser won’t have the information it needs to do a conversion. I intermittently consider switching my output profile for JPEGs to AdobeRGB; it’s not that bad to look at raw on cheap sRGB monitors, and it provides a decent gamut for whatever conversion browsers or other software might be making.
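
If you ever re-save a JPEG outside RT, it’s worth making sure that embedded profile survives the round trip. With Pillow, for instance, the profile travels as raw bytes and has to be passed along explicitly when saving (a small sketch; the filenames are placeholders):

```python
from PIL import Image

# Filenames are placeholders.
img = Image.open("export_from_rt.jpg")

# The embedded ICC profile (if any) comes back as raw bytes.
icc_bytes = img.info.get("icc_profile")
print("profile embedded:", icc_bytes is not None)

# When re-saving, pass the profile along explicitly; otherwise it is dropped
# and browsers are left to guess (usually assuming sRGB).
img.save("resaved.jpg", quality=92, icc_profile=icc_bytes)
```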

Don’t know what to tell you about the RT histogram, ’cept maybe to just use the Raw mode all the time. My software lets me sample the histogram at any stage of the pipeline, and even then I don’t use it for much except maybe white balance casts, for which a histogram of the raw data would be more useful anyway…


JPEG can save at most 8-bit depth. Does that mean that the maximum color space I can export to my file is that of sRGB? Meaning that if I want to print on a good printer I have to use another format (e.g. 16-bit TIFF)?

In the meantime I’ve used the .icc profiles of some relatively fine printers on an online site, comparing those profiles to sRGB, and saw that they fall inside it most of the time.

And thanks for the advice for RAW histogram usage.


I’ve never tested such, but it is prudent to think that 8 bits won’t allow the same expression of gradation as 16-bit or float. I do know that for matrix transforms the out-of-gamut data is simply piled up just inside the destination gamut, and that’s not likely to look different at any bit depth.


@markman8

It’s been a while since the original posts! :smile:

I really hope you’re using the latest dev version of RT. It is mostly what the next release (5.9) will be, so you’d better use it.

Now to colour matters… I will write the shortest possible answers, so if you don’t understand something, just ask.

Let’s start by saying that colour management is about preserving the original colors from start (raw image) to end (output image). And yes, there are no «colours» in a raw image, but they become visible when it is decoded after demosaicing.

Let’s agree that no processing which alters colors is color management. And here is where color profiles come in handy: they do their job to preserve colors as much as possible.

And let’s face it from the start: you will never see the «raw image», nor the image inside the engine (inside RT). Never ever. And that’s because there’s currently no display capable of showing such a wide gamut as cameras can record (around 14 bits), and no display capable of showing 32-bit-per-channel images (such as the «image» inside the engine).

So you are left with histograms, vectorscopes and waveforms to get a sense of what’s going on.

About the histogram: if you turn on the raw histogram (the Raw histogram button), what you will see is the raw values with a little bit of processing (just a bit, so it makes sense). And no matter what you do afterwards with the image, you won’t see any change in that histogram. So the raw histogram is good to know what you’re dealing with at the start of the processing, but little more. You will know if your image has clipped channels in the raw data itself, and maybe if the image is too underexposed. I use it mainly to know how clipped the raw image is.

Then you have to change it to the usual (non-raw) histogram, so you know what your processing is doing to the image.

It is now that you have the choice to see the histogram with the values in the engine (working profile, Gamut-on) or the values the image would have if you exported it with the output profile.

I usually choose the working profile, so I know if my image is clipped in the engine itself while I’m processing it.

Only when I’m about to export the image do I change the histogram to the output profile, so I know if the TRC (gamma) of the output profile will clip my highlights.

So let’s recap: use your camera profile for input profile, set a wide enough working profile, use a calibrated profile for your display and set an output profile depending on what you’re going to do with the output image (as @ggbutcher said).

About printing: it seems there’s much to say here, but I have little knowledge about fine-tuning the printing process. Let’s just say you can’t print from RT, so you will be using some other means to get your print.

No matter how you print your image, you may wish to know how it will look on paper. Well, sadly that’s not possible, as your display emits light, while paper just reflects it. Besides, displays work with 3 colors (primaries, RGB), while printers work with at least 4 colors (CMYK). And so do ICC profiles: there are 3-primary ICC profiles for displays, and there are different 4-colorant ICC profiles for printers. RT can only use 3-primary profiles. RT is designed to work with displays, not printers.

But you can see how your print will look (the soft-proofing button), more or less… Again, you will get a feeling for how it will look, but it won’t be exactly the same as a real print.

Another recap: process your image as you like, using the profiles, histograms and scopes as if the image were meant to be shown on a display, but then, from time to time, turn on soft-proofing (the soft-proofing button) and judge whether your image is going where you want it to.
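
For what it’s worth, the same kind of soft-proof can be built outside RT with LittleCMS; through Pillow’s ImageCms it looks roughly like this (a sketch with placeholder profile paths, assuming an 8-bit sRGB-encoded source image):

```python
from PIL import Image, ImageCms

# Placeholder profiles: the image's own space, your display, and the printer to simulate.
image_profile   = ImageCms.createProfile("sRGB")
display_profile = ImageCms.getOpenProfile("my_display.icc")
printer_profile = ImageCms.getOpenProfile("my_printer.icc")

# Build a proofing transform: image -> display, simulating the printer along the way.
proof = ImageCms.buildProofTransform(
    image_profile, display_profile, printer_profile,
    "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    proofRenderingIntent=ImageCms.INTENT_ABSOLUTE_COLORIMETRIC,
)

img = Image.open("my_export.tif").convert("RGB")
preview = ImageCms.applyTransform(img, proof)
preview.show()   # an approximation of how the print would look on this display
```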

When you’re finished, you will have 2 options: print it yourself, or send it to a third party.

If you print at home, export with at least 16 bits, a lossless format (TIFF or PNG), and an output color profile wide enough to encompass the printer profile. Then, in a properly color-managed application, use the printer profile. Your exported image will be RGB (there’s no way to avoid this in RT), and here the higher bit count and wider gamut will come in handy when the conversion to the printer profile is made.

If you send it to a third party, pay attention to what they are expecting to receive. It may be an 8-bit sRGB image (not ideal, but anyway), so you will export in 8 bits and with RTv2_sRGB. No matter what, don’t let them do the conversion, neither on the bit-depth side nor in the color space, unless you really trust them. Even so, they will perform the conversion from RGB to the printer profile, so the results may not be what the soft-proofing suggested…

HTH


While formally the HLG standard only allows a 10-bit representation, Sony cameras shoot 8-bit HLG and it works GREAT - that’s the HLG luma transfer function, and the Rec. 2020 gamut.

It is possible to go “too wide” with a gamut and start seeing quantization errors that manifest as “blotchy” colors - Sony’s S-Gamut mixed with 8 bits is problematic, which is why lots of people recommend shooting HLG on 8-bit Sonys - while in theory, S-Log2 is a bit better suited for encoding luma than the HLG transfer function in terms of making the most of your limited code values, S-Gamut and 8 bits don’t play nice together, and Sony cameras don’t let you mix Rec2020 gamut with S-Log2.

In general, I always use sRGB for JPEG outputs, under the assumption that a non-color-managed app will not display the image correctly if it is anything other than sRGB transfer function and gamut. If I’m targeting a wider/nonstandard gamut I’ll export as TIFF at higher bit depth. (Which I frequently follow by encoding to an H.265 10-bit HLG video, because that’s really the only widely supported format for delivering content with dynamic range and gamut beyond sRGB.) HDR/wide-gamut stills are still a disaster with regards to content delivery pipeline.


I do use one of the later dev versions, yes. Thank you for the detailed explanation. Very useful information indeed!

I conducted some tests myself on output profiles. I changed only the o/p profile for the same RAW file, keeping all other RT settings the same. I exported 4 files altogether: 2 JPEGs and 2 TIFFs (16-bit), one using the RTv4_sRGB o/p profile and one using the ProPhoto one, respectively.

I found that for the JPEGs the file size was actually reduced by using the ProPhoto profile, while for the TIFFs there is a slight increase.

Following is the comparison between the jpg files…

[image: JPEG comparison]

And the comparison between the tif files…

File properties remain the same, as shown below:

Finally, at least on my monitor, the files look almost the same, all of them (including the TIFF with the RTv4_sRGB o/p profile), except the .tif file that was exported with the ProPhoto profile.

I will include only two of them, the ProPhoto-exported ones, both .jpg and .tif.

[image: ProPhoto-exported .jpg]

[image: ProPhoto-exported .tif]

I hope that one can see the difference in those pictures too.

Concluding, I agree that the viewing medium is vital for comparisons (and so is the viewer him/herself), but file-wise and size-wise I suppose that JPEG is not suitable for supporting a big variety of color profiles.

If someone could explain why the file size for the JPEG picture was reduced when using the ProPhoto o/p profile, it would be very useful.

Hmm, just compared an image I’d saved in different profiles for another purpose, and while not significant, the larger-gamut profile jpgs are a bit smaller. Hazarding a WAG, the JPEG DCT compression algorithm is finding more to consolidate in the larger-gamut images… ???

Wild ass guess?

Using a wider gamut means smaller values - what was, e.g., a 255 saturated red in sRGB is now, e.g., 197 in ProPhoto. Any compression scheme now has an easier job, as the span of values it sees is reduced (and probably their distribution/spread as well).

Bad idea for 8-bit containers, as you are making the already limited dynamic range and gamut quantization even worse, thus increasing the risk of posterization.

Edit: It’s even worse than I thought: sRGB (255,0,0) in ProPhoto becomes (135,25,4). Slightly better utilization for green and blue… Bottom line - if you use extremely wide (and virtual) gamuts like ProPhoto/ACES for archiving, better use float/half float, or at least 16-bit (and might as well leave as linear then). For other export, you want to use a gamut that is not as wasteful but still has good (or complete) coverage of Pointer’s gamut, like Rec2020 (still a bit wasteful if not used with more than 8 bits; it’s really intended for 10-bit and above HDR content), or at least DCI-P3 (“more crimsons”) or AdobeRGB (“more aquas”, better match for printer CMYK inks) if your content is clipping at sRGB.
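
Those numbers can be reproduced with the colour-science Python package, assuming a recent version (the colourspace names are as registered there, and the exact digits depend on the chromatic adaptation used):

```python
import numpy as np
import colour  # the "colour-science" package

srgb_red = np.array([1.0, 0.0, 0.0])   # sRGB (255, 0, 0) expressed as 0..1

prophoto_red = colour.RGB_to_RGB(
    srgb_red,
    colour.RGB_COLOURSPACES["sRGB"],
    colour.RGB_COLOURSPACES["ProPhoto RGB"],
    apply_cctf_decoding=True,    # undo the sRGB encoding curve first
    apply_cctf_encoding=False,   # keep the result linear, like the values quoted above
)

print(np.round(prophoto_red * 255))   # roughly [135.  25.   4.]
```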


Yep.


Most excellent explanation!

It occurs to me while reading it that it’s important to note that the gamut compression campaign from camera → working → rendition space is really “one-way”: once a transform to a smaller gamut is inflicted upon an image, the operation cannot be reversed with the transformed image as input. That fine gradation of hues is now lost, and any attempt to recover it is just making stuff up…
