About choosing the right working profile

I’m answering here a question raised in another thread, where it was off-topic:


Short answer: Not too much. :wink:

You could choose a large working profile like ProPhoto, use some software with a 32-bit floating point rendering engine (RT, dt, …), set the Relative Colorimetric rendering intent, and be gentle while editing your images. Most probably you will get what you need without worrying about OOG colors.

(Really) long answer:

Let’s start by acknowledging I’m just a plain user, and I’m reaching my grey zones, where I think I know the answers, but I can be wrong.

A lot has been said about color management, even in this very forum, but I don’t think it would hurt to explain it again from a different perspective: what a user expects, and what the user actually gets from software.

Let’s start by saying that working with digital colors is just working with numbers, no more, no less. Numbers don’t have limits unless you tag them (real numbers, decimal numbers, negative numbers, …), so a user would expect that multiplying color values by a ridiculous number will simply yield another color. Maths allows this, but sadly the truth is that you won’t get what you expect.

The root of the whole color management problem is that we (users) wish to easily get what we think is a straightforward transformation. Far from the hard truth: our eye-brain system performs complex psychophysical transformations that help us relate to the world we live in, altering the real data our eyes capture so we get the information we need. For example, we have higher sensitivity to lightness than to color, so we can still see at dusk. And we have higher sensitivity to yellowish-green (although I really don’t know why).

On top of that, we can only see a certain range of colors, none beyond violet (ultraviolet), none below red (infrared), and scientists have plotted the colors we see onto a 2D graph to ease the comprehension of color theory and the mathematical transformation of real colors: the CIE xy chromaticity diagram (as you can guess, this quick explanation is really much more scientifically complex).

So we have a chart where colors are located with 2 values (x and y), and we have maths: where’s the problem, then? The problem is that to our eyes sometimes 2+2 is not 4, so the algorithms have to cope with that. The problem is now a bit more complex.

Is that all? Well, no. Now we start dealing with devices: sensors, displays and printers.

In an ideal world camera sensors would react to light exactly as our eye-brain complex does. Far from it: the different photosites aren’t sensitive to the same range of colors as our retina cells, nor with the same sensitivity to lightness. And the sensor captures light in a linear fashion, while the eye-brain system transforms the captured light with a sort of gamma curve, mainly to enhance shadows without losing much information in the highlights.

And what our eyes really do is scan small areas of our visual field and send the partial snapshots to our brain, which joins them together into a sort of HDR image. So matching these behaviors in devices is really complicated.

Now, we have camera sensors that are able to capture a certain range of colors, usually represented in a CIE diagram as a triangle. Then we have displays that have to show images so we can see them, and they too are only able to show a certain range of colors (another triangle, or gamut). Finally, when we are happy with our images we want to print them on a device whose gamut is inherently very different from those of sensors and displays.

(image taken from the EIZO website)


With modern cameras we can capture a range of colors that exceeds the gamut of even the best displays, but only in certain color areas, while lacking sensitivity in others (e.g. a sensor may be more sensitive in blues, or oranges, while not being able to capture some of the highly saturated greens). On the other hand, with printers we will be able to get much more saturated cyans and magentas than on displays, because those are primary colors for printers, but saturated greens will be a problem.

Ideally we would use a working profile (a color space) that is much bigger than any sensor, display or printer gamut. Even one that holds the entire range of colors our eyes can see (ACESp0). But then we face the problem of the conversion between profiles and the fidelity of colors after each transformation: maths again. There’s a problem called quantization error, which generally speaking means that when converting one value in one range into another range, the resulting value is not equivalent to the starting value. In other words: when converting one color from sensor profile to working profile, the resulting color is not the same (to our eyes). When you convert it to the output profile, it’s again not the same.

The problem comes from the way primary colors are encoded within profiles: bigger color spaces (and profiles) hold many more colors than smaller spaces (sRGB is the most common), so when you convert a color from a bigger color space to a smaller one, there’s a chance that the resulting color will have a shift in its hue. Some say that’s not a problem with modern raw developers, because they work with 32-bit floating point numbers. To a plain user that sounds like infinite decimals, hence infinite values, but sadly that’s not true. It really means a lot more usable numbers than 32-bit integers offer, but not what we would expect from real decimal numbers: there’s a limit to the possible values in 32-bit floating point.
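To make that floating point limit concrete, here is a small sketch (NumPy is used purely for illustration) showing that float32 values are finite and unevenly spaced:

```python
import numpy as np

# float32 carries 24 significant bits: above 2**24, consecutive
# integers can no longer be told apart.
x = np.float32(2**24)              # 16777216.0
print(x + np.float32(1.0) == x)    # True: adding 1 is lost to rounding

# The gap between representable values grows with magnitude:
print(np.spacing(np.float32(1.0)))     # ~1.19e-07
print(np.spacing(np.float32(1000.0)))  # ~6.10e-05
```

So the precision is excellent near zero but not infinite, and it thins out as values grow.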

Anyway, for our images as amateurs, all of this means that current raw processing software is absolutely capable of making us forget about quantization errors, but if you are seeking absolutely perfect color precision, you have to tread with care. And I’m not ready (I have no idea) to tell you how to get what you need, so you’re on your own.

For the rest of us, we have awesome raw processors, so we just choose a big color space and we’re done. Aren’t we? Not really. There’s still the problem of colors that fall outside the gamut of the chosen color space. You may say that’s not possible at all, but it is, indeed: if, for example, you process an image as if you wanted to get the quality of a professional camera out of a disposable one, well, you will be pushing values (and colors) a bit too much. In the end, remember that it’s all about maths, and you may end up with math operations that push a value outside the allowed range of colors. And here is where it matters whether your processor clips colors or not. Clipping colors means that when a certain operation throws a value outside the allowed limits, it just gets the maximum allowed value, no matter what its calculated value was.

The problem is: if you afterwards perform another operation that brings that exceeding value back into range, and you didn’t clip it, you will get the proper color. If, on the other hand, you clipped the value, when you bring it back the resulting color will not be the one you expected.

Example: you have a pixel with the RGB color (100,200,240), in a range that doesn’t allow values over 255. You push that pixel to (127,225,266), and the raw processor clips it, so as a result you get (127,225,255). If further down the pipeline your software pushes that pixel back, in the unclipped scenario you would get (112,211,251), while in the clipped scenario you will end up with (112,211,240). A noticeably different hue.
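That round trip can be sketched numerically (the push/pull amounts are taken straight from the example above; NumPy is only used for the clipping helper):

```python
import numpy as np

start  = np.array([100, 200, 240])
pushed = np.array([127, 225, 266])          # blue now exceeds the 255 limit
pull   = np.array([15, 14, 15])             # a later edit brings the values back down

unclipped = pushed - pull                   # [112 211 251] - blue kept its headroom
clipped   = np.clip(pushed, 0, 255) - pull  # [112 211 240] - blue lost 11 levels for good

print(unclipped, clipped)
```

The two results differ only in the channel that was clipped, which is exactly the hue shift described.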

Finally, if you’re not seeking precisely perfect colors, there’s a feature that comes to our rescue (at last!): the Relative Colorimetric rendering intent. Whenever there are colors that fall outside the color space (Out Of Gamut colors), it transforms the clipped colors, and those near clipping, proportionally, so they all remain visible without full saturation. The resulting color gradients will not be faithful to the original colors (there will be a little shift), but luckily our eyes are not as sensitive to differences between really saturated colors as to differences between pastel tones.

So, summing up: unless you are facing a really demanding task, with color precision being a must, you will be fine with:

  • a large working color space (ACES, ProPhoto, even REC2020)
  • processing your images in a reasonable way (not crazy settings)
  • always using the Relative Colorimetric rendering intent. If you ever find some odd colors while editing, you can check the out-of-gamut colors with the appropriate button, and make the necessary editing changes to put everything where it belongs

Hi XavAL,
I wish to thank you for your time and the good explanation of the matter… but :slightly_smiling_face: I have more questions

I knew that when we had raw developers with 16-bit (or worse, 8-bit) engines, it was better to have the most precise working profile possible… and with 32-bit floating point that is no longer necessary, but I had doubts too… we still need not to waste levels in our encoding, don’t we??

Therefore, is it better to disable “Clip out-of-gamut colors” in the Exposure tab?

Now I understand that option in the Exposure tab, if that’s what it is.

Excuse me… wasn’t Perceptual the intent which performs that?? Isn’t Relative colorimetric like Absolute, which pushes OOG values onto the boundary, just adapted to the white point??

And more… where do I set the intent for the working space? In the Color tab > Color Management options I can select only the working profile, but not the intent


Somehow I expected it :wink:

This is a hell of a complicated matter, even though it seems simple to our eyes: we just look and that’s it… we see.

Let’s start by agreeing that no matter what you set in any modern program, it will convert your raw image into a 32-bit floating point image, and will work with that data from start to end, until it’s exported. So you won’t ever be able to see that internal image, and it will always be encoded in 32-bit floating point.

Now, by levels I understand the possible colors within the range of the chosen color space. That is, if we choose ACESp0 we won’t ever be able to capture, display or print every possible color inside that gamut.

I have to admit this is one of my grey areas: to me, with current computers and bit depths available, it shouldn’t be a problem. We should just set ACESp0 and forget about it, but people keep saying that it’s a bad idea, that it’s much better to choose the smallest possible color space (working profile) able to hold all the colors you will be working with (from camera to printer).

I can’t fully understand why that is, if we have quite a lot of possible values between colors (thanks to the 32-bit floating point encoding); to my understanding, even using half of the possible values of a color space, there are far more values (decimals) between each level than needed. But as there are people much wiser than me who suggest it’s better to use a not-so-big color space, so be it.

One definitive solution would be to save our images as 32-bit floating point data (a possibility in RT), but that would lead to huge file sizes, so generally speaking it’s currently not an option.

I let it always disabled.

To us, if we are not looking for perfectly rendered/printed colors, the options are Perceptual and Relative colorimetric. In both cases we lose color fidelity, but in different ways: with the Perceptual intent the whole set of colors is modified in saturation, so the relationships between colors are maintained while compressing them into a smaller gamut. This should result in smoother transitions between colors.

On the other hand, with the Relative colorimetric intent, pastel tones are better preserved (are more faithful) and colors are scaled to the output profile white point, while the OOG saturated tones are compressed to the nearest in-gamut color. So in that saturated area the relationship/gradation between tones is mostly lost. All of this means to me that while I will lose the most saturated colors, everything will look better, because pastel colors look as I expect they should, and everything is adjusted to the new white point, so as a whole the image looks as my eyes expect.
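A deliberately over-simplified, one-dimensional sketch of that difference (real CMMs map three-dimensional gamuts, not a single saturation axis, so take this only as an intuition pump):

```python
import numpy as np

# Toy "saturation" values; the destination gamut tops out at 1.0
src = np.array([0.2, 0.5, 0.9, 1.1, 1.3])

# Relative colorimetric (toy): in-gamut values untouched, OOG values
# collapsed onto the gamut boundary - gradation above 1.0 is gone.
rel = np.minimum(src, 1.0)   # [0.2, 0.5, 0.9, 1.0, 1.0]

# Perceptual (toy): everything rescaled so the largest value fits -
# every step stays distinct, but even in-gamut colors have moved.
perc = src / src.max()       # [0.15, 0.38, 0.69, 0.85, 1.0]

print(rel, perc)
```

The trade-off shows directly: relative colorimetric is faithful in-gamut but merges OOG steps, perceptual keeps the steps but shifts everything.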

As I explained, in my images there aren’t lots of highly saturated colors, and I’m not as worried about those as about pastel tones, so I’m fine with Relative colorimetric. If that’s not your case, if you need accurate saturated gradations, you may well be better off using the Perceptual intent. That’s something you have to test with your own images.

In RT, you set the default rendering intent in:

  • for your display: Preferences > Color Management > Monitor > Default rendering intent:
  • for your printer: Preferences > Color Management > Printer (Soft-Proofing) > Rendering intent:

And again in RT, while editing an image, you can change the rendering intent used per image with this button:

(next to the soft-proofing and oog buttons)

The rendered icon in the screenshot is the Relative colorimetric rendering intent

@XavAL - Merci Xavier for your long explanation.

I’m in the process of learning this color management part of photography and I try to really understand exactly why using this or that working profile (and if that should always be the same or be related to the photo in question, etc.). But as you said, this is complicated stuff, but articles like yours (there are others as well on this forum) shines some more light on this topic. So thanks!


Ok, hence in my situation I can choose ProPhoto as the working profile, which holds all my camera’s colors quite precisely


I think that lies inside floating point mathematics (which I don’t know)

Ok , I misunderstood… I agree

I meant whether there was an intent setting from camera to working profile too… but logically there isn’t. All colors OOG of the working profile are pushed onto its boundary, because ProPhoto, ACES or sRGB don’t have intents, as they are matrix profiles :thinking:

If you have the mettle to re-enable Flash, this is the best illustration of rendering intent behavior I’ve yet encountered:


Someone else posted it recently here in another thread…

Someone correct me if wrong, but the two colorimetric intents are called that because they maintain the original color encoding for all colors that are in both the input and output colorspaces of the transform. Only the colors that are in the input space but not in the output space are messed with.

To the intent of the thread, recently I’ve not been using an intermediate working profile in my processing; I work the camera data in the original space all the way to output, and only then transform it to sRGB for the JPEG rendition using the matrix primaries for the camera. Seems to come out okay, but my images typically do not stress the extremes of color. And, it is my preference to know what the camera thinks should be the color, whitebalance notwithstanding…

The “Stanford-Levoy Parrot” (the subject of the link I posted) visually illustrates the translation of in-input/not-in-output colors per the various intents; relative colorimetric essentially draws a line from the color to the white point and drags the color along that line until it is in the output space. Thus, the hue of the color is maintained. But the cost is in the precision of the hue, as all those shades are now lost in mapping to a single point along the line.
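That “drag along the line to the white point” can be sketched in linear RGB. This is a simplification (real gamut mapping operates in a chromaticity plane, and the function below is hypothetical, not anyone’s shipping code), but it shows the mechanics:

```python
import numpy as np

def clip_toward_white(rgb, white=(1.0, 1.0, 1.0)):
    """Desaturate toward the white point just far enough that every
    channel lands inside [0, 1]; the hue is roughly preserved."""
    rgb, white = np.asarray(rgb, float), np.asarray(white, float)
    t = 0.0                        # fraction of the way to white
    for c, w in zip(rgb, white):
        if c < 0.0:                # typical OOG case: a negative channel
            t = max(t, -c / (w - c))
        elif c > 1.0:
            t = max(t, (c - 1.0) / (c - w))
    return (1 - t) * rgb + t * white

# A saturated green from a wider space shows up as negative red
# in the display space; dragging toward white brings it in-gamut.
print(clip_toward_white([-0.2, 0.8, 0.1]))   # ~ [0, 0.833, 0.25]
```

Every OOG color along that line ends up at the same place, which is exactly the loss of shade precision described above.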

I have one image that does vex blues, a shot of a theater stage that is lit predominantly with tungsten-level floods but is accented at the walls and ceiling with those nasty blue LED spotlights. I use this image to play with gamut compression tactics, here’s the result of the recent musing. First screenshot is of a segment of the image as a result of transform from camera (matrix) to display space (which on this tablet computer is close to sRGB):

Blotchy, posterized, the result of that relative colorimetric drag along the line to the white point to a single place in the destination space.

So, bear-of-little-brain here thought that, maybe, just dialing back the saturation would pull the extreme colors into the output gamut before I did the display transform. Here’s that result using a plain old HSL saturation tool dialed down halfway to total desaturation:

Better, the blues now have finer gradation, but this is at the expense of the rest of the image. There are ways to mask saturation tools to specific hue ranges, but rawproc doesn’t do that, yet…

But the best result has been with using a LUT profile to describe the camera space. I don’t have a target with enough colors to make such a thing, but I was fortunate to find a spectral sensitivity dataset for my Nikon D7000. From that I made a camera profile using dcamprof that, instead of the 3x3 matrix of primaries, has a LUT that does the relative colorimetric transform (A0toB0 is the LUT name, in ICC-speak). Here’s the result of replacing the camera matrix with that LUT in the transform to display space:

Nicer yet; kept a lot of the gradation as well as a richer hue. Also, the rest of the image didn’t suffer…

So that’s a bit of illustration regarding rendering intent. As for using a working profile, I think there’s probably value in that intermediate transform by providing an intermediate step in the journey from broad camera space to tiny display/rendition space, but I haven’t tested that yet…


Well, as I understand it, a rendering intent is only applied when you convert from one profile to another, or more precisely, from one color space to another color space.

If I’m not wrong, a profile doesn’t have a rendering intent. Instead, as illustrated by @ggbutcher, it has a way to convert from color space A to color space B, using a particular rendering intent. In his example the profile has a LUT to convert from camera space to display space using a Relative colorimetric transformation.

A way to easily understand what a rendering intent does is asking: How the heck are we going to put all the boxes from this drawer into that matchbox?

@ggbutcher : really useful link :smiley: Thanks!

If I’m not wrong Absolute colorimetric preserves the original color encoding, but Relative colorimetric adjusts colors to the white point of the target space (so they still look natural)

On the other hand, even when I was advised not to do so, I am curious to see what benefit would I have using my input color profile as a working profile (I know it’s well behaved and it can be used as such), all within RT. To my understanding the least conversions from color space to color space, the better. I will still be working in 32 bits floating-point, with unclipped values.

Will see…


Actually, the rendering intent a profile can support is determined by how the color transform information is specified. The colorimetric intents are supported by matrix primaries, while the perceptual and saturation intents are supported by LUTs. A good thread about all this is here:

in which the discussion is carried by heads bigger than mine, @elle, @gwgill, and @Jossie


Hi XavAL,
I’m playing with clip option in exposure tab…
With this flower I set the working profile to ProPhoto and highlighted the OOG colors with respect to RT_Large (ProPhoto)…; the printer profile is not set. Therefore I see which pixels of my image are OOG for ProPhoto.

with Clip OOG disabled


This means those cyan colors are the pixels that are OOG with respect to ProPhoto… OK??


When I enable the Clip option the cyan pixels disappear


Why does the spot show the same value with and without the Clip OOG option?



What do I miss??


My guess is (and it’s just my guess, as I’m not a programmer and I can’t read C code) that either way RT displays the resulting values taking into account the output color space and the rendering intent.

Then if you have Clip out-of-gamut colors on, as the color has been clipped it’s not OOG anymore, so no cyan shows. On the other hand, if Clip out-of-gamut colors is off, the preview image still shows the same color, but warns you that it has been clipped.


Ok, I suppose that there is a certain process for setting up the colour spaces before starting to work on an image.

From my point of view that is setting first the Input colour profile (to that of your camera sensor), leaving the Working Profile at its default of ProPhoto, and then comes the Output profile… for which I cannot yet determine what it should be set to.

One might respond that it depends, and I will go along with that (e.g. if continuing to work on the same image in Photoshop you need to set it to ProPhoto), but I am particularly interested in printing the image.

@XavAL mentioned going to preferences for the printer and setting up the colour management there, but I am not sure if this is the same as setting up the output colour space/profile, i.e. going to the Colour tab and setting in the Output Profile the appropriate profile for our printer.

Though setting a different Output Profile there (Colour tab) impacts the histogram, and that is my greater concern. Do we work with RTv4_sRGB to have a relevant histogram close to sRGB and then, before exporting to .jpg, switch to the printer profile (or ProPhoto, I don’t know), or do we work with our printer profile set in the Output Profile section from the beginning?

And finally, would it be wise to set the Output Profile to something big such as ProPhoto if we are to print the image (supposing that we have the best printer on the market today), or should we look for our printer’s profile to set in the Output Profile list?

The Output Profile should always be set with the capabilities of the rendition medium in mind. If you’re printing, you at least have an idea of what the destination device is; ideally you’d specify a custom profile for the particular printer, but if the printing service specifies JPEG or some other profile, using that would be prudent.

If you’re outputting a JPEG for the web, that’s somewhat problematic, you don’t really know what devices will be used. sRGB is still a safe assumption; even though more folk are buying high-gamut monitors, it turns out browsers (Chrome, at least) are defaulting to converting your precious image to sRGB for display. What’s really important then is to make sure a profile corresponding to the image data is embedded in the file, else the browser won’t have the information it needs to do a conversion. I intermittently consider switching my output profile for JPEGs to AdobeRGB; not that bad to look at raw on cheapee sRGB monitors, and provides a decent gamut for whatever conversion browsers or other software might be making.

Don’t know what to tell you about the RT histogram, 'cept may be to just use the Raw mode all the time. My software lets me sample the histogram at any stage of the pipeline, and even then I don’t use it for much except maybe white balance casts, for which a histogram of the raw data would be more useful anyway…


JPEG can save at most 8-bit depth. Does that mean that the largest color space I can export to my file is that of sRGB? Meaning that if I want to print on a good printer I have to use another format (e.g. 16-bit TIFF)?

In the meantime I’ve used some .icc profiles of relatively fine printers on an online site, comparing those profiles to sRGB, and saw that they fall inside it most of the time.

And thanks for the advice for RAW histogram usage.


I’ve never tested such, but it is prudent to think that 8 bits won’t allow the same expression of gradation as 16-bit or float. I do know that for matrix transforms the out-of-gamut data just piles up right inside the destination gamut, and that’s not likely to look different at any bit depth.
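A quick way to see the gradation cost of 8 bits (a generic sketch, not tied to any particular raw processor): quantize a smooth shadow gradient into 8- and 16-bit containers and count the surviving levels.

```python
import numpy as np

# A smooth gradient confined to the darkest tenth of the range,
# as shadow detail might be after moving to a much wider gamut.
grad = np.linspace(0.0, 0.1, 1000)

levels8  = np.unique(np.round(grad * 255)).size    # distinct 8-bit codes
levels16 = np.unique(np.round(grad * 65535)).size  # distinct 16-bit codes

print(levels8, levels16)   # about 27 distinct steps vs 1000
```

The 8-bit container collapses a thousand distinct tones into a couple dozen codes, which is where visible posterization comes from.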



It’s been a while since the original posts! :smile:

I really hope you’re using the latest dev version of RT. It is mostly what the next release (5.9) will be, so better use it.

Now to colour matters… I will write the shortest possible answers, so if you don’t understand something, just ask.

Let’s start by saying that colour management is about preserving the original colors from start (raw image) to end (output image). And yes, there are no «colours» in a raw image, but they become visible when decoded after demosaicing.

Let’s agree that any processing which alters colors is not color management. And here is where color profiles come in handy: they do their job of preserving colors as much as possible.

And let’s face it from the start: you will never see the «raw image», nor the image inside the engine (inside RT). Never ever. And that’s because there’s currently no display capable of such a wide gamut as cameras can record (around 14 bits), and no display capable of showing 32-bit-per-channel images (such as the «image» inside the engine).

So you are left with histograms, vectorscopes and waveforms to get a sense of what’s going on.

About the histogram: if you turn on the raw histogram (histogram-type-histogram-raw-small), what you will see is the raw values with a little bit of processing (just enough so it makes sense). And no matter what you do afterwards to the image, you won’t see any change in that histogram. So the raw histogram is good for knowing what you’re dealing with at the start of processing, but little more. You will know if your image has clipped channels in the raw data itself, and maybe if the image is too underexposed. I use it mainly to know how clipped the raw image is.

Then you have to change it to the usual (non-raw) histogram, so you know what your processing is doing to the image.

It is now that you have the choice to see the histogram with the values in the engine (working profile, Gamut-on ) or the values the image would have if you exported it with the output profile.

I usually choose the working profile, so I know if my image is clipped in the engine itself while I’m processing it.

Only when I’m about to export the image do I change the histogram to the output profile, so I know if the TRC (gamma) of the output profile will clip my highlights.

So let’s recap: use your camera profile for input profile, set a wide enough working profile, use a calibrated profile for your display and set an output profile depending on what you’re going to do with the output image (as @ggbutcher said).

About printing: it seems there’s much to say here, but I have little knowledge about fine tuning the printing process. Let’s just say you can’t print with RT and you will be using some other means to get your print.

No matter how you print your image, you may wish to know how it will look on paper. Well, sadly that’s not possible, as your display emits light, while paper just reflects it. Besides, displays work with 3 colors (primaries, RGB), while printers work with at least 4 colors (CMYK). And so do ICC profiles: there are three-primary ICC profiles for displays, and there are different four-color ICC profiles for printers. RT can only use three-primary profiles. RT is designed to work with displays, not printers.

But you can see how your print will look (toolbar_soft-proofing), more or less… Again, you will get a feeling about how it will look, but it won’t be exactly the same as a real print.

Another recap: process your image as you like it, using the profiles, histograms and scopes as if they were meant to be shown on a display, but then, from time to time turn on the soft-proofing (toolbar_soft-proofing) and judge if your image is going where you want it to.

When you’re finished, you will have 2 options: print it yourself, and sending it to a third party.

If you print at home, export with at least 16 bits, an uncompressed format (TIFF or PNG), and an output color profile wide enough to encompass the printer profile. Then, in a properly color-managed application, use the printer profile. Your exported image will be RGB (there’s no way to avoid this in RT), and here the higher bit count and wider gamut will come in handy when the conversion to the printer profile is made.

If you send it to a third party, pay attention to what they expect to receive. It may be an 8-bit sRGB image (not ideal, but anyway), so you will export in 8 bits and with RTv2_sRGB. No matter what, don’t let them do the conversion, neither in bit depth nor in color space, unless you really trust them. In spite of this, they will perform the conversion from RGB to the printer profile, so the results may not be what the soft-proofing suggested…



While formally, the HLG standard only allows 10 bit representation, Sony cameras shot 8-bit HLG and it worked GREAT - that’s the HLG luma transfer function, and Rec. 2020 gamut.

It is possible to go “too wide” with a gamut and start seeing quantization errors that manifest as “blotchy” colors - Sony’s S-Gamut mixed with 8 bits is problematic, which is why lots of people recommend shooting HLG on 8-bit Sonys - while in theory, S-Log2 is a bit better suited for encoding luma than the HLG transfer function in terms of making the most of your limited code values, S-Gamut and 8 bits don’t play nice together, and Sony cameras don’t let you mix Rec2020 gamut with S-Log2.

In general, I always use sRGB for JPEG outputs, under the assumption that a non-color-managed app will not display the image correctly if it is anything other than sRGB transfer function and gamut. If I’m targeting a wider/nonstandard gamut I’ll export as TIFF at higher bit depth. (Which I frequently follow by encoding to an H.265 10-bit HLG video, because that’s really the only widely supported format for delivering content with dynamic range and gamut beyond sRGB.) HDR/wide-gamut stills are still a disaster with regards to content delivery pipeline.


I do use one of the later dev versions yes. Thank you for the detailed explanation. Very useful information indeed!

I did conduct some tests myself on output profiles. I changed only the o/p profile for the same raw file, keeping all other RT settings the same. I exported 4 files altogether: 2 JPEGs and 2 TIFFs (16-bit), one using the RTv4_sRGB o/p profile and one using the ProPhoto one, respectively.

I found that for the JPEGs the file size was actually reduced by using the ProPhoto profile, while for the TIFFs there is a slight increase.

Following is the comparison between the jpg files…


And the comparison between the tif files…

File properties remain the same, as shown below:

Finally, the files, at least on my monitor, look almost the same, all of them (including the TIFF with the RTv4_sRGB o/p profile), except the .tif file that was exported with the ProPhoto profile.

I will include only two of them the ProPhoto exported ones both for .jpg and .tif.



I hope that one can see the difference on those pictures too.

Concluding, I agree that the viewing medium is vital for comparisons (and the viewer him/herself), but file-wise and size-wise I suppose that JPEG is not suitable for supporting a big variety of color profiles.

If someone could explain why the file size for the jpg picture was reduced while using the ProPhoto o/p profile it would be very useful.

Hmm, just compared an image I’d saved in different profiles for another purpose, and while not significant, the larger-gamut profile jpgs are a bit smaller. Hazarding a WAG, the JPEG DCT compression algorithm is finding more to consolidate in the larger-gamut images… ???

Wild ass guess?

Using wider gamut means smaller values - what was e.g. a 255 saturated red in sRGB is now e.g. 197 in ProPhoto. Any compression scheme now has an easier job as the span of values it sees is reduced (and probably their distribution/spread as well).

Bad idea for 8-bit containers, as you are making the already limited dynamic range and gamut quantization even worse, thus increasing the risk of posterization.

Edit: It’s even worse than I thought: sRGB (255,0,0) in ProPhoto becomes (135,25,4). Slightly better utilization for green and blue… Bottom line - if you use extremely wide (and virtual) gamuts like ProPhoto/ACES for archiving, better use float/half float, or at least 16-bit (and might as well leave as linear then). For other export, you want to use a gamut that is not as wasteful but still has good (or complete) coverage of Pointer’s gamut, like Rec2020 (still a bit wasteful if not used with more than 8 bits; it’s really intended for 10-bit and above HDR content), or at least DCI-P3 (“more crimsons”) or AdobeRGB (“more aquas”, better match for printer CMYK inks) if your content is clipping at sRGB.