Curves and how to control the colors

Agreed, that’s simply crap :wink:

I guess that a color-managed browser will convert the image to your display profile, using what is set in the system-wide preferences. It might correspond to sRGB if your monitor is not calibrated…

Here you are confusing bit depth and gamut. Those are two separate things: you can edit an image at 32-bit floating point precision, or a ProPhoto image at 8-bit precision.

If the processing is done in floating-point precision, as RT and most modern editors do nowadays, you can probably apply the same conversion back and forth a million times without any noticeable degradation. The issues you are referring to were valid when processing images at 8-bit or 16-bit integer precision. By the way, in such cases going to a wider colorspace usually makes things worse, and that’s why sRGB was invented in an era when floating-point processing was still too resource-heavy.

Again, what you describe is a limitation of the bit depth of the image, not the colorspace. Your example is probably referring to the editing of a JPEG image, which provides even less than 8 bits per channel.
If you start from a RAW file and do the processing in floating-point precision, you will hardly ever see gaps in the histogram.
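The histogram-gap point is easy to demonstrate numerically. Here is a minimal sketch (toy NumPy arithmetic, not RawTherapee's actual pipeline) comparing an 8-bit integer pipeline with a floating-point one after a strong exposure push:

```python
# A toy NumPy sketch (not RawTherapee's actual pipeline): compare an
# 8-bit integer pipeline with a floating-point one after a strong
# exposure push on a smooth dark gradient.
import numpy as np

signal = np.linspace(0.0, 0.1, 10_000)  # dark, smooth gradient
gain = 8.0                              # strong exposure push

# 8-bit integer pipeline: quantize first, then apply the gain and re-quantize.
eight_bit = np.round(signal * 255) / 255
pushed_int = np.clip(np.round(eight_bit * gain * 255) / 255, 0, 1)

# Floating-point pipeline: apply the gain at full precision.
pushed_float = np.clip(signal * gain, 0, 1)

print(len(np.unique(pushed_int)))    # a couple of dozen distinct levels
print(len(np.unique(pushed_float)))  # thousands of distinct values
```

The integer pipeline collapses the gradient to a couple of dozen distinct levels, which shows up as comb-like gaps in a histogram; the float pipeline keeps essentially all the original values.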

The absolute colorimetric intent is a very special case, and is useful in rare occasions. It skips the chromatic adaptation in the conversion from one colorspace to another, and is likely to provide shifted colors. Don’t use it unless you really know what you are doing…
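The "skipped chromatic adaptation" can be shown in a few lines. A hedged sketch using the standard Bradford transform, with hand-typed D50/D65 white points (illustration only, not code from any actual CMM): a colorimetric conversion with adaptation maps the source white onto the destination white, while absolute colorimetric does not, so paper white keeps its tint.

```python
# Hedged sketch of what "skipping chromatic adaptation" means, using the
# standard Bradford transform with hand-typed D50/D65 white points
# (illustration only, not code from any actual CMM).
import numpy as np

D50 = np.array([0.96422, 1.0, 0.82521])   # XYZ of D50 white
D65 = np.array([0.95047, 1.0, 1.08883])   # XYZ of D65 white
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, src_white, dst_white):
    """Von Kries-style adaptation in the Bradford cone space."""
    scale = (BRADFORD @ dst_white) / (BRADFORD @ src_white)
    return np.linalg.inv(BRADFORD) @ (scale * (BRADFORD @ xyz))

paper_white = D50  # media white of a print, measured under D50

# With adaptation (relative-style): the source white lands on D65.
adapted = bradford_adapt(paper_white, D50, D65)
# Absolute colorimetric: no adaptation, so the D50 tint survives on a
# D65 display -- the "shifted colors" effect described above.
print(np.allclose(adapted, D65), np.allclose(paper_white, D65))  # True False
```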

Most web browsers are not color-managed.

No such thing.

No one, except for the growing number of people with OLED smartphones who can actually view DCI-P3, Adobe RGB and other gamuts bigger than sRGB.

Colorspaces don’t have bit depths, and RawTherapee has a floating-point engine, IIRC as does LCMS2.

It’s the job of color management to make sure that the printed image matches the on-screen image, it’s not the user who is supposed to “adjust the view to the same level of saturation”.

That might explain why your photos above have a green tint where white should be. Absolute colorimetric is used for matching paper whiteness to screen, don’t use it as your monitor profile’s rendering intent.

That’s a really nice and concise definition!

Looks like if someone asks about a complicated subject there is no point in trying to explain it, especially if people choose to be pedantic about what was intended to be a rough outline.

The best answer is for such people to explain it all themselves.


The devil is in the details!


Thanks for these comments. I’d like to modify my 3 suggestions to make them sensible, hopefully with consensus; then the developers might decide whether they want to implement them. However, I’m a bit confused about what you say.
I see the neat trick for identifying conversion differences, and then when you said “If you use a perceptual intent in the first conversion…” I thought things were going well. You also said “…if the output profile provides an appropriate gamut mapping table”, so I thought this was extra data that could be bolted on to the profile, as I had imagined.

But then further on it seemed a Perceptual conversion was not possible – you said “perceptual intent is simply not foreseen for matrix profiles” and “there is nothing one can add [to the profile?]”

So I’m left wondering how could Perceptual conversion be achieved. Is it a matter of programming a calculation process which will work with current profiles, or do the profiles also need changing, or what?! I don’t think I can move forward on this until I understand if/how Perceptual could be implemented. For example, though you say item 3 is a valid request, I’m thinking that if Perceptual “just” requires a new calculation process, and it can be done with any output profile, then item 3 would be irrelevant!

I must learn not to dabble in these discussions! :slight_smile:

I think everyone is trying to help in their own way. Sometimes a difficult subject requires us to be teachable and look at things in multiple perspectives. There is the :thinking:, the :nerd_face:, the :confused: and the :crazy_face:. I love these discussions because I always manage to learn something new :slight_smile:.

I will limit myself to the algorithm “avoid color shift” that I set up in Lab mode :slight_smile:

First, it tries to bring colors back into the gamut, using an algorithm close to relative colorimetric (but of course without using LCMS).

Secondly, this function corrects color defects of the “Munsell” type, for example blue that becomes purple, or red that becomes orange. Several LUTs are applied which correct this defect when the user changes the chroma significantly.
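For illustration only — this is not the actual RawTherapee code, and the Munsell LUT correction step is omitted — here is a rough sketch of the chroma-reduction idea: hold L* and hue fixed and shrink chroma until the color fits inside linear sRGB (D65 white point and the standard sRGB matrix assumed):

```python
# Rough sketch of the chroma-reduction idea only (NOT the actual
# RawTherapee code; the Munsell LUT correction step is omitted):
# keep L* and hue fixed, shrink chroma until the color fits linear sRGB.
import numpy as np

WHITE_D65 = np.array([0.95047, 1.0, 1.08883])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def lab_to_linear_srgb(L, a, b):
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def f_inv(t):
        return t**3 if t > 6/29 else 3 * (6/29)**2 * (t - 4/29)
    xyz = WHITE_D65 * np.array([f_inv(fx), f_inv(fy), f_inv(fz)])
    return XYZ_TO_SRGB @ xyz

def fit_in_gamut(L, a, b, steps=200):
    """Shrink chroma at constant hue/lightness until RGB is inside [0, 1]."""
    for k in np.linspace(1.0, 0.0, steps):
        rgb = lab_to_linear_srgb(L, a * k, b * k)
        if np.all((rgb >= 0.0) & (rgb <= 1.0)):
            return L, a * k, b * k
    return L, 0.0, 0.0

# A very saturated blue that linear sRGB cannot hold:
L_out, a_out, b_out = fit_in_gamut(40.0, 60.0, -110.0)
```

The idea is the same as a relative-colorimetric-style clip toward the gamut boundary, but done in Lab so lightness and hue stay put while only chroma gives way.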


Hi guys, you know a lot more than me…! sorry

  • Yes Morgan, I can control the oversaturated areas with this Lab tool using the same curve as you! However, this curve was always a bit cryptic for me; when I tried to use it, the results were a bit trial-and-error.

Why did you do these two nodes there?

  • Maybe more documentation in RawPedia about how to use it would be appreciated.

  • Also, a new tool to control this problem in an easy way could be very interesting.

I was never careful about it and it seems to be very important. Sorry.
I use PCLinuxOS; how can I find a profile for my old LG Flatron L1950SQ monitor?



So that only the most-saturated tones are affected. Use the curve pipette to see where a tone from the preview lies in a curve.

You don’t find one, you make one. You need a colorimeter (a piece of hardware) for that.

Yes, I know the pipette, but how do I know where a color lies?
And after finding it? Should I click on this color and make a node? You have two nodes.
Sorry, I don’t understand the process well. :frowning:

Ah, OK, I need to buy a colorimeter.

We will wait for these improvements.

If people want to behave like trolls, adding to the info provided would be far more productive than being critical.

Mentioning ProPhoto in a detailed manner wouldn’t help. Providing a link such as this one isn’t much help either, as some might miss the fact that no monitor intended for human eyes can display its gamut.

Its purpose in life is the ability to hold other gamuts with adequate colour/saturation resolution. Another way of putting it is that it’s an artificial artefact, which is OK for 3-colour mixing.

People tend to associate bit depth with gamut size but disregard the colour resolution in that gamut, so I used the term bit depth.
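That trade-off is easy to quantify. A toy 1-D sketch (a stand-in for real colorimetry): 8 bits always gives 256 codes, so spreading the same codes over an encoding range twice as wide roughly doubles the quantization error for colors that would have fit in the smaller range.

```python
# Toy 1-D sketch (not real colorimetry): 8 bits always gives 256 codes,
# so spreading them over a range twice as wide makes each step coarser
# for colors that would have fit in the smaller range.
import numpy as np

rng = np.random.default_rng(0)
colors = rng.random(100_000)  # values confined to a "small gamut" [0, 1)

def quantize(x, full_scale, bits=8):
    """Encode x with the top code mapped to full_scale, then decode."""
    levels = 2**bits - 1
    return np.round(x / full_scale * levels) / levels * full_scale

err_small = np.abs(quantize(colors, 1.0) - colors).max()
err_wide = np.abs(quantize(colors, 2.0) - colors).max()
print(err_wide / err_small)  # close to 2: the wide encoding is coarser
```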

My photos above have a marginal blue tint which could be fixed pretty easily. That’s with respect to whites, and making some assumptions about clothing materials. Given the mixed lighting, all I wanted to do was show one method of correcting for it globally rather than locally on a photo. That must involve some colour shift. Green, though? Really? Then decide which parts of an image need more precise colouring. If I commented on it I would also add that more hair detail could and should have been brought out. If someone wants to disregard the method used, fine by me, but when someone else pointed it out to me, I didn’t.

One difference between perceptual and absolute is that absolute will clip out-of-gamut colours; perceptual won’t. I don’t post out-of-gamut colours and only work in sRGB.

aRGB - can’t be bothered to type Adobe every time, or the gamut’s more official name. Usually people realise what the ‘a’ means.

I fail to understand the comments about the problems I mentioned with 8-bit. They miss an important point: I often process JPEGs via 32-bit floating point. In that case changes can always be reversed. It’s a limitation of the engine that is doing the processing, not the format of the image.
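That reversibility claim can be checked directly. A toy demonstration (not any particular editor's internals): darken by 4 stops and undo it, once keeping the intermediate result in 32-bit float and once re-quantizing to 8-bit integers in between.

```python
# Toy check (not any particular editor's internals): darken by 4 stops
# and undo it, once keeping the intermediate in 32-bit float and once
# re-quantizing to 8-bit integers in between.
import numpy as np

jpeg_like = np.arange(256, dtype=np.uint8)  # every 8-bit code value once

f = jpeg_like.astype(np.float32) / 255.0

# Float pipeline: multiply by 1/16, then by 16, re-quantizing only once.
restored_float = np.round((f * 0.0625) / 0.0625 * 255).astype(np.uint8)

# Integer pipeline: re-quantize to 8 bits after the darkening step.
darkened_int = np.round(f * 0.0625 * 255).astype(np.uint8)
restored_int = np.clip(np.round(darkened_int / 255.0 / 0.0625 * 255),
                       0, 255).astype(np.uint8)

print(np.array_equal(restored_float, jpeg_like))  # True: fully reversible
print(np.array_equal(restored_int, jpeg_like))    # False: levels destroyed
```

So the float pipeline gets every original level back, while the integer pipeline permanently collapses them, exactly as described: the limitation is the engine's working precision, not the 8-bit container the image arrived in.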

Anyway, let’s see some other example of processing the image - all of it.

Saving with different rendering intents may or may not make a difference that is easily visible. Best to go and read how colour management works. Try to find a guide that doesn’t contain maths, just workflows and what happens at each stage.


@Ajohn nonsensical babbling helps no one.


Mechanical ways of judging colour that don’t take account of the fact that eyes actually look at the shot aren’t really a good idea. The picked colour doesn’t look like the colour in the shot either, does it? I’d advise you to wonder why.

Anyway, I thought I had mentioned 5 min of work - in other words a starter for ten, including a pp3 to get it that far. Let’s see your version done a different way and then I might take you seriously.

However if you want something different

As I mentioned, there is mixed lighting, so there will be some colour shift. The disturbing one to me is the white T-shirt. It very probably has a high degree of artificial material in it. Another problem.


Here is just the inverse issue: shadow areas with fewer colors and less saturation.


The raw and pp3 file:

_MG_3271.CR2.pp3 (10.4 KB)

The photo is a bit underexposed and the color temperature was set low deliberately (some people said that a low color temperature gives less color noise and you only have to push it up in editing…)

Camera profile was autoselected in RT.
The editing was also very simple, only an “S” curve.
You can also see the differences in blue tones, maybe because of the differences in color temperature between both raw processors.
The DPP settings were: Contrast +1, Saturation +2, Color temperature 4300 K, Exposure +0.50.

@Aleph what I do know is that RAW files save the information the sensor sees, with no processing done to them. This means that the white balance you set in camera, effectively does not matter at all if you shoot raw. What matters is the white balance you set in editing.

I talked to people in the past who tried to convince me that setting a different white balance in camera will make a difference, but the reason they think that is that they open the file in programs like Lightroom, which looks at the raw file’s metadata, reads what white balance the camera chose, and uses that when you open it.

You can change it though, and when you change it, it will be as if the new white balance is the one you used from the start. So I shoot auto white balance most of the time, since it doesn’t matter much anyway.
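The "you can change it later losslessly" part is just arithmetic, as long as nothing clips. A toy sketch with made-up numbers (hypothetical multipliers, not any camera's actual values): white balance on linear raw data is a per-channel multiplication, so re-balancing in the editor is identical to having chosen those multipliers in camera.

```python
# Toy sketch with made-up numbers: white balance on linear raw data is a
# per-channel multiplication, so (absent clipping) re-balancing in the
# editor is identical to having chosen those multipliers in camera.
import numpy as np

raw = np.array([[0.20, 0.30, 0.10],    # linear camera RGB, demosaiced
                [0.05, 0.40, 0.25]])

camera_wb = np.array([2.0, 1.0, 1.5])  # multipliers the camera chose
edit_wb = np.array([1.8, 1.0, 1.7])    # multipliers chosen while editing

# Undoing the camera's WB and applying the edit WB...
rebalanced = (raw * camera_wb) / camera_wb * edit_wb
# ...equals applying the edit WB directly:
direct = raw * edit_wb
print(np.allclose(rebalanced, direct))  # True
```

One real-world caveat: once a channel clips (in the raw file itself, or after an extreme multiplier), the operation is no longer invertible, which is one situation where a badly off in-camera setting can still cost you something.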

Someone please correct me if I am wrong :stuck_out_tongue:


It does matter. By shooting auto white balance you get additional information in the metadata of your raw file, which may be useful. Apart from that it does not matter, AFAIK.


I’ve put in a suggestion to improve things re. Perceptual by adding to this existing Issue -


@RawConvert thanks Andrew! :slight_smile: