Very confused with color management and profiles

In my case I have taken Aurelien’s recommendation to set my histogram profile to Rec2020 for gamut and clipping checks (so it matches my working profile). I then set my soft-proofing profile to sRGB. Profiles go input → working → output → display → histogram, and when you enable soft proofing the soft-proofing profile is used in place of your display profile, so when I toggle soft proofing with sRGB the histogram shows sRGB and no longer Rec2020. If I want to check something like ColorChecker RGB values, I do it with soft proofing on to get the expected values, since the color picker uses the histogram profile to retrieve them; of course I could just change the histogram profile, but I find this easier.

My monitor is not calibrated, and it is noticeably different switching from the display profile to the standard sRGB profile. Apparently this is common when using the profile that comes with the monitor, as monitors can’t really display a true black, so there is a bit of a toe in the display mapping. So in my case I can see a bit of a difference, but I mainly use soft proofing as a means to toggle the histogram and color picker rather than as a way to judge output. There was a lot of discussion, and some confusion revealed, in this old issue: Choice of display profile affects histogram, color picker values and overexposure indicators · Issue #3271 · darktable-org/darktable
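For anyone wondering why the picker numbers change at all when the histogram profile changes, here is a minimal numpy sketch (linear light only, no tone curve, and a made-up patch value; the matrices are the published Rec2020 and sRGB primaries-to-XYZ ones). The same colour simply has different coordinates in the two spaces:

```python
import numpy as np

# Linear RGB -> XYZ matrices (D65) for Rec2020 and sRGB primaries
REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# A made-up linear patch value, expressed in Rec2020 coordinates
patch_rec2020 = np.array([0.30, 0.20, 0.10])

# The same colour re-expressed in sRGB: Rec2020 -> XYZ -> sRGB
patch_srgb = np.linalg.inv(SRGB_TO_XYZ) @ REC2020_TO_XYZ @ patch_rec2020

print(patch_rec2020)  # what a Rec2020 histogram/picker would report
print(patch_srgb)     # what an sRGB histogram/picker would report
```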

For me, it seems rather simple (conceptually at least):
First, you have to make sure that your image has the colours as close to correct as possible, which means that on your side you use a calibrated and profiled display and a colour-managed processing chain.

Next, you want the best option to show what you want to your users. But you have no control over what device and software they use. So export to sRGB and embed the profile you use. That is all you can do… A colour-managed application on a profiled device will then show the colours you planned (more or less).

You cannot control which flavour of sRGB will be used by your end users (either now or next year), so don’t worry about it.

You can’t do anything about their device profiles and black point compensation (°), so just ignore that when exporting your image. Specifically, do not embed your device-specific info in images you publish.

(°) As I understand it, black point compensation is also device dependent, so not something you can set for display on someone else’s device/medium.


Like others have said: if you know that your pipeline is correct, you needn’t worry about what others might do wrong in theirs.


My understanding is that soft-proofing is there to see how the image will look in a profile other than the one we’re editing in. Especially for printing.

Yes, I think this is all we can do.

I think there is no BPC control in Darktable. Not sure what it does, but I think it does not do BPC.

Thanks to all for the patience!

L.

RawTherapee has a convenient button to switch the histogram between the working and output profiles. Even if I see lots of clipping in sRGB and no clipping in Rec2020 (my working profile), I don’t see any difference in the image, only in the histogram.

Indeed the best and only sensible approach.

Thanks!
L.

Yes, I think that is the idea, i.e. that the output profile handles everything when you export and does the necessary gamut adjustments…

Understanding the essential operation of a color transform was the most insightful thing I’ve done to understand color management. Once you get that, the rest is just stitching transforms together.

So, the camera records a scene, and that data has to be worked before it’s presentable in a way humans can appreciate. One of the things that needs to be done is to transform the data down from its spectral response to a colorspace that can be worked with or displayed. This is the first of a few color transforms, and it works like this:

  1. The raw data needs to be assigned a set of color primaries, usually 9 numbers (a 3×3 matrix) that represent some notion of its color response. These can come from a profile file (ICC or DCP), or from a list stored internally by the raw processor.
  2. The transform itself is a two-step operation. First, the camera primaries are used to transform the image data to the XYZ colorspace, which is based on the original 1931 CIE color-matching experiments. Really, the XYZ space is just a common intermediate in this process; it keeps us from having to maintain camera profiles for every destination.
  3. The image data in XYZ is then transformed to the destination colorspace. This could be a working space like ProPhoto or Rec2020, or it could be direct to a display-sized colorspace like AdobeRGB or sRGB.
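As a rough sketch of those three steps (Python/numpy, linear data only; the camera matrix below is made up for illustration, the real numbers would come from a DCP/ICC profile or the raw converter’s internal table):

```python
import numpy as np

# Step 1: camera primaries -- 9 made-up numbers standing in for a real
# camera-to-XYZ matrix taken from a DCP/ICC profile or an internal table.
CAM_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                       [0.30, 0.65, 0.05],
                       [0.05, 0.10, 0.90]])

# Step 3's destination: XYZ -> sRGB (published IEC 61966-2-1 matrix).
# Swap this for an XYZ->Rec2020 or XYZ->ProPhoto matrix to land in a
# working space instead of a display-sized one.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

# A demosaiced, white-balanced, linear camera pixel (made up)
pixel_cam = np.array([0.42, 0.31, 0.18])

# Steps 2 and 3 chained: camera -> XYZ -> destination
pixel_xyz  = CAM_TO_XYZ @ pixel_cam
pixel_srgb = XYZ_TO_SRGB @ pixel_xyz
print(pixel_xyz, pixel_srgb)
```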

The above describes the general process of a transform, in the context of the first one that needs to be done out of camera space. I wrote something about a year ago that describes how transforms are generally used in raw processing; you can find it here:

HTH…


Are you sure?
To me it sounds weird to display the stats of data mapped from a ‘narrow-gamut’ space like sRGB, passed through a potentially even narrower display colour space, into a wide-gamut one like Rec2020 (what does the histogram tell us at that point?). Plus, ideally, if you switch displays (and display profiles), the histogram should not change (this may be a limitation of the pipe, I guess; I know there have been many such issues in darktable, with colour pickers and other tools being affected by the display profile).

Tone in darktable is impacted even by the zoom magnification. Although it is more complex than RawTherapee, my experience is that it has more bugs.

Todd’s sentence is a bit simplified. Go to the link in my post above; there’s a diagram that lays it out more specifically, with the exception of where the histogram is taken…

Off-topic: I don’t think that is true on either count. There are also tools in RT for which the zoom level influences the preview, and considering the amount of issues on GitHub it is safe to say we are far from bug-free (and we certainly have fewer developers than darktable to fix them).

There was some discussion that logically it should be like this

But currently as coded it is not allowed.

Issue #3271 is a good read to follow what has gone on and how things currently stand.

maybe also here

It would be great if there were a web page with the (more or less?) consensual things discussed here. There are zillions of pages about color management, rendering intents, etc., but few explaining carefully what a recommended workflow for a photographer is, especially a Linux one.

I’m not sure what you are referring to, but, yes, zoom affects certain tools in RawTherapee, and that is documented and visually informed via the 1:1 tag. Those are not bugs. But probably you are referring to something else.

I’m far from claiming that RT is bug free, but at least I’ve never encountered one that completely messed up the output of my pictures, like this one:

Notice that a bug report was filed and rudely closed without even being looked at (hence the number of open bugs is not a reliable metric):

Anyway, I do not use Darktable except in emergencies, so I could be very wrong. But I found this bug after playing with darktable for 10 minutes, after a long time using RawTherapee.

I don’t understand why the histogram gets special treatment. The histogram presented for use should be based on an image in a particular tone and color state, however that image got there.

The article I wrote attempted to discuss the basic needs of color management in generic terms, without the baggage of a particular implementation. The OP is trying to understand it all, both concept and implementation, and it’s rather tortuous. I go back to dcraw as the illustration of the essential color management workflow, camera → output; everything else is amalgam to, and subdivision of, that essential operation, crunching down camera color to something that can be rendered somewhere…

Turning off opencl often cures many of these weird issues…

No opencl here.

As you have already realized, color management (or color in general) is at the very least an elusive topic, and it needs lots of explanation and familiarity with quite a few notions from color theory, maths and, perhaps, common sense. I’m not going to enter that discussion (an endless one, maybe…).

I’ll try to keep the explanations easy and as short as possible, but even then this post is going to be lengthy. Sorry for that. And I’m going to explain things from the RawTherapee point of view. I have no clue about how darktable or other software manages colors.

As others have already said, you should always work within a color-managed environment if you care about showing the «right» colors. Not using a color-managed workflow, or exporting without an embedded profile, is prone to errors and weird colors (it may not happen, but you can’t be sure at all).

If you understand color management as «a way to convert colors between devices so all of them show the same hues», then the way it works is pretty simple (for us, users; it can be a nightmare for developers, though).

In a raw developer the image has certain colors, which we are processing and modifying to taste, but we can’t see them. They are in the so-called processing engine. To be able to properly interact with the app’s tools, we have to see the image, so a conversion is made from the profile the image is in (the working profile) to the display profile (that’s why a properly calibrated display is of utmost importance). Again: you will never ever see what is «in» the processing engine, but you need an accurate conversion of that image, so you can see as accurately as possible the colors the engine is dealing with.

If you are happy with the results, you export your image with a convenient profile, by means of another conversion from working profile to output profile, so colors remain the same as they were in the processing engine (let’s put aside out of gamut colors for a while).

Then, somebody else downloads your image, and by means of another conversion from the output profile to a display profile (the display used by that somebody), the image colors remain the same.

If you don’t embed an output profile, your raw processing app WILL still apply a certain profile to the image (in RT it is sRGB by default), but will not say anything to anybody about it. So the image is encoded in a certain color space, but nobody knows which. It’s like giving somebody some coordinates to find a treasure, but nobody knows which map they belong to.

Let’s say an application then loads that image, and as there’s no profile embedded, it will ASSIGN an sRGB profile… There’s an important distinction between converting and assigning profiles: «converting» means recalculating the RGB values of each pixel, and even though they change, the color remains the same. The coordinates of that color are simply different in a different color space, but the color itself does not move. On the contrary, «assigning» means preserving the pixel values (the coordinates) and interpreting them in a predefined color space. If the image was encoded in sRGB and the assigned profile is that same sRGB, then you’re lucky. If not, colors will certainly change. How much depends on how different the color space the image was encoded in is from the one that gets assigned.

Following the treasure example: you have given coordinates pertaining to Poland, but the treasure hunter starts using a map from the USA. Most probably he won’t find the treasure at all…
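If numbers help, here is a minimal sketch of the difference (Python/numpy, linear values, TRCs ignored, made-up pixel): converting recalculates the coordinates so the colour, i.e. the XYZ value, stays put; assigning keeps the coordinates and silently moves the colour.

```python
import numpy as np

# Linear RGB -> XYZ matrices (D65) for sRGB and AdobeRGB primaries
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBERGB_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                            [0.2974, 0.6273, 0.0753],
                            [0.0270, 0.0707, 0.9911]])

pixel_srgb = np.array([0.60, 0.30, 0.20])   # a made-up pixel encoded in sRGB

# CONVERT sRGB -> AdobeRGB: the numbers change...
pixel_adobe = np.linalg.inv(ADOBERGB_TO_XYZ) @ SRGB_TO_XYZ @ pixel_srgb
# ...but the colour (its XYZ value) is identical:
print(SRGB_TO_XYZ @ pixel_srgb, ADOBERGB_TO_XYZ @ pixel_adobe)

# ASSIGN: keep the sRGB numbers but pretend they are AdobeRGB coordinates.
# Same numbers, different colour:
print(ADOBERGB_TO_XYZ @ pixel_srgb)
```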

So, from my point of view you must always embed a color profile. That way you can at least be relatively sure that people using proper color management will see your image more or less as you intended. The rest will or will not see it as you wish, but you have no control over that. If you don’t embed a color profile, you won’t have any control over anybody looking at your image.
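A hedged Pillow/LittleCMS sketch of that advice (the file names are placeholders, and the built-in sRGB profile stands in for whatever output profile you choose): convert the pixels to the output profile, then embed that profile in the saved file.

```python
from PIL import Image, ImageCms

img = Image.open("edited.tif")          # placeholder: an exported image in some RGB space

srgb = ImageCms.createProfile("sRGB")   # Pillow's built-in sRGB profile
# Placeholder: the profile the image is currently encoded in
src = ImageCms.getOpenProfile("my_working_space.icc")

# CONVERT the pixel values from the source profile to sRGB...
out = ImageCms.profileToProfile(img, src, srgb, outputMode="RGB")

# ...and EMBED the sRGB profile, so viewers know which map the coordinates belong to
out.save("for_the_web.jpg",
         icc_profile=ImageCms.ImageCmsProfile(srgb).tobytes())
```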

Now let’s talk about soft-proofing and the image rendered on display.

There is an unavoidable fact: your display will never ever show colors outside its own gamut. This seems a silly sentence, doesn’t it?

Usually your working profile will be a large one. Much larger than any display is currently able to show, professional display or not. You can’t see all possible colors from a large gamut. And usually you won’t even see all the colors of an output image, unless you have a good display capable of covering 100% of sRGB and you save your image with an sRGB profile.

Why is this important? Because with your Dell display you will never see 100% of the possible sRGB colors (according to the specs, it covers 99%). And if you wish to save your image as, let’s say, AdobeRGB, then for sure you won’t ever see all the possible colors. The display will usually not show the full gamut of the output profile, even when soft-proofing.
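A small numpy sketch of that fact (linear values, made-up colour): take a strongly saturated Rec2020 green and express it in sRGB; any component falling outside 0…1 is precisely one of those colours the smaller space cannot represent, which is what soft-proofing and the rendering intents mentioned below have to deal with.

```python
import numpy as np

REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

# A very saturated green, comfortably inside Rec2020 (made-up value)
green_2020 = np.array([0.05, 0.90, 0.05])

green_srgb = XYZ_TO_SRGB @ REC2020_TO_XYZ @ green_2020
print(green_srgb)                                   # the red component comes out negative
print(np.any((green_srgb < 0) | (green_srgb > 1)))  # True -> outside the sRGB gamut
```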

In RT the conversion while soft-proofing may be like this (depending on the combination of buttons and settings): working profile > output profile > display profile.

Why is it like that? Because the display can only show its own gamut.
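For the curious, LittleCMS (the ICC engine RT uses) exposes exactly that chain as a single ‘proofing transform’. A hedged Pillow sketch, with placeholder file names: the image in the working profile is rendered to the display profile while simulating the sRGB output profile in between.

```python
from PIL import Image, ImageCms

img = Image.open("render.tif")  # placeholder: image data in the working profile

# Placeholder profiles: a wide-gamut working profile, the built-in sRGB as the
# output profile being simulated, and the monitor's own profile
working = ImageCms.getOpenProfile("my_wide_gamut_working.icc")
output  = ImageCms.createProfile("sRGB")
display = ImageCms.getOpenProfile("my_monitor.icc")

# Soft-proof: working -> (simulated) output -> display, in one transform.
# Default intents and the SOFTPROOFING flag are used here.
proof = ImageCms.buildProofTransform(working, display, output, "RGB", "RGB")
preview = ImageCms.applyTransform(img, proof)
preview.show()
```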

About browsers and the dreaded Chrome/Chromium (at least until version 85): some time ago I opened a thread about the convoluted way that browser handles (handled?) color management. In short: if you wish to be sure that Chrome renders about the same colors as Firefox and RT, you’d better use a profile with a standard sRGB TRC («gamma curve»).

If for some reason you still wish to save your images as sRGB without an embedded profile, you’d better use an industry-standard profile: sRGB IEC 61966-2-1:1999. This is the one almost everybody will default to. But as I said, to me this is not advisable.
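For reference, the «gamma curve» of that standard profile is not a plain power law: IEC 61966-2-1 defines a piecewise curve with a short linear toe. A small Python version of it, in case you want to see the numbers:

```python
def srgb_encode(linear: float) -> float:
    """Linear light (0..1) -> sRGB-encoded value, per IEC 61966-2-1."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value (0..1) -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(srgb_encode(0.18))   # middle grey encodes to roughly 0.46
```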

The profiles from Elle Stone are better in my opinion, but they will be slightly different from the previous one. I wouldn’t expect much of a change, but anyway, there will be some differences.

Finally, the rendering intents are there to cope with those out-of-gamut colors I didn’t want to talk about at the beginning. And they are important because, at the very least, they are needed so you can properly see your wider-gamut colors (from the working profile) within the narrower gamut of your display.
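To make that concrete: if nothing clever is done with an out-of-gamut colour, its offending components are simply truncated, which shifts hue and saturation. Rendering intents exist so the CMM can do something smarter than this blind clip (the snippet is only a toy illustration, not what any particular intent actually computes).

```python
import numpy as np

# Roughly the out-of-gamut sRGB value from the Rec2020 green sketch above
green_srgb = np.array([-0.45, 1.01, -0.04])

# Blind clipping: all that is left if no gamut mapping is applied
print(np.clip(green_srgb, 0.0, 1.0))   # [0. 1. 0.] -- a different, fully saturated colour
```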

I have left out many topics and details here, but to process an image, maybe there’s not much more you need to know about. Well, it will depend on how accurate you wish your colors to be rendered :blush:

Hope it helps.