Color Management in Raw Processing

Darktable doesn’t really have an RGB working profile; it does (almost) everything in Lab, so Lab is the working color space for darktable. At the time this seemed like a good idea, since Lab covers the full human visible gamut (so anything the camera can see will be covered) and has an inherent separation of chroma from luma. Alas, you run into issues since Lab is perceptually uniform, which isn’t the greatest kind of space to edit in[1]. The other issue is that Lab is ‘here be dragons’ territory with regard to editing HDR[2].

TL;DR: for now the working ‘profile’ of darktable is Lab (some people are working to change this, but it is a slow process since they want to preserve older edits)
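To make the “Lab as working space” discussion concrete, here is the textbook XYZ → CIELAB conversion with a D50 white point. This is a generic illustration, not darktable’s actual code, and the helper names are mine:

```python
# Textbook XYZ -> CIELAB conversion with a D50 white point.
# Generic sketch for illustration, not darktable's implementation.

D50 = (0.96422, 1.0, 0.82521)  # XYZ of the D50 white point

def _f(t):
    # CIELAB's cube-root compression, with the linear toe near black.
    delta = 6 / 29
    return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

def xyz_to_lab(X, Y, Z, white=D50):
    fx, fy, fz = (_f(v / w) for v, w in zip((X, Y, Z), white))
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# An 18% reflectance grey lands near L* = 50 with a* = b* = 0:
print(xyz_to_lab(0.18 * D50[0], 0.18, 0.18 * D50[2]))  # ≈ (49.5, 0.0, 0.0)
```

Note the cube root in `_f`: that non-linearity is exactly what makes Lab perceptually uniform rather than linear in light energy.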


[1] For historic reasons a lot of editing was done in perceptually uniform spaces, since that could be done with only 8 bits per channel; but now that we can do 32-bit float editing, working in a linear space gives far fewer artifacts
[2] There has been some research, but from what I could find, although L* is surprisingly accurate well above 100, it is unknown how accurate a* and b* are at those L* values.
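Footnote [1]’s point can be made concrete: a 50/50 blend of black and white computed on gamma-encoded values differs from the physically meaningful blend computed in linear light. A minimal sketch, assuming the standard sRGB transfer curve (nothing here is darktable-specific):

```python
# Why linear-light editing matters: blending gamma-encoded values
# gives a different result than blending in linear light.
# Assumes the standard piecewise sRGB transfer curve.

def srgb_to_linear(v):
    """Decode an sRGB-encoded value in [0, 1] to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode a linear-light value in [0, 1] back to sRGB."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Naive blend of the encoded values:
encoded_blend = (black + white) / 2  # 0.5

# Decode, average in linear light, re-encode:
linear_blend = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
# ≈ 0.735 — noticeably lighter than the naive 0.5
```

The gap between the two results is one source of the artifacts (dark, muddy gradients and blends) that the footnote alludes to.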


I used darktable as an example because I’m familiar with it and (some of) the code, but my goal was to understand what the working profile is, and it’s clearer now, thanks.

Is there a PR for that? Maybe I’m duplicating efforts…

I don’t think there is a pull request, but I think I heard @anon41087856 talk about working on it.


Imho, raw processing is mainly done in RGB. The input (raw files) is mostly RGB data (with some very rare exceptions), and demosaicing is also done in RGB (with some exceptions, such as X-Trans Markesteijn and Bayer AHD demosaic, where an intermediate conversion to Lab or YPbPr is done).

After demosaicing the image is not raw anymore, which means the conversions after demosaicing don’t fit the topic of this thread.

It may help to remember that RGB profiles are essentially three components:

  1. Chromaticity of the white point, expressed as either x and y, or X, Y and Z. (In theory, chromaticity might be expressed as the a* and b* of L*a*b*, but it usually isn’t.)

  2. Likewise chromaticity of the three primaries.

  3. Transfer function, aka transfer curve or tone response curve (TRC). This might be a simple power (aka gamma) curve (e.g. AdobeRGB1998), or a linear portion plus a power curve (sRGB, Rec709, Rec2020, ProPhoto). This doesn’t affect chromaticity.

(1) and (2) can be combined with a 3x3 chromatic adaptation transform (CAT) matrix to make either a 3x3 chromatic adaptation matrix (CAM), or a look-up table (LUT).
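As an illustration of how (1) and (2) combine into a matrix, here is a generic sketch (not tied to any particular CMM; the function names are mine) that derives the RGB→XYZ matrix from a profile’s primary and white-point chromaticities. A real CMM would additionally apply a CAT to adapt the result to the D50 PCS:

```python
import numpy as np

# Derive a profile's RGB -> XYZ matrix from its primary and white-point
# chromaticities (items 1 and 2 above). Generic sketch for illustration.

def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to XYZ with Y normalised to 1."""
    return np.array([x / y, 1.0, (1 - x - y) / y])

def rgb_to_xyz_matrix(r, g, b, white):
    # Columns hold the (unscaled) XYZ of each primary.
    prims = np.column_stack([xy_to_XYZ(*p) for p in (r, g, b)])
    # Scale the columns so that RGB = (1, 1, 1) maps to the white point.
    scale = np.linalg.solve(prims, xy_to_XYZ(*white))
    return prims * scale

# sRGB/Rec.709 primaries with a D65 white point:
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      (0.3127, 0.3290))
# The middle (Y) row reproduces the familiar Rec.709 luma weights
# ≈ 0.2126, 0.7152, 0.0722.
```

Running the same function with different primaries (e.g. ProPhoto’s) yields that profile’s matrix, which is all a matrix-based ICC profile really stores besides the TRC.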

Common CATs include Bradford, CMCCAT2000 and CAT02.
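A von Kries-style adaptation using the Bradford matrix can be sketched like this; the matrix and white points are the standard published values, while the `adapt` helper name is mine:

```python
import numpy as np

# Von Kries-style chromatic adaptation with the Bradford matrix:
# adapt an XYZ colour from a D65 white to the D50 white of the ICC PCS.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

D65 = np.array([0.95047, 1.0, 1.08883])  # XYZ white points
D50 = np.array([0.96422, 1.0, 0.82521])

def adapt(xyz, src_white=D65, dst_white=D50, cat=BRADFORD):
    # Transform into the CAT's "cone-like" space, scale by the ratio
    # of the adapted whites, and transform back to XYZ.
    src = cat @ src_white
    dst = cat @ dst_white
    return np.linalg.inv(cat) @ (np.diag(dst / src) @ (cat @ xyz))

# The source white maps exactly onto the destination white:
print(adapt(D65))  # ≈ [0.96422, 1.0, 0.82521]
```

Swapping `BRADFORD` for the CMCCAT2000 or CAT02 matrix changes only the cone-like space; the scale-by-white-ratio structure is the same.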

Or all three components could be combined with a CAT to make a single LUT. I don’t think this is normally done.

From any profile that has been correctly specified, we can easily reverse-calculate to find the chromaticities of the WP and primaries.

Assigning a profile merely attaches this metadata to the image (aka embedded profile).

Converting to a profile also recalculates image pixels. Converting an image is always from one profile to another profile. If it has no attached profile, it can’t be converted. However, when there is no attached profile, software may assume some version of sRGB.
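For a concrete illustration of assign vs. convert, here is a sketch using Pillow’s ImageCms module (littleCMS bindings); the image and colour values are made up for the example:

```python
# Assign vs. convert, sketched with Pillow's ImageCms (littleCMS
# bindings). Image and colours are made up for illustration.
from PIL import Image, ImageCms

im = Image.new("RGB", (8, 8), (200, 30, 30))

srgb = ImageCms.createProfile("sRGB")
lab = ImageCms.createProfile("LAB")

# "Assigning" a profile: attach metadata only; pixels are untouched.
im.info["icc_profile"] = ImageCms.ImageCmsProfile(srgb).tobytes()

# "Converting" to a profile: pixel values are recalculated.
converted = ImageCms.profileToProfile(im, srgb, lab, outputMode="LAB")
```

After the convert, `converted` holds different numbers that mean the same colours; after the assign, `im` holds the same numbers with their meaning pinned down.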

We also need to know that there are many versions of sRGB or Rec2020 etc. profiles, partly because there are many versions of the “standard” illuminants D50, D65 etc., partly because the best guesses for physical “constants” change over the years, and partly because different levels of precision are used.

Analogies may be helpful, but I suspect they often add confusion. A raster image in a computer is essentially a large quantity of numbers for the pixels, and associated metadata that specify what colours should be produced from the numbers. Data and metadata.


At work, after having to undo misunderstanding based on a couple of bad analogies, I’m much less a fan of them.

Yes, I have read some of those papers. I recall there being proposals for an HDR-CIELAB. I wonder though at what point do we move on to other models such as CIECAM02 or CIECAM16, etc.

I’m still cogitating a post of some sort that incorporates the cartoon all of you have helped with, but I’m sidetracked right now doing work with the nascent librtprocess library. I’m thinking of a few more cartoons, ones that describe things like the transform operation (inputprofile → PCS → outputprofile), and the like, surrounded by a post of prose…


I always thought of CIECAM less as a color model and more as a perception model, useful for adjusting for certain differences in, for example, the surround, but not really intended as a space to edit pictures in.

Yes! I do like those wordings! @ggbutcher


More cartoons please! Very useful - really! - and they collect great feedback almost immediately.

Apologies, my attention diverted to … SQUIRREL!!!

Sorry, back again. Okay, here’s R3:

I made a profile icon from a ppmcie graphic. Note that the Maxwell gamut triangle gets smaller as the workflow progresses, in the manner a camera → working → output color migration would happen in practice. Not sure what to say about that on the chart; in my mind, that’d be described in the accompanying prose.

I do intend to make something of this, but my attention span will be a bit fragmented in the next couple of months.


Pretty pictures always help with the experience. :slight_smile:

@ggbutcher

Hello Glenn,

Note that the Maxwell gamut triangle gets smaller as the workflow progresses

I always thought that the working profile should have a colour space as large as possible in order not to lose any colour information during processing? RT e.g. uses ProPhoto as its colour space.

Where do the rendering intents enter in your graph?

Hermann-Josef

@Jossie,

Most camera spaces I’ve considered are bigger than the “popular” working spaces, which are in turn mostly larger than the rendition spaces (although high-gamut displays are challenging that), so my iconography attempts to illustrate the progression.

With regard to rendering intent, I was beginning to consider a separate cartoon about the transform act, maybe try to provide context around both PCS transforms and the apparent “VFR-direct” ACES/OCIO approach.

Oh, VFR-direct is an aviation term for “go directly there”…

This is getting into “print it and hang it on the wall” territory. Very understandable.

How are you going to handle the clipping of the visible gamut horseshoes? :stuck_out_tongue:

I can’t see what you’re talking about… :smile:


Well, does your png have a profile? :nerd_face: