Color transforms in Raw Processing

A bit of discussion in Red Lily or Daylily. - #21 by sls141 prompted the following question in my oh-so-little head: Where exactly in the raw processing workflow do/should color transforms occur? A lot of discussion about ordering of operations has occurred in darktable threads of late, but there's been no specific discussion of this.

So, data from the camera starts off in what I’ll refer to as ‘camera space’, and eventually has to be crunched down gamut-wise to some colorspace suitable for the rendition medium. If one determines a working profile is a necessary intermediate, then the camera data is transformed somewhere in the pipe from camera space to working space (e.g. ProPhoto, Rec2020, etc.), and the final export transform is then from working space to rendition space (e.g. sRGB). Of note is that there’s an additional transform going on during editing, from either camera space or working space to display space. This is the essence of what I describe in Article: Color Management in Raw Processing.
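As a sketch of what each of those hops amounts to: for matrix profiles, every colorspace conversion in the pipe is just a per-pixel 3×3 multiply on linear data. The matrix values below are placeholders, not a real camera profile:

```python
import numpy as np

# Placeholder camera -> working matrix; a real one comes from the
# input (camera) profile and the working space primaries.
cam_to_working = np.array([[0.80, 0.15, 0.05],
                           [0.10, 0.85, 0.05],
                           [0.05, 0.10, 0.85]])

def transform(image, matrix):
    """Apply a 3x3 colorspace matrix to an H x W x 3 linear image."""
    # out[h, w, i] = sum_j matrix[i, j] * image[h, w, j]
    return np.einsum('ij,hwj->hwi', matrix, image)

rng = np.random.default_rng(0)
img_cam = rng.random((4, 4, 3))        # stand-in for demosaiced camera data
img_working = transform(img_cam, cam_to_working)
```

The display and export transforms are the same operation with different matrices (plus any tone curve the destination space requires).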

My specific questions are these:

  1. Exactly where in the processing pipe should the ‘camera space to working space’ transform be accomplished?

  2. Is the ‘camera space to working space’ really needed? I ask this because I’m not currently using a working space, I do all my work in camera space and do the colorspace conversion at the end of the processing chain at file export. Seems to work okay, and probably has something to do with “well-behaved camera profiles” as described by Elle Stone here: Well behaved working space profiles and applied to camera spaces here: Make a better custom camera profile

These colorspace transforms are simply other image operations, just like tone curves, exposure, and white balance, but we don’t necessarily think of them as such with regard to ordering, so tell me what you think…

The bottom line is, as always, color science is tricky :wink:

But with regards to your questions:

  1. My impression so far has been “as early on as possible, but after demosaicing”. Demosaicing is more of a sensor operation than a color operation (although it might sometimes be necessary to do white balancing before it…). Once you have your pixel values you want to be able to treat them as RGB colours, representative of the colors in the scene. That’s when you do your color space transformation.
  2. It’s all linear algebra, right? Multiply two matrices together and you have a single matrix, i.e. one conversion. I believe the camera and working profiles are kept separate because it is convenient: your camera profile may vary with lighting conditions, while your working profile may vary with how you want the adjusters of your tools to behave. Working with only a single profile would make it cumbersome to adapt to either.
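A minimal sketch of that collapse, using a made-up camera→XYZ matrix together with the standard XYZ→linear-sRGB matrix — chaining the two conversions is numerically identical to applying their product once:

```python
import numpy as np

# Hypothetical camera -> XYZ matrix (placeholder values; a real one
# comes from the camera's input profile).
cam_to_xyz = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]])

# XYZ -> linear sRGB (standard D65 matrix).
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

# Two conversions collapse into one matrix.
cam_to_srgb = xyz_to_srgb @ cam_to_xyz

rgb_cam = np.array([0.4, 0.5, 0.2])
step_by_step = xyz_to_srgb @ (cam_to_xyz @ rgb_cam)
one_shot = cam_to_srgb @ rgb_cam   # same result, one multiply per pixel
```

So keeping the profiles separate costs nothing at apply time; the CMS can fold them before touching pixels.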

I’m not able to approach this from a scientific point of view, but from experience I can state that it makes a big difference where in the pipeline the input colour profile is applied.

I’ve been playing with the ordering as a result of the aforementioned Red Lily or Daylily thread, due to more extreme results when highlights are clipped if using a D750 profile.

In darktable the input colour profile is applied, by default, after exposure. I would expect this to be done right after demosaic. Easy to check if this has a better result:

Default pipeline:

Input colour profile moved to just after demosaic:


Look at the white bark.

Another example: Default:

And early in the pipeline:

The red “mist” on the yellow in the first one is interesting…

I know that both images have issues (first: Highlights are clipped, second: out of gamut), but still.

These examples are very basic: only the standard modules plus exposure and filmic (v4, scene-referred) are turned on, all settings are left as-is when turned on, both versions are cropped, and the colour profile used is the one supplied in the Red Lily or Daylily thread.

I’m not going to make bold statements based on a bunch of test edits I made in roughly the last 24 hours, but it is obvious(?) food for thought, and I am very curious to find out what others have to say about this (hopefully based on some science :slight_smile: )


One of my thoughts on this: if there are operations such as white balance that are beneficial to do pre-demosaic (I’ve heard this said, but I do not understand why), would there be any benefit to doing the camera → working space transform right after loading the raw image? That would require a transform that recognizes the mosaic pattern.
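For what it's worth, one plausible reason white balance works pre-demosaic while a full colorspace transform doesn't: WB is a diagonal (per-channel) scale, so it can be applied per CFA site, whereas a 3×3 camera → working matrix mixes channels, and a mosaiced pixel carries only one channel. A toy sketch for an assumed RGGB layout (gain values made up):

```python
import numpy as np

def wb_on_mosaic(raw, r_gain, g_gain, b_gain):
    """Apply per-channel white-balance gains to an RGGB Bayer mosaic.

    Works site-by-site because WB never mixes channels; a full 3x3
    colorspace matrix could not be applied this way, since each site
    holds only one of R, G, B.
    """
    out = raw.astype(np.float64).copy()
    out[0::2, 0::2] *= r_gain   # R sites
    out[0::2, 1::2] *= g_gain   # G sites (even rows)
    out[1::2, 0::2] *= g_gain   # G sites (odd rows)
    out[1::2, 1::2] *= b_gain   # B sites
    return out

raw = np.ones((4, 4))                      # toy mosaic, all sites = 1.0
balanced = wb_on_mosaic(raw, 2.0, 1.0, 1.5)
```

Doing the full transform pre-demosaic would therefore seem to require interpolating the missing channels first, which is demosaicing by another name.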

After looking at the spectral response of some camera profiles and the gamuts of some working profiles, I wonder if the key benefit of camera → working space then working space → rendition space might be to mitigate the “abruptness” of a single matrix transform, which just yanks the out-of-gamut data to just inside the destination gamut. I haven’t done this yet, but I’m eventually going to make some images with an intermediate matrix camera → working profile transform to compare against those I’ve made with an SSF LUT camera profile, to see if there’s any such mitigation.
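To illustrate the “yank”: pushing a saturated colour through the standard XYZ → linear-sRGB matrix can land a channel below zero (i.e. outside the destination gamut), and a subsequent clip snaps it onto the gamut boundary. The XYZ value here is just an arbitrary saturated green chosen for illustration:

```python
import numpy as np

# XYZ -> linear sRGB (standard D65 matrix).
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

xyz = np.array([0.1, 0.4, 0.1])   # saturated green, outside sRGB
rgb = xyz_to_srgb @ xyz           # red channel lands negative
clipped = np.clip(rgb, 0.0, 1.0)  # the "yank" to the gamut edge
```

The matrix itself is lossless; it's the clip that discards the out-of-gamut information, which is why a gamut-mapping step (or a LUT profile with a perceptual intent) can look less abrupt.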

One of the reasons I started the thread. I’m not a color scientist by any stretch, so I really don’t know how to apply measures such as delta-e to a quantitative assessment.

Oh, after I posted the D750 ICC file, I did some hard-drive digging and found the Open Film Tools SSF data, so I made a profile from it:

Nikon_D750_OFT_ssf.icc (212.5 KB)

It is different, and in my comparison of a rendition of the lily image, it sits between the matrix profile rendition and my first SSF profile from hand-crafted data. Less-yellowish greens, but not so green. I may have to do this assessment with ColorChecker (or IT8; haven’t sent them back yet) images so I have an ‘eyeball’ reference.

An illustration of a ‘neutral’ raw in this very educational blog post gives a clue why: for a well-balanced raw, the latent ‘luminance’ maximises the spatial information contained, and it is sensible to leverage that during demosaicing.


for reference: Allow usage of camera (or even user defined) white balance in raw preprocessing and demosaic · Issue #5616 · Beep6581/RawTherapee · GitHub

I can try to dig out more examples…
