Working profile for darktable

To consider this fully, one needs to understand the entire raw processing pipe…

First, from the time a raw file is ingested to the time the rendered export is puked out to a JPEG, there is one rubicon that divides the image state: demosaic. Prior to that, it’s just a pattern of measurements; after that, it’s RGB triples. So, the term “raw” really applies to the left-hand side of the pipe, to those measurements. After that, the colors are now encoded but the essential part of the measurement is still intact: “energy-linear”. Accordingly, those colors still reflect the spectral response of the camera, so I will choose to describe them as “camera space”.

That’s also because the color management process depends on such an input description. A color transform requires metadata about the input data, which nominally is the set of primaries in the camera profile. Those primaries, nine numbers colloquially described as the “reddest red, greenest green, and bluest blue” of the profile, form a matrix that multiplies each image RGB triple to transform the image data to the destination gamut. Matrix-wise, a camera matrix doesn’t look any different from a ProPhoto, sRGB, or other matrix, so it can be plotted chromatically (in that color horseshoe thing) like any of the others. When you do that, it kinda looks like a ‘gamut’, but it really isn’t; it’s more of a ‘scope of spectral response’.
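To make that concrete, here’s a minimal sketch of the per-pixel math, assuming a float pipeline. The matrix entries below are made up for illustration; the real nine numbers come from the camera profile:

```cpp
#include <array>

// One linear RGB pixel.
struct RGB { float r, g, b; };

// 3x3 transform matrix, row-major: the nine numbers from the profile.
// These entries are MADE UP for illustration; real camera primaries
// come from the camera profile itself.
constexpr std::array<std::array<float, 3>, 3> cam_to_dest = {{
    { 0.80f, 0.15f, 0.05f },
    { 0.25f, 0.70f, 0.05f },
    { 0.02f, 0.10f, 0.88f },
}};

// Transform one pixel: each output channel is a weighted sum of the
// three input channels. Apply this to every pixel in the image.
RGB transform(const RGB& p)
{
    return {
        cam_to_dest[0][0]*p.r + cam_to_dest[0][1]*p.g + cam_to_dest[0][2]*p.b,
        cam_to_dest[1][0]*p.r + cam_to_dest[1][1]*p.g + cam_to_dest[1][2]*p.b,
        cam_to_dest[2][0]*p.r + cam_to_dest[2][1]*p.g + cam_to_dest[2][2]*p.b,
    };
}
```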

So, in the act of making such camera profiles, the measurements don’t usually “ground out” at full black and full white. Indeed, due to dark current coursing through the sensor electronics, you’ll never get a true 0,0,0 measurement of black in the scene. I know less about white in this regard, but Elle Stone in one of her articles described how to add hard-coded black and white entries to the target-shot data to make a camera profile that is “well-behaved” with respect to handling all the colors. That’s the essential characteristic of working profiles, proper representation of color through the entire gamut, and the reason a lot of software uses one in its pipeline. So, in that software, right after demosaic the original camera colors are transformed to the gamut of a working profile and the RGB-oriented operations are applied from there.

Thing is, one can make camera profiles behave. So, bear-of-little-brain here thought, why not just ditch the working profile and defer the color transform to the end of the pipe, when the data is either rendered in the display profile’s gamut or rendered to sRGB (or, choose your own poison) for export to a file. Ta-da, seems to work fine. Also, I think destructive operations such as HSL saturation behave less destructively when applied to the original “camera space” rather than to some working space or, horrors, the rendition space.

The upshot of all this is that you really need to understand every step of the pipeline to fully comprehend the importance and impact of using a working profile…

2 Likes

From here:

https://docs.darktable.org/usermanual/3.8/en/module-reference/processing-modules/input-color-profile/

I found this:

working profile

The working profile used by darktable’s processing modules. Each module can specify an alternative space that it will work in, and this will trigger a conversion. By default darktable will use “linear Rec2020 RGB”, which is a good choice in most cases.

The debayering process does not itself convert to a colorspace. That’s done separately, but it cannot be done until the image is debayered.

Ah yes, I think that is a case of “technically true but not useful to the end user.”

So if the author of a module needs a specific color space for a specific operation, they can be specific about what color space they’re working in, but they won’t need to do all the extra work to detect the incoming color space (selectable in the input color profile module) and figure out the outgoing profile; the pixelpipe just handles it for them. Seems convenient.
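As a toy sketch of the idea (this is NOT darktable’s actual API, just the shape of the mechanism): each module declares the space its math assumes, and the pipe inserts conversions between modules:

```cpp
#include <vector>

// Hypothetical spaces a module might ask for.
enum class Space { CameraRGB, WorkingRGB, Lab };

struct Image { Space space; /* pixel data elided */ };

struct Module {
    Space wants;                 // space this module's math assumes
    void (*process)(Image&);     // the module's actual work
};

// Hypothetical helper; a real pipe would use the input, working, and
// output profiles the user configured.
void convert(Image& img, Space to)
{
    if (img.space == to) return; // no-op when already there
    /* ...matrix or Lab math here... */
    img.space = to;
}

void run_pipe(Image& img, const std::vector<Module>& modules)
{
    for (const auto& m : modules) {
        convert(img, m.wants);   // the pipe handles the conversion
        m.process(img);          // the module sees the space it asked for
    }
}
```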

If you are not careful, you could use a module that works in Lab, so your data will go in and out of another space anyway. So the initial choice is important, but so are the modules you choose to work with, and where you might place them in the pipeline if you are altering their default order…

I would think all that colorspace-slinging would be destructive to the data, especially when the destination gamut is smaller.

Not a fan…

Would it really be a problem, as long as the module-specific spaces are at least as large as the global working space and both spaces are linear? Because then we are dealing with a change of basis, where the only destruction comes from rounding errors.
As long as values outside the “visible” range can be handled in a mathematically correct way and translated back to the global space…

Some operations are much easier in HSL-like spaces, so it would be a pity to deny that possibility. As long as the math “works”, going outside the intermediate gamut shouldn’t be a big deal, provided there’s no clipping.
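That point is easy to demonstrate with the standard BT.2087 matrices between linear Rec.2020 and linear Rec.709/sRGB (rounded to four decimals here). A saturated Rec.2020 red goes negative in sRGB, yet the unclipped round trip recovers it; clamp first and it’s gone:

```cpp
#include <algorithm>
#include <cstdio>

// Linear BT.2020 -> linear BT.709/sRGB and back (BT.2087 matrices,
// rounded to four decimals).
static const float to709[3][3] = {
    {  1.6605f, -0.5876f, -0.0728f },
    { -0.1246f,  1.1329f, -0.0083f },
    { -0.0182f, -0.1006f,  1.1187f },
};
static const float to2020[3][3] = {
    { 0.6274f, 0.3293f, 0.0433f },
    { 0.0691f, 0.9195f, 0.0114f },
    { 0.0164f, 0.0880f, 0.8956f },
};

static void mul(const float m[3][3], const float in[3], float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = m[i][0]*in[0] + m[i][1]*in[1] + m[i][2]*in[2];
}

int main()
{
    const float red2020[3] = { 1.0f, 0.0f, 0.0f }; // pure Rec.2020 red
    float srgb[3], back[3];

    mul(to709, red2020, srgb);  // G and B go negative: out of sRGB gamut
    mul(to2020, srgb, back);    // unclipped round trip recovers the red

    std::printf("in sRGB: %.4f %.4f %.4f\n", srgb[0], srgb[1], srgb[2]);
    std::printf("back:    %.4f %.4f %.4f\n", back[0], back[1], back[2]);

    // Clamp to [0,1] first and the information is gone for good:
    for (float& c : srgb) c = std::max(0.0f, std::min(1.0f, c));
    mul(to2020, srgb, back);
    std::printf("clipped: %.4f %.4f %.4f\n", back[0], back[1], back[2]);
}
```

The first round trip comes back to (1, 0, 0) within float rounding; the clipped one lands around (0.63, 0.07, 0.02), a much duller red.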

Yeah, that would probably be okay, but it makes me wonder, “why would one want to do that, at least in RGB?”

I’m probably a bit of a hypocrite here: rawproc’s HSL saturation tool does just that, translates to HSL, does the dirty deed, and translates back to linear RGB, all in the same compact algorithm. The non-RGB spaces probably require a bit of thinking in that regard. Ha, thinking about my tool: where I currently use it, the translation is camera → HSL → camera, and one really can’t tell what it’s destroying because its purpose is to destroy (saturate)… :crazy_face:
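For the curious, this is roughly the shape of it; a generic sketch of the technique, not rawproc’s actual code, and like all HSL math it assumes channel values roughly in [0, 1]:

```cpp
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };

// Convert to HSL, scale S, convert back. "amount" > 1 saturates.
RGB saturate_hsl(RGB in, float amount)
{
    const float mx = std::max({in.r, in.g, in.b});
    const float mn = std::min({in.r, in.g, in.b});
    const float l  = 0.5f * (mx + mn);
    if (mx == mn) return in;            // achromatic: nothing to do

    const float d = mx - mn;
    float s = (l > 0.5f) ? d / (2.0f - mx - mn) : d / (mx + mn);

    float h;                            // hue as a sextant in [0, 6)
    if (mx == in.r)      h = std::fmod((in.g - in.b) / d + 6.0f, 6.0f);
    else if (mx == in.g) h = (in.b - in.r) / d + 2.0f;
    else                 h = (in.r - in.g) / d + 4.0f;

    s = std::min(1.0f, s * amount);     // the dirty deed

    // HSL back to RGB.
    const float c = (1.0f - std::fabs(2.0f * l - 1.0f)) * s;
    const float x = c * (1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f));
    const float m = l - 0.5f * c;

    RGB out;
    switch (static_cast<int>(h)) {
        case 0:  out = { c, x, 0.0f }; break;
        case 1:  out = { x, c, 0.0f }; break;
        case 2:  out = { 0.0f, c, x }; break;
        case 3:  out = { 0.0f, x, c }; break;
        case 4:  out = { x, 0.0f, c }; break;
        default: out = { c, 0.0f, x }; break;
    }
    out.r += m; out.g += m; out.b += m;
    return out;
}
```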

1 Like

The display-referred pixelpipe did a lot of stuff in Lab, but there was also, for instance, the RGB curves module, which works in RGB. This is an option for module authors, depending on what they want the module to do.

I don’t think the color space changes in the new scene-referred pipeline.

Just because it can happen in every module doesn’t mean it does happen. :slight_smile:

1 Like

I think if you tried to use a profile on the raw RGB data to correct it, that profile would only make sense for a given output (usually sRGB). If the spectral sensitivity of my sensor can record some reds available in Adobe RGB or P3, but I slap an sRGB profile on it, I lose those; even the same RGB numbers might represent a different shade of the color between Display P3 and sRGB. I guess what I’m trying to say is: wouldn’t working directly off a camera profile, with no working space, limit the data to sRGB/Rec709, since most people would have their profile based on their display, in essence making the working space sRGB? By setting it to Rec2020, the data can breathe in its full glory and I can render out to multiple outputs (Display P3, Adobe RGB, etc.). You also set a common space for color-managing everything throughout the pipe.

I’m still learning this stuff, so I might be missing some points

(another good contributor has left the boat?)

I probably need to add a caveat: I’m using my own software that lets me do such “what Mother Nature did not intend” shenanigans. I can load the raw data, take it through black subtract, white balance, demosaic, highlight reconstruction, and tone curve, then resize and sharpen for output before doing the color gamut-crunch from camera to sRGB (sketched below). And, I get good colors; here’s my Z 6 test image:
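The order of operations, sketched with hypothetical stand-in names (NOT rawproc’s actual API); the point is that the gamut transform comes last, so every prior step works on camera-space data:

```cpp
struct Image { /* pixel data elided */ };

// Hypothetical stubs, not rawproc's actual API.
void subtract_black(Image&);          // remove the dark-current offset
void white_balance(Image&);
void demosaic(Image&);                // measurements -> camera-space RGB
void reconstruct_highlights(Image&);
void apply_tone_curve(Image&);
void resize_for_output(Image&);
void sharpen(Image&);
void camera_to_srgb(Image&);          // the one gamut-crunch

void develop(Image& img)
{
    subtract_black(img);
    white_balance(img);
    demosaic(img);
    reconstruct_highlights(img);
    apply_tone_curve(img);
    resize_for_output(img);
    sharpen(img);
    camera_to_srgb(img);              // everything above ran in camera space
}
```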

1 Like

Understood, thanks for the explanation.

No need to use this thread for speculation…

1 Like