What is the working color space in darktable?

I am new to darktable (Windows version).
I am impressed by this software: it has concepts and modules I have not seen in any other, and it offers a great degree of control without being too techy (unlike dcraw, which is oriented to programmers). It is more photographer-oriented, while still giving you a lot of control.

One of the things that attracted me was its use of a 32-bit floating-point workflow in CIE LAB.
It seemed a good idea to have results that mimic human vision.

But then I read Pierre’s article about darktable 3.0 migrating to a linear RGB workflow.

I understand the problem. It is clear that a linear workspace is needed for many transformations.

But which space is used now?

It mentions CIE xyY, but it is not clear whether darktable processes images in CIE xyY, CIE RGB, or some other working space.

Can you select the processing space? Where?

CIE xyY sounds like the logical substitute for CIE LAB, as it is linear and close to human vision; or maybe CIE RGB, for technical processing reasons.

But you can find no mention of those working spaces in darktable when you select profiles…

Hi @ariznaf! Welcome to the Forum!

Working profile:

[screenshot: the working profile setting in darktable]

Output profile:

[screenshot: the output profile setting in darktable]

This depends on the module. Some modules work internally in a linear color space, others in the LAB color space. Some modules offer a choice of color space.

@anon41087856 has mentioned in the article which modules work in the linear color space and which in LAB. He recommends using the modules that work in the linear color space as far as possible and, if necessary, using the modules that work in a perceptual color space only at the end of the processing.
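To make the pattern concrete, here is a minimal Python sketch of the idea (the function names are mine, purely illustrative; darktable itself is written in C and its real internals differ):

```python
def run_lab_module(pixels, working_to_lab, lab_to_working, operation):
    """Sketch of a Lab-based module: convert the pipeline buffer from
    the working space into Lab, apply the operation, convert back.

    `pixels` is an (N, 3) float array in the working RGB space;
    `working_to_lab` / `lab_to_working` are the conversions between
    the working space and Lab (hypothetical helpers, not darktable's).
    """
    lab = working_to_lab(pixels)   # working RGB -> Lab
    lab = operation(lab)           # e.g. a contrast curve on L*
    return lab_to_working(lab)     # Lab -> working RGB
```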

Thank you, Boris.

Ah, OK, I had guessed that the input profile might be the solution.

But @anon41087856 talks there about linear spaces that are closer to human vision (and the XYZ space), like CIE xyY and CIE RGB.

And neither of those spaces is offered.

In LR or ACR or other software, I usually use Adobe RGB or ProPhoto in order to get a wide color space with no clipping of the colors captured by the camera, edit the image, and transform it to sRGB (fighting color clipping and conversions) when I want to save it as a JPEG for the web.

The logical approach in darktable (in the move from CIE LAB to a linear space) seems to be to use CIE xyY as the working space for editing and fight the color conversions during export.

But there is no such working space.

By default it seems that linear Rec2020 is selected (with my Canon camera; I don’t know whether that matters).
I don’t know that color space; it seems to be video-related, and it seems like a strange choice to me.

Yes, it seems that there are modules working in LAB and others in linear space.
But does that mean that each module converts the pipeline data to its own internal working space and then back to the darktable working space?

Or does the module assume that the working space is the correct one and blindly apply its correction, even if you are using a linear working space?

For modules that are linear, will the results be the same whichever linear space you are working in?
What happens if you are using Adobe RGB or sRGB, which are gamma-compensated?
Do they work the same, converting to linear data before their calculations?

The modules convert into the color space they need and back afterwards. The reason is that every color space has its advantages and disadvantages.

Linear RGB recreates the behaviour of the physical (real) world, but our human vision does not perceive the world linearly.
In the video below, from 27:00 on, you can see a comparison of linear RGB and the LAB color space. The speaker explains that humans perceive yellow as brighter than blue (although their brightnesses may be equal in linear RGB).
LAB is oriented toward human perception: in LAB, yellow and blue with the same lightness value seem to have the same brightness (to human eyes).

For example, I would rather do color shifts in LAB than in linear RGB. But for modules that handle dynamic range (filmic, for example), LAB would not be the right choice.
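To put rough numbers on the yellow/blue point, here is a small Python sketch; it assumes the textbook Rec.709 luminance weights and the standard CIE L* formula, nothing darktable-specific:

```python
import numpy as np

def luminance(rgb_linear):
    """Relative luminance Y of a linear Rec.709 RGB triple."""
    return float(rgb_linear @ np.array([0.2126, 0.7152, 0.0722]))

def lab_lightness(Y):
    """CIE L* (0..100) from relative luminance Y (0..1)."""
    if Y > (6 / 29) ** 3:
        return 116 * Y ** (1 / 3) - 16
    return Y * (29 / 3) ** 3      # linear segment near black

for name, rgb in [("yellow", np.array([1.0, 1.0, 0.0])),
                  ("blue",   np.array([0.0, 0.0, 1.0]))]:
    Y = luminance(rgb)
    print(f"{name}: Y = {Y:.3f}, L* = {lab_lightness(Y):.1f}")

# yellow: Y = 0.928, L* = 97.1
# blue:   Y = 0.072, L* = 32.3
```

Full-intensity yellow carries roughly thirteen times the luminance of full-intensity blue, which lines up with the perceptual observation in the video.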

OK, thank you.

I knew about the differences between human perception and the real behaviour of light, but I did not know about that video; it is most welcome.

But, as you point out, I am not so sure about when to use something that mimics light behaviour (linear) or human perception (LAB or gamma compensation).

You say that each module does the needed transformations from the working space you select in the input profile to the color model it uses to do its magic, and then back to the working space.

So you should be able to use a color correction module that is designed to work in LAB to improve contrast without altering colors, for example.
You should be able to use that module with similar results in a linear ProPhoto working space, in sRGB, in Adobe RGB or in Rec709 (as long as you do not produce out-of-gamut colors).

But, for example, when I use the exposure module (which in the current implementation seems to work in a linear space), the results are different if I select a linear space or a gamma-compensated one like sRGB or Adobe RGB.
If that module converted the pipeline to a linear space internally, the effects should be similar.

Are there modules that expect a fixed working space (a linear space, for example) and do not make that internal conversion?

The decision is actually very simple. If you use modules that work in the linear color space, you make sure that the changes are physically correct, i.e. you avoid unwanted artifacts in the image. Only at the end of the processing pipeline (pixelpipe) should the modules that work in the perceptual color space be used.

But end users do not have control over when a module is applied in darktable, do we?

As far as I know, the pipeline has a fixed order that does not depend on the order in which you do things or apply corrections; each module determines when it is applied. Am I wrong?

Exposure is one of the first modules applied, I guess, just after the transform from camera RGB to your working space.

If modules transform the pipeline back and forth to their expected internal working space, then the results of using the exposure module should be similar in an Adobe RGB space or a linear Rec709 one.
But they are not.

As far as I know, in the current version, selecting a linear working space would be a good option in order to get results with no halos or strange artifacts when making some adjustments.

Is linear ProPhoto a good bet (I am more used to ProPhoto from LR and PS)?
Why is there a linear Rec709, and why does it seem to be the default? Is it better to keep that default space?
Is it wide enough to contain all the colors generated by most cameras?

When making color corrections in a pleasing way, LAB or a human-vision-related space seems more appropriate.
I suppose that modules like contrast/brightness/saturation should work in that space.

But darktable hides those details from the normal user.
It is a bit confusing (at least for people who are beginning with darktable).

The interface should be clearer about which space a module works in and at which step it is applied (which of the four process steps that aurelienpierre defines it is designed for).

I will try to dig into all that and select the correct modules that work in linear space, as advised.
But sometimes you need other things, like simply saturating or desaturating your image.

Yes and no. The order is fixed, but can be changed if necessary.

The default working color space is linear Rec2020 RGB. Yes and yes.

Linear Rec2020 RGB has a larger gamut than Adobe RGB:

Therefore the results cannot be the same. Also, why would you avoid using a larger gamut (except for export) and use a color space with a smaller gamut instead?
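As a rough illustration (a sketch using the published D65 RGB→XYZ matrices for both spaces): a fully saturated Rec2020 green has no in-gamut representation in Adobe RGB, so its coordinates there go negative:

```python
import numpy as np

# Published D65 RGB -> XYZ matrices (standard reference values).
REC2020_TO_XYZ = np.array([
    [0.6369580, 0.1446169, 0.1688810],
    [0.2627002, 0.6779981, 0.0593017],
    [0.0000000, 0.0280727, 1.0609851],
])
ADOBE_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])

rec2020_green = np.array([0.0, 1.0, 0.0])      # pure Rec.2020 green
xyz = REC2020_TO_XYZ @ rec2020_green           # into CIE XYZ
adobe = np.linalg.solve(ADOBE_TO_XYZ, xyz)     # XYZ -> Adobe RGB

print(adobe)   # roughly [-0.10  1.13 -0.05]: outside the Adobe RGB gamut
```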

I do not agree. For contrast and saturation I use almost exclusively the color balance module, which works in the linear color space, and the results are excellent.

Yes, this is true. It is indeed still very confusing for beginners at the moment. But that’s also because the change to a scene-referred approach is taking place. In the future this will be much clearer.

If you need help, let me know.

The color balance module is very suitable for this. Give it a try.

I think there’s some mixing of concepts in the discussion. Need to tease that apart:

  1. ‘Colorspace’ refers to a way of encoding light to allow construction of color in an image. RGB, LAB, XYZ, CMYK are all different ways of doing this; XYZ and RGB are the closest to how humans “encode” light.

  2. ‘Linear’ refers to the measured energy of light, where twice the light is recorded as a number twice as large. That relates to what we perceive as ‘tone’, or some equivalent description of ‘lightness’ (see the numeric sketch after this list).

  3. ‘Linear RGB’ would then be a combination of the encoding of tone and color, where red, green and blue values are used to interpret color, and their magnitudes correspond to the original measurements of the light energy.

  4. ‘ProPhoto’, ‘sRGB’ and such are specific data formats that provide a common definition for expressing a colorspace and tone.
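To make point 2 concrete, here is a small Python sketch of the standard sRGB encoding curve (IEC 61966-2-1 textbook math, not tied to any particular program): doubling the light doubles the linear value, but not the gamma-encoded one.

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer function: linear light -> encoded value."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

print(srgb_encode(0.2))   # ~0.484
print(srgb_encode(0.4))   # ~0.665 -- twice the light, not twice the value
```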

Apologies if this is all understood, it just didn’t seem that way reading the posts…

OK, great.

I will have to investigate how to control the order (and that way I will learn more about when each module is applied).

Yes, a clearer interface in that respect would be great. Glad to know it is a work in progress.

So Rec2020 may be a good space for edits: not as big as ProPhoto, but big enough.

And as far as I understand, values are not clipped until the final transform to the output space, so there won’t be any clipped colors even if you select a narrower space.
Even Adobe RGB would do, but there is no linear Adobe RGB.

I will write down your advice and try to use color balance instead of brightness/contrast/saturation.

I will try to stick to the recommended modules for most of the process (I created a workspace with them to keep them at hand).

But color balance is much more complex, with separate adjustments for lights, midtones and shadows.

Well, using a working profile does change the colors from what the camera encoded; it’s just going from one really large colorspace to a less-large one. Intuitively, I think any of the Rec2020-sized profiles is a decent working space for the bit depths used internally.

Of late (not sure if you can do this in dt) I’ve been processing my raw files in the original camera space and doing the squash to sRGB at the very end of the tool chain. Looks good to me…

Mmm, I think you cannot process the file in the camera space.

darktable processes files after demosaicing and converting them to a working space, as far as I can tell.
You can select the color space, but there is no such thing as a camera color space.

The camera does not capture color, just light intensity behind a filter made of a Bayer matrix.

Cameras provide a LUT to transform the captured levels of red, green and blue into the XYZ space, and darktable translates the values from there into your working space.
There are several flavors of those LUT transforms (camera profiles on Canon cameras) that affect how the colors are interpreted.
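As a sketch of that interpretation step (the 3×3 matrix below is made up purely for illustration; real matrices ship with the raw converter or come from profiling the camera):

```python
import numpy as np

# A 3x3 matrix is the simplest form of that transform; a LUT is the
# richer variant. This one is hypothetical, for illustration only.
CAM_TO_XYZ = np.array([
    [0.70, 0.15, 0.10],
    [0.25, 0.70, 0.05],
    [0.00, 0.10, 0.90],
])

camera_rgb = np.array([0.42, 0.35, 0.18])   # demosaiced, white-balanced
xyz = CAM_TO_XYZ @ camera_rgb               # camera space -> CIE XYZ
print(xyz)
```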

You say that colors are affected by the working space you select.
They should not be; the colors are determined by the LUT transformation, not by the working space.
Only when your working space is too narrow, and the colors you captured do not fit in it, do you get different colors.

If your working space fits all your colors, you should not see differences.

The only differences should be related to quantization errors. But the darktable pipeline is supposed to be floating point (at least it was when it was CIE LAB), so there won’t be quantization errors.

I guess the colors would be the same even if you selected a narrow linear sRGB, but you would get reds with negative values or greens greater than one.
That should not be a problem for processing, as long as you don’t clip them.
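A toy sketch of that point: in unbounded floating point, an exposure push is just a multiplication, and out-of-gamut values ride along untouched until something explicitly clips them:

```python
import numpy as np

pixel = np.array([-0.04, 1.13, 0.25])   # out-of-gamut in a narrow space
pushed = pixel * 2 ** 0.5               # +0.5 EV exposure in linear light
print(pushed)                           # still unclipped: [-0.057 1.598 0.354]

print(np.clip(pushed, 0.0, 1.0))        # clipping only happens at export
```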

Well, I think there is a way to do something like working in the camera “model space”.

You can profile your camera.
But you need a profile made for the specific lighting situation.

Once you have the profile, you can select it as the working space.

I have never done that, but I have read somewhere that you can select a profile to use as the working space.
So you should be able to work in that color space.

I have been tempted sometimes to work that way, in order to get the best from the camera and not lose any color.

But you have to profile your camera, and the profile changes each time the lighting changes.

You need a color checker and photos taken with that color checker under your lighting conditions.

It is possible under controlled lights in a studio, for portrait or product photography.

Well, I process the image as read from the file, with no conversion to a working space. The only conversion I do is at the end, to sRGB for the JPEG file. This is not darktable; I just describe it to give some context to the process.

There is a camera profile, which is a description of the colors the camera can resolve, and this is the input to the first colorspace transform done in the workflow. Can’t do color management without it as the starting point. The camera profile can provide a LUT for the transform, but the color information can be just a matrix.

Yes, the specification of a working space tells the software to convert the data from the camera space to the working space. If your working space is sufficiently large in gamut, the change is not significant and thus not adverse to what you’re doing.

Yes, but each of the Bayer filters is a bandpass for a particular segment of the spectrum. A Bayer camera sensing light is not radiometrically consistent across the array.

Regarding working space considerations, this might be helpful:

As I said later (probably you were writing your reply), you are probably referring to working in your camera profile.

If you have profiled your camera for the lighting conditions you have, you can do it, and I think you can do it in darktable (I have never tried).

But you are not working on the original data.
The original data is not even RGB (no demosaicing yet).

You are working on an interpretation of that data, in RGB, that reflects the behavior of your camera under those lighting conditions.

Yes, I think you can do it in darktable too, by selecting that camera profile as your working space. I have read it somewhere, but I cannot tell you how, as I have never tried.

I don’t have a color checker to do the profiling, nor do I usually work in controlled lighting conditions.

I think that is not quite exact.

What tells you how to convert the data from your camera into color are the color matrices (or the profile, if you have profiled your camera).
They convert the numbers into XYZ colors, which are unique colors.
Then you transform from there into the working space; the numbers change from one color space to another, but the colors do not (as long as the color is inside the gamut).

So as long as your color space is wide enough to fit all the colors, a color does not change whether you select one space or another (except for quantization errors, which occur when you work with integers and a fixed number of bits, but not in floating-point math).
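A minimal sketch of that claim, using the published D65 matrices for linear sRGB and Rec2020: the coordinates differ between the two working spaces, but both recover the identical XYZ color in unbounded floating point:

```python
import numpy as np

SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])
REC2020_TO_XYZ = np.array([
    [0.6369580, 0.1446169, 0.1688810],
    [0.2627002, 0.6779981, 0.0593017],
    [0.0000000, 0.0280727, 1.0609851],
])

xyz = np.array([0.30, 0.40, 0.20])              # some color in CIE XYZ
srgb = np.linalg.solve(SRGB_TO_XYZ, xyz)        # ~[0.258 0.468 0.147]
rec2020 = np.linalg.solve(REC2020_TO_XYZ, xyz)  # ~[0.322 0.450 0.177]

# Different numbers, same color: both map back to ~[0.30 0.40 0.20].
print(SRGB_TO_XYZ @ srgb, REC2020_TO_XYZ @ rec2020)
```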

But, if they’re outside, they do…

A lot of my photography involves extreme colors, and how they’re handled is indeed a concern of where and how I transform from one profile to another. Generally, I just don’t think it’s prudent to eyeball the scene and say, ‘my colors aren’t going to be affected transform-to-transform’ in the color-squash down to a rendition colorspace. And a ColorChecker doesn’t provide enough patches to make a decent LUT profile to accommodate them in a lot of situations; I’ve specifically proven that to myself.

My “project du jour” is to measure my Z6’s spectral sensitivity in order to make a decent LUT-based camera profile for the handling of extreme colors. And, I’m seeing other benefits in some of the “ordinary” colors; for instance, in foliage images the greens are less yellow with such profiles.

@ggbutcher Is that not part of the input color profile… conversion to a working space between input and output? But you start by saying you don’t use one?

Well, you cannot work directly on the capture data of the camera.
There is no color data there yet, and you only have one channel.

Maybe you can do some things there, like denoising, but not much more.

I understand he means he works in the profile of his camera.

Of course that is a huge transform of the original data.
But maybe it is the closest interpretation to what the camera is capturing.

If you profile your camera, maybe it gives the best results: the most precise interpretation of the colors the camera has captured.

I think you can use your camera’s ICC profile as both input profile and working space, can’t you?

In theory, if you convert that to a large enough space, there should be no color shifts either.

I think I need to explain my workflow in my software.

First, rawproc is my software, I wrote it myself, and it supports the way I wish to work: open the raw file as-is, and add each operation in sequence, to my taste. That includes all the pre-demosaic things like black subtraction and white balance. Then demosaic, and from there I have an RGB image to go do all the things we usually talk about here, like filmic. Then, at output, I usually resize and sharpen for JPEGs to be seen in browsers; for other destinations I would do something different.

So, when I open a raw file, what I usually do first is assign the appropriate camera profile, which can be either dcraw-style primaries or an ICC profile I’ve constructed for my camera. After that, I’ll black-subtract for my Z6 (not needed for my D7000), then white balance, and then demosaic. Now I have linear RGB; the next thing is to scale the data to the container bounds of 0.0-1.0 for my internal floating-point representation (when I load, say, a 14-bit raw, there are still two bits remaining to fill 16-bit integers, and I respect the same relationship in floating point), and then I start to consider tone and maybe color manipulations. Nowadays, that’d be mainly filmic and maybe a color saturation, but I might replace filmic with a control-point curve for certain images. Then resize, sharpen, and save. Saving to JPEG is a conversion from the camera profile to sRGB with a 2.2 gamma.
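If I read that scaling step correctly, it amounts to something like this (a Python sketch with made-up sample values, purely illustrative of the ratio being preserved):

```python
import numpy as np

# 14-bit raw samples arrive in a 16-bit integer container; dividing by
# the 16-bit maximum keeps the same relationship in floating point, so
# 14-bit white lands at ~0.25 and the two unused bits become headroom.
raw_14bit = np.array([0, 8191, 16383], dtype=np.uint16)

pixels = raw_14bit.astype(np.float32) / 65535.0
print(pixels)   # [0.0, ~0.125, ~0.25]
```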

At no point in this did I convert the image from the camera profile to a working profile.

Here’s a screenshot:

The toolchain is in the top-left pane. Note the checkbox; it designates which tool is selected for display. I can select any of them, even the initial file-open image, and it gets piped to the display through the display profile. And the display transform for any tool past the “colorspace:camera,assign” tool uses the camera profile as input. No working profile in this example.

Now, I used to add another colorspace tool right after demosaic to convert to a working profile. But I found that to be unnecessary; my images were coming out just fine without it. Go figure… really, I need to figure out why that is, given all the compelling prose I’ve read about working profiles, but right now I am pursuing other things, like a spectral-sensitivity-based camera profile.
