Darktable, filmic and saturation

Thank you, Aurélien!

I’ll look at your file. It’s great to have your take on it.

I have tried filmic on three occasions and found it particularly difficult to obtain correct skin tones. I sure am old-school, and I still believe that every image should have pure black, pure white and everything in-between. There is definition in the shadows, there is definition in the white shirt, and a very complete scale of greys.

filmic isn’t really made for that. I guess I’m better off with my custom-D610-curve. When trying to adapt filmic to my needs, I end up with strange contrast on the skin.

But I will keep on trying when I shoot objects next time.

Having experience with another raw processor where you have to order your operations yourself, I’ll tell you this is important to fully comprehend. You need to understand both the effects and the possible side effects of each operation, or the result you see will not make sense, and you’ll end up making arbitrary, iterative changes attempting to “fix” it. Abstracted tools like filmic will vex you in that regard, as there’s a lot going on under the hood to account for when establishing its order in the processing pipe.

@aurelienpierre is to be commended here; he’s doing yeoman’s work in bringing filmic to a place where it can be applied effectively. Just be careful about assuming that being able to specify its order in the processing chain will automatically make things better. You will need to understand it first.

I can’t comment on how you are (mis)using it until you show me some cases, but filmic IS intended to have pure white, pure black, and a linear tone curve where skin tones lie. Have you read the manual?

It’s not that abstracted though: you apply an S tone curve on log-encoded data and desaturate the bounds. The fact that it’s bundled into one module just makes it possible to retain the scene-referred RGB ratios along the transformation, and avoids re-entering the correct settings manually from one module to another.
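In toy form, the principle looks like this (a Python sketch, not darktable’s actual code; the exposure bounds and the smoothstep standing in for the S curve are invented for illustration):

```python
import math

def filmic_sketch(rgb, black_ev=-8.0, white_ev=4.0):
    """Toy illustration of the principle: tone-map the max of the RGB vector
    on log-encoded data, then rescale the channels so that the
    scene-referred RGB ratios are preserved."""
    norm = max(rgb)                        # the norm of the RGB vector
    ratios = [c / norm for c in rgb]       # scene-referred channel ratios
    # log encoding relative to 18% middle grey, normalised to [0, 1]
    v = (math.log2(norm / 0.18) - black_ev) / (white_ev - black_ev)
    v = min(max(v, 0.0), 1.0)
    s = v * v * (3.0 - 2.0 * v)            # smoothstep as a stand-in S curve
    return [r * s for r in ratios]

out = filmic_sketch([0.2, 0.1, 0.05])
print(out[1] / out[0], out[2] / out[0])    # channel ratios are preserved
```

The key property is that the curve is applied to a single norm, so the channel ratios, and hence the hue, come out unchanged.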

Dear all,

today I have been “dissecting” the filmic code in DT a bit, to better understand how it works, and I might have spotted a “conceptual” bug in the case where the “preserve chrominance” option is activated.

As far as I have seen, this option usually produces images with higher saturation. This saturation is, however, strongly dependent on the “target gamma” of the display device. Below is one example of what I am talking about:

“preserve chrominance” activated, target gamma = 2.2:

“preserve chrominance” activated, target gamma = 1.5:

“preserve chrominance” de-activated:

The problem I found lies in the sequence of operations that are applied when chrominance preservation is activated. In short, the code follows these steps:

  1. computes the maximum of the RGB channels (max=MAX(RGB))
  2. computes the ratio between max and the RGB channels:
    ratios[c] = RGB[c] / max
  3. applies the filmic tone mapping to max:
    max' = TM(max)
  4. computes the new RGB channel values as
    RGB'[c] = ratios[c] * max'
  5. applies the inverse of the display power function, to get back to linear RGB:
    RGB''[c] = powf(RGB'[c], power)

In other words, the “luminance blend” at step 4 is applied using a max' value that is non-linearly encoded, and the individual RGB values are linearised afterwards. This means that the code applies a non-linear power transform to the RGB values, which in turn boosts the saturation but also introduces hue shifts due to its non-linear nature.

In my opinion, the correct way to proceed would be the following:

  1. computes the maximum of the RGB channels (max=MAX(RGB))
  2. computes the ratio between max and the RGB channels:
    ratios[c] = RGB[c] / max
  3. applies the filmic tone mapping to max:
    max' = TM(max)
  4. applies the inverse of the display power function to max':
    max'' = powf(max', power)
  5. computes the new RGB channel values as
    RGB'[c] = ratios[c] * max''

This way, the “luminance blend” at step 5 gets computed using linear quantities, as it should. This results in an output whose saturation is much closer to the case where “preserve chrominance” is de-activated, and which is also totally insensitive to the “target gamma” setting.
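The difference between the two orderings can be checked numerically. This is a toy Python sketch, not the actual darktable code; the stand-in tone curve and the crude saturation proxy are assumptions made for illustration:

```python
def tone_map(x):
    # simple monotone stand-in for the filmic S curve (illustrative only)
    return x / (x + 0.5)

def current_order(rgb, power):
    """Current code: the per-channel power is applied AFTER the ratio blend."""
    m = max(rgb)
    ratios = [c / m for c in rgb]
    return [(r * tone_map(m)) ** power for r in ratios]

def proposed_order(rgb, power):
    """Proposed fix: the power is applied to max' first, THEN the ratio blend."""
    m = max(rgb)
    ratios = [c / m for c in rgb]
    return [r * tone_map(m) ** power for r in ratios]

def sat(rgb):
    # crude saturation proxy: (max - min) / max
    return (max(rgb) - min(rgb)) / max(rgb)

rgb = [0.4, 0.2, 0.1]
print(sat(current_order(rgb, 2.2)), sat(current_order(rgb, 1.5)))    # differ
print(sat(proposed_order(rgb, 2.2)), sat(proposed_order(rgb, 1.5)))  # identical
```

With the current order the saturation of the result changes with the power, whereas with the proposed order the channel ratios, and therefore the saturation, are independent of it.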

“preserve chrominance” activated, target gamma = 2.2, proposed method:

@aurelienpierre what do you think?


Yes, I would prefer the proposed method. I need clarification on what max is, exactly. Is it a scalar or a vector? To me, max'' isn’t linear per se, but given that we are adjusting parameters based on the display’s preview, it is closer to what we want. Correct me if I am thinking about this the wrong way.

A digression: one thing I noticed about PhotoFlow’s preserve function was that it seemed to introduce out-of-gamut colours, especially in the shadows. I don’t know if anything has changed since, but because of that I never used it.

Not sure what the problem is, though. We don’t care that max' is non-linear; that’s actually the whole point: we are applying an S curve. But we want to keep the ratios as they were. The over-saturation is the consequence of our brain expecting to see things like the Abney effect, but the choice was to let users deal with desaturation because I haven’t found a good one-size-fits-all saturation auto-adjustment. To alleviate that, another option could be to correct the ratios by a factor max'/max, which I haven’t tested enough, or to use an xyY variant, which brightens reds too much.

The gamma correction is a local workaround for darktable only, since darktable is grounded in the ICC workflow and there is no means to bypass the TRC from the output ICC. It is only intended to neutralize the output profile gamma and avoid a double-up in the pipe (e.g. gamma-correcting log-encoded values). So, setting the gamma to anything but 2.2 (assuming you use an Adobe RGB output) makes no sense (regarding the design intent); it is designed only to be a no-op once we cross the output transform. Of course, the clean way would be to bypass the output profile TRC, but since ICC profiles can have it stored as a LUT, a tag or a transfer function, the best way is to allow OCIO transforms and not tweak or strip ICC profiles.

Also, the filmic curve is designed to remap the log-encoded grey to (\text{target middle grey})^\frac{1}{\gamma}, which is around 0.46 for usual RGB spaces. So, to keep the luminance the same, if you change the gamma to 1.5, you need to adjust the target grey level to 0.2458 to compare colors at the same lightness.
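As a quick sanity check of the 0.46 figure (plain Python; 0.18 is the usual scene middle grey and 2.2 the assumed gamma):

```python
# middle grey pushed through the 1/gamma power described above
grey = 0.18 ** (1.0 / 2.2)
print(round(grey, 2))  # 0.46
```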

The max is used as a norm for the RGB vector, hence it is a scalar.

If you are referring to the gamma correction being proposed, you can disable it by setting output gamma = 1 and target grey = 0.46, so the resulting lightness mapping is equivalent, but the colors will be washed out because the output color profile gamma will still apply.

Thanks for clarifying.

↑ This was what I actually wanted to confirm when I said

but it came out in a jumble. :blush::face_with_hand_over_mouth: By this,

I merely meant that I preferred a desaturated result, since I find it easier to increase colourfulness than decrease it.

Again, there is no good one-size-fits-all way to desaturate automatically while preserving hue. That is the Grail of color processing. So I chose to leave it to the user.

I have planned a chroma compression using IPT-HDR, but it’s not working for now and there is a bug somewhere in my code.


This is where I get lost, and where I suspect there is still a misconception.

The TRC from the output ICC is not an invention of the ICC-based CM, nor something that is supposed to have an impact on the color rendering.

If you have an sRGB display, the display expects input values to be encoded with the sRGB TRC. However, this is primarily for the sake of efficient bit allocation. More recent display formats use different encodings, but conceptually they serve the same purpose. The display hardware will internally invert the TRC and make sure that the proper light intensity is emitted by each of the pixels, so that colors are reproduced accurately.

Hence, you would want to bypass the TRC from the output ICC only if you wanted to push pixels directly to the screen without further color management.
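To illustrate the point: the sRGB TRC and its inverse cancel exactly when both ends of the chain are color-managed. A minimal Python sketch of the standard piecewise sRGB transfer function:

```python
def srgb_encode(x):
    # sRGB TRC: what a color-managed pipeline sends to an sRGB display
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(v):
    # what the display hardware effectively undoes internally
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

x = 0.18
print(abs(srgb_decode(srgb_encode(x)) - x) < 1e-12)  # True: a pure round trip
```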

Moreover, the DT Filmic module does not even see the output ICC, because its last step is a conversion from linear ProPhoto to CIELab. The module is thus completely agnostic with respect to where the pixels go further down the pipeline. What if they are being sent to a printer? Should one re-adjust the target gamma according to what is in the printer profile?

Let me give you a simple example of what I am trying to explain. Here is a simple test in PhF, where I first apply a power correction with exponent=0.5, and then a second power correction with exponent=2.0. The net result is a no-op, because the two exponents are inverses of each other. Now let’s see what happens when the first power correction is applied to the luminance channel instead of the individual RGB channels, while the second power adjustment is still applied to the individual RGB channels. Notice that this is basically what the DT filmic code is doing, if we set the S-curve aside for a moment.

This is the input image in PhF:

This is the result of the sequence “exponent=0.5 applied to luminance + exponent=2.0 applied to RGB”:

Now, here is the input image in DT:

And this is the Filmic output with contrast=1, target gamma=2.0 and preserve chrominance activated; in order to compare apples with apples, I have disabled the luminance-dependent desaturation in the DT code:

Notice how the increase in saturation is very similar in PhF and DT?
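That PhF/DT example can be reproduced numerically. This toy Python sketch uses Rec. 709 luminance weights as a stand-in for the actual luminance channel (an assumption; it is not PhF’s or DT’s code):

```python
def luminance(rgb):
    # Rec. 709 weights, an assumption standing in for the luminance channel
    return 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]

def lum_power_then_rgb_power(rgb, p=0.5):
    """Apply exponent p through the luminance channel, then 1/p per channel."""
    y = luminance(rgb)
    scale = (y ** p) / y             # uniform scale so the new luminance is y**p
    boosted = [c * scale for c in rgb]
    return [c ** (1.0 / p) for c in boosted]

def sat(rgb):
    # crude saturation proxy: (max - min) / max
    return (max(rgb) - min(rgb)) / max(rgb)

rgb = [0.4, 0.2, 0.1]
out = lum_power_then_rgb_power(rgb)
print(sat(rgb) < sat(out))  # True: the "no-op" pair boosts saturation
```

Because the two powers act on different quantities (luminance vs. individual channels), they do not cancel: the channel ratios end up squared, which is exactly the saturation boost seen in the screenshots.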

Once more, this is nonsense in my opinion, unless you want to completely bypass the display color management and push pixel values directly to the screen. The “output profile TRC” simply describes what the display hardware expects in terms of encoding of the input data. In a correctly set-up CM environment it has no bad side effects on the color reproduction; instead, it guarantees that pixel values are properly interpreted by the display hardware…

I sort of get both perspectives, though it can be super confusing. There is a tension between honouring Troy’s method and how our raw processors “work”…

Well, for all the discussion we’ve had to date on this, it looks to this bear-of-little-brain like this is the fundamental issue: at output, ICC and OCIO are an “exclusive OR”; you do one, or you do the other, but not both. This is with respect to the eventual departure from linear in the workflow that is required to accommodate the rendition medium.

All this makes me miss printing from a film negative. With that, there’s no question about how to handle diverse rendition cases; you make the print look the way you want, and your audience regards it in those same terms. For my next PlayRaw, I’m going to need everyone’s mailing address so I can communicate the baseline rendition… :smile:

Somewhat, yes, but just because they are two different mechanisms to achieve the same goal: color fidelity. You can use either one or the other. If the underlying math is the same, then the result is the same…

However, nothing prevents mixing them in the color workflow. You can do intermediate color transforms with ICC, and then do the final output transform with OCIO. Notice that even this statement is imprecise, because OCIO is just a framework in which you implement your own color transforms. So it would be more correct to say something like “do the final output transform through the ACES v1.0.3 OCIO config”.

I am quite sure that one can take a conversion from a reference working colorspace to a LUT ICC display profile and translate it into an ad-hoc OCIO config. In this case, it doesn’t really matter whether you use the ICC profiles or the OCIO config to actually do the color transform, as the result will be the same.

ICC, OCIO and other CMs are frameworks that simplify the work of setting up and performing color transforms, but they are not “black boxes” doing magic that no one can understand. The math is clear, and could be re-written from scratch without using LCMS or OCIO.

With regard to color, agree.
With regard to tone, I still think it’s either/or…

In rawproc, when I put a filmic tone curve and a corresponding S-curve in my tool chain, I have to turn off the display profile transform; specifically, I switch to a profile that has a gamma=1.0 TRC. So the color is converted to suit, but the tone is nulled out. Since my display gamut is close to sRGB, I just use one of @Elle’s g10 sRGB profiles.

For either ICC or OCIO workflows, I think it’s really important to embed in an output JPEG an ICC profile that corresponds to both the color and tone of the image data. Then, for color-managed rendition destinations, it shouldn’t matter how you got there. But for all the non-managed wide-gamut monitor wildness that’s now out there, I don’t know what single thing will work…

Oh, for anyone who wants to play with arbitrary color/tone transforms, rawproc has a “colorspace” tool that you can use anywhere in the processing chain, multiple times. It just does a LittleCMS cmsTransform, where the input profile is whatever the previous image in the tool chain has assigned, and the output profile can be either an ICC profile file or a dcamprof json file with a ForwardMatrix and gamma TRC. Great fun for lashing up this stuff for experimentation…

I think you are right here. There is a problem with the preserve chrominance setting in the current implementation of Filmic.

The problem is (as you already pointed out) that the RGB values are raised to a power after the ratio preserving transform is applied, which breaks the ratios.

Setting the destination/display to 1.0 gamma, 50% grey level removes the unnecessary power function, and makes it behave as I would expect.


When I used the log correction without the gamma reversal, the middle grey was always remapped to 72-75% Lab instead of 50% (expected from the log parameters). That’s a gamma 2.2 double-up. What a gamma does is push 18% grey to 45-50%. That’s already what we do in the log. Pre-reverting the gamma in filmic gives you a linear scope on log data (at the end of the whole pipe), instead of a gamma scope on log data (which is a double-up). That’s the whole fallacy of the display-referred pipeline: you can’t separate data and scopes; they are both grounded in the display space.

Yeah, but the output profile applies the TRC on independent channels, doesn’t it?

The full pipe is as follows:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{out} = \left[\left[f\left(\dfrac{\log_2(L_\infty) - b}{w-b}\right) \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \cdot \dfrac{1}{L_\infty} \right]^\gamma \cdot M_{ProPhotoRGB \rightarrow sRGB}\right]^\frac{1}{\gamma}

where brackets are matrix operations, parentheses are scalar operations, with L_\infty the infinite norm of the vector, i.e. the maximum of the RGB vector, w the white exposure, b the black exposure, f the filmic tone mapping curve, and M_{ProPhotoRGB \rightarrow sRGB} the simplified transformation matrix from ProPhotoRGB -> XYZ -> Lab -> XYZ -> sRGB. What you propose is:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{out} = \left[\left(f\left(\dfrac{\log_2(L_\infty) - b}{w-b}\right)\right)^\gamma \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \cdot \dfrac{1}{L_\infty} \cdot M_{ProPhotoRGB \rightarrow sRGB}\right]^\frac{1}{\gamma}

I would have to unroll the full equations, but it doesn’t look right.


Applying the power before or after will break the ratios too. See the above equations. Having more pleasing colors doesn’t mean the ratios are better preserved.