Darktable, filmic and saturation

I sort of get both perspectives, though it can be super confusing. There is a tension between honouring Troy’s method and how our raw processors “work”…

Well, for all the discussion we’ve had to date on this, it looks to bear-of-little-brain here that this is the fundamental issue: at output, ICC and OCIO are an “exclusive OR”, you do one, or you do the other, but not both. This is with respect to the eventual departure from linear in the workflow that is required to accommodate the rendition medium.

All this makes me miss printing from a film negative. With that, there’s no question about how to handle diverse rendition cases; you make the print look the way you want, and your audience regards it in those same terms. For my next PlayRaw, I’m going to need everyone’s mailing address so I can communicate the baseline rendition… :smile:

Somewhat yes, but just because they are two different mechanisms to achieve the same goal, which is color fidelity. You can use either one or the other. If the underlying math is the same, then the result is the same…

However, nothing prevents you from mixing them in the color workflow. You can do intermediate color transforms with ICC, and then do the final output transform with OCIO. Notice that even this statement is wrong, because OCIO is just a framework in which you can implement your own color transforms. So it would be more correct to say something like “do the final output transform through the ACES v1.0.3 OCIO config”.

I am quite sure that one can take a conversion from a reference working colorspace to a LUT ICC display profile, and translate it into an ad-hoc OCIO config.
In this case, it doesn’t really matter whether you use the ICC profiles or the OCIO config to actually do the color transform, as the result will be the same.

ICC, OCIO and other CMs are frameworks that simplify the work of setting up and performing color transforms, but they are not “black boxes” doing some magic stuff that no one can understand. The math is clear, and could be re-written from scratch without using LCMS or OCIO.

With regard to color, agree.
With regard to tone, I still think it’s either/or…

In rawproc, when I put a filmic tone curve and a corresponding S-curve in my tool chain, I have to turn off the display profile transform, well specifically, switch to a profile that has a gamma=1.0 TRC. So the color is converted to suit, but the tone is nulled-out. Since my display gamut is close to sRGB, I just use one of @Elle’s g10 sRGB profiles.

For either ICC or OCIO workflows, I think it’s really important to embed the appropriate ICC profile in an output JPEG that corresponds to both the color and tone of the image data. Then, for color-managed rendition destinations it shouldn’t matter how you got there. But, for all the non-managed wide-gamut monitor wildness that’s now out there, I don’t know what single thing will work…

Oh, for anyone who wants to play with arbitrary color/tone transforms, rawproc has a “colorspace” tool that you can use anywhere in the processing chain, multiple times. It just does a LittleCMS cmsTransform, where the input profile is whatever the previous image in the tool chain has assigned, and the output profile can be either an ICC profile file or a dcamprof json file with a ForwardMatrix and gamma TRC. Great fun for lashing up this stuff for experimentation…
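If anyone wants to lash up something similar outside rawproc, here’s a minimal LittleCMS sketch of that kind of transform (not rawproc’s actual code; the profile file names are placeholders you’d swap for your own):

```c
#include <stdio.h>
#include <lcms2.h>

int main(void)
{
    /* one pixel of RGB data, just for demonstration */
    float src[3] = { 0.18f, 0.18f, 0.18f };
    float dst[3];

    /* input profile: whatever the previous tool in the chain assigned;
       output profile: any ICC file you want to convert to (placeholders) */
    cmsHPROFILE in  = cmsOpenProfileFromFile("assigned_by_previous_tool.icc", "r");
    cmsHPROFILE out = cmsOpenProfileFromFile("destination.icc", "r");
    if (!in || !out) { fprintf(stderr, "cannot open profiles\n"); return 1; }

    cmsHTRANSFORM xform = cmsCreateTransform(in,  TYPE_RGB_FLT,
                                             out, TYPE_RGB_FLT,
                                             INTENT_RELATIVE_COLORIMETRIC, 0);

    cmsDoTransform(xform, src, dst, 1);   /* last argument is the pixel count */
    printf("%f %f %f\n", dst[0], dst[1], dst[2]);

    cmsDeleteTransform(xform);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
    return 0;
}
```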

I think you are right here. There is a problem with the preserve chrominance setting in the current implementation of Filmic.

The problem is (as you already pointed out) that the RGB values are raised to a power after the ratio preserving transform is applied, which breaks the ratios.

Setting the destination/display to 1.0 gamma, 50% grey level removes the unnecessary power function, and makes it behave as I would expect.

When I used the log correction without the gamma reversal, the middle grey was always remapped to 72-75% Lab instead of 50% (expected from the log parameters). That’s a gamma 2.2 double-up. What a gamma does is push the 18% grey to 45-50%. That’s already what we do in the log. Pre-reverting the gamma in filmic gives you a linear scope on log data (at the end of the whole pipe), instead of having a gamma scope on log data (which is a double-up). That’s the whole fallacy of the display-referred pipeline: you can’t separate data and scopes, they are both grounded in the display space.

Yeah, but the output profile applies the TRC on independent channels, doesn’t it?

The full pipe is as follows:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{out} = \left[\left[f\left(\dfrac{\log_2(L_\infty) - b}{w-b}\right) \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \cdot \dfrac{1}{L_\infty} \right]^\gamma \cdot M_{ProPhotoRGB \rightarrow sRGB}\right]^\frac{1}{\gamma}

where brackets are matrix operations, parentheses are scalar operations, with L_\infty the infinity norm of the vector, i.e. the maximum of the RGB vector, w the white exposure, b the black exposure, f the filmic tone mapping curve, and M_{ProPhotoRGB \rightarrow sRGB} the simplified transformation matrix from ProPhotoRGB → XYZ → Lab → XYZ → sRGB. What you propose is:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{out} = \left[\left(f\left(\dfrac{\log_2(L_\infty) - b}{w-b}\right)\right)^\gamma \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \cdot \dfrac{1}{L_\infty} \cdot M_{ProPhotoRGB \rightarrow sRGB}\right]^\frac{1}{\gamma}

I would have to unroll the full equations, but it doesn’t look right.

Applying the power before or after will break the ratios too. See the above equations. Having more pleasing colors doesn’t mean the ratios are better preserved.

In your code, this transform is applied after the 1/gamma power transform.

My code does a gamma power (i.e. compression), not a 1/gamma. The 1/gamma happens in the output color transform.

I’m missing something here…
Why are the TRCs in the display profile relevant to filmic? The display profile TRCs are just the inverse of the TRCs of the actual display hardware. The combined display profile + display hardware transform should be linear if everything is working, so the display gamma should not make any difference to any colour processing before the display profile.

In order to keep everything in linear encoding, your log correction should map middle grey to 18%, right?

Again, why does the log output map mid-grey to 50%?

Right, I mixed up the exponents in the formula…

What exactly does not look right?

I think that here you are mixing up the concepts of “display-referred” and “encoding”.

Display-referred means that the pixel values are “squeezed” to fit the gamut and dynamic range of the output device (as opposed to scene-referred, where the pixel values are proportional to scene light intensities, without restrictions). However, nowhere in the definition of “display-referred” have I seen that display-referred data have to be encoded with an exponent different from 1 (see for example here for a good definition).

So the power-like encoding has nothing to do with scene- or display-referred editing. Again, it is a simple trick to optimize the bit allocation in low bit-depth formats…

I have no choice, since they are all implemented together in a serialized pixelpipe.

Let’s start again slowly, because I answered in a hurry previously. What filmic does internally is:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{filmic, out} = \left[f\left(\dfrac{\log_2(L_\infty/g) - b}{w-b}\right) \cdot \dfrac{1}{L_\infty} \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{filmic, in}\right]^\gamma \cdot M_{ProPhotoRGB \rightarrow XYZ}

with g, the input grey value (I forgot that in the above equations, not a big deal for the big picture though). Let’s write f\left(\dfrac{\log_2(L_\infty / g) - b}{w-b}\right) = k to make things clearer. It’s an S curve having a linear part in the middle (latitude).

g is the scene-referred linear grey, e.g. 18% relative to the diffuse white. The log thing, by design, raises the midtones and remaps 18% to whatever. Having it remap the grey to 18 % makes no sense, it would be a no-op. The benefit of it is you are able to target whatever grey value you want without fiddling around with parameters to tweak. That is, if your gamma doesn’t get in the way, which it does.

So, imagine a classic setup: grey = 18\%, white = 100\%, hence white \, exposure = \log_2(1/0.18) = 2.47 EV. Say the dynamic range is 10 EV (modern mid-range camera), so the black exposure would be -7.52 EV. Then grey_{out} = \dfrac{\log_2(1) - (-7.52)}{10} = 75 \%.

The filmic tone curve is built such that f(grey_{out}) = grey_{display}^\frac{1}{\gamma}, i.e. a perceptual grey, which will be around 46% for usual RGB spaces. Try to set it to 18%, and the spline interpolation will fail big time because it puts too many nodes too close to each other. But once this is done, the TRC from the output ICC gets in the way because the grey already has the correct value (46%). So we correct the filmic output to bring the grey back to 18% with f(x) = x^\gamma,\gamma \in [1.8; 2.4], so the output ICC will put it again at 46%. At this point, whether the gamma is an encoding trick to avoid integer quantization artifacts or a real display response curve is irrelevant; we only want the grey point where we expect it in the file. Then, of course, the GPU/display will linearize it back accordingly.
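(Quick sanity check with \gamma = 2.2: 0.46^{2.2} \approx 0.18, and the output TRC then applies the inverse power, 0.18^{1/2.2} \approx 0.46, so the grey indeed lands back at 46% in the encoded file.)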

So, overall, what do we send to the display?

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \left[ \dfrac{k}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}} \cdot M_{ProPhotoRGB \rightarrow Out RGB} \right]^{1/\gamma_{ICC}}

Now, the problem is darktable’s pipe is in Lab. Going through the output color profile, it takes the Lab data (which can be in whatever encoding you used), assumes it is linear, and converts it to the destination space. There is no way to either stack another linearization step or completely bypass the TRC when the grey is already where we want it. So the linearization happens in filmic, until I rework that goddam blind pipe driven into the wall at full speed by the output color profile.

The goal is to preserve the RGB ratios, i.e. to get:
\dfrac{1}{L_{\infty, display}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \dfrac{1}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \Rightarrow \begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \dfrac{{L_{\infty, display}}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in}
To achieve that, we need to assume Out RGB = ProPhotoRGB, which amounts to comparing the RGB vector components in the same vector space, so M_{ProPhotoRGB \rightarrow Out RGB} = Id. Thus:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \left[ \dfrac{k}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}} \right]^{1/\gamma_{ICC}}

Let’s recall that k = f\left(\dfrac{\log_2(L_{\infty, in} / g) - b}{w-b}\right) = L_{\infty, display}. Then:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}/\gamma_{ICC}} = \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \quad \text{when } \gamma_{filmic} = \gamma_{ICC}

And thus, as long as \gamma_{filmic} = \gamma_{ICC}, the color ratios are preserved along the transformation, quod erat demonstrandum.

If we go your path, we get:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \dfrac{L_{\infty, display}^{\gamma_{filmic}}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{1/\gamma_{ICC}} ≠ \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \quad \text{even when } \gamma_{filmic} = \gamma_{ICC}
and you are actually compressing the ratios, which is consistent with the less saturated results you get, but it does not achieve the original purpose (which is to keep the scene-referred ratios; this is not a perceptually-accurate method and, as such, is not intended to look good without further adjustment).
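To make it concrete, here is a toy numerical check of the two orderings above (my own sketch: M = Id, \gamma_{filmic} = \gamma_{ICC} = 2.2, and an arbitrary scalar k standing in for the filmic curve value):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double in[3] = { 0.40, 0.20, 0.10 };  /* linear ProPhoto RGB */
    const double L_in  = 0.40;                  /* infinity norm of the input */
    const double k     = 0.60;                  /* stands in for f(...) = L_infinity,display */
    const double g     = 2.2;                   /* gamma_filmic = gamma_ICC */

    double filmic[3], proposed[3];
    for (int c = 0; c < 3; c++) {
        filmic[c]   = pow(pow(k * in[c] / L_in, g), 1.0 / g);  /* powers cancel out */
        proposed[c] = pow(pow(k, g) * in[c] / L_in, 1.0 / g);  /* they do not */
    }

    printf("input ratios    : 1.000 %.3f %.3f\n", in[1] / in[0], in[2] / in[0]);
    printf("filmic ratios   : 1.000 %.3f %.3f\n", filmic[1] / filmic[0], filmic[2] / filmic[0]);
    printf("proposed ratios : 1.000 %.3f %.3f\n", proposed[1] / proposed[0], proposed[2] / proposed[0]);
    return 0;
}
```

The first ordering reproduces the input ratios 1 : 0.500 : 0.250 exactly, while the second gives roughly 1 : 0.730 : 0.533, i.e. ratios compressed towards 1.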

Now, because M_{ProPhotoRGB \rightarrow Out RGB} ≠ Id in general, and the product involving this matrix gets raised to the 1/\gamma_{ICC} power, there is some error introduced in the colorspace conversion. But this error doesn’t disappear in your version either.

What would be great would be to bypass the output ICC TRC entirely, leave the data as it is, tag the TRC in the file according to the log transform, and send to the display with a log → gamma final transform that would put the matrix products in the right place, between the TRC conversions (not inside them).

But that would be for darktable 2.8.

That’s exactly the reason for my rage against the ICC workflow. You think that because you follow the right path, it should end up OK, because everything is taken care of in a big black box.

ICC. Are. Not. Intended. To. Preserve. Radiometric. Color. Ratios.

What we need to do, here, is stop being by-the-book programmers applying standards nobody understands anymore, and look at the maths. You need to draw a block diagram and unroll the transfer functions.

The combined display profile + display hardware transform should be linear if everything is working

No. It should be an identity. Which is not the point here. The point is: how do you carry the radiometric color ratios along the whole pipe, from sensor to display, given that you need to perform non-linear operations somewhere because linear data look bad to humans?

Historically, that was a non-question since Photoshop’s default pixelpipe is gamma-encoded integer, so linearity was never an option until 32-bit floats came along. But, even in 32 bits, we are still grounded in the ICC limitations because people don’t design pipes, they design single filters.

The ICC workflow is designed such that, if it looks good on your screen, it will look the same on every other screen, because you retouched on that screen and it is your visual anchor/reference. Because you match displays, it’s called display-referred.

The workflow I’m aiming at is designed such that we retain the radiometric data (~ wavelengths) between what you saw in reality and the file you get. Then, whatever you see on the screen is only the business of the guy who manufactured it. Because you match nothing but try to preserve the physical properties of the light on the real scene, it’s called scene-referred. And because you retain some physical link/meaning in your pixel code values, you are able to unleash the power of optics inside the algorithms to get a faster and more natural retouch (although the primary goal was to blend digital special effects and digital/film footage seamlessly in movies).

To me, this sounds more like a limitation of your interpolation method than a conceptual issue.

Let’s do a parallel with good old film. When shooting a well-exposed 18% grey card, this will result in a given density in the intermediate negative. The exact value of the density is determined by the ISO speed of the film and the development process, but is not arbitrary.
When the negative is printed, and if you want to recover the original brightness of the scene, the paper will be exposed for a duration that results in an 18% reflectance for the grey card.

Hence, unless you deviate from the “nominal” exposure+print process for artistic purposes, good old film maps scene-referred 18% grey to 18% paper reflectance.

Do you agree?

Also, why should 46% be “the correct value”? This assumes that exponent=2.2 is “the correct encoding”, but this is not a universal thing…

This is fine. My original point was that you must apply f(x) = x^\gamma,\gamma \in [1.8; 2.4] to x \equiv max when “preserve chrominance” is activated, and only afterwards scale the RGB channels by the norm ratios. Otherwise, RGB ratios are not preserved…

Since your last step in the filmic code is a conversion from linear ProPhoto to Lab, you expect the grey point to be at 18%. The ProPhoto → Lab conversion will then map it to 50% in the Lab encoding. That is perfectly correct, and will by no means screw up the way colors are reproduced on screen.
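(As a quick check with the CIE L^* formula: L^* = 116 \, Y^{1/3} - 16, so mid-grey gives 116 \cdot 0.18^{1/3} - 16 \approx 49.5, i.e. roughly 50%.)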

The Lab data is encoded with the Lab transfer function, full stop. I am 100% sure that the DT color management code does not assume that the Lab data is linear.

That is another conceptual misunderstanding. You do not want to preserve the RGB ratios in the display colorspace, because this colorspace is not linear. Preserving the RGB ratios only works correctly in a linear encoding, where the RGB values can be assumed to be proportional to light intensities.

Therefore, you want to preserve the RGB ratios in the linear ProPhoto colorspace in which the RGB values are represented for the filmic tone mapping.

Also, the filmic module does not know anything about the actual display device. The image might be sent to a printer instead of a display, or the display might expect an encoding other than a pure power function. But you do not need to care about this, because the CM deals with that downstream in the pipe.

Once more, you do not want to keep the ratios in the display colorspace, because it is not linear…

I cannot disagree more… could you show me a technical paper that demonstrates what you are saying?
One could correctly say that ICCs with non-linear encodings are not intended to preserve radiometric color ratios.

In fact, non-linear encodings mainly serve the purpose of efficient bit allocation when working at 8-bit integer precision. If you are dealing with 32-bit floating point data, there are no reasons (apart from very few exceptions) to work on pixel values that are not linearly encoded.

And in this case, yes, radiometric color ratios are preserved.

Also, one should not mix up the concepts of color spaces and ICC profiles. Color spaces are intrinsically needed to interpret the RGB values in terms of “color”. An RGB triplet has no meaning per se.

ICC profiles are simply a convenient and standardized way of representing color spaces, as well as the transforms between them.
Getting rid of ICC profiles will not automagically solve the problem of how to interpret (and correctly use) RGB triplets, but will leave all the related maths on your side…

In fact, I am pretty sure that out here there are several people that perfectly understand the ICC standard.

Did you do that? I personally did, and that helped me a lot to clarify what is going on…

The radiometric ratios should be preserved in the light intensities emitted by the display, not in the RGB values as represented in the display colorspace, because the display expects non-linear data encoded with its OETF.

How do you carry the color ratios along? Do all your processing in an RGB colorspace that is linearly encoded, and only as a final step convert the values to the display profile.
Do not do any editing in the display profile.
As long as you only convert the pixel values from one colorspace to another, and you do not manipulate the pixel values in the intermediate colorspaces, the colors will not be altered. You can add as many non-linear intermediate ICC profiles as you like to a CM chain, and still get the same output (apart from numerical accuracy considerations, and possible gamut clipping if the conversions are not unbounded).
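A trivial illustration of that last point (pure power TRC, same primaries, no clipping, so the 3×3 matrices drop out): converting into a gamma-encoded space and straight back returns the original linear values, hence the ratios survive, as long as nothing is edited in between.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double linear[3] = { 0.40, 0.20, 0.10 };
    const double gamma     = 2.2;

    for (int c = 0; c < 3; c++) {
        double encoded   = pow(linear[c], 1.0 / gamma);  /* linear -> gamma-encoded space */
        double roundtrip = pow(encoded,   gamma);        /* gamma-encoded space -> linear */
        printf("channel %d: %.6f -> %.6f -> %.6f\n", c, linear[c], encoded, roundtrip);
    }
    return 0;
}
```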

What the hell are you talking about??? The world around us is linear. The light recorded by our retina is proportional to the light intensities in the scene we observe.
A digital camera is basically a multi-pixel photon-detector, and the RAW pixel values are at first order proportional to the light intensity reaching the sensor, which is the same light that reaches our retina… guess what? RAW data is linear.

Of course, if you send linearly encoded data directly to an sRGB display, that will look wrong (too dark). But that is because the display does not expect linear data.
If you send RGB=(0.46,0.46,0.46) to an sRGB screen, the hardware will decode the power function and will generate a light intensity that is 18% of the maximum brightness, which is perceived as mid-grey.

If you send RGB=(0.18,0.18,0.18) to the display, it will still decode the power function and will generate a light intensity equal to (0.18)^{\gamma}, that is about 2% for \gamma = 2.2.

That is why linear data looks wrong when sent directly to a display…

I still don’t get what are the limitations…

This is not ICC workflow, it is display-referred workflow. You edit your image in the same colorspace as your display device. You do not need ICCs for that, because there are no colorspace transforms involved!

As soon as you apply any color or tone transform, like boosting the saturation or applying a non-linear tone curve, the radiometric relationship between what you saw and the file you get is already lost.

You still need colorspaces to represent the RGB data in your file, and the response of your display. How do you deal with that?

You cannot imagine to produce a “universal file” that will look good on any display, without any color manipulation to account for the specific characteristics of the display device.

I’m at work right now, and I want to spend some time looking at the specific implementation of the filmic curves you are using.

My guess (and it is a guess - I haven’t looked at the details yet) is that the power 2.2 (default) function you are applying on the output from filmic is not there to cancel the effects of the ICC output profile further down the pipe - it’s because the filmic curves you are using produce an output in a 2.2 gamma-encoded colour space, and the conversion from ProPhotoRGB to Lab expects linearly encoded data. You are cancelling your own gamma curve, not an ICC one.

I’ve read the code now.

dt_prophotorgb_to_Lab expects linear rgb data. If you pass it gamma 2.2 data, you’ll get what you describe. The output ICC profile is irrelevant. The 2.2 is in the filmic output, not the ICC profile.
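To see where the “grey at 72-75%” reported earlier in the thread comes from, here is a generic luminance → L* sketch that expects linear input (an illustration of the same assumption, not darktable’s dt_prophotorgb_to_Lab):

```c
#include <stdio.h>
#include <math.h>

/* CIE L* from relative luminance Y in [0, 1]; expects LINEAR input */
static double Lstar(double Y)
{
    return Y > 0.008856 ? 116.0 * cbrt(Y) - 16.0 : 903.3 * Y;
}

int main(void)
{
    const double grey = 0.18;

    printf("linear 18%% grey fed in        -> L* = %.1f\n", Lstar(grey));                  /* ~49.5 */
    printf("gamma-2.2 encoded grey fed in -> L* = %.1f\n", Lstar(pow(grey, 1.0 / 2.2)));   /* ~73.5 */
    return 0;
}
```

Feeding it linear 0.18 gives L* ≈ 49.5, while feeding it 0.18^{1/2.2} ≈ 0.46 as if it were linear gives L* ≈ 73.5, which is exactly the 72-75% symptom described above.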
