Right, I mixed up the exponents in the formula…

# Darktable, filmic and saturation

**Carmelo_DrRaw**(Carmelo Dr Raw) #63

I think that here you are mixing up the concepts of “display-referred” and “encoding”.

Display-referred means that the pixel values are “squeezed” to fit the gamut and dynamic range of the output device (as opposed to scene-referred, where the pixel values are proportional to scene light intensities, without restrictions). However, nowhere in the definition of “display-referred” have I seen that display-referred data must be encoded with an exponent different from 1 (see for example here for a good definition).

So the power-like encoding has nothing to do with scene- or display-referred editing. Again, it is a simple trick to optimize bit allocation in low-bit-depth formats…

**aurelienpierre**(Aurélien Pierre) #64

I have no choice, since they are implemented together in a serialized pixelpipe.

Let’s start again slowly, because I answered in a hurry previously. What filmic does internally is:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{filmic, out} = \left[f\left(\dfrac{\log_2(L_\infty/g) - b}{w-b}\right) \cdot \dfrac{1}{L_\infty} \cdot \begin{bmatrix}R\\G\\B\end{bmatrix}_{filmic, in}\right]^\gamma \cdot M_{ProPhotoRGB \rightarrow XYZ}

with g, the input grey value (I forgot that in the above equations, not a big deal for the big picture though). Let’s write f\left(\dfrac{\log_2(L_\infty / g) - b}{w-b}\right) = k to make things clearer. It’s an S curve having a linear part in the middle (latitude).

g is the scene-referred linear grey, e.g. 18 % relative to the diffuse white. The log thing, by design, raises the midtones and remaps 18 % to whatever you target. Having it remap the grey to 18 % makes no sense; it would be a no-op. The benefit is that you can target whatever grey value you want without fiddling around with parameters to tweak. That is, as long as your gamma doesn’t get in the way, which it does.

So, imagine a classic setup: grey = 18\%, white = 100\%, hence white \, exposure = \log_2(1/0.18) = 2.47 EV. Say the dynamic range is 10 EV (modern mid-range camera), so the black exposure would be -7.53 EV. Then grey_{out} = \dfrac{\log_2(1) + 7.53}{10} = 75 \%.

The filmic tone curve is built such that f(grey_{out}) = grey_{display}^\frac{1}{\gamma}, e.g. a perceptual grey, which will be around 46% for usual RGB spaces. Try to set it to 18%, and the spline interpolation will fail big time because it puts too many nodes too close to each other. But once this is done, the TRC from the output ICC gets in the way because the grey already has the correct value (46%). So we correct the filmic output to bring the grey back to 18% with f(x) = x^\gamma,\gamma \in [1.8; 2.4], so the output ICC will put it at 46% again. At this point, whether the gamma is an encoding trick to avoid integer quantization artifacts, or a real display response curve, is irrelevant: we only want the grey point where we expect it **in the file**. Then, of course, the GPU/display will linearize it back accordingly.

So, overall, what do we send to the display?

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \left[ \dfrac{k}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}} \cdot M_{ProPhotoRGB \rightarrow Out RGB} \right]^{1/\gamma_{ICC}}

Now, the problem is darktable’s pipe is in Lab. Going through the output color profile, it takes the Lab data (which can be whatever encoding you used), assumes it is linear, and convert it to the destination space. There is no way to either stack another linearization step or completely bypass the TRC when the grey is already where we want it. So the linearization happens in filmic, until I rework that goddam blind pipe driven in the wall full-speed by the output color profile.

The goal is to preserve the RGB ratios, i.e to get:

\dfrac{1}{L_{\infty, display}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \dfrac{1}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \Rightarrow \begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in}

To achieve that, we need to assume Out RGB = ProPhotoRGB, which comes down to comparing the RGB vector components in the same vector space, so M_{ProPhotoRGB \rightarrow Out RGB} = Id. Thus:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \left[ \dfrac{k}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}} \right]^{1/\gamma_{ICC}}

Let’s recall that k = f\left(\dfrac{\log_2(L_{\infty, in} / g) - b}{w-b}\right) = L_{\infty, display}. Then:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{\gamma_{filmic}/\gamma_{ICC}} = \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in}, \forall \gamma_{filmic} = \gamma_{ICC}

And thus, as long as \gamma_{filmic} = \gamma_{ICC}, the color ratios are preserved along the transformation, *quod erat demonstrandum*.

If we go your path, we get:

\begin{bmatrix}R\\G\\B\end{bmatrix}_{display} = \left[ \dfrac{L_{\infty, display}^{\gamma_{filmic}}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in} \right]^{1/\gamma_{ICC}} ≠ \dfrac{L_{\infty, display}}{L_{\infty, in}} \begin{bmatrix}R\\G\\B\end{bmatrix}_{in}, \text{ even if } \gamma_{filmic} = \gamma_{ICC}

and you are actually compressing the ratios, which is consistent with the less saturated results you get, but does not achieve the original purpose (which is to keep the scene-referred ratios; it is **not** a perceptually-accurate method and, as such, **is not** intended to look good with no further adjustment).

Now, because M_{ProPhotoRGB \rightarrow Out RGB} ≠ Id in general, and this matrix product gets raised to the 1/\gamma_{ICC} power, some error is introduced in the colorspace conversion. But that error doesn’t disappear in your version either.

What would be great would be to bypass the output ICC TRC entirely, leave the data as they are, tag the file’s TRC to match the log transform, and send to the display with a final log → gamma transform that would put the matrix products at the right place, between the TRC conversions (not inside them).

But that would be for darktable 2.8.

**aurelienpierre**(Aurélien Pierre) #65

That’s exactly the reason for my rage against the ICC workflow. You think that because you follow the right path, it should end up OK, because everything is taken care of in a big black box.

ICC. Are. Not. Intended. To. Preserve. Radiometric. Color. Ratios.

What we need to do here is stop being by-the-book programmers applying standards nobody understands anymore, and look at the maths. You need to draw a block diagram and unroll the transfer functions.

The combined display profile + display hardware transform should be linear if everything is working

No. It should be an identity. Which is not the point here. The point is: how do you carry the radiometric color ratios along the whole pipe, from sensor to display, provided you **need** to perform non-linear operations somewhere because linear data look bad to humans?

Historically, that was a non-question since Photoshop’s default pixelpipe is gamma-encoded integer, so linearity was never an option until 32-bit floats came along. But, even in 32 bits, we are still bound by the ICC limitations because **people don’t design pipes, they design single filters**.

The ICC workflow is designed such that, if it looks good on your screen, it will look the same on every other screen, because you retouched on that screen and it is your visual anchor/reference. Because you match displays, it’s called display-referred.

The workflow I’m aiming at is designed such that we retain the radiometric data (~ wavelengths) between what you saw in reality and the file you get. Then, whatever you see on the screen is only the business of the guy who manufactured it. Because you match nothing but try to preserve the physical properties of the light on the real scene, it’s called scene-referred. And because you retain some physical link/meaning in your pixel code values, you are able to unleash the power of optics inside the algorithms to get a faster and more natural retouch (although the primary goal was to blend digital special effects and digital/film footage seamlessly in movies).

**Carmelo_DrRaw**(Carmelo Dr Raw) #66

To me, this sounds more like a limitation of your interpolation method than a conceptual issue.

Let’s do a parallel with good old film. When shooting a well-exposed 18% grey card, this will result in a given density in the intermediate negative. The exact value of the density is determined by the ISO speed of the film and the development process, but is not arbitrary.

When the negative is printed, and if you want to recover the original brightness of the scene, the paper will be exposed for a duration that results in an 18% reflectance for the grey card.

Hence, unless you deviate from the “nominal” exposure+print process for artistic purposes, good old film maps scene-referred 18% grey to 18% paper reflectance.

Do you agree?

Also, why should 46% be “the correct value”? This assumes that exponent = 2.2 is “the correct encoding”, but this is not a universal thing…

This is fine. My original point was that you must apply f(x) = x^\gamma,\gamma \in [1.8; 2.4] to x \equiv max when “preserve chrominance” is activated, and only afterwards scale the RGB channels by the norm ratios. Otherwise, RGB ratios are not preserved…

Since your last step in the filmic code is a conversion from linear ProPhoto to Lab, you expect the grey point to be at 18%. The ProPhoto -> Lab conversion will then map it to 50% in the Lab encoding. That is perfectly correct, and will by no means screw up the way colors are reproduced on screen.

The Lab data is encoded with the Lab transfer function, full stop. I am 100% sure that the DT color management code does not assume that the Lab data is linear.

That is another conceptual mis-understanding. You *do not want* to preserve the RGB ratios in the display colorspace, because this colorspace is *not linear*. Preserving the RGB ratios only works correctly in linear encoding, where the RGB values can be assumed to be proportional to light intensities.

Therefore, you want to preserve the RGB ratios in the *linear ProPhoto* colorspace in which the RGB values are represented for the filmic tone mapping.

Also, the filmic module does not know anything about the actual display device. The image might be sent to a printer instead of a display, or the display might expect an encoding other than a pure power function. But you do not need to care about this, because the CM deals with that downstream in the pipe.

Once more, you do not want to keep the ratios in the display colorspace, because it is not linear…

I cannot disagree more… could you show me a technical paper that demonstrates what you are saying?

One could correctly say that *ICCs with non-linear encodings are not intended to preserve radiometric color ratios.*

In fact, non-linear encodings mainly serve the purpose of efficient bit allocation when working at 8-bit integer precision. If you are dealing with 32-bit floating point data, there are no reasons (apart from very few exceptions) to work on pixel values that are not linearly encoded.

And in this case, yes, radiometric color ratios *are* preserved.

Also, one should not mix the concepts of *color spaces* and *ICC profiles*. Color spaces are intrinsically needed to interpret the RGB values in terms of “color”. An RGB triplet has no meaning per se.

ICC profiles are simply a convenient and standardized way of representing color spaces, as well as the transforms between them.

Getting rid of ICC profiles will not automagically solve the problem of how to interpret (and correctly use) RGB triplets, but will leave all the related maths on your side…

In fact, I am pretty sure that out here there are several people that *perfectly* understand the ICC standard.

Did you do that? I personally did, and that helped me a lot to clarify what is going on…

The radiometric ratios should be preserved in the *light intensities emitted by the display*, not in the RGB values as represented in the display colorspace, because the display expects non-linear data encoded with its OETF.

How to carry on the color ratios? Do all your processing in a RGB colorspace that is linearly encoded, and only as a final step convert the values to the display profile.

*Do not do any editing in the display profile*.

As long as you only convert the pixel values from one colorspace to another, but you do not manipulate the pixel values in the intermediate colorspaces, the colors will not be altered. You can add as many non-linear intermediate ICC profiles in a CM chain, and still get the same output (apart from numerical accuracy consideration, and possible gamut clipping if conversions are not un-bounded).

What the hell are you talking about??? The world around us *is linear*. The light recorded by our retina is proportional to the light intensities in the scene we observe.

A digital camera is basically a multi-pixel photon-detector, and the RAW pixel values are at first order proportional to the light intensity reaching the sensor, which is the same light that reaches our retina… guess what? RAW data *is linear*.

Of course, if you send linearly encoded data directly to an sRGB display, that will look wrong (too dark). But that is because the display *does not expect* linear data.

If you send RGB=(0.46,0.46,0.46) to an sRGB screen, the hardware will decode the power function and will generate a light intensity that is 18% of the maximum brightness, that is *perceived mid grey*.

If you send RGB=(0.18,0.18,0.18) to the display, it will still decode the power function and will generate a light intensity equal to (0.18)^{\gamma}, that is about 2% for \gamma = 2.2.

That is why linear data looks wrong when sent directly to a display…

I still don’t get what are the limitations…

This is not ICC workflow, it is display-referred workflow. You edit your image in the same colorspace as your display device. *You do not need ICCs for that*, because there are no colorspace transforms involved!

As soon as you apply any color or tone transform, like boosting the saturation or applying a non-linear tone curve, the radiometric relationship between what you saw and the file you get is already lost.

You still need colorspaces to represent the RGB data in your file, and the response of your display. How do you deal with that?

You cannot imagine to produce a “universal file” that will look good on any display, without any color manipulation to account for the specific characteristics of the display device.

**paulmiller**(Paul Miller) #67

I’m at work right now, and I want to spend some time looking at the specific implementation of the filmic curves you are using.

My guess (and it is a guess - I haven’t looked at the details yet) is that the power 2.2 (default) function you are applying on the output from filmic is **not** to cancel the effects of the ICC output profile further down the pipe - it’s because the filmic curves you are using produce an output in a 2.2 gamma-encoded colour space, and the conversion from ProPhotoRGB to Lab expects linearly encoded data. You are cancelling your own gamma curve, not an ICC one.

**paulmiller**(Paul Miller) #68

I’ve read the code now.

dt_prophotorgb_to_Lab expects linear rgb data. If you pass it gamma 2.2 data, you’ll get what you describe. The output ICC profile is irrelevant. The 2.2 is in the filmic output, not the ICC profile.

**paulmiller**(Paul Miller) #69

The chroma preservation is taking R:G:B ratios from a linear RGB space, and applying them to data in a 2.2 gamma RGB colourspace. Even if the filmic curve made no adjustment to ‘max’, the output colour will be different to the input.

Would it not make more sense to apply the ratios in the same colour space as they came from? (i.e. get the before ratios from pow(inputrgb, displaygamma), or apply the linear ratios to linear output RGB, with an appropriately adjusted scale factor as described by @Carmelo_DrRaw above).

**aurelienpierre**(Aurélien Pierre) #70

18% grey is relative to the luminance of a diffuse white in the context of a 6-8 EV dynamic range medium (LDR). Now, because the dynamic range has improved, the 100% luminance (== sensor clipping upper bound) can mean diffuse white or plain light source, so your actual diffuse white could be as low as 50%, so your actual mid grey can be anything, if you exposed the shot to the right. The actual density of the grey is not something you can assume anymore, and it is indeed arbitrary. What you need to understand here is filmic is a hacky way to squeeze an HDR scene-referred workflow inside a display-referred pipe. So, obviously, there are ugly bits.

It is an example, and is valid if you use a common Adobe RGB output or a common calibrated monitor.

I have shown you analytically that it is not true. Unless my calculations are wrong. If the profile TRC applies on separated RGB channels, you should do the opposite on separated RGB channels.

And that’s exactly the point of the gamma correction before the ProPhotoRGB -> XYZ -> Lab conversion we do: bringing the filmic output grey from 46% back to 18% (provided the user set a 2.2 gamma).

Yes it does. It converts from Lab to XYZ to output RGB. Lab is supposed to be a connection space but is used in dt as a working color space. So whatever Lab vector you input gets converted to RGB, with no possible decoding. So it assumes a linear pipe.

I disagree. You want to control the color shift happening when you cross the non-linearity wall, which is not possible in any CMS and is the whole purpose of the chrominance preservation option. Otherwise, you can just use the separated channels version.

That’s why you get 4 parameters for the user to setup the properties of the output/display.

You are wrong. Your camera records linear light ratios. Your (LCD/LED) monitor displays linear light ratios. In between, we put non-linear encodings to alleviate the quantization artifacts, which should result in no-ops if everything goes right. But we also need non-linear operations to make the image look perceptually pretty, called tone curves or lightness adjustments or tone mapping, which should affect the pixel energy without changing its radiometric ratios. So we can absolutely ensure the light ratios are preserved all the way if we take the whole pipe as a bundle, and not as a stack of filters. And we should. Because light ratios represent the distribution of the light spectrum.

What the goddam CMS does is matching your working output look with another output look. Not matching the output light spectrum with the input light spectrum and adjusting its energy depending on the dynamic range. ICC does not care about pixel spectrum and energy conservation, and that’s why the movie industry is using OCIO.

A RGB triplet is a spectral distribution of some light convolved with the spectral sensitivities of 3 colorimeters. It has a meaning. Color spaces give RGB primaries, which are essentially the norms of the RGB base vectors in an XYZ connection space. That should be trivial for you.

Did you read my calculations?

Your brain does logarithmic corrections on what the retina records. You see in a logarithmic space, you hear in logarithmic space (dB…). That’s why the most basic thing to do to go from raw to JPG is a tone curve that raises the midtones and compresses extremes.

ICC are grounded in display-referred. Filmic tries to map scene-referred to display-referred because darktable’s pipe understands only display-referred.

That does not mean we want the tone curves / tone mapping to mess up our edits.

```
for(size_t k = 0; k < roi_out->height * roi_out->width * ch; k += ch)
{
  float *in = ((float *)ivoid) + k;
  float *out = ((float *)ovoid) + k;
  float XYZ[3];
  dt_Lab_to_XYZ(in, XYZ);
  float rgb[3] = { 0.0f };
  dt_XYZ_to_prophotorgb(XYZ, rgb);

  float concavity, luma;

  // Global desaturation
  if (desaturate)
  {
    luma = XYZ[1];
    for(int c = 0; c < 3; c++)
    {
      rgb[c] = luma + saturation * (rgb[c] - luma);
    }
  }

  if (preserve_color)
  {
    int index;
    float ratios[4];
    float max = fmaxf(fmaxf(rgb[0], rgb[1]), rgb[2]);

    // Save the ratios
    for (int c = 0; c < 3; ++c) ratios[c] = rgb[c] / max;

    // Log tone-mapping
    max = max / data->grey_source;
    max = (max > EPS) ? (fastlog2(max) - data->black_source) / data->dynamic_range : EPS;
    max = CLAMP(max, 0.0f, 1.0f);

    // Filmic S curve on the max RGB
    index = CLAMP(max * 0x10000ul, 0, 0xffff);
    max = data->table[index];
    concavity = data->grad_2[index];

    // Re-apply ratios
    for (int c = 0; c < 3; ++c) rgb[c] = ratios[c] * max;
    luma = max;
  }
  else
  {
    int index[3];
    for(int c = 0; c < 3; c++)
    {
      // Log tone-mapping on RGB
      rgb[c] = rgb[c] / data->grey_source;
      rgb[c] = (rgb[c] > EPS) ? (fastlog2(rgb[c]) - data->black_source) / data->dynamic_range : EPS;
      rgb[c] = CLAMP(rgb[c], 0.0f, 1.0f);

      // Store the index of the LUT
      index[c] = CLAMP(rgb[c] * 0x10000ul, 0, 0xffff);
    }

    // Concavity
    dt_prophotorgb_to_XYZ(rgb, XYZ);
    concavity = data->grad_2[(int)CLAMP(XYZ[1] * 0x10000ul, 0, 0xffff)];

    // Filmic S curve
    for(int c = 0; c < 3; c++) rgb[c] = data->table[index[c]];
    dt_prophotorgb_to_XYZ(rgb, XYZ);
    luma = XYZ[1];
  }

  for(int c = 0; c < 3; c++)
  {
    // Desaturate on the non-linear parts of the curve
    rgb[c] = luma + concavity * (rgb[c] - luma);

    // Apply the transfer function of the display
    rgb[c] = powf(CLAMP(rgb[c], 0.0f, 1.0f), data->output_power);
  }

  // Transform the result back to Lab (ProPhoto RGB -> XYZ -> Lab)
  dt_prophotorgb_to_Lab(rgb, out);
}
```

Where do you see anything applied in a 2.2 gamma RGB space? The user-defined gamma compression happens last in the algo, to send the remapped grey back to the display grey. Again, **filmic does not output linear data** because it raises the midtones luminance, like any tone curve or tone mapping you would use.

You don’t trust me? Strip the gamma line, compile, shoot a color checker target and compare input and output values… I did it. It doesn’t work without the gamma because of how the pipe is wired.

But you are confusing **gamma encoding** and **artistic tone curve**. f(x) = x^\gamma is also known as a lightness adjustment. Do you revert your lightness adjustments before you convert to output RGB? Did you use a linear base curve in darktable before filmic? I don’t think so…

**Carmelo_DrRaw**(Carmelo Dr Raw) #71

Which profile TRC? If you are talking about the display profile, then the display hardware is already applying the opposite on separated RGB channels, so you do not have to worry about that.

I still do not understand what you are saying…

“*It converts from Lab to XYZ to output RGB.*”

This is the way ICC conversions are performed (in the case of a matrix-type output RGB profile). When going from Lab to XYZ, the inverse of the Lab TRC is at some point applied to linearize the values, so that the output *is* XYZ.

“*So whatever Lab vectors you input gets converted to RGB, with no possible decoding.*”

What do you mean by “with no possible decoding”?

Again, I suspect you are mixing the linearity of the tone mapping curve, and the linearity of the colorspace. Of course when you apply a non-linear tone mapping curve to the RGB values you get color shifts. But when you apply such a non-linear tone mapping to the norm of the RGB vector, and you linearly scale the RGB values according to the ratio of the input and output norms, then you preserve the colors. **But the RGB values you are scaling must be linearly encoded for this to work**.

Honestly I have no better way to explain this…

I expressed myself inaccurately… what I meant is that “the filmic module *does not need to know* anything about the actual display device”.

Your filmic tone mapping curve takes RGB values in linear ProPhoto and outputs RGB values in linear ProPhoto. Both input and output are proportional to scene light. It is up to the rest of the CM chain to make sure that this will also be proportional to emitted light (if the output is a display) or reflected light (if the output is a print).

That’s the whole concept behind CM and, believe me or not, **it works!**

The document you quoted (which I already knew and studied) does not give a single proof of what you are saying. There are very few references to ICC, the main ones being:

“*Display-referred imagery is also the realm of ICC profiles and traditional appearance modeling techniques. If you have two different displays, with different color reproductions, you can use ICC enabled software to convert between color representations while preserving image appearance. You can also use ICC for display calibration, where libraries will compute the color transform necessary to have your display emulate an ideal calibration.*”

and

“*Unlike other color management solutions such as ICC, OpenColorIO is natively designed to handle both scene-referred and display-referred imagery*”

Neither tells me what is wrong with ICC. And by experience I can say that one can perfectly manage scene-referred pixel data using ICC transforms…

That’s the whole point of this discussion. I am still convinced (and trying to demonstrate) that the current filmic code does things wrongly when the “preserve chrominance” is activated.

It does not output *linear data* because it applies a non-linear tone curve, but it outputs *linearly encoded data*. The two things are not the same.

This is where it happens:

```
// Apply the transfer function of the display
rgb[c] = powf(CLAMP(rgb[c], 0.0f, 1.0f), data->output_power);
```

In order to get back to linear ProPhoto you apply a power function to the RGB data. That means that the RGB data prior to this conversion *are encoded with a power function*.

If that is an artistic adjustment, then it is a non-linear adjustment applied to the separated RGB channels. This also happens in the “preserve chrominance” case. This last artistic step affects the RGB ratios in a non-linear way, and therefore the ratios are not preserved at the end (while your goal was to preserve the RGB ratios).

Do you see the problem now?

**paulmiller**(Paul Miller) #72

Assume we have a function ‘filmic()’ which applies the log tone mapping and s-curve with appropriate parameters.

Stripped down pseudocode for the filmic module:

I’ve ignored the saturation adjustments.

I’ve labeled the various RGB colour spaces for clarity rather than re-using the same variable.

```
float XYZ[3];
dt_Lab_to_XYZ(in, XYZ);
float rgb_prophoto_linear[3] = { 0.0f };
dt_XYZ_to_prophotorgb(XYZ, rgb_prophoto_linear);

float rgb_prophoto_filmic[3];
for (int c = 0; c < 3; c++)
{
  // apply filmic curves
  rgb_prophoto_filmic[c] = filmic(rgb_prophoto_linear[c]);
}

for (int c = 0; c < 3; c++)
{
  // Apply the transfer function of the display
  rgb_prophoto_linear[c] = powf(CLAMP(rgb_prophoto_filmic[c], 0.0f, 1.0f), data->output_power);
}

// transform the result back to Lab
dt_prophotorgb_to_Lab(rgb_prophoto_linear, out);
```

Try the following thought experiments:

- What happens if we replace filmic() with the identity function?
- What happens if we remove the ‘apply the transfer function of the display’ section?

`filmic()` is doing 2 things at once:

- it applies an artistic tone curve (the tonemapping function)
- it applies a `1/data->output_power` gamma correction to bring the rgb values into a display colourspace.

I assume the built-in output_power conversion comes from filmic’s roots in Filmic Blender where, as I understand it, the output colourspace is intended to go directly to a calibrated display (usually sRGB) without further colour management.

Darktable’s pipeline passes colour values in Lab space, so you have to transform from your ‘display’ colourspace back to Lab somehow - that’s where the output_power gamma comes from.

The artistic part of filmic is the good stuff. The gamma encoding is just an encoding issue.

Now, ratios:

```
float XYZ[3];
dt_Lab_to_XYZ(in, XYZ);
float rgb_prophoto_linear[3] = { 0.0f };
dt_XYZ_to_prophotorgb(XYZ, rgb_prophoto_linear);

float max_linear = fmax(rgb_prophoto_linear[0], fmax(rgb_prophoto_linear[1], rgb_prophoto_linear[2]));
float ratios_linear[4];
// Save the ratios
for (int c = 0; c < 3; ++c) ratios_linear[c] = rgb_prophoto_linear[c] / max_linear;

float max_filmic = filmic(max_linear);
// at this point, max_filmic represents the value of max after the filmic
// artistic adjustments _in a gamma encoded colour space_.

// Re-apply ratios
float rgb_prophoto_filmic[3];
for (int c = 0; c < 3; ++c)
{
  rgb_prophoto_filmic[c] = ratios_linear[c] * max_filmic;
}

for (int c = 0; c < 3; c++)
{
  // Apply the transfer function of the display
  rgb_prophoto_linear[c] = powf(CLAMP(rgb_prophoto_filmic[c], 0.0f, 1.0f), data->output_power);
}

// transform the result back to Lab
dt_prophotorgb_to_Lab(rgb_prophoto_linear, out);
```

The re-apply ratios code is applying a max value in a display-gamma colour space to linear values and then raising the result to a power - that just seems odd.

Having the pipeline work in Lab isn’t a problem from a maths point of view – you can convert from Lab into whatever colourspace you want to process things in and then convert back. Of course, Lab is not linear (in the sense that it isn’t related to XYZ by a matrix transform). If you do a lot of stuff in linear RGB, then converting from and to Lab is a performance issue and makes the code a bit messier, that is all.

edit - posted too soon by mistake and left half-typed code…

**aurelienpierre**(Aurélien Pierre) #73

They are encoded in log, if you really want to see that as an encoding, with a middle grey value (output) targeting the correct grey value of the output space (46% if you set gamma 2.2). Out of filmic, the RGB data are ready for display. But we can’t bypass the output color profile in darktable, which applies the ICC profile gamma/OETF no matter what, because it expects a linear pipe, hence the double-up.

The correct way to do it would be to convert from Lab to output RGB before filmic, apply the tone mapping in this space, ditch the in-module gamma workaround, copy the filmic output directly to the JPEG/pixbuf and tag the file with the gamma corresponding to the grey mapping. But darktable’s pipe doesn’t allow that now. So, the current option is a trade-off until the clean-up arrives. Last week, the modification allowing a full RGB pipeline in darktable (along with module re-ordering) was merged into master, so I will finally be able to do that properly.

Your fix is a good looking option, but does not preserve hues and defeats the original purpose.

It does no such thing. I don’t know where you get this idea, but it’s dead wrong.

We get a log transfer function.

We get a double up: the middle grey gets pushed to 72%.

That’s actually the opposite that happens. Filmic’s output is already display ready, but I can’t send anything to display directly, so I have to fake a linear output.

**afre**#74

Interesting. Is that going to replace the current pipeline? Will L*a*b* still have a place?

**paulmiller**(Paul Miller) #77

(edit: trying to make the quote clearer)

The conversion to display colourspace isn’t explicit in the equations for the filmic transform, but it is there:

Where \overrightarrow {RGB} is a linear rgb pixel value, grey is the input grey level, E_k is the black exposure level, E_w is the white exposure level and d_w, d_k and d_\gamma are the display parameters. Note that the function depends on the display \gamma value.

The output, filmic(x) is *display ready* (or at least ready for some imaginary display), therefore filmic(x) *must* be in the display colourspace.

So, filmic() is doing 2 things. It maps the wide range of colours and tones in the input image into the display tonal range in a pleasing manner, and it converts to the display colourspace (yes, I know the current implementation does both at once). You could conceptually separate filmic into 2 functions. filmic_{art}(x) and display(x). When composed, these functions have *exactly the same effect* as filmic above.

filmic_{art} does the artistic tone mapping and outputs values in the same linear RGB space as the input, and display does the conversion from linear to display RGB space.

display(x) does nothing to the appearance of the image; it just maps from one colour representation to another. Note that display is not necessarily a pure power function (sRGB isn’t).

Separating filmic like this uncouples the implementation of filmic from details of the colour encoding used for the display.

If filmic is pure tone-mapping, it doesn’t need to know the transfer function of the target display, since that is just an encoding detail and doesn’t affect the result.

Provided you have the correct colour encoding to send to the display eventually, you’ll get the right output.

I think I’ve probably said enough now, we seem to be arguing in circles.

**Carmelo_DrRaw**(Carmelo Dr Raw) #78

Out of filmic, the RGB data are ready for display.

I guess you know this is a bit over-simplified, right?

Try to answer the following question: *which display?*

But we can’t bypass the output color profile in darktable, which applies the ICC profile gamma/OETF no matter what, because it expects a linear pipe, so double-up.

Any ICC transform will reverse the encoding of the input data and encode the output data with the transfer function specified in the output profile.

It does not expect linear data. However, for this to work the input data must be “tagged” with the appropriate ICC profile.

The correct way to do it would be to convert from Lab to output RGB before filmic, apply the tone mapping in this space, ditch the in-module gamma workaround, copy the filmic output directly to the JPEG/pixbuf and tag the file with the gamma corresponding to the grey mapping.

This sounds like “display-referred” pixel editing, where the pixels are manipulated in the colorspace of the display device in use. I hope you will not really go in this direction, as IMHO it would be a regression.

Also, what about the gamut? If you save the filmic output directly to JPEG, you will end up with a file with a power=2.2 encoding and ProPhoto gamut, which is not forbidden in principle, but quite unusual in practice…

Your fix is a good looking option, but does not preserve hues and defeats the original purpose.

Let me prove the opposite with a practical example. I have used this HDR file as the test image. I will show screenshots taken with PhF simply to show the LCh values of specific image portions.

Here is the original file, without any tonemapping. Do not look at the colors on screen, they are clipped. Just note down the LCh values of the middle-right color patch:

```
L = 86.38
C = 182.34
H = 315.84
```

This is the output of the default DT filmic module, with “preserve chrominance” activated. I have however modified the code to disable the de-saturation of shadows and highlights, to avoid the additional hue distortions introduced by that step. The LCh readings are

```
L = 28.56
C = 154.53
H = 307.13
```

Link to the TIFF file.

Finally, my modified code, also with disabled shadows/highlights de-saturation. The LCh readings are

```
L = 51.46
C = 120.16
H = 315.84
```

Link to the TIFF file

See where the hue is preserved, and where it isn’t?

Other patches show varying degrees of discrepancy.

It does no such thing. I don’t know where you get this idea, but it’s dead wrong.

You need to apply a `powf(rgb[c], data->output_power)` correction to go from filmic output to linear ProPhoto. Also, you said yourself that the filmic output is “ready for display”, assuming by default a power=2.2 transfer function for the display. The `1/data->output_power` is obviously not explicit, but “embedded” in your filmic curve; it is there nonetheless (otherwise the output would not be “ready for display”).