Where is white point in the scene referenced workflow?

I am trying to use only the scene-referred workflow in darktable 3.6.0. Because the shadows and highlights module is being deprecated, I shouldn’t access the white point there. Where can I find it now?

Short answer: The filmic rgb module.

Have a look at these for a better understanding of the workflow and of where things are to be done:


Not directly related to the question but perhaps somewhat… it might also be fair to point out that the master tab of color balance rgb has controls for white and gray points to set reference points for the module, if you are using that for contrast and luma corrections related to shadows and highlights…

@Jade_NL Thanks for bringing up filmic rgb. I was trying to keep the question short.

In Aurélien PIERRE’s recent video on processing for beginners
https://www.youtube.com/watch?v=5CmsxxxsMDs
he uses the black point first, before doing the final tweaking with the black point relative exposure in filmic rgb. I don’t know if these two things are equivalent or just similar.

I think of white point behaving like black point in reverse. If black point is in the exposure module, shouldn’t white point be included as well?

There is a hard limit for black: a zero value. In a linear scene-referred pipe, white has no upper limit. The camera’s sensor has a maximum output value, which is defined in the raw black/white point module, but any modules coming after that can affect the maximum value. For example, if I increase exposure using something like the exposure module or the tone equaliser, this can increase the maximum possible value (e.g. add 1 stop of exposure and the maximum value is doubled; add 2 stops of exposure and the maximum value is quadrupled).
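The stop arithmetic can be sketched in a couple of lines of Python (a toy illustration, not darktable code; the function name is mine):

```python
def apply_exposure(value, stops):
    """Exposure in stops scales a linear scene-referred value by 2**stops."""
    return value * (2.0 ** stops)

peak = 1.0                       # hypothetical maximum after raw normalisation
print(apply_exposure(peak, 1))   # +1 EV doubles the maximum: 2.0
print(apply_exposure(peak, 2))   # +2 EV quadruples it: 4.0
```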

So, the idea in a scene-referred pipeline is that we make no assumptions about what the maximum value might be, because it can vary depending on what modules came before. Rather, we wait until we want to map the image to something that can be represented on a specific output device; at that point, we need to decide what value in the scene-referred pipe should map to the maximum value a display can handle.

This is what filmic does, via the relative white exposure slider. It caps the values coming into it from the scene-referred pipe so that they fit within the valid range of an output device. After filmic, we are working with a display-referred pipe, meaning that any modules after it should respect the range of values that the display can support. We try to minimise the number of modules operating in display-referred space in order to minimise the rework required when adapting the image for a new display device with a different dynamic range.
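That mapping decision can be sketched roughly as follows (assuming 18% middle grey and a hypothetical +4 EV relative white exposure; filmic’s real curve is an S-shaped spline, not this linear ramp):

```python
def map_to_display(value, middle_grey=0.18, white_relative_ev=4.0):
    """Toy scene-to-display map: everything at or above the chosen white
    level (middle_grey * 2**white_relative_ev) clips to display white 1.0;
    values below that scale linearly."""
    white = middle_grey * (2.0 ** white_relative_ev)
    return min(value / white, 1.0)

print(map_to_display(100.0))  # any huge scene value still lands on 1.0
print(map_to_display(0.18))   # middle grey maps to 0.0625 with these settings
```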


So in other words, that means that the filmic relative white exposure slider defines the white point for the display-referred pipe?

Well, the white point of the display-referred pipe is always 1.0; the filmic module takes the values from the scene-referred pipe and crams them into that [0.0, 1.0] range. So, the white relative exposure chooses the luminance level in the scene-referred pipe above which everything gets mapped to 1.0 in the display-referred pipe. Below that white point level, the filmic curve has a smooth roll-off to ease the transition into that white clipping point.
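The roll-off idea can be illustrated with a simple Reinhard-style soft clip (my stand-in, not filmic’s actual spline): instead of hard-clipping at the white point, values approach it smoothly.

```python
def soft_clip(x):
    """Monotonic roll-off: approaches 1.0 asymptotically instead of clipping."""
    return x / (1.0 + x)

print(soft_clip(0.0))    # 0.0: black stays black
print(soft_clip(1.0))    # 0.5
print(soft_clip(100.0))  # ~0.99: very bright values ease into white
```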


I am not a sophisticated user. Is there something bad about the white point in the old shadows and highlights module, such that the developers intentionally left it out of the scene-referred modules/workflow?

Yes, it works in display space, so it is prone to halos and other artifacts when pushed. Try the Tone Equalizer and some of its presets as a replacement.

Tone equalizer is a good idea. I was hoping for a slider to put in the quick access panel, though.

Showing my Lightroom origins, I am trying to set up a bunch of sliders in the quick access panel, mostly from color balance rgb:

Master perceptual brilliance grading : global (to emulate the “brightness” slider in the old contrast brightness saturation module.)
Master perceptual brilliance grading : shadows
Master perceptual brilliance grading : highlights
Master perceptual brilliance grading : midtones
exposure: black point

And I’d like to add the white point to round things out, which is why I’ve been asking you all about it. Too bad it doesn’t work.


It’s just a bit of a different mindset. You forget the absolute limits and worry about nailing your midpoint; then you assign dynamic range on either side of that, and the roll-off around the middle is shaped by what you set for contrast and latitude, and by whether you shift the highlight/shadows bias. If you change exposure or your middle gray and leave your filmic settings alone, the dynamic range will be distributed differently. Playing with the sliders in filmic with this view will reinforce that, and you start to think in terms of mapping dynamic range around gray rather than around black and white endpoints.
image


There is the option of the sigmoid mapping curve, which maps [0, inf) of the scene to [0, 1] of the display.
But it is not distributed as a module yet.

What I don’t understand in that graphic is that in a display-referred output, middle grey should be at 50%, not 18%.
18% corresponds to a linear, physical-light world, not a perceptual or display gamma-corrected world.

So it does not seem to be the display output, but an intermediate state with no gamma applied.


Afaik, 18% reflectance is (in 8-bit integer sRGB) encoded as roughly (118,118,118), i.e. about 46% of the maximum value, close to half. Keep in mind that sRGB is not a linear encoding.
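This is easy to check with the sRGB transfer function (IEC 61966-2-1); the sketch below assumes a plain float-to-8-bit rounding:

```python
def srgb_encode(linear):
    """sRGB OETF: linear [0, 1] -> nonlinear [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

print(round(srgb_encode(0.18) * 255))  # 118: 18% linear lands near mid-scale
```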

Like all things in life… it’s all about perception 🙂

Why 18% Grey and Not 50% Grey?

You are probably asking yourself why we refer to it as 18% grey rather than 50%. The simple answer is that our visual perception of brightness is not linear.

Though white objects reflect 100% of light and black objects reflect 0%, objects that are ¼ as bright will not reflect 25% of light, and objects that are ½ as bright will not reflect 50% of light.

This means we cannot use the arithmetic mean when calculating the average brightness level; instead, a geometric mean is used. To calculate a geometric mean you multiply two numbers and then find the square root of the result.

This is why middle grey is referred to as 18% grey: at the midpoint between white and black, that shade of grey reflects only 18% of light.
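As a worked example of that geometric mean (the 3.5% black and 90% white reflectance figures are my assumption for typical dark and light surfaces, not from the post):

```python
import math

black_reflectance = 0.035  # assumed reflectance of a typical "black" surface
white_reflectance = 0.90   # assumed reflectance of a typical "white" surface

# Geometric mean: multiply, then take the square root.
middle = math.sqrt(black_reflectance * white_reflectance)
print(round(middle, 3))  # 0.177, i.e. roughly 18% grey
```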


With the changes in filmic v5 I don’t really find much difference with sigmoid. I have not tried it on hundreds of images, but removing the mid-tone bias, the color preservation, and the safe roll-offs makes the results of filmic a lot more predictable and pleasing, and less likely to produce extreme results.


I cannot predict what I cannot understand.
Filmic does too many things in one place, from range compression to color preservation to highlight recovery (which seems not to be recovery at all, but a smoothing): too many combinations, and it is not easy to test what each slider does, as you can only disable or enable the whole module.
And it overlaps or undoes what other modules have done before it.
It gives good results by default, but tends to reduce contrast too much in the highlights and, to a lesser extent, in the darks (not easy to counteract with color balance or color balance rgb, as you cannot expand highlights enough with them; tone equalizer seems to be the way).

I have not tested sigmoid enough, but it gives good results too and is way easier to use and understand.

Anyway we will reach a better understanding of filmic step by step.


Gamma is part of your output profile, not really the difference between ‘unlimited scene data’ and ‘what is meant for an output device’.

In other words, filmic brings it to a linear space, but meant for displaying. The gamma curve is different for all sorts of output devices, so that should not be a part of filmic.

Filmic is linear scene-referred to linear display-referred (if you want to put it very simply: map unlimited values to a [0, 1] range with a certain distribution). The ‘color space’ is still the same after filmic.

Taking that linear output and converting it to the color space of whatever your output settings are, is done in the very last output space step (or in your export settings).

Imagine writing an image to Adobe RGB, which has a gamma of around 2.19 (2+51/256 I believe).
Now you want to write it as ProPhoto, which commonly has a gamma of around 1.8.
Your middle grey would end up at different values, so filmic (and other modules) need to work without having to think about that… So that means linear gamma.

Or, in very short form: whatever transforms you do, gamma for an output device comes at the very very very end :).
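A quick check of that middle-grey shift, using plain power-law gammas (real profiles such as sRGB add a linear toe near black, which this sketch ignores):

```python
def gamma_encode(linear, gamma):
    """Pure power-law encoding of a linear [0, 1] value."""
    return linear ** (1.0 / gamma)

adobe_gamma = 2.0 + 51.0 / 256.0                  # ~2.199, as mentioned above
print(round(gamma_encode(0.18, adobe_gamma), 3))  # ~0.459
print(round(gamma_encode(0.18, 1.8), 3))          # ~0.386
```

Same linear middle grey, two noticeably different encoded values, which is why the scene-referred modules stay in linear gamma and leave the encoding to the output profile.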


I describe it to myself as ‘filmic tries to fix overexposed parts by using some sort of inpainting to overwrite those parts, to make them less ugly’.
If your raw data is really clipped, you can try to get a nicer look from it. But if your raw data isn’t clipped, try to prevent having to use any ‘highlight recovery at all’.

If the data is there, bring it into visible range.
If the data isn’t there, try to paint over it with something that looks better… if you can.