Introducing the filmic module in darktable

I will prepare one of them and upload it to a new thread, showing it with no curve, with filmic applied, and processed in my own way.

But in my tests it seems that editing this kind of picture tends to be easier if you first apply levels or a linear curve to expand the histogram and then adjust the filmic parameters over a greater dynamic range, rather than using filmic alone and adjusting its black and white points to stretch the range.

Yes, you’re wrong: the filmic module is a tone-mapping operator exactly like Reinhard; its primary function is to compress highlights above 1 down to 1.

https://eng.aurelienpierre.com/2018/11/filmic-darktable-and-the-quest-of-the-hdr-tone-mapping/#logarithmic_shaper-2

The first step is the logarithmic shaper :

y=(log2 (x/gray) - black)/(white-black)

black and white are measured in EV
gray = 0.18

The white EV is the white relative exposure slider in the filmic module; it represents how many stops above middle grey there are, i.e. the max value in the image (generally above 1) that will be mapped to 1.

Reading that value in EV, you could do the conversion yourself and see what the max value is, using this formula (is it correct @anon41087856?):

max_value=0.18*(2^white_relative_exposure)

max_value_nits=(0.18*(2^white_relative_exposure))*100

The inverse formula if you know the max value is:

white_relative_exposure=log(max_value/0.18)/log(2)
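A minimal sketch of those two conversions in Python (my own function names; the ×100 line is just the nits assumption from the formula above):

```python
import math

GREY = 0.18  # scene-referred middle grey used by filmic

def max_value_from_white_ev(white_relative_exposure):
    """Scene-referred value that the white relative exposure maps to display white."""
    return GREY * 2 ** white_relative_exposure

def white_ev_from_max_value(max_value):
    """Inverse: EV above middle grey for a given scene-referred max value."""
    return math.log(max_value / GREY) / math.log(2)

print(max_value_from_white_ev(2.47))        # ~1.0
print(max_value_from_white_ev(2.47) * 100)  # ~100 nits, with the 1.0 = 100 nits assumption
print(white_ev_from_max_value(1.0))         # ~2.47 EV
```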

To me, keeping the EV value in the white relative exposure slider just generates confusion.

Thank you a lot.

I have been trying to put all that into an Excel spreadsheet.

I understand the shaper function well, along with the transfer function from the log output of the shaper to the output device.

For a grey level of 0.18 in the camera input, -10 EV for the black point and 2 EV for the white point, I get a grey level output from the shaper of 0.83333.


Everything OK for that.

The problem comes with the transfer function.

I use 0.18 grey for the display and gamma = 1.

So the grey level of the display is 0.18^(1/gamma) = 0.18.

For the appearance I use a 6 EV latitude for the display output, so the relative latitude is 0.5.
Contrast = 1

Calculation of the toe gives me
TL = 0.41666, which seems correct.

But the output coordinate is
TD = -236666


The problem is the position of the grey point.

If G_log is about 0.83 and the grey level of the output display is 0.18, then when you trace a line with unity slope through that point you get negative numbers.

In filmic the grey point of the output seems to be much higher, but @anon41087856 says it is 0.18^(1/gamma) (0.18 in this case).

I can’t see what is wrong.

That is the plot of the transform from the logarithmic output of the shaper function to the output display, isn't it?

You first transform your image data using the shaper function and then you transform that with the transfer function to add contrast to the output and smooth the highlights and blacks…
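Something like this, as a rough sketch in Python (the s_curve here is just a stand-in for filmic's real spline-based transfer function, not the actual math):

```python
import math

def log_shaper(x, grey=0.18, black_ev=-7.5, white_ev=2.5):
    """Scene-linear value -> [0, 1] logarithmic encoding (the shaper)."""
    return (math.log2(x / grey) - black_ev) / (white_ev - black_ev)

def s_curve(y):
    """Stand-in contrast curve: a smoothstep that lifts mids and rolls off both ends."""
    y = min(max(y, 0.0), 1.0)
    return y * y * (3.0 - 2.0 * y)

def filmic_like(x):
    # shaper first, then the transfer/contrast curve, in that order
    return s_curve(log_shaper(x))

print(filmic_like(0.18))  # middle grey after both stages
print(filmic_like(1.0))   # scene value 1.0 ends up near display white
```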


It doesn’t compress only highlights above whatever level, it compresses everything above middle grey so the end of the curve hits 100% display-referred.

No, filmic takes values between whatever non-zero and whatever other non-zero, and packs that into 0 and 100% display-referred. To do that, the middle grey value acts as a pivot/fulcrum/anchor. You need to forget about 0 and 1, those are not special numbers in the scene.

I don’t understand what you are trying to compute in the first place.


Sorry for the delay, @anon41087856.

I am trying to understand what the filmic curve does and how.
So I am trying to reproduce the calculations in Excel (using the English version of the article on your website).

I believe I am having trouble with some concepts, like the grey point.
I had thought I knew what it was, but now I am not so sure.

I am confused, and my calculations do not make sense, so I am misunderstanding something.
Please help.

The grey point is defined “everywhere” as the relative intensity of light that you perceive as a middle grey.

Let's consider the shaper function alone:

y= \frac{log_2(x/grey)- black_{EV}}{DR}

That implies that if you can perceive, say, 12 EV ( DR= 12 ), with 0 being the darkest point where you cannot distinguish any detail and 12 the lightest where you see no detail, middle grey would be at 6 EV, with a 6 EV range for the shadows and 6 EV for the highlights (so black_{EV}=-6, white_{EV}=6).

I am assuming that the input is coded in 12 bits (from 0 to 4096).

To calculate the input corresponding to a middle grey output, I take y = 0.5 (a perceptual grey 6 EV above the black point).

if we calculate corresponding input:

x=gray \times 2^{ y \times DR+black_{EV}}
x_{gray}= 0.18 \times 2^0=0.18

we get 0.18 as expected.
If we calculate the camera value it would be X_{grey} = 4096*0.18= 737.28 \sim 737.
Let's calculate the corresponding camera value for the black point (y = 0):

x_{black}= grey \times 2^{black_{EV}}= 0.18 \times 2^{-6} = 2.8125 \times 10^{-3}

The corresponding camera value would be X_{black}= 4096 \times 2.8125 \times 10^{-3}= 11.52 \sim 12
Now I will try to calculate the corresponding camera white point, which I expect to be 4096.

With this dynamic range of 12 EV, the white point is 12 EV above the BP and 6 EV above the grey point, at y = 1.

x_{white}= 0.18 \times 2^{12-6}= 0.18 \times 64= 11.52

which is greater than one; if we calculate the corresponding camera value it gives us X_{white}= 47186.

Obviously I am misunderstanding something.
As the equations are taken from your article and I have checked the calculations, it cannot be the calculations.

So I am misinterpreting grey point or white point.

Maybe the grey point does not correspond to middle grey in the output (y = 0.5), but then, what does middle mean? Are all those places that talk about it as the perceptual middle point wrong?

Cameras do not put middle grey in the middle of the distribution; Canon has white_{EV}= 2.5 and a dynamic range of, say, 12 EV (so black_{EV}=-9.5).
I had thought that was to leave more room for the shadows than for the highlights.

Maybe it is the concept of the white point as a number of EV in the output space.

If I calculate the output corresponding to a camera value of 4096, I get:

X_{white}=4096 \rightarrow x_{white}= X_{white}/4096= 1

If I calculate the corresponding output value it is:

y_{white}= \frac{log_2(1/0.18) + 6}{12}= 0.706

Pure white will then be at about 8.47 EV above the black point (not 12), so the DR actually covered won't be 12 EV, and the highlights would only span 2.47 EV above grey and not six (but we made all calculations with a DR of 12 EV).
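A quick numeric check of both calculations above (just redoing the arithmetic in Python, with the same assumptions: grey = 0.18, black at -6 EV, DR = 12 EV):

```python
import math

grey = 0.18

# Forward: the value that y = 1 would require (white 6 EV above grey)
x_white = grey * 2 ** (1 * 12 + (-6))          # x = grey * 2^(y*DR + black_EV)
print(x_white, 4096 * x_white)                  # 11.52 and ~47186 camera counts

# Reverse: where the actual camera clipping point (x = 1) lands in the shaper output
y_white = (math.log2(1.0 / grey) + 6) / 12
print(y_white, y_white * 12)                    # ~0.706, i.e. only ~8.47 EV above black
```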
And the grey point won’t be in the middle of the perceptual output any more.

So what is the grey point, and what is the corresponding perceptual output value?

I think the trick is in the definition of gray and white point.


First thing I noticed is that you put the middle gray at the center of the dynamic range.

But middle gray is a perceptual notion, and does not have to be in the middle of the dynamic range. E.g. in a snow landscape under cloudy conditions: most of the shaded areas will be at higher luminosities than “middle gray”. Or take a black cat on a dark background…

Or an image of an interior with a window. Where do you want to have the accent:

  • on the interior: then your middle gray will be taken with respect to the interior
  • on the garden: then your middle gray will be taken with respect to e.g. the lawn…

In at least one of those two options, middle gray will certainly not be at the center of the dynamic range.


I’m sure I am misunderstanding middle gray, thank you.

I was always told that middle grey was what we see as a medium grey, in the middle of the transition from black to white.

So I assumed it corresponds to y = 0.5, the middle of the range from 0 to 10 EV (in Adams' zone system), i.e. the transition from zone 4 to zone 5.

Is it the opposite? The middle of the linear light intensity, where the light intensity is half the maximum?

Does that correspond to y = 0.18, i.e. 1.8 EV, near the 2nd zone?

I have also observed that in current cameras, the white point is put somewhere from 2 EV to 2.5 EV above the grey point.
You can measure that by seeing how much you can overexpose an image of a uniformly illuminated surface before you clip the highlights.

So if we have 12 EV of dynamic range, that would leave you 2.5 EV for the highlights and 9.5 EV for the shadows (BP = -9.5 EV).

So it is clear that it is not in the middle of anything.

But what, then, is the grey point?

What is the meaning of 18% gray point?
Does that mean that in a camera with a 12-bit sensor (4096 possible values), if grey is 18%, the grey value would be 0.18 × 4096 = 737?

Yes, in a gamma-encoded color space (RGB or Lab):
0.18^(1/2.4) ≈ 0.5
In linear RGB middle grey is 0.18.

This is true if you expose the photo so that a middle gray card is mapped to 0.18 (exposing for gray)

log2(0.18 × 4095) ≈ 9.5 EV below middle grey (down to 1 raw count)
log2(4095/(0.18 × 4095)) ≈ 2.5 EV above middle grey (up to clipping)

If you underexpose by 1 stop (exposing for the highlights)
0.18 becomes 0.09

log2(0.09 × 4095) ≈ 8.5 EV below middle grey
log2(4095/(0.09 × 4095)) ≈ 3.5 EV above middle grey

In this last case, in darktable it is possible to add digital exposure compensation so that middle grey is anchored at 0.18 and the max value becomes 2.
Then use the filmic module.
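A small sketch of that last case in Python (my own names; one stop of underexposure as in the numbers above):

```python
import math

grey_exposed = 0.09   # grey card level after underexposing by 1 stop (fraction of clipping)

# headroom above middle grey before any compensation
print(math.log2(1.0 / grey_exposed))            # ~3.47 EV to clipping

# digital exposure compensation that re-anchors middle grey at 0.18
gain = 0.18 / grey_exposed                      # = 2.0, i.e. +1 EV in the exposure module
max_value = 1.0 * gain                          # the clipping level becomes 2.0 scene-referred
print(gain, max_value)                          # 2.0  2.0
print(math.log2(max_value / 0.18))              # ~3.47 EV: the white relative exposure for filmic
```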


We are mixing up the middle gray from the displayed image and middle gray from the raw data.

First, in the final image, middle gray is at 50% luminosity, i.e. in the middle of the display range. I think we both agree on that. And of course, middle gray will always, by definition, be at the center of the dynamic range for a display-referred image (screen, paper, projector, …)

Now, let’s switch to the scene and the captured data.
The image you capture is exposed to make sure that none of the important zones in the image is clipped (clipped point highlights can be very difficult to avoid).

To transform that image from linear, scene-referred data to logarithmic, display-referred, we use the filmic module. That module has basically three parameters that influence the dynamic range that is covered:

  • middle gray luminance
  • white point
  • black point.

Now, there are several ways to set those values.
One way is to set the middle gray luminance where you want it (at 18.45% in filmic), then use the exposure module to get your middle gray exposed correctly, and in the filmic module, adjust the white and black points to avoid clipping at either end of your luminosity range.
That will virtually always put your middle gray away from the center of the EV range. That is to be expected in the scene-referred data.

But the key point is that scene middle gray is where I decide to put it. Highlights and shadows will just have to adapt, and will end up somewhere, hopefully within my display range.
Again, I decide where middle gray is within the dynamic range covered by my input, not the math. And in general I decide to put middle gray at 18.45%. But I can very well put it at say 5%, or 25%. But that point that I set is the luminosity that will end up as middle gray in my output image
(cf. the filmic presets: middle gray luminance varies between 0.58% and 18.45%…)

In practice, you will use the exposure module (black point) to get your scene-referred values in a somewhat symmetrical range around your center, to avoid problems with the filmic shaper function. But that’s already a matter of taste, and not a requirement of the math.


I might be missing something but why would that be the case? If you are talking about signal levels, then it assumes both that the output space follows perceptual lightness and that white is 100%. But Report ITU-R BT.2408-3 (page 5) places HDR reference white at 203 cd/m², which is encoded in PQ (which is display-referred) as 58 % of the maximum signal value, and 18 % grey is placed at 26 cd/m² or 38 % signal, which is neither half of the maximum signal value nor half of the white signal value. (And also not 18 % of 203 cd/m², not sure what’s up with that…)

If you are talking about log units of dynamic range, then I also don’t see a reason why the output should have exactly as much headroom below middle grey as it does above.

Could you perhaps clarify what you mean? (Or what definition it is that makes it true by definition?)
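For reference, those two signal percentages can be reproduced with the PQ inverse EOTF from SMPTE ST 2084 (standard constants; a quick Python check):

```python
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance in cd/m² -> signal in [0, 1]
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(pq_encode(203))   # ~0.58 -> HDR reference white at 58% signal
print(pq_encode(26))    # ~0.38 -> 18% grey card at 38% signal
```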

OK, let me work through some numbers.
Now it is clearer.

For now I am just talking about the camera raw values (X), the normalized (0 to 1) values (x), and the output of the shaper (y) that translates them to the perceptual space.

Yes, if grey is 18%, the grey point is 2.47 EV below white ( x_{grey}= 0.18 \, x_{white} \rightarrow white_{EV}= -log_{2}(0.18)= 2.47 EV above the grey point).
You are right, middle grey is not in the middle of the output luminosity range, despite its name (silly me).

And that is more or less what Canon cameras set, as you blow out the point you are metering when you overexpose by a bit less than 2.5 EV.

We will suppose a 12-bit camera (4096 possible values).
And a white point near the extreme, say 4090, and we will suppose an 18% grey point.
We will suppose a black point of 4 in the camera data, where the noise is so huge you can discern nothing.

Let me calculate the corresponding values.

So at camera X_{white}= 4090, X_{black}= 4.

To normalize the values we use the white point set in the camera (4090), so input values will be normalized from 0 to 1:

x_{black}= 4/4090= 9.78 \times 10^{-4}
x_{white}= 4090/4090= 1

Let's calculate the camera value of the grey point; as it is 18% of the white point it should be:
X_{grey}= 0.18 \, X_{white}= 0.18 \times 4090= 736.2

Let us calculate EV from white to grey point and from black to grey point:

White_{EV} = log_{2}(X_{white}/X_{grey})= log_2(4090/736.2)= 2.474 EV

As expected. And now Black_{EV}:

Black_{EV}= log_2(X_{black}/X_{grey})= log_2(4/736.2)= -7.524 EV

Much bigger in magnitude than white, as it usually is in the filmic examples.

DR= White_{EV}-Black_{EV}= log_2(X_{white}/X_{black})= log_2(4090/4)= 9.998 EV

Roughly 10 EV as one would expect.

Let's calculate the shaper output:

y= \frac{log_2(x/x_{grey})-Black_{EV}}{DR}
G_{log}= y_{grey}= -Black_{EV}/DR= 0.7526
y_{black}= \frac{log_2(9.78 \times 10^{-4}/0.18)+7.524}{9.998}= 0
y_{white}= \frac{log_2(1/0.18)+7.524}{9.998}= 1

So the shaper output goes from 0 corresponding to the black point value in camera to 1 corresponding to white point.

You only get negative outputs if your raw camera value is below the black point (4), and outputs greater than 1 if the raw value goes above the established white point (4090).

If you use EV instead, it goes from 0 to DR (about 10), with the grey point at -Black_{EV}.
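A quick numeric check of those shaper values (Python, redoing the arithmetic with the numbers above):

```python
import math

X_white, X_black = 4090, 4
X_grey = 0.18 * X_white                      # 736.2

white_ev = math.log2(X_white / X_grey)       # ~2.474 EV
black_ev = math.log2(X_black / X_grey)       # ~-7.524 EV
dr = white_ev - black_ev                     # ~9.998 EV

def shaper(x, grey=0.18):
    return (math.log2(x / grey) - black_ev) / dr

print(shaper(X_black / X_white))   # 0.0    (black point)
print(shaper(0.18))                # ~0.7526 (grey point, G_log)
print(shaper(1.0))                 # 1.0    (white point)
```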

Is this OK for now?

Gray point is not at the middle of the logarithmic interval.


Well, at first I had the same idea as him.

Everybody tells you that the grey value corresponds to a 50% grey.

But in the logarithmic output of the shaper function it does not; in cameras it is at 18% of the clipping level, I believe, and that is 2.47 EV below pure clipping white.

If your camera has 10 EV of dynamic range, it is clear that you have more room below grey than above.

Maybe when you transfer it to the output screen with a 2.2 gamma it ends up at 50%.

I have not reached that point yet; I want to be sure I understand correctly what the shaper does and the meaning of the white point, black point and grey point (it is obvious I did not have them as clear as I had thought).

That also depends. There is nothing in ISO 12232:2019 that mandates this, and cameras deviate from this to varying extents and in both directions. (Yes, apparently, there have been a few Panasonic cameras with negative raw headroom.)

And so this does not necessarily follow:


Yes, I am assuming an 18% grey, but if that is not the case the corresponding EV will vary, won't it?

If you set grey to 25%, white_{EV} would be only 2 EV, two stops from the grey point to clipping.

I understand that if the light meter is calibrated to 18% you should use that at first.

But if you have underexposed your image by 1 EV you should use a greater grey point to get the correct exposure (maybe you did it intentionally, or you metered a brighter part of the scene).

Let me try to clarify: camera manufacturers are free to map whatever raw value they want to middle grey as long as it corresponds to the nominal focal plane exposure for the ISO setting in use (0.1 lx·s for a standard output sensitivity of 100 for example). Literally the only thing that the ISO standard says about RAW is that ISO speeds shall not be reported for them because they haven’t been processed yet.

It is perfectly legal to implement all ISO settings with the same analog amplification and RAW values, and so then you have more headroom above middle grey at higher ISO settings, because the RAW value that will be mapped to middle grey is lower.

This is not just theoretical: my E-M1 Mark II does exactly that for ISO 64 and ISO 200. For a given exposure, the RAW files are identical with both settings (verified in RawDigger).

Have you watched @anon41087856 extensive videos? I feel like he covers this in a lot of detail.

Well, I have finished the Excel file that implements the calculations of the filmic curve that @anon41087856 provided.

I was having some problems because I mistakenly thought that an 18% value (relative to the white point) in the raw file should be mapped to a 50% value after applying the shaper function.

And then I had a bug in the formulas of the transfer function that kept giving me negative values of TD.

But I have corrected it and now it works well.

I can input the camera white point and black point values (and the number of bits) and get Black_{EV} and White_{EV} calculated.

I also calculate the output display values for a given gamma and number of bits of the display.

So I can show the shaper transform, the transfer function transform, the normalized output (before gamma conversion) against normalized camera values, and the display data versus the camera raw data.

If I understood well what @anon41087856 explained, the transfer function has to be applied to the output of the shaper function.
Then you apply the gamma correction to get the display values.
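The last step would then look something like this (a sketch; the function name is mine, and gamma and bit depth are the spreadsheet inputs):

```python
def to_display(v, gamma=2.2, bits=8):
    """Output of shaper + transfer (0..1) -> display code value after gamma encoding."""
    v = min(max(v, 0.0), 1.0)                  # clip to the display range first
    return round(v ** (1.0 / gamma) * (2 ** bits - 1))

print(to_display(0.18))   # ~117 for an 8-bit, gamma 2.2 display
print(to_display(1.0))    # 255
```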

For the example I gave before:

RAW values (12 bits)

white clip value: 4090
black value: 4

grey 18%

This gives Black_{EV} = -7.5 and White_{EV} = 2.47; dynamic range about 10 EV.

latitude: 7
contrast: 1.2

Display 8 bits, gamma 1.1

This is the shaper function against the normalized camera data:

This is the transfer function (Hermite spline interpolation) with latitude 7 and contrast 1.2:

What I wanted to see is the normalized output of the combined transfer and shaper functions (before the final conversion to display with gamma) against the normalized input of the raw sensor data.
And the final display data, after gamma conversion and in display units, against the raw input data.

I will put it up later (I had uploaded some curves but they were incorrect; I will upload them when I get them right).

I am not sure yet whether everything is correct, but I think this is the way filmic does its job.

If somebody is interested in the Excel file to play their own games with the numbers, no problem, I can provide it.


It would be interesting to play around with this spreadsheet, so it would be a good idea to share the file.

OK, I have detected that it still has some flaws.

The output of the shaper and transfer functions is not OK (you can see it because the curves don't cross at the grey point): I multiplied the two curves instead of applying the result of the shaper as input to the transfer function :scream: :hot_face:

That is the result of using a computer late at night :stuck_out_tongue_closed_eyes:

I will correct it, make some improvements and clean-up, and add comparisons with other curves such as a gamma curve.

And I will create a new thread to comment results and upload the excel file.

Well, I have been polishing the Excel file.

I have corrected the previous errors, reorganized things and commented it.
I have added the balance function that was missing in previous versions.

Here it is (I will create a thread introducing the spreadsheet calculations and giving some examples).

https://we.tl/t-COkEAj2cA0

And here are the curves that were incorrect in the previous message.

The shaper-only transformation versus the normalized input, compared to a gamma curve that preserves the same grey point.


We can see the difference between them, the logarithmic shaper having a smoother local contrast transition in the shadows and a bit more local contrast in the highlights near the grey point.

The normalized output versus normalized input (with no display transformation yet).


Again a smooth transition in the shadows, and more local contrast in the highlights up to the shoulder.
But we see more compression in the highlights near the white point, so there will be less detail there.

Of course we can tweak the parameters to change that behaviour if that is what we want.

Finally, the final screen-versus-raw-data curves. Here I have used the standard 2.2 gamma and an 8-bit screen, but that can be changed; use a linear gamma or whatever you feel fits your needs for comparison.