We’re not dealing with precise data; we have precise numbers, resulting from a light signal to which noise (from different sources) has been added, and which has then been passed through a number of mathematical transformations (which may differ for different parts of the scene). So while there is still a relation between the number and the data value (or we couldn’t do any editing), the number is an approximation of the data value…
Your constant hammering on the picker values being wrong is a red herring: those pickers do exactly what they are supposed to do. That those values are not what you want is not the fault of the code, or even of the method.
If you want filmic to cover a precise dynamic range, set the black and white reference values by hand (either with the sliders, or by right-clicking each slider and entering a precise value with the keyboard).
And no, sensor properties are not a good starting point: the original dynamic range will be modified through the pixel pipe processing, and there is no reliable value for middle gray in the original raw data.
In addition to your reasoning (which I agree with), here is a thought experiment if @Neutral_Gray is still not convinced: consider an ideal sensor that just counts photons. Assume it has no clipping and no noise, so its dynamic range is infinite. What would you have filmic do?
This was a concern that I had with sigmoid. Filmic RGB provides a graphical interface that I really felt to be useful; sigmoid, on the other hand, is a simple ‘black box’ where one has to imagine the dynamic changes.
I would really like to see a more graphical sigmoid.
It’s a bit different… it’s a curve from “black” to white. You can rotate it with contrast, and with skew one direction relaxes the shadows and compresses the highlights while the other does the opposite… So its adjustment comes from distortion of the curve. You can keep adding exposure and it will continue to fade to “white”.
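For intuition, here is a toy S-curve with a contrast and a skew parameter; it is only a rough illustration of the behaviour described above, not darktable’s actual sigmoid implementation:

```python
# Toy S-curve (NOT darktable's sigmoid): maps scene-linear input (0..inf)
# to display (0..1). "contrast" rotates the curve around middle gray,
# "skew" trades shadow range against highlight range.
def toy_sigmoid(x, contrast=1.5, skew=0.0):
    y = x**contrast / (x**contrast + 0.18**contrast)  # loglogistic-style curve
    return y ** (2.0 ** skew)                          # asymmetric shaping

for ev in (-4, -2, 0, 2, 4, 8):
    x = 0.18 * 2.0**ev                 # scene value, in EV relative to middle gray
    print(f"{ev:+d} EV -> {toy_sigmoid(x):.3f}")   # keeps fading toward 1.0
```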
You get the most precise data representing the scene by enabling only Input Profile and White Balance, and turning off any module which has a nonlinear transfer function (Tone Curve, Filmic RGB or Sigmoid), treats parts of the image with different gain factors (e.g. Tone Equalizer or Color Balance RGB), or alters color. Adjust Exposure as needed; it has basically the same effect as changing exposure time, aperture or ISO on the camera. Provided the input profile is correct for the camera and the given illuminant, the pixel data you get this way is the most precise approximation of the true scene color and luminance.
You can try this by taking an image of a color checker: white balance on the Neutral 8 patch and adjust Exposure until the Neutral 5 patch is at L=50.8. If the input profile is good, you should get close to the correct color and luminance for all patches.
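For reference, the L=50.8 target corresponds to roughly 19% linear reflectance; a minimal sketch of the standard CIE L* to linear conversion (the 50.8 value is taken from the paragraph above):

```python
# Standard CIE L* (0..100) -> linear relative luminance Y/Yn (0..1).
def lstar_to_linear(lstar):
    if lstar > 8.0:                        # above CIELAB's linear toe
        return ((lstar + 16.0) / 116.0) ** 3
    return lstar / 903.3                   # linear segment near black

print(lstar_to_linear(50.8))   # ~0.191, i.e. roughly a 19% reflectance patch
```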
Filmic RGB assumes a 12 EV dynamic range by default, which is IMO quite reasonable for most sensors. Given that exposure is adjusted for 18% average scene illuminance, +4 EV for white and -8 EV for black are sensible. 100% is +2.47 EV relative to 18%, so +4 EV leaves about 1.5 EV of margin for highlight compression.
If exposure has been adjusted for lower or higher middle gray than 18%, move both sliders by the same amount keeping the 12EV dynamic range between them.
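The EV figures above are just base-2 logarithms of luminance ratios; a quick sketch of the arithmetic (the +4 / -8 EV references and the 18% anchor are the defaults discussed here, nothing read from the camera):

```python
import math

def ev_from_gray(luminance, middle_gray=0.18):
    """EV of a linear luminance relative to middle gray."""
    return math.log2(luminance / middle_gray)

print(ev_from_gray(1.0))              # +2.47 EV: 100% diffuse white vs. 18% gray

white_rel, black_rel = 4.0, -8.0      # filmic's default references
print(white_rel - ev_from_gray(1.0))  # ~1.53 EV of headroom above diffuse white
print(white_rel - black_rel)          # 12 EV total dynamic range

# Shifting both references by the same amount keeps that 12 EV span:
shift = 0.5                           # example value; the sign depends on your exposure choice
print((white_rel + shift) - (black_rel + shift))   # still 12.0
```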
From this starting point, tweak the white / black sliders to massage the highlights and shadows to get them as you like.
Don’t use the pickers but keep the 12EV difference between white and black to avoid this.
Maybe a slider to shift both points by the same amount, and/or a “constant dynamic range” option (e.g. shift-click) to pick white and adjust black at the same time so the dynamic range stays constant, would be helpful?
As an update, I adopted this into my workflow.
The assumption is that I accept being slightly “off” the real spectrum due to the shifts of “intervening” modules.
After correcting exposure in the entire roll, I go into maximum zoom (thus shortening loading times on my old computer) and copy the individual exposure shift values into a simple Excel spreadsheet, which calculates the values that I can then type into Filmic RGB’s “scene” tab.
This is my column setup:
[image number] [exposure shift] [white relative exposure: (4)+exposure shift] [black relative exposure: (-8)+exposure shift]
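If you prefer scripting to a spreadsheet, the same column arithmetic takes a few lines of Python; a hypothetical sketch (the image names and shift values are made-up placeholders, and the +4 / -8 EV bases are the filmic defaults mentioned above):

```python
# Per-image exposure shifts (EV), copied from the exposure module.
exposure_shifts = {
    "IMG_0001": +0.7,   # placeholder values
    "IMG_0002": -1.3,
}

for name, shift in exposure_shifts.items():
    white_rel = 4.0 + shift    # "white relative exposure" column
    black_rel = -8.0 + shift   # "black relative exposure" column
    print(f"{name}: white {white_rel:+.2f} EV, black {black_rel:+.2f} EV")
```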
I think this gives me a good basis for artistic corrections, which I would make in the “look” tab.
It is similar - but not identical - to the “custom middle-grey values” option, but in this case, adapted to the use of the exposure module.
I wish there was a hardcoded option to automatically connect the value from the exposure module with Filmic RGB, but given that this overall method is already regarded as flawed by many, I have my doubts as to whether this is going to happen.
Not needed, as filmic only looks at the image it receives from the preceding module and considers 0.18 brightness as the invariant point. White and black references are always relative to that brightness, whatever happened in earlier modules (using custom middle-gray values doesn’t change the principle, only the pivot point).
Wrong, as you can change the brightness of the midtones with modules other than exposure. The tone equaliser in particular can shift the midtones by up to 2 EV (not sure that would be a good use of the module, but that’s not relevant here).
Also, the sliders in the “look” tab determine the contrast at and around middle gray (0.18 input brightness):
contrast sets the slope of the curve,
latitude sets the area over which that slope is forced.
That means that with your method, you give up all control over your highlights and shadows: if you have an image with a low dynamic range, you’ll either get gray highlights and shadows or very high-contrast midtones.
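To make that concrete, a simplified calculation: it ignores filmic’s actual spline and simply places scene EV linearly between the black and white references, assuming the default +4 / -8 EV settings:

```python
black_rel, white_rel = -8.0, 4.0      # default 12 EV span

def display_position(ev):
    """Where a scene value (in EV relative to middle gray) lands on the
    normalized 0..1 axis, using a purely linear-in-log placement."""
    return (ev - black_rel) / (white_rel - black_rel)

# A low-dynamic-range scene whose brightest pixel is only +2 EV above gray:
print(display_position(2.0))   # ~0.83: the brightest pixel stays well below white
print(display_position(4.0))   # 1.00: only a +4 EV pixel would reach display white
```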
And why do you worry about input (sensor dynamic range) when filmic is supposed to prepare for output (screen, ~10 EV, or paper, ~7-8 EV dynamic range)?