Critique of Filmic RGB graphs/documentation

In filmic, there’s the Y norm, and Y is the (photometric) relative luminance, so it takes the eye’s hue-dependent sensitivity into account.

Also, filmic is not the only one to employ a norm; I think it shares that feature with OpenDRT, and probably with others.

The motivation (at least if the norm is about perceived intensity) is understandable, I think. In per-channel tools, such as AgX, a blue, which is perceived much darker than a green with the same channel value, will desaturate and tend to white:


Of course, the original EXR image has very bright highlights, but I’d still expect a larger difference between the points where the primaries start turning white.

Are you sure about that? According to its manual,

luminance Y is a linear combination of the R, G and B channels. It tends to darken and increase local contrast in the reds, and tends not to behave so well with saturated and out-of-gamut blues.

AFAIK perceived lightness (commonly denoted L^*) is nonlinear in sRGB. There are, of course, linear approximations for various ranges, but perhaps using them is not ideal for a tonemapper that claims to handle HDR.

1 Like

You might think, “with advice like that, Filmic is a trainwreck,” but for many of us who experienced the early days of this mapper, it was an excellent tool. In fact, it still is.

I can achieve similar results with Filmic RGB as with Sigmoid or AgX, with a bit more work. And I understand the assertion that one should use whatever tools are necessary, experimenting until one achieves the desired rendering of one’s photograph.

Ultimately, we all have different tastes and ways of understanding how to capture an image and get the most out of it. In DT, many tools duplicate each other’s functionality, and none of them have been removed. Each user is free to use them or not.

This is my contribution to the discussion.

5 Likes

My interpretation of that quotation is that how you adjust the sliders depends as much on the image data as on personal taste and the effect that you want to achieve. If there were only one right setting, then the whole process could be completely automated.

3 Likes

In the CIE 1931 model, Y is the luminance, Z is quasi-equal to blue (of CIE RGB), and X is a mix of the three CIE RGB curves chosen to be nonnegative.
CIE 1931 color space - Wikipedia

Luminance is a photometric measure of the luminous intensity per unit area
(Luminance - Wikipedia)

Photometry is a branch of optics that deals with measuring light in terms of its perceived brightness to the human eye. […] Photometry is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power.
(Photometry (optics) - Wikipedia)

However, it’s still a linear quantity (twice the RGB stimulus, twice the luminance), so it does not model the nonlinearity of perceived lightness. It does weight the R, G and B channels according to their relative contribution to perceived brightness, though.

Out-of-gamut values need to be tamed before we can perform reliable computations. No tone mapper in darktable handles such colours very well (though filmic is perhaps the most sensitive to them). For an extreme example, see module proposal: gamut compression - #22 by kofa and the follow-up from @hanatos pointing out much of the input (data in the image) is out of the spectral gamut, having been corrupted by the camera input profile.

1 Like

But beginners don’t. Beginners see the package with the default workflow: scene-referred and the latest(?) tone mapper (AgX from 5.4?).

We have no clue about the rest. Until we begin to dig. And discover this forum :slight_smile:

Well, if you apply an S-curve to RGB data on a per-channel basis, what happens to the ratios between red, green, and blue? And what would be the effect on the resulting colour?

It was also the first tone mapper for a scene-referred workflow. However unpleasant and arrogant Aurélien Pierre is, he was a big force behind the scene-referred workflow.

5 Likes

If this were to be done, I suggest that base curve, and in particular Base Curve Fusion, needs a quick mention.

The beauty of Darktable is choice and we are not all forced down one pathway. Before AgX I discovered Sigmoid and loved the results I got straight out of the box, but there were some images that Filmic seemed to master better. For now AgX rules my workflow and feels to me like filmic gone right, but I am told it works very differently to filmic.

Darktable is a fun and creative program for each to master in their own way. Let’s enjoy. BTW, I appreciate the contributions AP made to Darktable when he was involved and I appreciate the contributions the current developers are adding to the program. It evolves like I evolve and my processing workflow evolves.

6 Likes

I get the feeling the focus of the topic is shifting. Let’s not talk about the qualities of filmic and its author (and about sigmoid and Agx), but rather about improving the docs, if that’s needed.

7 Likes

Good point. Sorry for derailing the discussion, filmic rgb is as it is and it would be best to just document it.

The reverse-engineering of formulas done by @alpinist should enable the inclusion of these in the docs. Not because users necessarily go by the math, but for those who understand the math, the discrepancy can be confusing.

The demo image linked above by @kofa, used in the discussion of the other tone mappers, would be useful to demonstrate various chroma preservation methods. I think that showing v7, and then maybe the v6 options, would be sufficient.

I often fell back to using V5 with no chrominance preservation to get results that pleased me. I believe another user on this forum suggested it, and when I tried it I was pleased.

I wonder if the notorious six-colour chart would be useful in the documentation? I found it helpful in other posts for understanding the challenges faced by tone mappers.

I suspect filmic will be a challenge to explain especially with the number of versions presented to the user. When I go to the user guide I am normally looking for an understanding of what the sliders do and how to use them best.

1 Like

If you revisit the link that I shared above, it walks through filmic pretty well. The issue may be that it was originally written in 2018: it clearly has some updated sections and some that may not be. The screenshots of the main module are of the old one, but I would assume most of the “conceptual” information is unchanged and more or less accurate. There is also a good explanation of saturation and latitude for the versions that used them, but again, v7 handles saturation differently. If this were cleaned up to confirm that all information is in its historical context, I think it would be a good format for a reference document. For example, he does explain ranges, values and settings for the various parameters (but for which version?).

It has been some time since I went through the current filmic documentation. Maybe there is room to “contextualize” it somewhat from the content of this document, and also to clean up any inaccuracies from intermingled version notes.

I do agree that one of the big problems with filmic was those color norms. I think they were confusing and hard to apply in a predictable way, so on that count v7 made attempts to simplify and clean that up. With so many versions and variations, one way to improve things might be to keep a reference document for previous versions, rather than confusing things by trying to cover all that material while explaining the current module in the manual at the same time. If I recall correctly, there is some context for this in the manual, but maybe it’s more noise than helpful. I really don’t know.

1 Like

Thanks for your reply Todd. I agree with what you are saying. From my DT teaching perspective I have given up teaching filmic. I find sigmoid gives nice results out of the box for people who want easy results. AgX has filled the space for more ‘advanced’ users who want to tweak stuff. I am not sure how many new users would go down the filmic pathway and the old users who like filmic have got it worked out. But some tweaks to the documentation could be worthwhile.

2 Likes

To be honest: I cannot understand some of the negative feedback on the Filmic RGB module. I find it (especially V7) quite intuitive to use: set exposure to some decent value, then adapt white and black exposure, switch tab, adjust contrast and highlight desaturation. Done. Do I understand the mechanics behind it: no. Do I really understand the mechanics behind any of the modules: no. So, why bother with the mechanics if you get good results nonetheless? I am sure there is no solid explanation of what LR does when it comes to tone mapping.

15 Likes

Fully agree with you! I’m also a well-trained and enthusiastic filmic RGB user of many years, and of course I have followed the AgX topic since the first post. To be honest, in the end AgX is something to learn, as is filmic RGB. And I still struggle with both modules on certain pictures.

Each tool needs to be mastered (no matter what’s the math behind)

And … nothing can replace testing, practice and Boris’ videos :wink:

2 Likes

I think no one is criticising the way filmic works, or the results that can be achieved; the discussion was initially about documentation and how some of the graphs may be drawn wrong on the UI (but that does not influence the processing).

5 Likes

Yup. Put basecurve at the end of the pipeline and it resolves most of the complaints against it, other than the potential for hue twisting due to per-channel lookups without a hue correction afterwards like what RT’s “film-like” curve mode or sigmoid do.

Sadly, fusion in DT’s basecurve module is broken right now due to a combination of issues:

  1. Per the original exposure fusion paper, the actual fusion calculations need to be in a perceptually uniform space. Enfuse uses LAB - yeah there are better perceptually uniform spaces nowadays, but darktable uses linear RGB which does NOT work well
  2. The default weighting algorithm in enfuse (a gaussian bell centered at 0.5) is tailored to sRGB. Using that weighting algorithm on linear RGB data leads to poor results most of the time.

Ideally an exposure fusion implementation would support a “split pipeline” - generate the multiple synthetic exposures using exposure shift, and apply any of the tonemapper curves to those exposures before fusing them.

3 Likes

(In passing)

I think this is because log(0) = -∞, and yet the code has to deal with black, so the plain logarithm is perhaps not the best choice for this function. The statisticians over at R use a pseudo-logarithm to get around this, which behaves linearly near x = 0.

p(x) = \log\left(\frac{x}{2} + \sqrt{\left(\frac{x}{2}\right)^2 + 1}\right)

Which is equivalent to

p(x) = \operatorname{arsinh}\left(\frac{x}{2}\right)

In C, this might be:

asinhf(x / 2.0f)

But I have not checked this.