New Sigmoid Scene to Display mapping

@jandren how do you get white with sigmoid? The synthetic +3 EV is still tinted.

I think that’s the point: it never completely desaturates a non-white color to full white. (In that test example I don’t think there are any unsaturated input pixels. If you want white output, you need white input.)

A few observations on the synthetic comparisons:

  1. The blue-green side of the triangles looks rather less saturated than the other two sides, for sigmoid +2 and +3.
  2. Filmic +2 seems a bit of an outlier, given that the interior of the triangle is more saturated, whereas in most of the others the interior is less saturated.
  3. Filmic +1 stands out with its desaturated apex; this is also starting to happen in Filmic 0.

Replying to each point in turn:

  1. It has more contrast against the light background.
  2. Not just +2 but also +1.
  3. Yes, this was the first thing I noticed. Not pretty.

On the input, the interior is less saturated. But it appears that filmic clips saturated channels much more quickly, and once a channel clips, there’s inherent desaturation. The extreme occurs at +2 and +3, where the most saturated inputs are now clamped to pure white.
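To illustrate with a toy example, using a hard clip as a crude stand-in for filmic's per-channel behaviour (purely illustrative, not darktable code):

saturated = [2.0, 0.1, 0.1]              # scene-referred, very saturated red
print([min(c, 1.0) for c in saturated])  # [1.0, 0.1, 0.1] -> max/min ratio fell from 20 to 10
plus3 = [c * 2 ** 3 for c in saturated]  # same pixel pushed +3 EV
print([min(c, 1.0) for c in plus3])      # [1.0, 0.8, 0.8] -> nearly white

Once the largest channel hits the ceiling, further exposure only lifts the remaining channels, so the colour is forced towards white.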

@jandren Bright Hammock Colors :wink:


The filmic desaturation of bright colours has always been discussed as a feature. I sometimes use it in RT with desaturated colour toning and a bell-shaped luminance mask. It gives a bit of an analogue film look.

I believe filmic has a desaturation curve, which could be added to the sigmoid module if there is demand for it, or, better yet, implemented separately elsewhere.

Filmic desaturates as you push it towards white. My understanding is that this is physically accurate.

It does! You just have to push more! Remember that there isn’t really a concept of white in scene-referred space. We know what black is, and we can then normalize around a standardized reflectance level such as “middle grey”. But we can have pixels that have color and are 100× brighter than middle grey, and that is just fine!

Here is the continued story as you keep pushing the exposure up, from +4 to +10!
Note that some dithering could help here; the perfectly smooth transition apparently doesn’t work all that well with 8-bit images.

[images: the same synthetic comparison rendered from +4 EV to +10 EV]
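To make the convergence concrete, here is a minimal per-channel sketch in Python. This is a generic Naka-Rushton-style curve, not the actual sigmoid module code, so treat the exact shape as an assumption:

def sigmoid(x, contrast=1.5):
    xp = x ** contrast       # scene-referred linear in
    return xp / (xp + 1.0)   # display-referred [0, 1) out

pixel = [1.0, 0.25, 0.1]     # a saturated scene-referred colour
for ev in range(0, 11, 2):
    pushed = [round(sigmoid(c * 2 ** ev), 3) for c in pixel]
    print(f"+{ev} EV -> {pushed}")

Every channel approaches 1.0 asymptotically, so each added stop of exposure pushes the pixel closer to white without ever reaching it exactly.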
Right and wrong: it converges to white for all colors except the edge case of colors placed directly on the gamut border (usually clipped pixels of some form). You can easily fix this either with gamut compression in color calibration or by just adding 5% or 10% desaturation to “highlights”.
I would like to explore an even more robust solution to this problem: using wider primaries for the per-channel processing, decoupling it from both the work profile and the output profile. But I won’t start that work until I know there is actual interest in merging the work I have so far.

Things like modernizing highlight reconstruction to actually extrapolate brightness beyond what the sensor could capture make a lot more sense in this context.

The desaturation method used in filmic isn’t based on some physical model derived from first principles. I would rather call it an approximation of what we see happen with analog film. It’s an ad hoc solution to the fact that rgb-ratio-based display transforms always preserve the emitted spectrum, even for stare-into-the-sun bright light (when you expose for a backlit face).

This is in contrast to rgb ratio: there is no need to add that kind of desaturation to a per-channel-based display transform, as the desaturation naturally emerges from the dynamics of the mapping.
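A side-by-side sketch of the two strategies on one bright, saturated pixel (toy curve again, hypothetical values):

def tone(x):
    return x / (x + 1.0)

pixel = [8.0, 0.4, 0.2]                 # channel ratios 40:2:1

per_channel = [tone(c) for c in pixel]  # -> [0.889, 0.286, 0.167]

norm = max(pixel)                       # rgb ratio: one scale factor
scale = tone(norm) / norm               # derived from a norm
rgb_ratio = [c * scale for c in pixel]  # -> [0.889, 0.044, 0.022]

print(per_channel, rgb_ratio)

The rgb-ratio result keeps the original 40:2:1 ratios, i.e. full saturation, no matter how bright the input gets; the per-channel result has already drifted towards white.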

That is because the rec709 gamut isn’t symmetrically placed inside the rec2020 gamut. The distance to the edge of the work-profile gamut is relatively larger for green than for blue, which makes green desaturate a bit earlier. This is not really visible in actual real images, but it is another good argument for decoupling the per-channel processing primaries from the work-profile primaries.
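You can see the asymmetry with the BT.709 → BT.2020 conversion matrix (coefficients as published in ITU-R BT.2087, quoted here from memory, so double-check before relying on them):

M = [[0.6274, 0.3293, 0.0433],
     [0.0691, 0.9195, 0.0114],
     [0.0164, 0.0880, 0.8956]]

def to_2020(rgb709):
    return [sum(M[r][c] * rgb709[c] for c in range(3)) for r in range(3)]

print(to_2020([0.0, 1.0, 0.0]))  # rec709 green -> [0.3293, 0.9195, 0.0880]
print(to_2020([0.0, 0.0, 1.0]))  # rec709 blue  -> [0.0433, 0.0114, 0.8956]

Pure rec709 green lands with its smallest channel around 0.088, while blue lands around 0.011, so green sits roughly eight times further inside the rec2020 gamut and the per-channel curves pull it towards white sooner.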

Yes, and it seems really hard to get around this problem for rgb ratio. This is a case where the choice of norm has an effect: you will get slightly different results depending on the norm you use.

Yeah, that one is interesting: color balance rgb pushes blue to 100% chroma (clamped to the gamut boundary), and higher-saturation colors desaturate earlier. Not an easy thing to fix, even though it feels like a bug.



Based on your other comments (nothing will ever fully clip, it will just be compressed more and more), I think it will converge towards white but never actually hit white without a rounding/quantization error, which gets right to your comment about 8-bit output. You only see pure white when you’re so close to it that the final output transform quantizes all channels to 255.

I expect you’ll only see a true grey output when the input is, itself, true grey, or so close that it gets quantized to true grey during the final output transform to 8-bit sRGB after some desaturation.
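For a rough sense of where that quantization kicks in, assuming a standard sRGB output transform:

def srgb_decode(encoded):
    # inverse of the standard sRGB transfer function
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(srgb_decode(254.5 / 255))  # ~0.9955: linear values above this round to 255

So any display-linear value within about half a percent of 1.0 comes out as pure white in an 8-bit file, even though the mapping itself never clips.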

Might be interesting to see how it handles something like an HDR input captured via bracketing plus HDRMerge’s combining. A torture test would probably be something like the lighting at my former favorite concert venue; the owner LOVED his RGB LED lighting driven to highly saturated colors. (But that would be really hard to shoot bracketed, because he also had the lighting reacting rapidly to the music.) Sadly, no more opportunities there since the venue closed in 2019 and the owner died of CJD in 2021. :frowning:

What’s the default norm these days? People referenced max(R,G,B), which was the only option a long time ago, but the behavior here looks more like what would happen if the norm were luminance. As I mentioned before, one of the stated reasons to use max() instead of luminance is to avoid taking an input channel and driving it above the output clip point, which seems to be what’s happening here. Unsurprisingly, blue is weighted least when calculating luminance.


@jandren I know you weren’t responding to me specifically. The desaturation of filmic is too arbitrary for my taste, which I noted when it became a feature. @Entropy512 The same is true of norms: “choose the one that best fits the edit” is not good enough.

Still, without getting into endless debates, I would say they are functional, and that is all that matters to most people and to the folks who are comfortable advocating for them.


I’ve personally never liked the results when using norms as implemented in darktable; @jandren’s comparison does a great job of illustrating why. Back when I used darktable, I almost always disabled the norm-based color preservation and accepted the risk of a bad hue twist, because that negative was not nearly as bad as the negatives that came out of any of the norms. I suspect that in a comparison you’ll see that almost any norm-based approach (as opposed to per-channel with a hue correction performed afterwards) is likely to fail in some cases, with tradeoffs between the various failure modes (hue/saturation-dependent clipping, luminance shifts with hue, etc.). Jandren’s approach (which I believe is similar to Adobe’s approach as used in RT, but with a few additional constraints that he has discussed) is going to behave more predictably and consistently.


To get Jandren’s sigmoid branch, I do this (if my memory is correct):

$ git remote add jandren https://github.com/jandren/darktable.git
$ git fetch jandren
$ git checkout -b sigmoid_tone_mapping --track jandren/sigmoid_tone_mapping

Then you can build as usual, and even rebase against the current master.


@phweyland, as you noted on GitHub, it looks like rebasing will stop the crashes when importing raws. If you could expand the above to include the rebasing commands, that would be great.

@RawConvert If you have already added the upstream remote:

$ git fetch upstream
$ git rebase upstream/master

You are absolutely correct this time: it converges towards white and never clips. But, as you noted, there will be a point where it is white as in (255, 255, 255) because of 8-bit unsigned integers. I hope the continued story up to +10 EV makes it clear to most users what to expect.

I think the Shanghai HDRI in post 357 is a pretty good torture test. Happy to try others as well if you find any nice ones!

The default norm is the “power norm”, which behaves like a mix between “euclidean” (around achromatic) and “max” (when approaching “single primary colors”, i.e. red, green, or blue). “Luminance” currently works really poorly in filmic because blue is greatly undervalued, which causes gamut-mapping problems.
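For reference, rough sketches of those norms; the power norm follows my reading of darktable’s filmicrgb.c, so treat the exact formula as an assumption (luminance uses the rec2020 Y coefficients):

def norm_max(r, g, b):
    return max(r, g, b)

def norm_luminance(r, g, b):
    return 0.2627 * r + 0.6780 * g + 0.0593 * b  # blue weighted least

def norm_euclidean(r, g, b):
    return (r * r + g * g + b * b) ** 0.5

def norm_power(r, g, b):
    # ratio of cubes to squares: acts euclidean-ish near grey,
    # approaches the max channel near a single primary
    return (r ** 3 + g ** 3 + b ** 3) / max(r * r + g * g + b * b, 1e-12)

For a pure primary like (1, 0, 0) the power norm returns 1.0, matching max(); for grey (x, x, x) it returns x, like max() and luminance do.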

It started out with the method present in RT but has kinda developed into its own thing now, with control over the amount of correction as well as taking the “energy” of the output into account. Do you happen to know anything more about the Adobe reference? There isn’t a paper or anything similar linked in the code.



Thanks. So would the whole sequence be this, please?

$ git remote add jandren https://github.com/jandren/darktable.git
$ git remote add upstream https://github.com/darktable-org/darktable.git
$ git fetch jandren
$ git fetch upstream
$ git rebase upstream/master
$ git checkout -b sigmoid_tone_mapping --track jandren/sigmoid_tone_mapping

Please note, I don’t want to store code from one build to the next, and I don’t want any ongoing complexities. That is, in one build session, starting from scratch, I just want to download everything I need, build, and then do some housekeeping, e.g. delete the code.

This won’t work unless you invert the last 2 commands.

Moreover, this is not at all a complete build session. It should work, however, if you already have a darktable build chain running.
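For reference, the tail of the sequence with the last two commands inverted, since the branch has to be checked out before it can be rebased onto upstream/master:

$ git checkout -b sigmoid_tone_mapping --track jandren/sigmoid_tone_mapping
$ git rebase upstream/master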

That’s a good question - multiple documents reference it being Adobe-like, but it’s not actually in the DNG spec.

Might it have been reverse-engineered by observing behavior with controlled inputs? I get the impression Anders Torger did a lot of that while figuring out some of the low-level details needed for dcamprof. He’s got a bit of a rant on Adobe’s vagueness with DNG: Yet some DNG comments (from a raw software developer)
