Again, auto apply the color balance rgb preset “Add basic colorfulness.”
You get a lot of support on these forums and most here think it is worthy, but the only way to see if it gets accepted into darktable master is to make that pull request in git. I wouldn't be too frightened of a 'controversial' label; you've done a lot of work on it since that label first appeared, and demonstrated how it fits into the scene-referred workflow. What more can they ask?
I figured I should try this for completeness!
First, two of the images from the above image set with the preset applied to the rgb ratio versions compared with the preserve hue = 100% version.
That is indeed better, but there is of course still a difference. It's really hard to see exactly how they differ on a photograph like that. I personally really love synthetic tests in cases like this, where a simple sweep can show a lot of what happens! Again, try to imagine that this generalizes to any curve shape and norm in filmic; the difference will mostly be in when and how fast colors approach white. The default contrast setting of 1.6 matches the color balance rgb preset fairly well. The input fills the rec709 gamut and the working space is rec2020.
I hope it is clear to everybody that these two methods do not produce the same results. I don't even want to call them similar when comparing the synthetic tests! I would be so bold as to say that it's impossible to reproduce the look of per-channel mapping (with preserve hue on or off) with a combination of an rgb ratio map and color balance rgb. Scroll up to post 477 and think about how you would have to warp the image data before an rgb ratio mapping operation such that the combined effect is the same as per-channel mapping.
Simply put, do you want the per-channel look? Then use the per-channel method.
One reason for this is that the chroma changes that happen when applying a per-channel mapping are "hyperbolic" and never clip the gamut, which is a very nice property. I think this hyperbolic behavior would be extremely interesting to use for a color editing module, but that is a story for a later date.
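A toy sketch of the "never clips" property, using my own simple x/(1+x) curve as a stand-in (this is not darktable's actual sigmoid, just an illustration of the shape of the behavior):

```python
import numpy as np

def sigmoid(x):
    # Toy stand-in for a per-channel tone curve: maps [0, inf) smoothly into [0, 1).
    return x / (1.0 + x)

# A bright, saturated scene-referred pixel (values may exceed 1.0).
rgb = np.array([8.0, 1.0, 0.2])
out = sigmoid(rgb)
print(out)
# Every output channel stays strictly below 1.0, so the gamut ceiling is never
# hit; large channels are compressed much harder than small ones
# (8.0 -> ~0.889 while 0.2 -> ~0.167), which is where the chroma reduction
# naturally comes from.
```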
Thank you! I think this is a much more even comparison and shows the difference in color rendering as it would be if you were using filmic as recommended by the documentation.
There certainly is room for this if the support is there, I think. Now that we have modules with multiple sections, like color calibration and exposure, I don't see why we could not consider a global tone mapping module with two or more sections: one for filmic, one for sigmoid, and potentially any others that might come along. The one issue could be if other modules take specific input from filmic parameters, or are designed to work assuming filmic was used; then it might not work, but otherwise it should be possible. Or just keep it separate as a tone mapping module? I guess, like ART and RT, you could just have a separate fork and use it as that user group does? Time will tell.
@jandren how do you get white with sigmoid? The synthetic +3ev is still tinted.
I think that’s the point - that it doesn’t ever completely desaturate a nonwhite color to full-white. (In that test example I don’t think there are any unsaturated input pixels. If you want white output you need white input.)
A few observations on the synthetic comparisons…
- The blue-green side of the triangles looks rather less saturated than the other two sides, for sigmoid +2 and +3.
- Filmic +2 seems a bit of an outlier given the interior of the triangle is more saturated; whereas on most others the interior is less saturated.
- Filmic +1 stands out with its desaturated apex. Also this is starting to happen in Filmic 0.
- Has more contrast against the light background.
- Not just +2 but also +1.
- Yes, this was the first thing I noticed. Not pretty.
On the input, the interior is less saturated. But it appears that filmic clips saturated channels much more quickly, and once a channel clips, there's inherent desaturation. The extreme occurs at +2 and +3, where the most saturated inputs have now clamped to pure white.
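The "clipping desaturates" point can be shown with a couple of lines (the saturation measure here is my own crude (max − min)/max, just for illustration):

```python
import numpy as np

def saturation(c):
    # Crude saturation measure: (max - min) / max.
    return (c.max() - c.min()) / c.max()

rgb = np.array([6.0, 1.5, 0.5])       # saturated scene-referred orange
clipped = np.clip(rgb, 0.0, 1.0)      # hard clamp at the display ceiling
print(saturation(rgb), saturation(clipped))
# ~0.92 before clipping, 0.5 after: once two channels clamp to the ceiling,
# the channel ratios collapse and the color desaturates toward white.
```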
The filmic desaturation of bright colours has always been discussed as a feature. I use it sometimes in RT with desaturated colour toning and a bell-shaped luminance mask. It does give a bit of an analogue film look.
I believe filmic has a desaturation curve, which could be added to the sigmoid module if there is a demand for it, or better yet, separately elsewhere.
Filmic desaturates as you push it towards white. My understanding is that this is physically accurate.
It does! You just have to push more! Remember that there isn't really a concept of white in scene-referred space. We know what black is, and we can then normalize around a standardized reflectance level such as "middle grey". But we can have colored pixels that are 100x brighter than middle grey, and that is just fine!
Here is the continued story as you keep pushing the exposure up, from +4 to +10!
Note that some dithering could help here, the perfectly smooth transition doesn’t really work all that well with 8-bit images apparently.
Right and wrong: it converges to white for all colors except for the edge case of colors placed directly on the gamut border (usually clipped pixels in some form). You can easily fix this by either gamut compression in color calibration, or by just adding something like 5% or 10% desaturation to "highlights".
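That gamut-border edge case is easy to see numerically. A sketch with a toy x/(1+x) curve standing in for the per-channel mapping (the two example colors are my own, chosen to sit inside and on the gamut border):

```python
import numpy as np

def sigmoid(x):
    return x / (1.0 + x)  # toy per-channel curve, not darktable's exact one

inside  = np.array([1.0, 0.6, 0.2])   # a color strictly inside the gamut
on_edge = np.array([1.0, 0.0, 0.0])   # a color on the gamut border (e.g. clipped red)

for ev in (0, 4, 10):
    gain = 2.0 ** ev
    print(ev, sigmoid(gain * inside), sigmoid(gain * on_edge))
# All channels of the inside color approach 1.0 (white) as exposure rises,
# while the border color's zero channels stay exactly zero: it never whitens.
```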
I would like to explore an even more robust solution to this problem, which would be to use wider primaries for the per-channel processing, decoupling it from both the work profile and the output profile. But I won't start that work until I know there is actual interest in merging the work I have so far.
Things like modernizing the highlight reconstruction to actually extrapolate brightness higher than the sensor could capture makes a lot more sense in this context.
The desaturation used in filmic isn't based on a physical model derived from first principles. I would rather call it an approximation of what we see happen with analog film. It's an ad hoc solution to the fact that rgb-ratio-based display transforms always preserve the emitted spectrum, even at stare-into-the-sun brightness (when you expose for a backlit face).
There is, in contrast to rgb ratio, no need to add that kind of desaturation for a per-channel-based display transform; the desaturation naturally emerges from the dynamics of the mapping.
That is because the rec709 gamut isn't symmetrically placed inside of the rec2020 gamut. The edge of the work profile gamut is relatively further away from the green primary than from the blue one, making green desaturate a bit earlier. This is not really visible in actual real images, but it is another good argument for decoupling the per-channel processing primaries from the work profile primaries.
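You can see the asymmetry by expressing the Rec.709 primaries in Rec.2020 coordinates. A sketch using the Rec.709-to-Rec.2020 conversion coefficients published in ITU-R BT.2087 (the per-channel interpretation in the comments is my own reading of the discussion above):

```python
import numpy as np

# Rec.709 -> Rec.2020 conversion matrix (ITU-R BT.2087 coefficients).
M = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

green_709 = M @ np.array([0.0, 1.0, 0.0])
blue_709  = M @ np.array([0.0, 0.0, 1.0])
print(green_709.min(), blue_709.min())
# Smallest channel: ~0.088 for green vs ~0.011 for blue. The Rec.709 green
# primary sits noticeably further inside the Rec.2020 gamut, so a per-channel
# curve starts lifting its minimum channel (i.e. desaturating it) earlier
# than it does for blue.
```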
Yes, and it seems really hard to get around this problem for rgb ratio. This is a case where the choice of the norm has an effect. You will get slightly different results depending on the norm you use.
Yeah that one is interesting, color balance rgb pushes blue to 100% chroma (clamped to the gamut boundary) and higher saturation colors desaturate earlier. Not an easy thing to fix even though it feels like a bug…
Based on your other comments (nothing will ever fully clip, it will just become compressed more and more), I think it will converge towards white, but never actually hit white without a rounding/quantization error - which gets right to your comment about 8-bit output. You only see pure white when you’re so close to it that your final output transform gets quantized to 255 for all channels.
I expect you'll only see a true grey output when the input is, itself, true grey, or so close that it gets quantized to true grey during the final output transform to 8-bit sRGB after some desaturation.
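The quantization argument in numbers, again with a toy x/(1+x) curve and a naive 8-bit rounding step (both my own stand-ins, not darktable's actual output transform):

```python
def sigmoid(x):
    return x / (1.0 + x)  # toy tone curve

def to_8bit(v):
    return round(255 * v)  # naive final quantization to 8-bit output

for gain in (10.0, 100.0, 600.0):
    v = sigmoid(gain)
    print(gain, v, to_8bit(v))
# 10  -> 0.909 -> 232
# 100 -> 0.990 -> 252  (close to, but not yet, white)
# 600 -> 0.998 -> 255  (the curve never reaches 1.0, but the 8-bit
#                       quantizer rounds it all the way to pure white)
```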
Might be interesting to see how it handles something like an HDR input captured via bracketing and merged with HDRMerge. A torture test would probably be something like the lighting at my former favorite concert venue; the owner LOVED his RGB LED lighting driven to highly saturated colors. (But that would be really hard to do a bracketed shot at, because he also had the lighting reacting rapidly to the music…) Sadly, no more opportunities there since the venue closed in 2019 and the owner died of CJD in 2021.
What’s the default norm these days? People referenced max(R,G,B) which was the only option a long time ago, but the behavior there looks more like what would happen if the norm were luminance. As I mentioned before, one of the reasons stated to use max() instead of luminance is to avoid taking an input channel and driving it above the output clip point, which seems to be what’s happening here. Unsurprisingly, blue is weighted least when calculating luminance.
@jandren I know you weren't responding to me specifically. To me, the desaturation of filmic is too arbitrary for my taste, which I noted when it became a feature. @Entropy512 The same is true of norms: "choose the one that best fits the edit" is not good enough.
Still, without getting into endless debates, I would say they are functional and that is all that matters to most people and the folks who are comfortable with advocating for them.
I’ve personally never liked the results when using norms as implemented in darktable - @jandren 's comparison does a great job of illustrating why. Back when I used darktable, I almost always disabled the norm-based color preservation and accepted the risk of a bad hue twist, because that negative was not nearly as bad as the negatives that came out of any of the norms. I suspect that in a comparison you’ll see that almost any norm-based approach (as opposed to per-channel with a hue correction performed afterwards) is likely to fail in some cases, with tradeoffs between the various failure modes (hue/saturation dependent clipping, luminance shifts with hue, etc). Jandren’s approach (which I believe is similar to Adobe’s approach used in RT, but with a few additional constraints that he has discussed) is going to behave more predictably/consistently.
To get Jandren's sigmoid branch, I did this (if my memory is correct):
$ git remote add jandren https://github.com/jandren/darktable.git
$ git fetch jandren
$ git checkout -b sigmoid_tone_mapping --track jandren/sigmoid_tone_mapping
Then you can build as usual, and even rebase against the current master.
@phweyland, as you noted on GitHub, it looks like rebasing will stop the crashes on importing raws. If you could expand the above to include the rebasing commands, that would be great.