Your rather different order of operations, which appears to have some nonlinear shifts, is likely to behave very differently with respect to the contrast weight. At least currently, if the exposure shift is just a linear multiplication AND there's no clipping, the contrast weight shouldn't be of any benefit - in fact it just becomes a linear function of the exposure shift multiplier.
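To illustrate why (a minimal sketch of the usual Mertens-style contrast weight, i.e. the absolute Laplacian of the pushed image - not darktable's actual code):

```c
#include <math.h>
#include <stddef.h>

// Contrast weight at pixel (x, y) of a single-channel image of width w:
// the magnitude of a 3x3 Laplacian.
static float contrast_weight(const float *gray, size_t x, size_t y, size_t w)
{
  const float lap = gray[(y - 1) * w + x] + gray[(y + 1) * w + x]
                  + gray[y * w + (x - 1)] + gray[y * w + (x + 1)]
                  - 4.0f * gray[y * w + x];
  return fabsf(lap);
}
// The Laplacian is linear, so for an unclipped exposure push gray' = k * gray
// (k > 0) we get contrast_weight(gray') == k * contrast_weight(gray):
// the weight carries no information beyond the multiplier itself.
```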
If you turn on any of the other options in basecurve, things could become VERY different. I'm also wondering why the decision was made to disable color preservation when exposure fusion is used - I suspect that operating with it disabled is what makes Pierre hate basecurve so much.
The actual gamma exponent of sRGB is 2.4 outside of the linear region; the linear segment near black (plus the scale/offset around the power segment) is what makes the overall curve approximate the oft-quoted 2.2. The working color space is the default (apparently linear Rec. 2020 now?) - changing this could break VERY badly at the moment. That's part of the whole “this is a WIP” thing - the appropriate approach may be to convert from the working space to sRGB after the basecurve is applied, and convert back at the end.
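For reference, the sRGB transfer function as defined in IEC 61966-2-1 (a quick C sketch for illustration, not code from darktable):

```c
#include <math.h>

// sRGB OETF (linear -> encoded): a short linear toe near black, then a
// power segment with exponent 1/2.4.  The 1.055/0.055 scale and offset are
// what make the composite curve approximate a plain 2.2 gamma.
static float srgb_encode(float linear)
{
  if(linear <= 0.0031308f) return 12.92f * linear;
  return 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
}

// Inverse (encoded -> linear).
static float srgb_decode(float encoded)
{
  if(encoded <= 0.04045f) return encoded / 12.92f;
  return powf((encoded + 0.055f) / 1.055f, 2.4f);
}
```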
It's a conditional inside the basecurve_compute_features() OCL kernel (and its equivalent CPU function in basecurve.c). Since we're generating the pushed exposures internally, we don't have to worry about clipping with this flow.
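In other words, something along these lines (a rough sketch of the idea only - the helper name and layout here are hypothetical, not darktable's actual code):

```c
#include <math.h>
#include <stddef.h>

// Push exposure by ev_shift stops on scene-referred linear float data.
// Because the data stays unbounded float, no clamp is applied, so nothing
// clips before the fusion weights are computed.
static void push_exposure(const float *in, float *out, size_t n, float ev_shift)
{
  const float mul = exp2f(ev_shift);
  for(size_t i = 0; i < n; i++) out[i] = in[i] * mul;
}
```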
My laptop screen isn't that much better - but for many years we're going to have to cater to the lowest common denominator of unmanaged SDR displays. There's absolutely no decent widely-deployed standard for delivering stills to HDR displays. HEIF/HEIC might do it, but support for that is very limited, especially implementations that can actually drive an HDR display. Right now, if I want to output to an HDR display (such as my Vizio P65-F1), I have to do the following:
Export from darktable as Rec. 2020 linear TIFF
Use ffmpeg with the zscale filter to convert it to Rec. 2020 HLG, 10-bit HEVC (a rough sketch of that command is below)
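The conversion step looks roughly like this (a sketch, not my exact command - the filenames are placeholders, -loop 1/-t 10 just turn the single still into a 10-second clip, and the zscale/libx265 options may need adjusting for your build):

```
ffmpeg -loop 1 -i export_rec2020_linear.tif -t 10 \
  -vf "zscale=transferin=linear:primariesin=2020:transfer=arib-std-b67:primaries=2020:matrix=2020_ncl,format=yuv420p10le" \
  -c:v libx265 -color_primaries bt2020 -color_trc arib-std-b67 -colorspace bt2020nc \
  slideshow_hlg.mkv
```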
Doing this looks AMAZING. But it's a massive PITA for anyone to view unless you encode a bunch of images into a video slideshow.