That’s just the thing. Lightroom has exactly three sliders that can rescue color from white highlights: Exposure, Whites, and Highlights. Any other tool, like the tone curve or contrast, can only make white pixels gray; it cannot recover color.
That’s precisely what is different in darktable. You can use any number of modules to recover color, as the color information remains available all through the pipeline, and highlights are only “whitened” at the very end.
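A tiny sketch in plain Python (made-up pixel values and simplified clip/exposure helpers, not darktable code) of why the order matters — pulling exposure down after clipping can’t bring the hue back, pulling it down before clipping can:

```python
# Hypothetical scene-referred pixel: red and green are above "white" (1.0).
pixel = (1.6, 1.2, 0.5)

def clip(p):
    """Clamp every channel to display white."""
    return tuple(min(c, 1.0) for c in p)

def exposure(p, ev):
    """Exposure in linear light is just a multiplication (+1 EV = x2)."""
    return tuple(c * 2 ** ev for c in p)

# Display-referred style: clip to white first, then pull exposure down by 1 EV.
clipped_first = exposure(clip(pixel), -1)   # ~ (0.5, 0.5, 0.25) -- R == G, hue is gone

# Scene-referred style: keep values > 1.0 through the pipe, clip only at the end.
clipped_last = clip(exposure(pixel, -1))    # ~ (0.8, 0.6, 0.25) -- channel ratios survive
```

Once a channel has been clamped, the ratio between channels — which is what carries the hue — is destroyed, and no later module can reconstruct it.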
A big point is then where in the processing you (have to) do that correction. IIRC, in Lightroom you determine the order of the layers, and that order is important.
In dt the order is important as well, but it’s (in principle) fixed. As values cannot clip, that’s not a problem, and it avoids inconsistencies.
But in the end, I feel that the handling of exposure and highlights is mostly just more convenient in linear space. It is doable in display-referred space (like all editing, after all; it has been done with good results for years).
More important, and much more subtle, is what happens with colours. This article about Lab vs. linear RGB could help. Don’t look too closely at the specific modules; the interesting part is at the beginning, where he shows the difference between editing in linear (scene-referred) and perceptual (~display-referred) spaces.
Especially important (imo) is what happens when you use masking and blending. Blending between two colours in sRGB space by blurring passes through gray; not so in linear space. And blurring an image is an important trick in editing (USM sharpening, tone equaliser, …). But you can run into problems wherever you mix colours.
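To see the “passes through gray” effect concretely, here is a small sketch (plain Python; the standard sRGB transfer curve; the 50/50 average stands in for what a blur kernel does at the boundary between two colours):

```python
# Standard sRGB piecewise transfer functions (IEC 61966-2-1).
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# Naive 50/50 blend of the *encoded* values, as a blur in display space would do:
naive = tuple((a + b) / 2 for a, b in zip(red, green))
# -> (0.5, 0.5, 0.0): a dull, dark olive between the two patches

# Blend in linear light, then re-encode for display:
lin = tuple((srgb_to_linear(a) + srgb_to_linear(b)) / 2 for a, b in zip(red, green))
physical = tuple(linear_to_srgb(c) for c in lin)
# -> roughly (0.735, 0.735, 0.0): a much brighter transition, as real mixed light would be
```

The encoded-space blend under-represents the light energy in the mix, which is exactly the dark “gray” dip you see in sRGB gradients.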
Color balance RGB is one of the last modules in the pipe before filmic. The only modules after it are “rgb curve”, “rgb levels” and “base curve”, all of which also do non-linear tone mapping. The later in the pipe you can do such operations, the better.
The linear part of “scene-referred” is not in the actions you perform on the data, but in the relation between pixel values and light energy. That’s what allows modelling of physical processes (like lens blur, but also colour filters).
Of course, e.g. tone equaliser is a non-linear operation. But it still maintains a linear relation with the light energy as you would observe it in the scene. The same goes for color balance RGB.
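A toy sketch of that idea (plain Python; the gain curve is invented for illustration, not the actual tone equaliser): the gain is a non-linear function of luminance, but because it is applied as a single multiplier on linear RGB, the channel ratios — and with them the scene’s light ratios — are preserved:

```python
import math

def luminance(p):
    """Relative luminance with Rec. 709 weights."""
    r, g, b = p
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def tone_gain(lum):
    """Invented shadows-lift curve: up to +1 EV for dark pixels, 0 EV near white."""
    ev = math.log2(max(lum, 1e-6))
    lift = max(0.0, min(1.0, -ev / 4))   # non-linear in luminance
    return 2 ** lift

def apply_tone(p):
    g = tone_gain(luminance(p))
    return tuple(c * g for c in p)       # one multiplier for all three channels

pixel = (0.20, 0.10, 0.05)
out = apply_tone(pixel)
# out is brighter than pixel, but out[0]/out[1] still equals pixel[0]/pixel[1]:
# the operation is non-linear, yet the pixel values stay proportional to light energy.
```

That proportionality is what “scene-referred” buys you: operations like this compose with physically modelled ones (blur, exposure) without shifting hues.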
I have already ported it in a branch on my Git fork. I can create a PR if needed; it seems to work except for some NaNs (see the author’s message), but that may not be an issue in darktable, where we have already fixed a lot of them.
He seems to be active on his project again, making a lot of changes to the pipeline and underpinnings of Ansel. I haven’t really looked at them in any detail, but I wonder if it will soon make the two projects less “interchangeable”?
It is already not interchangeable: for example, RGB Primaries and Sigmoid are not in Ansel and certainly never will be, as he was against Sigmoid (I pushed for its inclusion, and I think that was in good part responsible for the forking).
EDIT: It seems that Lua support is broken in Ansel, along with quite a few other things at the moment.
Yeah, “interchangeable” might not be the best term anyway. But it seems like he is slowly redefining the pixel processing in a way that might have to be accounted for when any code gets ported: his code might account for downstream impacts of a certain module’s output, and dt might not? But really, I didn’t look much past the large list of recent commits, and I’m not a programmer, so it’s just a speculative observation on my end.
Thanks, @Pascal_Obry (sorry, I thanked Hanno first – of course, thanks to you as well, @hannoschwalm) and Aurélien (as the code comes from Ansel, as Pascal has already pointed out).