A few days ago I was experimenting with masking and local color calibration. I had an image with severely mixed lighting that I was trying to correct. And for reference, I tried to replicate a similar workflow in Capture One.
And then I noticed something odd: if I applied Capture One’s white balance in a layer, it would not stack on top of my existing edits but replace them. Stranger still, I could create a layer that fully desaturates the image, and then recover that saturation in a later layer. This should be logically impossible!
Some more experimentation led me to the following conclusion: Capture One does not implement layers. It implements local adjustments. Here’s how it works:
Capture One only has a single image pipeline, but it can vary the editing parameters across different parts of the image. So if I create one layer that desaturates the image entirely, with Saturation −100, and another layer that oversaturates it with Saturation +100, the net result is Saturation = −100 + 100 = 0, i.e. the original image.
This is of course not at all how Darktable works. Here the output of one module becomes the input of the next. If I desaturate something in Darktable, the color is irretrievably gone.
And that handily explains how Capture One can be much faster than Darktable, even with many “Layers”: there is simply one well-optimized pixel pipeline whose structure never varies; only its parameters do. (It also means that “Layers” is clearly a misnomer, and should really be called “Local Adjustments”.)
I am still trying to figure out what this means in practice. I know from experience that multiple color calibrations have allowed me to work in conditions where I struggled with Capture One. But that might have had more to do with Darktable’s better masking tools and more powerful color calibration.
Can you think of an instance where one approach or the other might be clearly preferable? Do you know how other image editing software handles local adjustments/layers?