I am a heavy user of the color zones module of darktable.
In the current implementation it sits in the display-referred part of the pipeline, after the filmic/base curve/sigmoid. I have followed with interest the transition of darktable towards a solid scene-referred workflow.
My question is: where is the optimal position for it in the pipe? Which color space is optimal?
Is it already in the “best” spot?
Today I stumbled upon a very interesting blog post from Björn Ottosson that discusses blending colors in sRGB/linear/perceptual color spaces [How software gets color wrong].
Have a look in particular at the color blending comparison section:
[figure from How software gets color wrong]
My naive conclusion from this post: “sRGB” is confirmed as a bad choice for smoothing/color blending, “linear” behaves much better and in a physically accurate way, and a perceptual space seems more predictable. Look at the transitions involving blue shades.
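To put rough numbers on that, here is a small Python sketch (mine, not from the blog post) that blends blue and white 50/50, once naively on the gamma-encoded sRGB values and once on linear light, using the standard sRGB transfer functions:

```python
def srgb_to_linear(c):
    # Official sRGB EOTF (per channel, input in 0..1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse transform, back to the encoded values a file would store
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

blue, white = (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)

# Naive blend: average the encoded values channel by channel
naive = tuple((a + b) / 2 for a, b in zip(blue, white))

# Physical blend: decode, average the photon-flux-like linear values, re-encode
physical = tuple(
    linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
    for a, b in zip(blue, white)
)

print(naive)     # (0.5, 0.5, 1.0)        -> noticeably too dark a midpoint
print(physical)  # (~0.735, ~0.735, 1.0)  -> the physically plausible mix
```

The naive midpoint is darker than any physical mixture of the two lights could be, which is exactly the artifact visible in the blog’s blue gradients.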
Also, what do you think about using color zones?
Any comment is much appreciated!
Yes, JzAzBz should be good for HDR, but I’m unsure what you mean by “it fits a scene-referred workflow”.
JzAzBz is designed to be a perceptually uniform colorspace. A scene-referred colorspace will not be perceptually uniform.
The blog post is interesting, especially in the choice of OSA-UCS for the perceptual colorspace, as that doesn’t have a closed-form transformation to CIEXYZ, so using it for image editing would be messy and slow.
Perceptual colorspaces are fun, because there are so many to choose from.
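For what it’s worth, Ottosson’s own Oklab is one perceptual space that, unlike OSA-UCS, does have a closed-form transform from linear sRGB. A Python sketch with the matrices from his published post (worth verifying the coefficients before relying on them):

```python
import numpy as np

M1 = np.array([  # linear sRGB -> approximate cone (LMS) responses
    [0.4122214708, 0.5363325363, 0.0514459929],
    [0.2119034982, 0.6806995451, 0.1073969566],
    [0.0883024619, 0.2817188376, 0.6299787005],
])
M2 = np.array([  # nonlinear LMS -> lightness and two opponent axes
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
])

def linear_srgb_to_oklab(rgb):
    lms = M1 @ np.asarray(rgb)
    return M2 @ np.cbrt(lms)  # cube root is the perceptual nonlinearity

print(linear_srgb_to_oklab([1.0, 1.0, 1.0]))  # ~[1, 0, 0]: white, L = 1
```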
The blog doesn’t address the question of what primaries should be used for colorspaces using RGB models. There have been discussions on that issue in this forum.
Indeed, I am also unsure of my own wording. In my limited understanding, one of the requirements of a scene-referred workflow is dealing with the high dynamic range of the scene.
Then we should make changes that are believable in a physical sense; i.e. we are dealing with amounts of photons, so we should work in a space that mimics the behavior of mixing real photon fluxes.
So I guess working in a linear scene-referred color space implies being in a non-perceptually uniform space.
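As a quick sanity check on that guess, doubling a linear (photon-count-like) luminance does not double perceived lightness. A sketch using the standard CIE L* formula:

```python
def cie_lightness(Y):
    # CIE L*: relative luminance Y (0..1) -> lightness (0..100),
    # with the standard linear toe for very dark values
    return 116 * Y ** (1 / 3) - 16 if Y > (6 / 29) ** 3 else (29 / 3) ** 3 * Y

print(cie_lightness(0.18))  # ~49.5 (mid grey)
print(cie_lightness(0.36))  # ~66.5 (twice the photons, only ~34% more lightness)
```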
In general I see photo editing divided into two steps:
1. fixing the raw light data so that they are represented on our screens in a believable way (best done in a linear scene-referred workflow), or in an alternate reality that is still physically believable;
2. artistic manipulation of colors in order to achieve a look, matching analog film colors for example.
The color zones module fits the second step.
This second step is more convenient in a perceptual color space.
And the optimal place to do so is after converting the linear data to non-linear (the scene- to display-referred transform).
Did I get it right?
Indeed, I also think it is very convenient for fine-tuning colors, especially foliage and skies. And it is pretty intuitive to play around with.
But in my limited experience I have a hard time figuring out how this tool compares to state-of-the-art alternatives in other pieces of software (and recently published tools).
Generally, I try to keep the curves as smooth as possible, because I remember getting weird outputs when fancy slopes are applied. This might be more a misuse of the tool than an issue with the color space or the technical implementation of the module.
For example, I am quite put off by these examples in the manual. I wouldn’t do this to my images.
Is this also your view?
Also, I don’t know how color-accurate the GUI of the module is. But I see hue shifts with saturation, for example.
Anyway, I like that others appreciate the module as much as I do.
So I guess working in a linear scene-referred color space implies being in a non-perceptually uniform space.
Yes.
In general I see photo editing divided into two steps:
As a simplification, that’s fair enough. Where would you put denoising? And sharpening?
My editing involves four classes of colorspace (roughly sketched in code after the list):
1. Camera RGB, before and after demosaicing. This is a linear RGB, possibly plus an offset. I denoise before demosaicing, although the degree (if any) of denoising is an aesthetic decision.
2. Scene-referred linear RGB, which is additive (so there is no offset), using some standard primaries.
3. Perceptual (lightness, chroma, hue). Some people denoise here.
4. Output-referred, either additive RGB or subtractive CMYK.
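A rough Python sketch of that chain, just to make the stages concrete. The camera matrix is a made-up placeholder (a real one comes from profiling a specific camera), and I use XYZ and CIELAB as stand-ins for “some standard primaries” and “perceptual”:

```python
import numpy as np

CAM_TO_XYZ = np.array([  # hypothetical camera RGB -> CIE XYZ matrix
    [0.60, 0.25, 0.10],
    [0.25, 0.70, 0.05],
    [0.05, 0.10, 0.90],
])

def camera_to_scene_linear(cam_rgb, black_offset=0.0):
    """Stage 1 -> 2: subtract the offset, then matrix into a standard space."""
    return CAM_TO_XYZ @ (np.asarray(cam_rgb) - black_offset)

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.0890)):
    """Stage 2 -> 3: CIE L*a*b* as one example of a perceptual space."""
    def f(t):
        return np.cbrt(t) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Stage 3 -> 4 (output-referred) would apply the display transform at the end,
# e.g. an XYZ -> sRGB matrix plus the sRGB encoding, or a CMYK separation.

xyz = camera_to_scene_linear([0.5, 0.4, 0.3], black_offset=0.02)
print(xyz_to_lab(xyz))
```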
I don’t have enough experience editing with different perceptual colorspaces to say which is the best for my own editing, let alone anyone else’s.