Why is tone equalizer so early in the pipeline?

I would like to understand why the default puts tone equalizer so early in the pipeline. A comment in the code says

Because it works before camera RGB → XYZ conversion, the exposure
cannot be computed from any human-based perceptual colour model (Y
channel), hence why several RGB norms are provided as estimators of
the pixel energy to compute a luminance map. None of them is
perfect, and I’m still looking forward to a real spectral energy
estimator. The best physically-accurate norm should be the
euclidean norm, but the best looking is often the power norm, which
has no theoretical background. The geometric mean also display
interesting properties as it interprets saturated colours as
low-lights, allowing to lighten and desaturate them in a realistic
way.
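For reference, the three norms that comment mentions can be sketched roughly like this. This is my own simplified illustration, not darktable's actual code, and the power-norm formula in particular is just one plausible definition:

```python
import numpy as np

def euclidean_norm(rgb):
    # sqrt((R^2 + G^2 + B^2) / 3), scaled so grey (R = G = B) maps to itself
    return np.sqrt(np.sum(rgb**2, axis=-1) / 3.0)

def power_norm(rgb, p=4.0):
    # Hypothetical form: weights each channel by its own magnitude, so the
    # dominant channel pulls the estimate toward itself ("best looking",
    # but with no theoretical background, per the comment).
    num = np.sum(rgb**(p + 1), axis=-1)
    den = np.sum(rgb**p, axis=-1)
    return num / np.maximum(den, 1e-12)

def geometric_mean(rgb):
    # cbrt(R * G * B): a saturated colour (one channel near zero) reads as
    # dark, which is why the module can lighten and desaturate it together.
    return np.cbrt(np.prod(rgb, axis=-1))

grey = np.array([0.5, 0.5, 0.5])
saturated_red = np.array([0.9, 0.05, 0.05])
for f in (euclidean_norm, power_norm, geometric_mean):
    print(f.__name__, f(grey), f(saturated_red))
```

All three agree on neutral grey (0.5), but diverge wildly on the saturated red: the geometric mean reads it as a low-light, the power norm as nearly full brightness.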

but the whole issue would be irrelevant if it came after color calibration. Then the norm could be a perceptual map, e.g. a linear map from the cube root of LMS.

(I know I can reorder modules; I am asking about the reasoning behind the defaults)

Not at all an answer to your question, but I always drag it below exposure so that I don’t have to make adjustments to the tone equalizer settings every time I want to tweak exposure. It works very well.


My intuition is that tone equalizer is meant to correct exposure in its simplest definition, as a scaling of RGB pixel data. So in the pipeline it sits at the same level as the exposure module.

Intuition n°2 is that if we put it after XYZ conversion it would not do the same thing.

Not answering your question, but I use both color equalizer and color zones. They come at very different stages of the pixel pipeline. But I look forward to reading some informative and authoritative answers from more knowledgeable people. The current locations have not caused me any issues with either of the two modules.

… but it needs a luminance to work with in order to map the image, and that luminance is obtained from a pre-color-calibration image that is not white balanced, so it is not what you end up working with. The difference may be minor, but it is there.

If you are using color calibration only for minor WB corrections, then it is innocuous; you will not see a large difference.

It would have helped if you also quoted the preceding paragraph:

And as “color calibration” comes after “input profile”…
Wrt. cropping: the text might predate the deprecation of the “crop and rotate” module. (I’m frankly too lazy to check right now.)

I read that paragraph, but I don’t see the connection between

It is intended to work in scene-linear camera RGB, to behave as if light was physically added or removed from the scene.

and

it should be put before input profile in the pipe

because the rest of the pipeline preceding sigmoid/filmic is still scene referred, and the module works fine there.

Again, all it needs is a luminance to work with.

After “input profile”, is the image still encoded in the “camera RGB” space? (note: I don’t know why the module has to work in camera RGB space; I assume the author did have a good reason for that)

“scene referred” doesn’t mean the color space cannot be changed.

The tone equalizer, conceptually, does not depend on the image being encoded in any specific color space. Again, all it requires is a definition of luminance, and the ability to adjust that luminance. XYZ in particular is well suited for it (Y = luminance).

A lot of decisions in software can be accidental.

The “conceptually” is the problem here. While I agree with you on that, we are dealing with the implementation of a concept. And that shift may impose extra constraints.

Of course, but the phrasing in the comments (“It is intended to work in scene-linear camera RGB”) makes me think that that’s not the case here. And even if that intention is more or less accidental, it is the motivation for the placement of the module in the pipeline.

You may or may not agree with the design decisions, but the “why” of “tone equaliser’s” position in the pipeline is explained in the code comments.


First I would like to understand them.

Your explanation just amounts to “it is what it is”. I appreciate that you are trying to help, but if something is not understood, it is fine to just say so.

Then ask about that… Your question was about the position in the pipeline. Based on the comments in the code, we know why it was placed there.

Then you can of course question the design decisions made in the module, and the effect those would have on the pipeline position, but that’s a different question.

Good luck with the rest of the discussion.

At least the beauty with DT compared to programs like LR is that the user can reorder the pipeline if they have a reason to do so. I certainly reorder the pipeline when working with negadoctor and images that need to be inverted.


It needs a luminance estimator for the masking, but the correction in itself is done on camera RGB, as an exposure compensation would do (or ISO change).

It could probably give good-looking results in XYZ, but I’m not sure it would be 100% the same as a camera RGB multiplication…