"Aurelien said: basecurve is bad"

Hmm… funny.

From https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/:

In reference to the scene, yes. In reference to the display - no - it’s linear. Nonlinearities in representation/encoding are not performed until the “Output Color Profile” module at the very end.

OK, I get it.
Makes the term “linear” more than a bit of a moving target :confused:


A lot of terms can wind up being a moving target due to corner cases.

For example, the HLG standard claims to be “scene referred” - but it is bounded! Due to that bounding, I consider it closer to display-referred, despite the standard’s claims to the contrary.

Well, mathematically, it’s definitely related to a linear relationship of the magnitudes to the phenomenon they represent. Regarding light, twice the energy = 2x the number, no? That’s why the preceding adjective, e.g. “scene-linear”, is important.


I can’t agree more. Unfortunately, there was a silent switch from scene-referred linearity to screen-referred linearity.

Guess I’ll leave this discussion as well, not interested in shifting definitions.


I’m probably guilty of that as well in some posts, assuming “linear” is solely related to the light at the scene. I am now sensitized to that, thanks…

I think that these discussions would benefit from a definition of “linear” — even if we disagree, it would help clarify the differences.

I consider a representation linear if I would find it meaningful to perform linear operations on it (e.g. a + b, \alpha \cdot a, A a for pixels a, b, scalar \alpha, matrix A). This is not something I would want to do with the output of filmic rgb or similar (artifacts), so I don’t find it helpful to think of those as linear.

My understanding is this: in a linear representation, if a value x represents an amount of light L, then Nx represents an amount of light NL. Here, both N and x are reals larger than 0.
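A toy sketch of that definition (function name and numbers are mine; the sRGB curve constants are from IEC 61966-2-1). Scaling a linear code value scales the represented light by the same factor; scaling a gamma-encoded code value does not:

```python
def srgb_encode(l):
    """sRGB-style encode of linear light l in [0, 1] (IEC 61966-2-1 constants)."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

L = 0.18   # relative mid-grey scene luminance
N = 2.0    # double the light

# Linear encoding: the code value IS the light, so Nx represents NL.
assert abs(N * L - (N * L)) < 1e-9

# Gamma encoding: doubling the code value does not represent doubled light.
assert abs(srgb_encode(N * L) - N * srgb_encode(L)) > 0.05
```

Not a proof of anything, just the “Nx represents NL” property made concrete.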


Yup, and that can be either scene light, OR display light.

Yes, in some cases you have an intermediate representation that is altered further, but here is the question in the case of display-linear data:

If you took the data as it was and sent it to a display that accepted linear data, or assumed it was linear and applied an inverse EOTF for a particular display that wanted nonlinear data, would the result be at least some semblance of correct, or would it be significantly altered because its meaning in terms of display light was misinterpreted? (See what happens if you misinterpret linear data as sRGB, for example - doing this leads to the misconception that “linear looks dark”, when it only looks dark if you mistakenly assume it has a gamma curve applied.)
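The “linear looks dark” trap can be sketched numerically (toy code, my own function name; sRGB constants per IEC 61966-2-1). A viewer that wrongly assumes the data is sRGB-encoded applies the decode before emitting light, crushing linear mid-grey toward black:

```python
def srgb_decode(v):
    """sRGB-style decode: encoded value v -> linear display light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

linear_mid_grey = 0.18
# The viewer wrongly assumes this value carries an sRGB curve and decodes it:
emitted = srgb_decode(linear_mid_grey)
assert emitted < 0.05   # roughly 0.027 - mid-grey rendered nearly black
```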

My understanding is that a data representation / display mismatch will result in incorrect distribution of light intensities, including clipping. I’m not sure how strong the effect would be. I guess you could use Glenn Butcher’s raw processor to test this, since it allows disabling the display-mapping transform.


This all goes to something troy_s pointed out to me back in the “wild days”, that general-purpose LCD displays really do linear at the hardware level, and just have what amounts to “CRT emulation” in their circuitry to be compatible with that legacy. So, with LCD display profiles, we end up doing this game of “tone badminton” to take scene-linear data back and forth through these so-called “gamma transforms” to end up in the same linear domain on the hardware.
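That “tone badminton” round trip can be sketched in a few lines (toy code, assuming an sRGB-style curve with IEC 61966-2-1 constants): encode for the legacy CRT convention, decode again at the display, and you land back on the same linear values.

```python
def srgb_encode(l):
    """Encode linear light for the legacy 'gamma' convention."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """The display's 'CRT emulation' undoes the encode."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Round trip: out through the encode, back through the decode - a no-op.
for v in [0.001, 0.04, 0.18, 0.5, 1.0]:
    assert abs(srgb_decode(srgb_encode(v)) - v) < 1e-6
```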

Given all that hoo-ha, to me the only meaningful use of the term “linear” is with regard to the original scene. Display engineers will probably beg to differ, as they do have to consider the linearity of their energy-producers, but for the imaging pipeline I think we need to focus on the impacts all our little tools have to those original measurements in the camera…


It most definitely will. If necessary I’ll provide an example tonight, but an easy way to do this: export a TIFF with linear data, then use exiftool to remove the ICC profile, or view it in a non-color-managed application such as ImageMagick’s “display” tool.

Alternatively, if you export something with an sRGB transfer function as the encoding, and load it in something that misinterprets it as linear, it will look bright and washed out. This is much rarer, but I have seen some applications assume “floating point TIFF = linear”, ignoring the ICC profile.
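A quick sketch of why that mismatch looks bright and washed out (toy code, my own function name; sRGB constants per IEC 61966-2-1): the encoded mid-grey value is about 0.46, so an application that ignores the ICC profile and treats it as linear light emits roughly 46% of maximum where 18% was intended.

```python
def srgb_encode(l):
    """sRGB-style encode of linear light l in [0, 1]."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

encoded_mid_grey = srgb_encode(0.18)   # roughly 0.46
# Misread as linear light, this emits ~0.46 of max luminance
# instead of the intended ~0.18: the image looks washed out.
assert encoded_mid_grey > 0.4
```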

Having done some of this, you need to make sure you tease apart the TRC and color transforms. Elle Stone’s g10 (gamma 1.0) profiles are a good resource for doing this. Also, as @mbs points out, rawproc is very good at separating out the individual image operations.

This might be a good alternative way to define “linear”:

If you took that data, saved it as-is with no transforms, and then added an ICC profile with appropriate primaries and gamma=1.0 using exiftool, then reloaded that image with a color-managed application, would this be a sane starting point for further processing?

If so, it’s linear.

I disagree with this; it basically assumes that display engineers are universally incompetent and never do their jobs properly when it comes to linearizing their input data.

Sounds right. When exporting, wouldn’t relying on the export transform with a 1.0 gamma profile do the same thing?

It’s not their job that concerns me, it’s mine as a manipulator of a raw processing workflow. Display designs usually provide some means to translate what I come up with to a gamut and tone that looks okay on the medium.

If the input data is already linear, this becomes a no-op and the only thing that might occur is a gamut transform but not a response curve transform. If it turns out to be anything other than a no-op, that’s a sure sign the input data is not linear (or, of course, might be linear but mistagged as being something else, that can always happen… In which case you’re now hosed and everything is wrong.)

Yup, and that usually includes documenting how to take linearly-encoded data and re-encode it so that the display will properly represent it. (In darktable, this is the output color transform module - it expects linearly-encoded input, and outputs whatever the user chose as the output profile.)

If the profile’s primaries properly represent the input data, yes, it’s a no-op but it gets the right profile in the image file without exiftool pet tricks… ??

I guess I’m a little head-shaped by all the “discussion” a few years ago about “scene-referred”. In rawproc, I choose my ops and their order with doing as little damage to the original data as possible. Not that I don’t mess with it from time to time; my color saturation tool is still a gawd-awful HSL algo :laughing:

The idea is that if the input is linear, the two approaches are identical. If it isn’t, they won’t be.

An alternative approach is, feed it to the CMS, tell the CMS to output linear data, and compare input to output. If the CMS didn’t do anything, or the result can be obtained with a simple matrix transform, it’s linear (or, again, might be nonlinear data mistagged as such… always a risk!)
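A crude stand-in for that CMS round-trip test (toy code of my own; a full version would fit a 3x3 matrix across channels rather than checking per-channel ratios): if the relation between input and output code values is the same scale factor at every level, the data was already linear; a gamma curve shows up as a ratio that drifts with the input level.

```python
def ratio_is_constant(xs, ys, tol=1e-6):
    """True if ys is a single scale factor times xs - i.e. a linear relation."""
    ratios = [y / x for x, y in zip(xs, ys) if x > 0]
    return max(ratios) - min(ratios) < tol

samples = [0.05, 0.18, 0.5, 0.9]
linear_out = [0.8 * x for x in samples]        # a plain gain: still linear
gamma_out = [x ** (1 / 2.2) for x in samples]  # a nonlinear encode

assert ratio_is_constant(samples, linear_out)      # CMS did (almost) nothing
assert not ratio_is_constant(samples, gamma_out)   # a curve was applied
```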