"Aurelien said : basecurve is bad"

It is not about the transformations that the modules perform, but the representation of the data before and after.

With a few exceptions (eg exposure), none of the modules perform linear transformations, regardless of where they are in the workflow. If you think about it, for a pixelwise linear transformation (x \mapsto ax + b), you would just need one module with two parameters, so it would be fairly redundant to have more of them :wink:
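
A quick way to see this: chaining two such maps just gives another map of the same form, x \mapsto a_2(a_1 x + b_1) + b_2 = (a_2 a_1) x + (a_2 b_1 + b_2), so any stack of purely pixelwise linear modules would collapse into a single one.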

[OK, if I want to be pedantic, a pixelwise matrix multiplication is also linear, eg the color calibration module]

"Linear" in this context means that the numbers in the 3D array that represents the image correspond linearly to physical light measurements. If both the input and output are linearly represented, you are still in the scene-referred part of the workflow, even if the transformation itself is nonlinear.

I would disagree with the assertion that you're still scene-referred. Display-referred but still linearly represented would be closer to a description of what comes out of sigmoid/filmic/basecurve, for example.

Based on the part you quoted, perhaps. But do take into account the phrase just before the part you quoted:

darktable says otherwise (tooltips over the modules)? IIRC, filmic includes a log transform…
And of course, all three of those force their output to the range 0…1

I don't think that the numbers that are used for display correspond to anything that can be understood as a linear representation.

@Entropy512 is correct here: both base curve and filmic output linear display-referred tristimulus data.

Both scene- and display-referred data can be represented in linear terms with regard to emission strength, i.e. double the light, double the value. What really sets them apart is whether the data is bounded (finite) or unbounded (infinite).

So yes, the tooltip in the base curve module seems to be incorrect, because its output is linear and bounded.

I don't get this. The base curve and filmic certainly don't look like they'd obey the 'double the light, double the value' rule.

That is correct: the operation is non-linear, but the representation of the output is linear.

I.e. double the emission from the display means double the value.
A counterexample: sRGB is a non-linear representation of the display emission, designed to better utilize the limited 8-bit quantization. That data can, however, easily be linearized while still being in a display-referred space.
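
A minimal Python sketch of that linearization (just the standard sRGB EOTF, nothing darktable-specific):

```python
def srgb_to_linear(v):
    """Decode an sRGB-encoded value in [0, 1] to display-linear light."""
    if v <= 0.04045:
        return v / 12.92                     # linear segment near black
    return ((v + 0.055) / 1.055) ** 2.4

# The stored 8-bit codes are a nonlinear representation of display emission:
for code in (32, 64, 128, 255):
    v = code / 255.0
    print(f"code {code:3d}  encoded {v:.3f}  display-linear {srgb_to_linear(v):.4f}")
```

Going from code 128 to 255 multiplies the linear emission by roughly 4.6x even though the encoded value only doubles - that is the nonlinearity of the representation, and undoing it leaves you with numbers that are still display-referred, just linear.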

A fun example of linear 8-bit display-referred data is addressable LED strips like the WS2812. They have horrible "emission resolution" in the shadows, and you barely see a change in brightness for the top 25% of emission strength!
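
As a rough back-of-the-envelope illustration (not a measurement of any particular strip), one can map each 8-bit linear code to approximate perceived lightness (CIE L*, 0-100) and look at the step sizes:

```python
def lightness(Y):
    """Approximate CIE L* (0-100) for a linear luminance Y in [0, 1]."""
    return 903.3 * Y if Y <= 0.008856 else 116.0 * Y ** (1.0 / 3.0) - 16.0

steps = [lightness(c / 255.0) for c in range(256)]

# The first few codes jump by ~3 L* apiece - clearly visible steps in the shadows.
print([round(s, 1) for s in steps[:5]])
# The top quarter of the linear range (codes 192..255) spans only ~10 L* in total.
print(round(steps[255] - steps[192], 1))
```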

OK, thanks for the clarification. I thought, for some reason, that by light you meant input (captured by the camera) rather than output (emitted by the display) light.

Yup. The history actually goes back further - other than that linear bit at the bottom end, intended to make some mathematical operations play nicer, the gamma was chosen to be a reasonably close match to the fundamentally nonlinear behavior of CRT displays.

Yup. Some WS2812 controllers will do temporal dithering to improve the effective bit depth. Also, linear vs nonlinear dimming is sometimes featured in reviews of dimmable LED bulbs; ones with linear dimming kinda suck.

Ages ago I implemented a sigma-delta modulator as a wrapper around Atmel's software PWM. Their softPWM had massive performance impacts if you went past 8 bits on an AVR (since it was an 8-bit microcontroller), so the time-critical loops ran 8-bit PWM, and a less time-critical sigma-delta was wrapped around that, along with a lookup table so that the I2C traffic could use an 8-bit nonlinear representation: GitHub - Entropy512/I2C_RGB: I2C Controllable RGB LED driver in an Atmel ATTinyX5 (Edit: In fact, I2C_RGB/firmware/gammacurve_8to12.h at master · Entropy512/I2C_RGB · GitHub gives me an example of why sRGB had that linear bit at the bottom, since many input values mapped to 0/4095 or 1/4095)
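
For anyone curious, here is a rough Python sketch in the same spirit as that lookup table (a plain 2.2 power law with hypothetical values, not the actual contents of gammacurve_8to12.h):

```python
# Hypothetical 8-bit (nonlinear) -> 12-bit (linear PWM) lookup table
# using a pure 2.2 power law; the real gammacurve_8to12.h may differ.
GAMMA = 2.2
lut = [round(((i / 255.0) ** GAMMA) * 4095) for i in range(256)]

# Without a linear segment near black (as sRGB has), a whole run of the
# lowest input codes collapses onto output values 0/4095 and 1/4095.
print(sum(1 for v in lut if v <= 1))
```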

I abandoned that project when much more cost-effective off-the-shelf solutions (such as the WS2812) showed up, even though the project did have some technical advantages. Atmel's USI had some reliability issues - it turns out that their reference I2C implementation violated the I2C standard: it actively drove the bus high, which would lead to a round of Bus Fighter if the master tried to clock stretch. :frowning:

Hmm… funny.

From https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/:

In reference to the scene, yes. In reference to the display - no - it's linear. Nonlinearities in representation/encoding are not applied until the "Output Color Profile" module at the very end.

OK, I get it.
Makes the term "linear" more than a bit of a moving target :confused:

A lot of terms can wind up being a moving target due to corner cases.

For example, the HLG standard claims to be "scene referred" - but it is bounded! Due to that bounding, I consider it closer to display-referred, despite the standard's claims otherwise.

Well, mathematically, it definitely refers to a linear relationship between the magnitudes and the phenomenon they represent. Regarding light, twice the energy = 2x the number, no? That's why the preceding adjective, e.g. "scene-linear", is important.

I can't agree more. Unfortunately, there was a silent switch from scene-referred linearity to screen-referred linearity.

Guess I'll leave this discussion as well, not interested in shifting definitions.

I'm probably guilty of that as well in some posts, assuming "linear" is solely related to the light at the scene. I am now sensitized to that, thanks…

I think that these discussions would benefit from a definition of "linear"; even if we disagree, it would help clarify the differences.

I consider a representation linear if I would find it meaningful to perform linear operations on it (e.g. a + b, \alpha \cdot a, A a for pixels a, b, scalar \alpha, matrix A). This is not something I would want to do with the output of filmic rgb or similar (artifacts), so I don't find it helpful to think of those as linear.
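
To illustrate that criterion (purely as a sketch, with a plain 2.2 power law standing in for whatever nonlinear encoding is in play): averaging two encoded pixel values directly gives a different, darker result than averaging the underlying linear light and re-encoding, which is the kind of artifact meant here.

```python
def encode(y):                # stand-in nonlinear encoding: pure 2.2 power law
    return y ** (1 / 2.2)

def decode(v):
    return v ** 2.2

a, b = 0.1, 0.9                               # two encoded pixel values
naive = (a + b) / 2                           # averaging the encoded numbers
proper = encode((decode(a) + decode(b)) / 2)  # average in linear light, then re-encode
print(round(naive, 3), round(proper, 3))      # 0.5 vs ~0.66: the naive mix comes out too dark
```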

My understanding is this: in a linear representation, if a value x represents an amount of light L, then Nx represents an amount of light NL. Here, both N and x are reals larger than 0.
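
Read strictly (for every N > 0), that actually pins the representation down completely: writing the representation as x = f(L), the condition f(NL) = N f(L) with L = 1 gives f(N) = f(1)\,N, i.e. the stored value is just a fixed positive constant times the amount of light.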

Yup, and that can be either scene light, OR display light.

Yes, in some cases you have an intermediate representation that is altered further, but the key question in the case of display-linear data is this:
If you took the data as it was and sent it to a display that accepted linear data, or assumed it was linear and applied the inverse EOTF for a particular display that wanted nonlinear data, would it be at least roughly correct, or would it be significantly altered because its meaning in terms of display light was misinterpreted? (See, for example, what happens if you misinterpret linear data as sRGB - doing this leads to the misconception that "linear looks dark", when it only looks dark if you mistakenly assume it has a gamma curve applied.)
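
To put numbers on that last misconception (just a sketch; 0.18 is the usual stand-in for mid-grey):

```python
def srgb_eotf(v):
    """What an sRGB display does to the signal it receives."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_inverse_eotf(y):
    """Encoding the pipeline should apply before sending display-linear data."""
    return y * 12.92 if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

mid_grey = 0.18                                    # display-linear mid-grey
wrong = srgb_eotf(mid_grey)                        # sent raw as if already encoded: ~0.027 emitted
right = srgb_eotf(srgb_inverse_eotf(mid_grey))     # encoded first: ~0.18 emitted, as intended
print(round(wrong, 3), round(right, 3))
```

So the data is not dark; it is only displayed dark when its meaning is misinterpreted.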

My understanding is that a data representation / display mismatch will result in an incorrect distribution of light intensities, including clipping. I'm not sure how strong the effect would be. I guess you could use Glenn Butcher's raw processor to test this, since it allows disabling the display-mapping transform.
