Profile without tonecurve: -1 EV exposure does not halve L*a*b L

I have created a DCP profile without a tonecurve (with DCamProf, make-dcp ... -t none). I would expect that, for example in Lightroom, without applying any other tonecurve correction, the L in Lab would show half of its value after I correct exposure by -1 EV, a quarter for -2 EV, and so on.
But this is not the case: for a medium gray area of 50% L I get 38% (-1 EV) and 25% (-2 EV), where I would expect 25% and 12.5%. What other curve is applied here, or how can this result be explained?
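For what it's worth, those numbers are roughly what the standard CIE L* formula predicts if the underlying relative luminance Y (not L itself) is what gets halved per EV. A minimal sketch:

```python
# Sketch: what CIE L* predicts when the linear luminance Y is halved per EV.
def lab_L(y):
    # CIE L* from relative luminance Y (0..1), with the linear toe
    # below the 216/24389 (~0.008856) threshold.
    if y > 216 / 24389:
        f = y ** (1 / 3)
    else:
        f = (24389 / 27 * y + 16) / 116
    return 116 * f - 16

def y_from_L(L):
    # Inverse for L* above the threshold region.
    return ((L + 16) / 116) ** 3

y0 = y_from_L(50.0)              # Y for a mid-gray of L* = 50
for ev in (0, -1, -2):
    print(ev, round(lab_L(y0 * 2 ** ev), 1))
# prints roughly: 0 50.0 / -1 36.4 / -2 25.6
```

So L* of about 36 and 26 after -1 and -2 EV is close to the observed 38% and 25%: the cube-root compression, not an extra hidden curve, accounts for most of the effect.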

Edit: I guess this is caused by the output profile’s gamma. In RawTherapee I have reset the working profile gamma to 1.0, and for a medium gray area the L is nearly correct: 16%, 9%, 5% (0, -1, -2 EV). But a bright area shows 71%, 54%, 39%.
Similar results for RGB and HSV V.
How can I get the real exposure value? -1 EV means half of the light’s energy, so I need a way to get the real light energy values instead of the toned values adapted to the human eye’s vision.

No, -1 EV will halve the XYZ values. L* is a transform that relates XYZ to human vision under specific lighting: crudely, L* is proportional to Y^(1/3) (with the luminance Y scaled to run from 0 to 1).

More precision is here:


The nonlinear relations for L* , a* , and b* are intended to mimic the nonlinear response of the eye.

Thank you for pointing me to this; I had overlooked it. So is there a way to get the RAW values with exposure compensation applied in RT?

We should describe how to examine raw values in RawPedia, and describe what those values mean (i.e. are they before or after black level subtraction, etc).

( unadjusted pixel values - #10 by Morgan_Hardwood )

I ask this question because I want to adapt a DCP to ETTR (expose to the right): I overexpose a photo under a fixed light source and must then compensate for this in the RAW converter with a fixed negative EV. But then I need to adapt the tonecurve too, because high values may still be too light and low values will be too dark. I need to know how to make a tonecurve so that all tones come out correct with a fixed baseline exposure offset.

I’m ignorant of RawPedia. However, the relationship between light energy P and returned voltage V (after filtering according to the three filter colours) is typically V = a + bP, i.e. it’s affine. So removing the black point is really just removing the “a”… until you do that, V is not proportional to P, and so applying a multiplicative factor like an EV change doesn’t work. However, V - a is proportional to P, so that’s what you need to use.
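The affine point can be illustrated numerically (the black level and gain here are hypothetical, just to show why scaling V directly goes wrong):

```python
# Sketch of why the black level must be subtracted before scaling:
# assume an affine sensor response V = a + b*P (a = black level, b = gain).
a, b = 512, 4.0          # hypothetical values
P = 1000.0               # some light energy
V = a + b * P            # 4512.0

# Halving the light energy (-1 EV) halves P, not V:
V_half = a + b * (P / 2)         # 2512.0

wrong = V / 2                    # naive: scales the black offset too -> 2256.0
right = (V - a) / 2 + a          # subtract black, scale, add back -> 2512.0
print(V_half, wrong, right)      # right matches V_half; wrong does not
```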
There is a long discussion of the (many) technicalities of ETTR display here:
https://blog.kasson.com/using-in-caera-histograms-for-ettr/

Nothing, unfortunately, is ever as simple as one first supposes :smiley:

This much I already knew. But I don’t understand why the (perceptual) L in L*a*b is not enough. I switched off all curves and I see a flat image; L is already shifted towards our perception, but not enough. Anders Torger gives some hint:

A linear tone curve is the right thing for reproduction work, for example when we shoot a painted artwork and print on corresponding media. In this case the input “scene” and output media have the same dynamic range and will be displayed in similar conditions. However in general-purpose photography the actual scene has typically considerably higher dynamic range than the output media, that is the distance between the darkest shadow and the brightest highlight is higher than we can reproduce on screen or paper.

But I don’t believe him, since I have compared a real IT8.7 target under light to its reproduction on a display (with the external light source and the display at the same brightness, and a display with calibrated RGB curves): I need to add the curve to make the real target and the display view match. That is, I don’t trust the L*a*b L, and a curve must be added to get good overall reproduction.

On the other hand, if I want “to make the camera into a colorimetric measurement device” (Torger), the L is probably what gives me a good result. Anyway, to measure brightness physically, I don’t need the L, only the RAW value V (with the “a” subtracted) multiplied by a constant.

I still lack some understanding here, since the L must be good for something. But what?

There is a long discussion of the (many) technicalities of ETTR display here:
Using in-camera histograms for ETTR - the last word

Thanks for pointing to this, but I guess I don’t need a perfect in-camera histogram. A few minor settings to get it closer to RAW should be enough. I have to analyze the RAW data with RawDigger for my fixed setup anyway.

The thing is, our perception of a photo changes with the light under which we view it… we don’t have unbounded logarithmic perception. There is a minimum, and an obvious maximum for a paper print (100% reflectance). So a photo viewed at 80 lux can’t look the same as a photo viewed at 300 lux.
So getting close to raw will not solve the problem of anticipating what the image will look like after processing/adapting it to the expected viewing conditions.

My opinion, anyway :slight_smile:

On Sat, 25 Jan 2020 at 19:42, Andreas via discuss.pixls.us noreply@discuss.pixls.us wrote: