What does linear RGB mean?

Yes, the important consideration here is preserving the energy relationships captured in the original encoding on the sensor.

Okay, not-math-guy here: wouldn’t ‘affine’ describe the exposure transform? I’ve been looking for a term that categorizes such transforms so we can place them accordingly in the linear-to-display journey… ??

Yes, affine is y = a \cdot x + b, and linear is the special case of affine where b = 0, so y = a \cdot x.

These seem to be contradictions.

If pixel values are proportional to intensity, then multiplying (or dividing) all pixels values by the same number will preserve that proportionality.

But adding (or subtracting) a value removes that proportionality.

For a more technical perspective: multiplying RGB values by the same number preserves chromaticity (the x and y coordinates of xyY, which, roughly speaking, means hue and saturation are preserved). But adding the same number to RGB values changes chromaticity.

Subtracting a black offset from pixels may be needed to make pixels linear, ie proportional to intensity. Once that is done, any further addition or subtraction will remove proportionality.
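A minimal numpy sketch of that claim (the RGB→XYZ matrix below is the standard sRGB/D65 one; any linear RGB→XYZ matrix would behave the same): scaling linear RGB leaves the xy chromaticity untouched, while adding a constant shifts it.

```python
import numpy as np

# Linear RGB -> XYZ matrix for sRGB primaries and D65 white point.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def xy_chromaticity(rgb_linear):
    """Project a linear RGB triplet to CIE xy chromaticity."""
    X, Y, Z = M @ np.asarray(rgb_linear, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

rgb = np.array([0.2, 0.4, 0.6])
print(xy_chromaticity(rgb))         # reference chromaticity
print(xy_chromaticity(2.5 * rgb))   # identical xy: multiplication preserves chromaticity
print(xy_chromaticity(rgb + 0.1))   # different xy: an added offset shifts chromaticity
```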


The proportionality to be preserved is with the light emission, not with the input RGB pixel garbage. Depending on how the input RGB garbage has been prepared or massaged, you might need to offset the code values accordingly, for the reasons explained by @hanatos.

We absolutely don’t care about pixel values in themselves. These are code values, aka number garbage. They could be encoded as imaginary numbers over a complex plane and that wouldn’t change a thing. We care about what they represent, which means we need to care about the pixel values and their encoding cipher. You can do whatever you want if it is to profile your sensor.

Subtracting the black offset normalizes the RGB into [0 ; 1], but that 0 means nothing to the light emission (zero light emission would imply the picture was taken at -273 °C). So that zero still represents some non-zero energy level, and you will always have an offset somewhere between RGB values and real light. What is important is that, for a light emission of intensity l(i) giving a code value c(i), \dfrac{l(i + h) - l(i)}{h} = \dfrac{c(i + h) - c(i)}{h} (the first-order derivative as h \rightarrow 0). Whatever the offset between c(i) and l(i), it doesn’t change that relationship, because we care mostly about the consistency of the variation between input and output. An image is only a gradient field around an average value.
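A tiny numeric illustration of that formula, with made-up values: whatever constant offset sits between the code values and the light, the variations (the difference quotients) are identical.

```python
import numpy as np

light = np.array([0.05, 0.10, 0.20, 0.40, 0.80])  # hypothetical intensities l(i)
code = light + 0.02                                # code values c(i), arbitrary offset

h = 1.0  # step between samples
print(np.diff(light) / h)  # variations of the light
print(np.diff(code) / h)   # same variations: the offset cancels out of the derivative
```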


Total absence of light doesn’t really exist, so there is always some offset we should apply if we want our pixel values to be truly proportional to intensity. Is that what you mean? I agree, but that offset is very very small. In normal photography I suggest this offset is too small to worry about, certainly less than one part in 65536. (It may be significant in astro-photography.)

But we can’t add or subtract arbitrary numbers and think this doesn’t change proportionality. This can make a big difference to results that we should worry about.

A specific example. Suppose we have RGB values of (1000,2000,3000) and that these are proportional to the light transmitted by red, green and blue filters. The green filter has transmitted twice as much light as the red filter.

We can multiply (or divide) these numbers by whatever we like and the proportionality remains. The lightness changes, but hue and saturation do not change.

But if we add (or subtract) arbitrary numbers, we might subtract 1000 to get (0,1000,2000). The values are no longer proportional to the light. Lightness and saturation both change.
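The same example in a few lines of Python, normalising each triplet to its green channel so the proportions are easy to read:

```python
import numpy as np

rgb = np.array([1000.0, 2000.0, 3000.0])  # proportional to the transmitted light

def ratios(v):
    return v / v[1]                        # normalise to the green channel

print(ratios(rgb))           # [0.5 1.  1.5]  the original proportions
print(ratios(rgb * 0.25))    # [0.5 1.  1.5]  multiplication preserves them
print(ratios(rgb - 1000.0))  # [0.  1.  2. ]  subtracting 1000 destroys them
```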

Related: Call for example raw files :wink:

In astrophotography, the issue is more due to light pollution and sensor noise.

I am not sure I understand this. Is there a gamma in analogue photography? Or is this a stupid question?

I had to go looking… Found this:

https://www.kodak.com/uploadedfiles/motion/US_plugins_acrobat_en_motion_newsletters_filmEss_06_Characteristics_of_Film.pdf

From Page 51:

“There are two measurements of contrast. Gamma, represented by the Greek symbol γ, is a numeric value determined from the straight-line portion of the curve. Gamma is a measure of the contrast of a negative. Slope refers to the steepness of a straight line, determined by taking the increase in density from two points on the curve and dividing that by the increase in log exposure for the same two points.”


Correct me if I’m wrong, but wasn’t the use of gamma curves also driven by the use of narrow range eight-bit (or less!) values in computers which couldn’t handle larger ranges?

@anon41087856 Where does one learn all this stuff? Could you make a thread with all your sources, books etc?

I’m kinda getting more and more interested in the technical aspect of cameras and post production whenever I read a new post from you. But the learning curve seems really steep (like a few years of learning steep). So it would be helpful to know where to start learning and where to find the information. Just generally, not any particular thing.

Some people read novels in their spare time. I think many of us would read study books and papers about cameras and post processing instead.


As @ggbutcher says, “gamma” has too many meanings. I hate the word.

Confusingly, yes, analogue (film) photography does have a gamma, but it means something different. It is the slope of the straight-line portion of the characteristic curve: the change in density (unitless) divided by the change in log exposure that produces it (conventionally log base 10, though stops, i.e. log base 2, work just as well up to a constant factor). Doubling the illumination is one more stop, which adds a constant amount of density (in the straight-line portion).
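To make the arithmetic concrete, a small sketch with made-up densities from the straight-line portion of a characteristic curve: gamma is the slope against log exposure, and each extra stop adds a fixed amount of density.

```python
import numpy as np

# Hypothetical points on the straight-line portion of a negative's D-logE curve.
log10_exposure = np.array([-2.0, -1.7, -1.4, -1.1])
density = np.array([0.65, 0.86, 1.07, 1.28])

# Gamma = change in density / change in log10 exposure on the straight line.
gamma = (density[-1] - density[0]) / (log10_exposure[-1] - log10_exposure[0])
print(gamma)                   # 0.7 for these made-up numbers

# One stop doubles the exposure, i.e. adds log10(2) to the log exposure,
# so each extra stop adds a constant amount of density:
print(gamma * np.log10(2.0))   # ~0.21 density units per stop
```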

In that sense, film behaves like a log encoding of light. Non-linear sRGB compresses linear values in a loosely similar way, although its curve is a power law rather than a true logarithm, so an extra stop does not add a constant amount to the encoded value.

When shooting negative film, we aim to expose for the straight-line portion. Highlights (clouds etc) may be in the non-linear shoulder, and require burning in the print.

See also https://en.wikipedia.org/wiki/Sensitometry


There’s some stuff here: Image Processing – Recommended Reading


:neutral_face: I’m confused… EV is related to human vision??
I thought EV was related to linear light… if I want +1 EV, I open the aperture by one stop or double the exposure time


Gamma correction was introduced in the first place to deal with the properties of cathode-ray-tube (CRT) monitors. However, the result also happens to be a perceptually sensible encoding, as pointed out above. As Charles Poynton (2012) writes:

If gamma correction had not already been necessary for physical reasons at the CRT, we would have had to invent it for perceptual reasons.
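For reference, the encoding that plays this role today is the sRGB transfer function: a short linear toe followed by a roughly 1/2.4 power law. A quick sketch:

```python
import numpy as np

def srgb_encode(L):
    """sRGB opto-electronic transfer: linear light in [0, 1] -> encoded value."""
    L = np.asarray(L, dtype=float)
    return np.where(L <= 0.0031308,
                    12.92 * L,
                    1.055 * L ** (1.0 / 2.4) - 0.055)

# Dark tones get far more code values than a linear encoding would give them,
# which is the perceptual benefit Poynton refers to.
print(srgb_encode([0.0, 0.01, 0.18, 0.5, 1.0]))
```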

This I do not understand. If sensors were not linear to light, CCD photometry in astronomy would not work. May I quote Ian S. McLean (Electronic Imaging in Astronomy, Springer 2008):

If operated properly, CCDs are linear detectors over an immense dynamic range. That is, the output voltage signal from a CCD is exactly proportional to the amount of light falling on the CCD to very high accuracy, often better than 0.1% of the signal. The good linearity makes it possible to calibrate observations of very faint objects by using shorter - but accurately timed - exposures on much brighter photometric standard stars. Linearity curves are usually derived by observing a constant source with various exposure times.

McLean points out that the signal has to be bias- and dark-corrected!

Hermann-Josef
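A minimal sketch of the calibration McLean refers to (frame names and values are hypothetical): the raw frame is only proportional to the light after the bias (readout offset) and the dark current have been removed, usually followed by a flat-field division.

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exposure_s, dark_exposure_s):
    """Classic CCD reduction: subtract bias and scaled dark current, divide by the flat."""
    dark_current = (dark - bias) * (exposure_s / dark_exposure_s)
    science = raw - bias - dark_current
    return science / flat  # flat field normalised to a mean of 1.0

# Tiny hypothetical 2x2 frames, just to show the shapes involved.
raw  = np.array([[1200.0, 1400.0], [1600.0, 1800.0]])
bias = np.full((2, 2), 300.0)
dark = np.full((2, 2), 320.0)            # bias + dark current of the dark frame
flat = np.array([[1.00, 0.98], [1.02, 1.00]])
print(calibrate(raw, bias, dark, flat, exposure_s=60.0, dark_exposure_s=60.0))
```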

Complement for the geeks: sensors are actually not truly linear to light emissions, and you see that clearly when your scene is not lit by a white-daylight illuminant. That’s why we need better input profiles than the bogus RGB → XYZ 3×3 matrix conversion.

What sort of function would you suggest to model the sensor response to an input light intensity I? An nth-order polynomial? We’d need to take shots of a colour checker target at n exposure levels to build the profile. I wonder whether it is sufficient to vary the shutter speed to map out the non-linearity of the sensor’s response to intensity, or whether we would need to actually vary the level of the light illuminating the target.
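A hedged sketch of what that measurement could look like: photograph the same grey patch at n shutter speeds, take the relative exposure as the independent variable, and fit a low-order polynomial per channel (the numbers below are invented; whether shutter bracketing alone is enough is exactly the open question above).

```python
import numpy as np

# Hypothetical mean raw values of one grey patch shot at several shutter speeds.
relative_exposure = np.array([0.125, 0.25, 0.5, 1.0, 2.0])
green_raw = np.array([0.031, 0.061, 0.124, 0.251, 0.498])

# Fit a 2nd-order polynomial mapping exposure -> raw response.
coeffs = np.polyfit(relative_exposure, green_raw, deg=2)
print(coeffs)  # the quadratic term should be ~0 if the channel is linear

fitted = np.polyval(coeffs, relative_exposure)
print(np.max(np.abs(fitted - green_raw)))  # residual of the fit
```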


And we’d need an actually good colour target, not some overpriced 12-patch “colour checker” :stuck_out_tongue: We’d need an actual IT8 target every time, with proper shooting technique and scene illumination.

To this thread I’d add a link to

and recommend reading it (I’m just sad it takes Troy so long to update those, but every new answer is a great overall clarification)


Possibly. A small N, e.g. 2 or 3, would improve the situation. For accuracy we would need a larger N, but high-order polynomials tend to be badly behaved.

I think the obvious method is a 3D CLUT, what ImageMagick calls a hald-clut. Probably 32x32x32 would be large enough. And perhaps they could be compressed with the G’MIC method.
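A minimal sketch of that idea with scipy (the 32³ grid and the “correction” are placeholders): store the camera-RGB → corrected-RGB mapping on a regular 3D grid and interpolate trilinearly per pixel.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 32
axis = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")

# Placeholder LUT: identity mapping plus a made-up correction on the red channel.
lut = np.stack([r, g, b], axis=-1)   # shape (32, 32, 32, 3)
lut[..., 0] *= 0.98

interp = RegularGridInterpolator((axis, axis, axis), lut, method="linear")

pixels = np.array([[0.2, 0.4, 0.6],
                   [0.8, 0.1, 0.3]])  # camera RGB, one row per pixel
print(interp(pixels))                 # corrected RGB via trilinear interpolation
```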

As far as I know, sensor response to light intensity is pretty linear (so long as you avoid saturation/clipping).
I think what @anon41087856 is referring to here is the fact that the RGB filters used in most cameras do not conform to the Luther-Ives (or Maxwell-Ives, if you prefer) criterion: the filters’ spectral sensitivities are not a linear combination of the CIE colour matching functions, so you cannot use a 3×3 matrix to convert exactly from sensor RGB to XYZ.
It is possible to convert to XYZ more accurately if you know the spectral response of the RGB filters and the spectra of the scene. The problem of course is that the scene spectra are usually unknown, so I’m not sure how useful a spectral solution would be in practice.
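A sketch of the “best you can do with a matrix” (the patch values below are placeholders, not real measurements): least-squares fit a 3×3 matrix from camera RGB to reference XYZ over a chart, then look at the residuals that the Luther-Ives failure leaves behind.

```python
import numpy as np

# Placeholder chart measurements: one row per patch.
camera_rgb = np.array([[0.21, 0.12, 0.05],
                       [0.40, 0.35, 0.30],
                       [0.10, 0.20, 0.45],
                       [0.55, 0.50, 0.40],
                       [0.30, 0.40, 0.15],
                       [0.70, 0.65, 0.60]])
reference_xyz = np.array([[0.18, 0.13, 0.06],
                          [0.38, 0.36, 0.32],
                          [0.14, 0.18, 0.48],
                          [0.52, 0.52, 0.43],
                          [0.28, 0.38, 0.17],
                          [0.68, 0.67, 0.64]])

# Solve camera_rgb @ M.T ~ reference_xyz in the least-squares sense.
M_T, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
fitted = camera_rgb @ M_T
print(M_T.T)                                 # the 3x3 "input profile" matrix
print(np.abs(fitted - reference_xyz).max())  # residual error no 3x3 matrix can remove
```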


Full answer here:

Sum-up: individually, each sensor R/G/B channel is affine (so, roughly linear) in the amount of photons captured in some part of the light spectrum. But, overall, the spectrum splitting done by the color filter array is not uniform, so the R/G/B spectral sensitivities overlap a lot more than in human vision (for example, the “green” channel is sensitive to almost the whole 400-800 nm range).

The proper way to profile sensors would be to map each sensor RGB vector to a light spectrum (using LUT or digital reconstructions), then simulate a digital retina to map spectrum to XYZ space, from which all standard RGB spaces derive. But for this, we need databases of the spectral sensitivity of each CFA/camera.

This also makes white balance adaptation super easy and way more accurate.
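The last step of that pipeline, mapping a reconstructed spectrum to XYZ, is just an integration of the spectrum against the CIE colour-matching functions. A sketch, assuming you have the 1931 2° CMFs tabulated somewhere (the flat arrays below are placeholders):

```python
import numpy as np

wavelengths = np.arange(400, 801, 10)  # nm, matching the reconstructed spectrum

# Placeholders: in practice, load the tabulated CIE 1931 2-degree CMFs
# and the spectrum reconstructed from the camera RGB via the LUT.
cmf_x = np.ones_like(wavelengths, dtype=float)    # x-bar(lambda)
cmf_y = np.ones_like(wavelengths, dtype=float)    # y-bar(lambda)
cmf_z = np.ones_like(wavelengths, dtype=float)    # z-bar(lambda)
spectrum = np.ones_like(wavelengths, dtype=float)

def spectrum_to_xyz(spd, xbar, ybar, zbar, step_nm=10.0):
    """Integrate a spectral power distribution against the CMFs (simple Riemann sum)."""
    k = 1.0 / (np.sum(ybar) * step_nm)  # normalise so a flat SPD gives Y = 1
    X = k * np.sum(spd * xbar) * step_nm
    Y = k * np.sum(spd * ybar) * step_nm
    Z = k * np.sum(spd * zbar) * step_nm
    return X, Y, Z

print(spectrum_to_xyz(spectrum, cmf_x, cmf_y, cmf_z))
```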

Exactly.
