What does linear RGB mean?

I am not sure I understand this. Is there a gamma in analogue photography? Or is this a stupid question?

I had to go looking… Found this:

https://www.kodak.com/uploadedfiles/motion/US_plugins_acrobat_en_motion_newsletters_filmEss_06_Characteristics_of_Film.pdf

From Page 51:

“There are two measurements of contrast. Gamma, represented by the Greek symbol γ, is a numeric value determined from the straight-line portion of the curve. Gamma is a measure of the contrast of a negative. Slope refers to the steepness of a straight line, determined by taking the increase in density from two points on the curve and dividing that by the increase in log exposure for the same two points.”

1 Like

Correct me if I’m wrong, but wasn’t the use of gamma curves also driven by the use of narrow-range, eight-bit (or less!) values in computers, which couldn’t handle larger ranges?

@anon41087856 Where does one learn all this stuff? Could you make a thread with all your sources, books etc?

I’m kinda getting more and more interested in the technical aspect of cameras and post production whenever I read a new post from you. But the learning curve seems really steep (like a few years of learning steep). So it would be helpful to know where to start learning and where to find the information. Just generally, not any particular thing.

Some people read novels in their spare time. I think many of us would read study books and papers about cameras and post processing instead.

3 Likes

As @ggbutcher says, “Gamma” has too many meanings. I hate the word.

Confusingly, yes, analogue (film) photography does have a gamma, but it means something different. It is the slope of the straight-line portion of the characteristic curve: the change in density (a unitless quantity) divided by the change in log exposure that produced it. Conventionally the exposure axis is log base 10, but you can think in stops (log base 2 of illumination); the two slopes differ only by a constant factor. Doubling the illumination is one more stop, which adds a constant amount to density (in the straight-line portion).
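
As a quick numeric illustration of that slope calculation (the density and log-exposure values below are invented, just to show the arithmetic):

```python
import math

# Two hypothetical points read off the straight-line portion of a film
# characteristic curve: (log10 exposure, density).
log_e1, d1 = -1.0, 0.55
log_e2, d2 = 0.0, 1.20

# Conventional film gamma: change in density per unit change in log10 exposure.
gamma = (d2 - d1) / (log_e2 - log_e1)

# The same slope expressed per stop (log2 exposure) is smaller by a factor of log10(2).
density_per_stop = gamma * math.log10(2)

print(f"gamma: {gamma:.2f}")                               # ~0.65 here
print(f"density added per stop: {density_per_stop:.2f}")   # ~0.20 here
```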

In that sense, film behaves like a log encoding of light: each extra stop adds a constant amount to the encoded quantity (density). Non-linear sRGB is similar in spirit, although strictly its power-law curve multiplies the encoded value by a roughly constant factor per stop rather than adding to it.

When shooting negative film, we aim to expose for the straight-line portion. Highlights (clouds etc) may be in the non-linear shoulder, and require burning in the print.

See also https://en.wikipedia.org/wiki/Sensitometry

4 Likes

There’s some stuff here: Image Processing – Recommended Reading

2 Likes

:neutral_face: I’m confused… is EV related to human vision??
I thought EV was related to linear light… if I want +1 EV I open the aperture by one stop or double the exposure time.

1 Like

Gamma correction was introduced in the first place to deal with the properties of cathode ray tube (CRT) monitors. However, the result also happens to be a perceptually sensible encoding, as pointed out above. As Charles Poynton (2012) writes:

If gamma correction had not already been necessary for physical reasons at the CRT, we would have had to invent it for perceptual reasons.
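
As a concrete example of such an encoding, here is the standard sRGB transfer function pair (a minimal sketch for scalar values in [0, 1]):

```python
def srgb_encode(linear: float) -> float:
    """Linear light (0-1) to sRGB-encoded value, per the sRGB specification."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055


def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value back to linear light (inverse of srgb_encode)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4


# An 18% linear mid-grey encodes to roughly 0.46, i.e. near the middle of the
# encoded range, which is the perceptual property Poynton is talking about.
print(srgb_encode(0.18))                 # ~0.46
print(srgb_decode(srgb_encode(0.18)))    # ~0.18 (round trip)
```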

This I do not understand. If this were the case, CCD photometry in astronomy would not work. May I quote Ian S. McLean (Electronic Imaging in Astronomy, Springer, 2008):

If operated properly, CCDs are linear detectors over an immense dynamic range. That is, the output voltage signal from a CCD is exactly proportional to the amount of light falling on the CCD to very high accuracy, often better than 0.1% of the signal. The good linearity makes it possible to calibrate observations of very faint objects by using shorter - but accurately timed - exposures on much brighter photometric standard stars. Linearity curves are usually derived by observing a constant source with various exposure times.

McLean points out that the signal has to be bias- and dark-corrected!
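
For reference, a minimal sketch of the bias and dark correction McLean mentions (the frame names and the optional flat-field step are illustrative; real pipelines average many calibration frames):

```python
import numpy as np

def calibrate_frame(raw, bias, dark, dark_exposure_s, exposure_s, flat=None):
    """Subtract the bias level and the exposure-scaled dark current from a raw
    CCD frame; optionally divide by a normalised flat field."""
    raw = raw.astype(np.float64)
    bias = bias.astype(np.float64)
    # Dark current accumulates (roughly) linearly with exposure time.
    dark_current = (dark.astype(np.float64) - bias) * (exposure_s / dark_exposure_s)
    signal = raw - bias - dark_current
    if flat is not None:
        flat = flat.astype(np.float64)
        signal /= flat / flat.mean()
    return signal
```

After that correction, the remaining signal should be proportional to exposure time, which is exactly the linearity check McLean describes.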

Hermann-Josef

Complement for the geeks: sensors are actually not truly linear to light emission, and you see that clearly when your scene is not lit by a white daylight illuminant. That’s why we need better input profiles than the bogus RGB → XYZ 3×3 matrix conversion.

What sort of function would you suggest to model the sensor response to an input light intensity I? An nth-order polynomial? We’d need to take shots of a colour checker target at n exposure levels to build the profile. I wonder whether it is sufficient to vary the shutter speed to map out the non-linearity of the sensor’s response to intensity, or whether we would need to actually vary the level of the light illuminating the target.

1 Like

And we’d need an actually good colour target, not some overpriced 12-patch “colour checker” :stuck_out_tongue: We’d need an actual IT8 target every time, with proper shooting technique and scene illumination.

To this thread I’d add a link to

and recommend reading it (I’m just sad it takes Troy so long to update those, but every new answer is a great overall clarification).

4 Likes

Possibly. A small N, e.g. 2 or 3, would improve the situation. For accuracy we would need a larger N, but high-order polynomials tend to be badly behaved.

I think the obvious method is a 3D CLUT, what ImageMagick calls a hald-clut. 32×32×32 would probably be large enough, and perhaps they could be compressed with the G’MIC method.
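
A minimal sketch of what applying such a 3D CLUT looks like, using an identity 32×32×32 table and trilinear interpolation via scipy (a real camera profile would store corrected colours at the grid nodes instead):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 32
grid = np.linspace(0.0, 1.0, N)

# Identity CLUT: each (r, g, b) grid node maps to itself, shape (N, N, N, 3).
clut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

interpolator = RegularGridInterpolator((grid, grid, grid), clut)

def apply_clut(rgb):
    """Apply the CLUT to an array of shape (..., 3) with values in [0, 1]."""
    flat = np.clip(rgb.reshape(-1, 3), 0.0, 1.0)
    return interpolator(flat).reshape(rgb.shape)

# With the identity table, pixels pass through unchanged.
print(apply_clut(np.array([[0.1, 0.5, 0.9], [0.25, 0.25, 0.25]])))
```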

As far as I know, sensor response to light intensity is pretty linear (so long as you avoid saturation/clipping).
I think what @anon41087856 is referring to here is the fact that the RGB filters used in most cameras do not conform to the Luther-Ives (or Maxwell-Ives if you prefer) criterion: the RGB filters are not a linear combination of the CIE colour matching functions, so you cannot use a 3×3 matrix to convert exactly from sensor RGB to XYZ.
It is possible to convert to XYZ more accurately if you know the spectral response of the RGB filters and the spectra of the scene. The problem, of course, is that the scene spectra are usually unknown, so I’m not sure how useful a spectral solution would be in practice.
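
To make that concrete, this is roughly how a 3×3 matrix gets fitted, and why it is only approximate: a least-squares fit from measured camera RGB to reference XYZ over a set of patches always leaves residuals when the Luther-Ives condition is violated (the patch data below is random placeholder data, just to show the shape of the problem):

```python
import numpy as np

# Placeholder measurements for K patches: camera RGB and reference XYZ, shape (K, 3).
camera_rgb = np.random.rand(24, 3)
reference_xyz = np.random.rand(24, 3)

# Least-squares fit of a 3x3 matrix M such that camera_rgb @ M.T ~= reference_xyz.
M_T, residuals, rank, _ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
M = M_T.T

errors = np.linalg.norm(camera_rgb @ M.T - reference_xyz, axis=1)
print("worst patch error:", errors.max())
# With real sensor data the worst-case error never reaches zero: no 3x3 matrix
# can map non-Luther-Ives filters exactly onto the colour matching functions.
```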

1 Like

Full answer here:

Summary: individually, each sensor R/G/B channel is affine (so roughly linear) in the amount of photons captured in some part of the light spectrum. But overall, the spectrum splitting done by the R/G/B colour filter array is not uniform, so the spectral sensitivities overlap much more than in human vision (for example, the “green” channel is sensitive to almost the whole 400-800 nm range).

The proper way to profile sensors would be to map each sensor RGB vector to a light spectrum (using LUTs or spectral reconstruction), then simulate a digital retina to map that spectrum to XYZ space, from which all standard RGB spaces derive. But for this, we need databases of the spectral sensitivity of each CFA/camera.
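
The final step of that pipeline (spectrum → XYZ) is just an integration of the stimulus spectrum against the CIE colour matching functions; a sketch, assuming the spectrum and the 1931 CMFs are sampled on the same wavelength grid:

```python
import numpy as np

def spectrum_to_xyz(wavelengths_nm, spectrum, xbar, ybar, zbar):
    """Integrate a stimulus spectrum against the CIE 1931 colour matching
    functions (all arrays sampled on the same wavelength grid)."""
    X = np.trapz(spectrum * xbar, wavelengths_nm)
    Y = np.trapz(spectrum * ybar, wavelengths_nm)
    Z = np.trapz(spectrum * zbar, wavelengths_nm)
    # In practice the result is scaled so that the illuminant (or a perfect
    # white diffuser) gets Y = 1 (or 100).
    return np.array([X, Y, Z])
```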

This also makes white balance adaptation super easy and way more accurate.

Exactly.

1 Like

Thanks Aurélien, that is an interesting article. It answered my question about shutter speed, but for the rest it seems maybe I was asking the wrong question :slight_smile: I’ll take some time to digest and read up some more.

What I “cannot”, or perhaps more appropriately “should not”, do isn’t helpful; I’m more interested in “how bad is it?” and “what would one do alternatively?”, questions whose answers I can more constructively consider…

On the other side of the color conversion pipe are the output profiles, associated with particular devices. How bad is that correlation? We need to remember it’s about characterizing cameras in ways that provide reasonable transforms to output spaces.

@paulmiller, I’m not chewing on you; you just had a quotable quote… :grin:

For RGB spaces that are defined from XYZ primaries (hence device-independent), everything is fine by design: sRGB, Adobe RGB, ProPhoto RGB, etc. The issue arises when you want to map a device-dependent RGB to XYZ, and it gets worse when gamut shrinking occurs.

Converting between device-dependent spaces and device-independent spaces (mostly XYZ) using 3×3 matrices is sort of OK as long as your pictures are lit by a D50 or D65 illuminant (aka “white” is white) and the colours are only mildly saturated.

Things go bad with blue LED lighting (aka “white” is blue), red sunsets (aka “white” is red), etc., because you violate the premise of the whole ICC pipeline (“white” is white). Fixing these usually takes a large amount of fiddling, and the only clean way to do it would be to use spectra as the connection space, and the medium’s spectral sensitivity as the conversion.
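
In code, the “sort of OK” path is a per-channel white balance followed by a fixed 3×3 matrix, which is exactly where the “white is white” assumption is baked in (the matrix and multipliers below are placeholders, not any real camera’s):

```python
import numpy as np

# Placeholder camera->XYZ matrix characterised under a daylight illuminant,
# and per-shot white-balance gains for R, G, B.
CAM_TO_XYZ_DAYLIGHT = np.array([[0.6, 0.3, 0.1],
                                [0.2, 0.7, 0.1],
                                [0.0, 0.1, 0.9]])
WB_MULTIPLIERS = np.array([2.0, 1.0, 1.5])

def camera_to_xyz(raw_rgb):
    """Typical ICC-style path: scale the channels so the scene 'white' comes
    out neutral, then apply a matrix measured under a daylight-ish illuminant.
    When the actual light is far from daylight, both steps are approximations."""
    balanced = raw_rgb * WB_MULTIPLIERS
    return balanced @ CAM_TO_XYZ_DAYLIGHT.T
```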

1 Like

For my D7000, I found a spectral dataset someone presumably collected from measurements with the appropriate test fixture. dcamprof has a workflow to use these to make a camera profile, but that will still use an XYZ PCS, no? Alternatively, what would the math be?

http://rawtherapee.com/mirror/dcamprof/dcamprof.html#workflow_ssf

Edit: Spectral source:

They have data for 11 cameras…

Spectral profiling is definitely not something to be left to users. At the very least, you need either a radiometer or a standard D-illuminant bulb to serve as the reference against which you compute the transmittance of the CFA. That’s a job for a proper lab with proper equipment and a proper team.
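
The reason the radiometer or standard illuminant is needed: the camera only records (source power × CFA transmittance), so recovering the spectral sensitivity means dividing a known source out of the measured response, roughly like this (all data below is hypothetical, e.g. from a monochromator sweep):

```python
import numpy as np

wavelengths_nm = np.arange(400, 701, 10)

# Hypothetical per-wavelength measurements: the camera's raw R/G/B response to a
# narrow-band source, and that source's power measured with a radiometer
# (or taken from a standard D-illuminant table).
raw_response = np.random.rand(len(wavelengths_nm), 3)        # placeholder
source_power = np.random.rand(len(wavelengths_nm)) + 0.5     # placeholder, > 0

# Spectral sensitivity ~ response per unit of incident power at each wavelength.
ssf = raw_response / source_power[:, None]
ssf /= ssf.max()   # normalise to a unit peak for comparison between cameras
```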

1 Like

Definitely. But it’s a problem of a similar order to producing target shots for ICC/DCP profiles: hard to do, but done once.

By the way, I looked a bit more into dcamprof, and the spectral calibration seems completely bogus. They use absolute spectral reflectance references (http://www.babelcolor.com/index_htm_files/CC_Avg30_spectrum_CGATS.txt) for the colour checker which don’t mention the illuminant. Reflectance references are not absolute: they only reflect a part of an emitted light spectrum, so they fully depend on what light emission they receive in the first place (reflection = emission − absorption). Basically, not accounting for the illuminant means the reference is garbage. Yet another glorious piece of open source.
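
To illustrate the point about the illuminant: what actually reaches the camera is the product of the patch reflectance and the light’s spectral power distribution, so the same reflectance table produces different stimuli (and different XYZ values) under different lights. A sketch with placeholder spectra:

```python
import numpy as np

wavelengths_nm = np.arange(400, 701, 10)

# Placeholder spectra on the same grid: one patch reflectance (0-1) and two
# illuminant SPDs (something flat/daylight-like vs something tungsten-like).
reflectance = np.linspace(0.2, 0.8, len(wavelengths_nm))
illum_daylight = np.ones(len(wavelengths_nm))
illum_tungsten = np.linspace(0.3, 1.7, len(wavelengths_nm))

# The stimulus seen by the camera (or the eye) is reflectance * illuminant.
stimulus_daylight = reflectance * illum_daylight
stimulus_tungsten = reflectance * illum_tungsten

# Feeding these two stimuli into the spectrum -> XYZ integration shown earlier
# gives two different XYZ values for the very same reflectance data.
```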