Someone recently posted a link to Display Prep Demo - Steve Yedlin, ASC, where the author argues that most modern imaging systems can be processed to look more or less the same in terms of color and tone. I took this as meaning “if you shoot raw, your camera does not matter”, and decided to take a deeper look myself. (This is just some idle Sunday musing of a bored signal processing nerd.)
I took some test shots with the cameras I have available, in reading order: a Ricoh GR, a Panasonic GM1, a Fuji X-T2, and a Pentax Q7:
Ricoh GR: GR036909.dng (10.9 MB) GR036909.dng.xmp
Panasonic GM1: P2010420.rw2 (18.7 MB) P2010420.rw2.xmp
Fuji X-T2: _DSF8141.raf (22.3 MB) _DSF8141.raf.xmp
Pentax Q7: IMGP6126.dng (20.1 MB) IMGP6126.dng.xmp
(I release these shots into the public domain.)
I normalized lightness using the black level correction and exposure sliders in the exposure module, such that the white patch reads 95 L* and the black patch reads 20 L*, and I normalized white balance on the second-to-brightest grey patch. No tone or color module was active besides exposure.
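For the curious, that normalization amounts to solving for a linear gain and offset that put two patches at their L* targets. A minimal Python sketch of the math, assuming linear raw data (the measured patch values below are made up, and the actual sliders are surely parameterized differently):

```python
def lstar_to_linear(lstar):
    """Inverse CIE L*: relative luminance Y (0..1) for a given L* value."""
    if lstar > 8.0:
        return ((lstar + 16.0) / 116.0) ** 3
    return lstar / 903.3  # linear segment near black

# Made-up mean raw values of the white and black patches, as measured
# after demosaicing and white balance, before any tone processing.
y_white, y_black = 0.62, 0.031

# Solve gain * y + offset = Y_target for both patches (two equations, two unknowns):
Y_w, Y_b = lstar_to_linear(95.0), lstar_to_linear(20.0)
gain = (Y_w - Y_b) / (y_white - y_black)  # plays the role of the exposure slider
offset = Y_b - gain * y_black             # plays the role of black level correction

print(f"gain = {gain:.3f}, offset = {offset:.4f}")
```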
With this processing, the bottom row of grey patches comes out as follows (L* values, in the same camera order as above):
Clearly, there are some differences between these lightness responses. I’m trying to avoid saying “tone curve”, as that would imply some kind of processing, whereas I am interested in the camera’s own response here. According to Wikipedia, the L* values of these patches should read 95, 80, 65, 50, 35, 20.
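For reference, those equally spaced L* values are anything but equally spaced in linear light. Applying the standard CIE L* inverse to the targets shows what a linear sensor should record (raw values proportional to the Y column):

```python
def lstar_to_linear(lstar):
    """Inverse CIE L*: relative luminance Y (0..1)."""
    return ((lstar + 16.0) / 116.0) ** 3 if lstar > 8.0 else lstar / 903.3

for L in (95, 80, 65, 50, 35, 20):
    print(f"L* {L:2d}  ->  Y = {lstar_to_linear(L):.3f}")
# 0.876, 0.567, 0.340, 0.184, 0.085, 0.030: a perceptually even ramp
# spans roughly a 30:1 ratio in linear light.
```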
As an aside, I know these test shots are not perfect. They are not focused well, and the light was whatever I had available: afternoon sunlight in slightly hazy conditions. But I am not trying to create a color profile, so that shouldn’t matter too much as long as conditions were consistent between the four shots (right?).
What I am trying to understand is: what is the source of these differences in lightness response?
I understand that camera sensors are essentially linear photon counters. They sample the light spectrum through color filters. The filter characteristics differ between cameras and result in different primaries / color spaces. These get converted to a common working color space for processing, and finally to the display’s color space; these conversions are essentially lossless as long as the values stay in gamut in all the spaces involved. Please do correct me if I’m wrong. My goal here is to learn.
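To make that concrete: if the model above is right, everything after demosaicing and white balance is just 3×3 matrix multiplications on linear RGB, as in this sketch. The camera matrix is invented for illustration; only the XYZ-to-linear-Rec.709 matrix is the standard one. Because the chain is linear, scaling the input scales the output by the same factor, so these conversions alone cannot bend a grey ramp:

```python
import numpy as np

# Invented camera-RGB -> XYZ matrix, standing in for the per-camera
# matrix a raw converter reads from metadata or a profile.
CAM_TO_XYZ = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])

# Standard XYZ -> linear Rec.709/sRGB matrix.
XYZ_TO_REC709 = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

chain = XYZ_TO_REC709 @ CAM_TO_XYZ  # the whole pipeline collapses to one matrix

grey = np.array([0.18, 0.18, 0.18])           # a white-balanced mid-grey
print(chain @ grey)                           # the grey, expressed in display RGB
print((chain @ (2 * grey)) / (chain @ grey))  # exactly [2. 2. 2.]: pure scaling
```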
To the best of my understanding, this means that different sensors should respond essentially identically to lightness. So why do I see these differences in lightness response?
Thank you for your help, or for pointing me to papers or books that explain these matters in more detail. My background is audio signal processing, and I’m trying to wrap my head around image processing (for fun, not profit).