Difference between cameras (raw)?

Someone posted a link to Display Prep Demo - Steve Yedlin, ASC recently, where the author argues that most modern imaging systems can be processed to look more or less the same in terms of color and tone. I took this as meaning “If you shoot raw, your camera does not matter”, and decided to try to take a deeper look myself. (This is just some idle Sunday musing of a bored signal processing nerd.)

I took some test shots with the cameras I have available, in the order listed below: a Ricoh GR, a Panasonic GM1, a Fuji X-T2, and a Pentax Q7:


Ricoh GR: GR036909.dng (10.9 MB) GR036909.dng.xmp
Panasonic GM1: P2010420.rw2 (18.7 MB) P2010420.rw2.xmp
Fuji X-T2: _DSF8141.raf (22.3 MB) _DSF8141.raf.xmp
Pentax Q7: IMGP6126.dng (20.1 MB) IMGP6126.jpg.xmp

(I release these shots into the public domain)

I normalized lightness using the black level correction and exposure sliders in the Exposure module such that the white patch reads 95 L* and the black patch reads 20 L*, and I normalized white balance on the second-to-brightest grey patch. No tone or color module was active besides the exposure module.
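In case anyone wants to reproduce this numerically: that normalization amounts to a two-point linear fit in the raw domain. A minimal numpy sketch, with made-up patch readings standing in for my real ones:

```python
import numpy as np

def lstar_to_linear(lstar):
    """Invert CIE L*: return linear relative luminance Y in [0, 1]."""
    fy = (lstar + 16.0) / 116.0
    delta = 6.0 / 29.0
    return np.where(fy > delta, fy**3, 3.0 * delta**2 * (fy - 4.0 / 29.0))

# Made-up mean raw (linear) values sampled from the white and black patches:
white_raw, black_raw = 0.81, 0.05

# Solve gain * raw + offset = Y_target for the two patches:
y_white, y_black = lstar_to_linear(95.0), lstar_to_linear(20.0)
gain = (y_white - y_black) / (white_raw - black_raw)
offset = y_white - gain * white_raw

normalize = lambda raw: gain * raw + offset  # black level + exposure, in effect
```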

With this processing, the bottom row of grey patches comes out as the following (L* values, in the same order as above):

[Screenshots of the measured grey-patch L* values: Ricoh, Panasonic, Fuji, Pentax]

Clearly, there are some differences between these lightness responses. I’m trying to avoid saying “tone curve”, as that would imply some kind of processing, whereas I am interested in the camera’s own response here. According to Wikipedia, the L* values should read 95, 80, 65, 50, 35, 20.
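For reference, the relation between L* and linear relative luminance Y (for L* above about 8) is

$$Y = \left(\frac{L^* + 16}{116}\right)^3,$$

so those nominal values correspond to linear reflectances of roughly 0.88, 0.57, 0.34, 0.18, 0.085 and 0.030. The patches are spaced evenly in perceived lightness, not in luminance.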

As an aside, I know these test shots are not perfect. They are not focused well, and the light was whatever I had available, which was afternoon sunlight in slightly hazy conditions. But I am not trying to create a color profile, so that shouldn’t matter too much as long as conditions were consistent between the four shots (right?).

What I am trying to understand is, what is the source of these lightness response differences?

I understand that camera sensors are essentially linear photon counters. They sample the light spectrum with color filters. The filter characteristics differ between cameras, and result in different primaries / color spaces. These get converted to a common working color space for processing, and finally to the display’s color space, which are essentially lossless conversions as long as the values are in gamut in all spaces. Please do correct me if I’m wrong. My goal here is to learn.
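To make that concrete for myself: as I understand it, the chain is just a pair of 3×3 matrix multiplications, which are invertible for in-gamut values. A toy sketch (the camera matrix is invented; the XYZ to sRGB matrix is the standard sRGB/D65 one):

```python
import numpy as np

# Invented camera-to-XYZ matrix; the real one comes from the raw file's
# metadata or from profiling the camera against a target.
cam_to_xyz = np.array([[0.65, 0.25, 0.10],
                       [0.27, 0.70, 0.03],
                       [0.00, 0.05, 0.95]])

# Standard XYZ -> linear sRGB matrix (sRGB primaries, D65 white point).
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

cam_rgb = np.array([0.42, 0.36, 0.29])  # one demosaiced raw pixel
display_rgb = xyz_to_srgb @ cam_to_xyz @ cam_rgb

# Both maps are invertible, so the (in-gamut) round trip is lossless:
recovered = np.linalg.inv(xyz_to_srgb @ cam_to_xyz) @ display_rgb
assert np.allclose(recovered, cam_rgb)
```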

To the best of my understanding, this means that different sensors should respond essentially identically to lightness. So why do I see these differences in lightness response?

Thank you for your help, or for showing me papers or books that explain these matters in more detail. My background is audio signal processing, and I’m trying to wrap my head around image processing (for fun, not profit).

Hello, @bastibe!

Interesting comparison – but I am not at all surprised that they differ (although only by a very small amount). I would have been far more surprised if the results had been identical.

Let’s think about what differences there are between your four shots.

  • four different lenses
  • four different sensors
  • four different kinds of camera firmware
  • different apertures, focal lengths, exposure biases …

So in my opinion, you did well to get the four results so close to each other!

Best regards,
Claes in Lund, Sweden


Hi @bastibe, I agree with Claes that it’s not easy to compare. Different lenses will likely not have 100% transmission over all wavelengths, causing minor color and luminance differences. Also, the exact angle at which you photograph your color chart makes a difference to the reflected light.

I did the same comparison as you, but in RawTherapee. Steps: correct distortion, crop to the grey squares, place color pickers somewhere in the middle of the squares, adjust exposure and black level so that the L* = 80 and L* = 20 patches match as well as possible. I noticed that the L* = 95 square usually deviates quite strongly, so I haven’t used it for this ‘calibration’.

I get these results:
[RawTherapee grey-patch readings: Ricoh, Panasonic, Fuji, Pentax]

The margin of error is at most about ±2 luminance units. I would consider that pretty consistent between sensors. The exception is the brightest patch. I don’t know what causes the deviation there.
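For anyone who wants to check the numbers themselves: once exposure and black level are fitted on two patches, the remaining patches can be compared against their nominal L* directly. A sketch with invented readings (the real ones come from the color pickers):

```python
import numpy as np

def linear_to_lstar(y):
    """CIE L* from linear relative luminance Y."""
    delta = 6.0 / 29.0
    fy = np.where(y > delta**3, np.cbrt(y), y / (3 * delta**2) + 4.0 / 29.0)
    return 116.0 * fy - 16.0

nominal = np.array([95.0, 80.0, 65.0, 50.0, 35.0, 20.0])
# Invented calibrated linear readings for the six grey patches:
measured_y = np.array([0.83, 0.57, 0.34, 0.18, 0.086, 0.030])
print(np.round(linear_to_lstar(measured_y) - nominal, 1))
# roughly [-2.  0.2 -0. -0.5  0.2  0.]: the brightest patch stands out
```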


That’s true. This could indeed explain the differences, particularly in the brightest patch where reflectivity is (probably?) strongest.

A maximum error of 2 L* units is very acceptable. I can accept that as measurement noise and conclude that the sensors perform essentially linearly. Interesting. This pretty much proves the point that “when you shoot raw, your camera does not matter”, even though these particular cameras differ quite severely in crop factor (1.5 on the Fuji and Ricoh, 2 on the GM1, 4.5 on the Q7) – talking solely about color and lightness reproduction here, of course.

Thank you! I love doing experiments myself (inexpertly) instead of relying only on third-party accounts of things. Especially with digital cameras, where it’s so easy to do.

Well, there are certainly more factors at play. Camera output bit depth varies between 10 and 16 bits, with 14 bits being very common these days. There are also many ways the sensor circuitry can be constructed to reduce noise, or to support things like dual-ISO modes. Then there is the way black and white levels are handled, which differs quite a bit between manufacturers.
So still, some differences are to be expected.
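To make the black/white level point concrete: the very first thing a raw converter does is map each manufacturer’s digital numbers onto a common [0, 1] scale. A sketch, with invented metadata values:

```python
import numpy as np

# Invented metadata; real values live in the raw file's tags and differ
# per manufacturer, per ISO, and sometimes per color channel.
black_level = 512     # many 14-bit sensors sit well above 0
white_level = 15872   # clipping point, below the nominal 2**14 - 1

def normalize(dn):
    """Map raw digital numbers onto [0, 1] relative exposure."""
    dn = np.asarray(dn, dtype=np.float64)
    return np.clip((dn - black_level) / (white_level - black_level), 0.0, 1.0)
```

Get the black level wrong by a few digital numbers and the darkest patches shift visibly, which is one plausible source of small L* discrepancies.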


Hello @bastibe,
interesting test! Just a few thoughts about it.

If you are talking about L* values, you must have made a colour space assumption. Without knowing the colour space, you cannot convert to L*. I assume your cameras all delivered sRGB. But how good is this sRGB really? I had asked myself this question for my two cameras (a Panasonic movie camera and a Sony Cybershot) and shot an IT8 target with both. From these I derived an ICC profile using Argyll. If both cameras had delivered a good, standard sRGB colour space, applying the profile should not have made any difference to the colour appearance. But this was only true for the Sony, not at all for the Panasonic.
If I compare the camera’s colour space (mesh) with the sRGB space (solid body) I see why:
[Gamut plot: Panasonic camera colour space (mesh) vs. sRGB (solid body)]
Here is the Sony comparison:
[Gamut plot: Sony camera colour space (mesh) vs. sRGB (solid body)]
The comparison was done using ICCview.

In creating the colour space for a camera, the manufacturer has to use the filter transmission and detector response curves to approximate the colour space. Obviously this sometimes works well, sometimes not.

Thus I am not surprised to see these differences. I agree with @Claes and @Thanatomanic that it is surprising that the discrepancies are not larger.

Hermann-Josef


No, they delivered their RAW files and their native color matrix. As far as I understand, sRGB only comes into play as Darktable’s output profile.

I didn’t look at the sRGB/ARGB-coded JPEGs at all. I don’t think the manufacturers’ ideas about sRGB play a role here at all. Right? Otherwise I must have fundamentally misunderstood something.

@bastibe
but if you are talking about L*, you have to assume a colour space. Otherwise there is no way of calculating L* from the R,G,B values. I am not familiar with Darktable.

What is the “native colour matrix”? Do you mean the R,G,B raw pixel values?

Hermann-Josef

Oh, I read your thread title right after I finished collecting spectral data for my three Nikon cameras with the specific intent to see the differences… :laughing:

I’m going to give you a different way to start your thinking, and I think it ends up in the same place: you can make the output of different cameras look approximately the same in tone and color, although I’d need a few more-different cameras to work through it.

Hardware-wise, the product of digital image capture is a set of measurements of light intensity, taken through the different bandpass filters organized into the mosaic. Plotting that data across the visible spectrum looks like this:

[Spectral sensitivity plot: Nikon D7000]

Now, here’s a plot for another Nikon camera:
[Spectral sensitivity plot: Nikon Z6]

Look similar, do they? Okay, one more, from an ‘antique’ Nikon D50:

[Spectral sensitivity plot: Nikon D50]

One would begin to think that there are a few Nikon engineers who pay attention to this specific thing, camera-to-camera.

I think this is a more appropriate way to compare hardware…


Hm. No, not really. Take my first digital camera as an example, a Canon EOS 600D (Rebel T3i in other parts of the world). It delivered images that were extremely noisy. More modern camera bodies are much less noisy, RAW or not.

Have fun!
Claes in Lund, Sweden


To continue my line of thinking: I first lamented that I didn’t have more cameras to show, then realized I did, in the camspec SSF database. It’s essentially a text file of SSFs for 10 cameras, measured with a monochromator by Christian Mauer in conjunction with his university thesis. I posted a page of their plots here:

https://glenn.pulpitrock.net/camspec_ssf_plots/

For licensing, I’ve cited his statement of publication from his thesis. I made the separate page in order to lay the plots out for easy comparison.

Note the two Nikons: they follow the pattern of the three cameras I posted previously. Note the Canons: they have a different, but consistent, pattern. And the Sony looks a bit like a Nikon…

FWIW…

Edit: Link to the thesis: https://www.image-engineering.de/content/library/diploma_thesis/christian_mauer_spectral_response.pdf


@ggbutcher
Glenn, I think it is not astonishing that the response curves for Nikon cameras are very similar. Most likely, they all use the same filter technology and the same sensor technology, resulting in the same response curves. Lenses also play a role, but I would assume that is a minor issue. Otherwise, changing the lens would ruin the colours…

As Hunt (2004) shows, the spectral sensitivity curves of digital cameras are matrixed to approximate the colour matching functions as closely as possible. But exactly how, that certainly differs from manufacturer to manufacturer. He also cites work on the spectral sensitivities of digital cameras (Hubel, Sherman and Farrell, 1994).
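To illustrate, this matrixing can be written as an ordinary least-squares problem: find the 3×3 matrix that brings the camera’s spectral sensitivities as close as possible to the colour matching functions. A small sketch with invented Gaussian curves standing in for real SSF and CMF data:

```python
import numpy as np

wl = np.arange(400, 701, 10)  # wavelengths in nm
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

# Invented stand-ins, each of shape (3, len(wl)):
ssf = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 30)])  # camera
cmf = np.stack([gauss(595, 35), gauss(555, 40), gauss(450, 25)])  # observer

# Find the 3x3 matrix M minimizing ||M @ ssf - cmf|| (Frobenius norm):
M_T, *_ = np.linalg.lstsq(ssf.T, cmf.T, rcond=None)
M = M_T.T
residual = np.linalg.norm(M @ ssf - cmf)
```

Loosely speaking, the residual is the part that no 3×3 matrix can fix, and that leftover is where cameras genuinely differ in colour.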

Hermann-Josef

Oh, I quite agree. And, while I see differences in other cameras, I think there’s reason to believe that most cameras’ data can be manipulated to produce consistent colors. When a camera profile is built against a color target, the objective is to come up with a set of primaries that pull the respective camera’s recording of a patch as close to the XYZ reference for that patch as possible. I see that behavior manifested in the reporting spewed out by dcamprof for every profile I’ve created with that software.

Yes, that is certainly correct. Sorry about having worded that sloppily. What I meant is that the camera doesn’t matter in terms of color and tone reproduction within its envelope. Of course the edges of the envelope (very bright or dark, high ISO) differ wildly between cameras, as does the size of the envelope. My 12 MP, crop-factor-4.5 Pentax Q7 from 2013 is no match at all for my much newer 24 MP APS-C Fuji in terms of noise or dynamic range. But both should be able to reproduce a well-lit MacBeth color chart with similar accuracy.

Are we in agreement on that, or would you refute that claim as well? I’m genuinely curious.

@bastibe

I do not think this is correct, in view of what I described above. As Glenn has described, the sensitivity curves differ from manufacturer to manufacturer. Colour transformations can then bring the colours into a desired colour space, but not always identically and successfully (see the sRGB comparison above).

Hermann-Josef

Sorry, but spontaneously I would say no.

Just utterly fascinating! I must have subconsciously had your posts in mind when I started thinking about this issue yesterday. I think my question indeed boils down to the interpretation of your response curves: what does it mean that the response curves are different? How do these differences manifest photographically?

I think both the theory and my haphazard measurements bear out that it doesn’t mean a lot in terms of lightness response, at least. But if I understand the math correctly, color response should be at least somewhat affected, right? There should be some remnants of differences that can’t be compensated by color space transformations, I think.

Thank you for these additional resources!

I think I don’t understand your sRGB plots. Where did you get that sRGB from? My cameras do not produce sRGB raw files, but something with a markedly larger gamut. And as @ggbutcher showed, color primaries of different cameras are at the very least different, and therefore can’t all be a representation of sRGB, right? Or am I fundamentally misunderstanding something?

Please don’t take my words as criticism; I am genuinely curious about what your sRGB graphs are showing, and what this means photographically and in terms of signal processing.

What I showed was the spectral sensitivity, and that is indeed unique to each camera. Primaries, however, while usually particular to a camera, are not the singular representation of that camera’s color performance. You can get different sets of primaries for the same camera, depending on how you create them. If you shoot a 24-patch ColorChecker for your primaries, they will be different than if you shot a 128-patch IT8 target. That’s why it’s not really right to say raw data has A Single Colorspace; there are different matrices (or LUTs, if you’re that ate up about it, like I’ve become… :laughing: ) that’ll do a decent job of enabling that first transform to a gamut-bound colorspace.
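In matrix terms, that profile-building step is another least-squares fit, this time over target patches rather than spectra: find the 3×3 matrix that pulls each camera patch reading toward its XYZ reference. A sketch with invented patch numbers (dcamprof does considerably more; this is just the core idea):

```python
import numpy as np

# Invented data: raw RGB readings for five target patches and their
# published XYZ references (one row per patch).
cam_rgb = np.array([[0.81, 0.79, 0.75],
                    [0.42, 0.18, 0.12],
                    [0.10, 0.30, 0.45],
                    [0.25, 0.40, 0.15],
                    [0.05, 0.05, 0.06]])
ref_xyz = np.array([[0.83, 0.88, 0.90],
                    [0.35, 0.22, 0.10],
                    [0.18, 0.25, 0.55],
                    [0.27, 0.42, 0.18],
                    [0.05, 0.05, 0.07]])

# Least-squares fit of a 3x3 matrix M such that cam_rgb @ M.T ~ ref_xyz:
M_T, *_ = np.linalg.lstsq(cam_rgb, ref_xyz, rcond=None)
M = M_T.T
# Shoot a different target (a different patch set) and you get a different M.
```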

But they are only different in the sense that the mathematics is different!

Physically, a camera can only have one well-defined colour space.

The sRGB (solid body) is the standard sRGB colour space. The mesh is the colour space of the camera. If the camera delivered colours in strict sRGB, the two should be identical. If they are discrepant, it means that the camera does not deliver genuine sRGB colours.

I am not an expert in digital cameras. But from what I read, even in raw data you do not get the signal as delivered by the CCD or CMOS detectors; it has already been modified by the firmware (see the remark by Hunt, cited above).

How do you know? Do you have an ICC profile for them? Normally, cameras offer a choice between sRGB and AdobeRGB, if you can select at all. Otherwise, sRGB is assumed as the default.

Hermann-Josef

You made a few very interesting points, thank you for that. But this particular remark I think (!) is wrong. Unless I am very much mistaken, the raw files are not sRGB. The JPEGs are either sRGB or ARGB, but not the raw files. If they were, it would be entirely pointless for the camera to produce ARGB JPEGs (ARGB being a more-or-less superset of sRGB).

Right?

Right, that makes sense. I think (hope) this comment made me understand something. So the sensor outputs are mapped to certain primaries, but that mapping is necessarily lossy (it trades spectral sampling for colors), and is sort of the crux of the problem. If I understand this correctly, the mapping can only ever be “correct” in a metameric sense, and therefore only for a single, well-defined illumination spectrum and an assumed “standard” viewer.

But in a broader, all-light sense, there are always bound to be errors, both because of varying illumination spectra, and because neither the screen/print spectra nor my eyes’ sensitivities will perfectly adhere to the standard, resulting in less than perfect (metameric) color space transformations. (Am I making sense?)
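If my linear algebra is right, the effect can even be demonstrated in a few lines: construct a spectral perturbation that lies in the camera’s null space and check it against a different observer. Everything below is invented toy data, not real sensitivity curves:

```python
import numpy as np

wl = np.arange(400, 701, 10)
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

# Invented camera sensitivities and observer matching functions (3 x n each):
cam = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 30)])
obs = np.stack([gauss(595, 35), gauss(555, 40), gauss(450, 25)])

# Remove from the observer's red curve everything the camera can "see";
# what remains is invisible to the camera, but not to the observer:
proj = np.linalg.pinv(cam) @ cam   # projector onto the camera's row space
delta = obs[0] - proj @ obs[0]

spectrum_a = gauss(550, 80)
spectrum_b = spectrum_a + 0.5 * delta  # a metameric pair, for the camera

print(np.allclose(cam @ spectrum_a, cam @ spectrum_b))  # True: same raw RGB
print(np.allclose(obs @ spectrum_a, obs @ spectrum_b))  # False: different XYZ
```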

I think this answers the question that prompted this thread for me. I had missed that spectral sensitivities are mapped to primaries, but do not define them.

Thank you (all) so much for helping me understand stuff! I know no other forum that allows for these kinds of discussions!