Difference between cameras (raw)?

Oh, I read your thread title right after I finished collecting spectral data for my three Nikon cameras with the specific intent to see the differences… :laughing:

I’m going to give you a different way to start your thinking, and I think it ends up in the same place: you can make the output of different cameras look approximately the same in tone and color. Although I’d need a few more-different cameras to fully work through it.

Hardware-wise, the product of digital image capture is a set of measurements of light intensity, taken through different bandpass filters organized into a mosaic. Plotting that data across the visible spectrum looks like this:

[Plot: Nikon D7000 spectral sensitivity functions]

Now, here’s a plot for another Nikon camera:
[Plot: Nikon Z 6 spectral sensitivity functions, 50mm lens]

Look similar, don’t they? Okay, one more, from an ‘antique’ Nikon D50:

[Plot: Nikon D50 spectral sensitivity functions, 35mm lens]

One would begin to think that there are a few Nikon engineers who pay attention to this specific thing, camera-to-camera.

I think this is a more appropriate way to compare hardware…
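To make the measurement idea concrete, here’s a minimal sketch in Python. Everything in it is a stand-in: the Gaussian bandpass curves are hypothetical, not measured Nikon SSFs, and the flat illuminant and single reflectance curve are made up. The point is the structure: each raw channel value is just illuminant × reflectance × channel sensitivity, summed over wavelength.

```python
import numpy as np

wl = np.arange(380, 731, 10)                       # wavelength samples, nm

def bandpass(center, width):
    # hypothetical Gaussian stand-in for a real channel SSF
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

ssf = {'R': bandpass(600, 40), 'G': bandpass(540, 40), 'B': bandpass(460, 30)}

illuminant = np.ones(len(wl))                      # flat, equal-energy light
reflectance = bandpass(550, 60)                    # a greenish surface

# each raw value is the filtered light summed over wavelength (10 nm bins)
raw = {ch: float(np.sum(illuminant * reflectance * s) * 10)
       for ch, s in ssf.items()}
print(raw)
```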


Hm. No, not really. Take my first digital camera as an example, a Canon EOS 600D (Rebel T3i in other parts of the world). It delivered images that were extremely noisy. More modern camera bodies are much less noisy, RAW or not.

Have fun!
Claes in Lund, Sweden


To continue my line of thinking: I first lamented that I didn’t have more cameras to show, then realized I did, in the camspec SSF database. It’s essentially a text file of SSFs for 10 cameras, measured with a monochromator by Christian Mauer in conjunction with his university thesis. I posted a page of their plots here:

https://glenn.pulpitrock.net/camspec_ssf_plots/

For licensing, I’ve cited the publication statement from his thesis. I made the separate page in order to lay them out for easy comparison.

Note the two Nikons: they follow the pattern of the three cameras I posted previously. Note the Canons: they have a different, but consistent, pattern. And the Sony looks a bit like a Nikon…

FWIW…

Edit: Link to the thesis: https://www.image-engineering.de/content/library/diploma_thesis/christian_mauer_spectral_response.pdf


@ggbutcher
Glenn, I think it is not astonishing that the response curves for Nikon cameras are very similar. Most likely they all use the same filter technology and the same sensor technology, resulting in the same response curves. Lenses also play a role, but I would assume that is a minor issue; otherwise, changing the lens would ruin the colours…

As Hunt (2004) shows, the spectral sensitivity curves of digital cameras are matrixed to approximate the colour matching functions as closely as possible. Exactly how certainly differs from manufacturer to manufacturer. He also cites work on the spectral sensitivities of digital cameras (Hubel, Sherman and Farrell, 1994).
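To make “matrixed” concrete, here is a small sketch of that fit. Both sets of curves are Gaussian stand-ins, not real SSFs or the actual CIE colour matching functions; the mechanics are the point: a least-squares 3×3 matrix mapping the camera curves onto the CMFs, with a residual that shows how far the camera is from satisfying the Luther condition.

```python
import numpy as np

wl = np.arange(380, 731, 5.0)
g = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

# hypothetical camera SSFs and crude stand-ins for the CIE CMFs, 3 x N
ssf = np.stack([g(600, 45), g(540, 45), g(460, 35)])
cmf = np.stack([g(595, 50) + 0.3 * g(445, 25),     # x-bar-like
                g(555, 45),                        # y-bar-like
                g(450, 30)])                       # z-bar-like

# least-squares 3x3 matrix M such that M @ ssf approximates cmf
M, *_ = np.linalg.lstsq(ssf.T, cmf.T, rcond=None)
M = M.T
print(M)
print('worst-case mismatch:', np.abs(cmf - M @ ssf).max())
```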

Hermann-Josef

Oh, I quite agree. And while I see differences in other cameras, I think there’s reason to believe that most cameras’ data can be manipulated to produce consistent colors. When a camera profile is built against a color target, the objective is to come up with a set of primaries that pull the camera’s recording of each patch as close to the XYZ reference for that patch as possible. I see that behavior manifested in the reporting spewed out by dcamprof for every profile I’ve created with that software.
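In code, that target-profiling step is essentially a least-squares fit. This toy version fabricates the patch data (the numbers are invented, not from any real target or camera), but the structure matches what dcamprof reports on: camera raw RGB per patch on one side, the reference XYZ per patch on the other, and a single matrix asked to reconcile them all at once.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 24                                             # patches, as on a CC24

camera_rgb = rng.uniform(0.05, 0.95, (n, 3))       # fabricated raw readings
hidden = np.array([[0.80,  0.10,  0.10],           # pretend 'true' response
                   [0.30,  0.80, -0.10],
                   [0.05, -0.15,  1.00]])
ref_xyz = camera_rgb @ hidden.T + rng.normal(0, 0.01, (n, 3))

# one 3x3 'compromise matrix' fit across all patches at once
M, *_ = np.linalg.lstsq(camera_rgb, ref_xyz, rcond=None)
err = np.linalg.norm(camera_rgb @ M - ref_xyz, axis=1)
print('residual per patch: mean %.4f, max %.4f' % (err.mean(), err.max()))
```

No single patch is matched exactly; the matrix is the best compromise across all of them, which is exactly why no one matrix is ‘the’ camera colorspace.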

Yes, that is certainly correct. Sorry about having worded that sloppily. What I meant is that the camera doesn’t matter in terms of color and tone reproduction within its envelope. Of course the edges of the envelope (very bright or dark, high ISO) differ wildly between cameras, as does the size of the envelope. My 12 MP Pentax Q7 (crop factor 4.5) from 2013 is no match at all for my much newer 24 MP APS-C Fuji in terms of noise or dynamic range. But both should be able to reproduce a well-lit Macbeth color chart with similar accuracy.

Are we in agreement on that, or would you refute that claim as well? I’m genuinely curious.

@bastibe

I do not think this is correct, in view of what I described above. As Glenn has described, the sensitivity curves differ from manufacturer to manufacturer. Colour transformations can then bring the colours into a desired colour space, but not always identically or successfully (see the sRGB comparison above).

Hermann-Josef

Sorry, but spontaneously I would say no.

Just utterly fascinating! I must have subconsciously had your posts in mind when I started thinking about this issue yesterday. I think my question indeed boils down to the interpretation of your response curves: what does it mean that the response curves are different? How do these differences manifest photographically?

I think both the theory and my haphazard measurements bear out that it doesn’t mean much in terms of lightness response, at least. But if I understand the math correctly, color response should be at least somewhat affected, right? There should be some remnants of differences that can’t be compensated by color space transformations, I think.

Thank you for these additional resources!

I think I don’t understand your sRGB plots. Where did you get that sRGB from? My cameras do not produce sRGB raw files, but something with a markedly larger gamut. And as @ggbutcher showed, the color primaries of different cameras are at the very least different, and therefore can’t all be representations of sRGB, right? Or am I fundamentally misunderstanding something?

Please don’t take my words as criticism; I am genuinely curious about what your sRGB graphs are showing, and what this means photographically and in terms of signal processing.

What I showed was spectral sensitivity, and that is indeed unique to each camera. Primaries, however, while usually particular to a camera, are not the singular representation of that camera’s color performance. You can get different sets of primaries for the same camera, depending on how you create them. If you shoot a 24-patch ColorChecker for your primaries, they will be different than if you shot a 128-patch IT8 target. That’s why it’s not really right to say raw data has A Single Colorspace; there are different matrices (or LUTs, if you’re that ate up about it, like I’ve become… :laughing: ) that’ll do a decent job of enabling that first transform to a gamut-bound colorspace.

But they are only different in the sense that the mathematics is different!

Physically, a camera can only have one well-defined colour space.

The sRGB solid body is the standard sRGB colour space. The mesh is the colour space of the camera. If the camera delivered colours in strict sRGB, the two would be identical. Where they are discrepant, the camera does not deliver genuine sRGB colours.

I am not an expert in digital cameras. But from what I read, even in raw data you do not get the unaltered signal delivered by the CCD or CMOS detector; it has already been modified by the firmware (see the remark by Hunt, cited above).

How do you know? Do you have an ICC profile for them? Normally, cameras offer a choice between sRGB and AdobeRGB, if you can select at all. Otherwise, sRGB is assumed as the default.

Hermann-Josef

You made a few very interesting points, thank you for that. But this particular remark, I think (!), is wrong. Unless I am very much mistaken, the raw files are not sRGB. The JPEGs are either sRGB or AdobeRGB, but not the raw files. If they were, it would be entirely pointless for the camera to produce AdobeRGB JPEGs (AdobeRGB being a more-or-less superset of sRGB).

Right?

Right, that makes sense. I think (hope) this comment made me understand something. So the sensor outputs are mapped to certain primaries, but that mapping is necessarily lossy (trading spectral sampling for colors), and is sort of the crux of the problem. If I understand this correctly, the mapping can only ever be “correct” in a metameric sense, and therefore only for a single, well-defined illumination spectrum and an assumed “standard” viewer.

But in a broader, all-light sense, there are always bound to be errors, both because of varying illumination spectra, and because neither the screen/print spectra nor my eyes’ sensitivities will perfectly adhere to the standard, resulting in less than perfect (metameric) color space transformations. (Am I making sense?)
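If it helps to see that in numbers, here is a sketch with made-up Gaussian sensitivities: two spectra constructed to be metamers for one “observer” come out identical for it, but different for a “camera” with slightly shifted curves.

```python
import numpy as np

wl = np.arange(380, 731, 5.0)
g = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

observer = np.stack([g(600, 50), g(550, 45), g(460, 35)])  # 'standard' viewer
camera   = np.stack([g(610, 35), g(535, 40), g(455, 30)])  # a camera

spectrum_a = g(550, 60)
# add a component the observer integrates to zero (from its null space);
# never mind that the result may dip negative, this is only a sketch
_, _, Vt = np.linalg.svd(observer)
spectrum_b = spectrum_a + 0.5 * Vt[-1]

print('observer sees:', observer @ spectrum_a, observer @ spectrum_b)  # same
print('camera sees:  ', camera @ spectrum_a, camera @ spectrum_b)      # differ
```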

I think this answers the question that prompted this thread for me. I had missed that spectral sensitivities are mapped to primaries, but do not define them.

Thank you (all) so much for helping me understand stuff! I know no other forum that allows for these kinds of discussions!

Interesting that the Arri Alexa’s green channel has such a low sensitivity.

That doesn’t give it very much white balance flexibility, but presumably cinematographers control the light going into the camera more carefully (whether with artificial lighting, gelling of windows, or filters on the camera).

The wealth of information Anders Torger has put into the dcamprof documentation never ceases to amaze me. This has relevance to the discussion:

http://rawtherapee.com/mirror/dcamprof/camera-profiling.html#camera_colors


@ggbutcher
thanks Glenn for the above link! It clearly states that to go from raw RGB triples to a well-defined colour space like sRGB you do need a profile – either DNG or ICC. Otherwise there is no way to calculate the Lab values corresponding to the RGB triples in the raw data.

Hermann-Josef

Yes, but does that profile represent the raw data’s actual colorspace?

I have quite a few profiles for my Nikon D7000 now, including two matrix profiles made from different targets: one from a ColorChecker Passport with 24 patches, another from a Wolf Faust IT8 target with 128 patches. Here are their primaries, extracted with exiftool:

CC24:
Red Matrix Column : 0.78865 0.34801 0.05725
Green Matrix Column : 0.07399 0.82153 -0.28322
Blue Matrix Column : 0.09937 -0.1787 1.0423

IT8:
Red Matrix Column : 0.74454 0.30803 0.0374
Green Matrix Column : 0.14598 0.86247 -0.19815
Blue Matrix Column : 0.07367 -0.17049 0.98566

Similar, but not identical. So, which one represents the raw data ‘colorspace’?
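To see how much that difference matters, push the same raw triple through both matrices (assembled from the exiftool columns above; the triple itself is arbitrary):

```python
import numpy as np

# rows are X, Y, Z; columns are the Red/Green/Blue Matrix Columns above
cc24 = np.array([[0.78865,  0.07399,  0.09937],
                 [0.34801,  0.82153, -0.17870],
                 [0.05725, -0.28322,  1.04230]])
it8  = np.array([[0.74454,  0.14598,  0.07367],
                 [0.30803,  0.86247, -0.17049],
                 [0.03740, -0.19815,  0.98566]])

raw = np.array([0.4, 0.5, 0.3])            # an arbitrary raw RGB triple
print('CC24 profile -> XYZ:', cc24 @ raw)
print('IT8 profile  -> XYZ:', it8 @ raw)
```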

A target-sourced camera profile represents a transform that makes colors from a particular camera consistent with the reference values for that target. For other images, linear interpolation fills in the cracks, but anchored to those 24 or 128 reference values. In the writings of the ‘big heads’ on this topic, I’ve seen such matrices referred to as ‘compromise matrices’, for this very reason.

Camera profiles are a well-misunderstood mechanism, with all the various devices they contain (don’t get me going about ‘look’ profiles… :scream: ). What is not well recognized is their fundamental role of providing the starting point for the chain of color management transforms that eventually produces something that can be rendered the way we like it. But that starting point is a contrivance, not based on some intrinsic aspect of the light measurements but instead on some external reference like a target. Even spectral-sensitivity-based profiles involve the creation of a ‘virtual target’ to feed the profile generation logic.

So, while a camera profile represents a notion of a colorspace that can be used to transform raw data to something anchored in a colorimetrically-representative reference like CIE 1931 XYZ, none of them can be considered the definition of the raw data’s One True Colorspace, in the manner that sRGB primaries represent the bounds of an sRGB-encoded image…

That dcamprof link of yours was such a good read! My head is still spinning from it.

It now makes sense to me why camera profiles are both hugely important, and necessarily inadequate in any non-standardized light. The tri-stimulus world with its metamerisms and color spaces is a true mind-bender.

I have two years of reading it now, and I still get dizzy…

Perhaps another visualization of the camera raw gamut (not color space) helps; see e.g. half-way through this video. The camera raw gamut is a slightly odd-looking volume whose shape cannot be mathematically fitted to a tristimulus-modelled color space, which has a regular shape. You can only approximate it and choose what compromises you make during profiling.


Hello Glenn,

None of these, I would say, since each profile is only an approximation of the transformation from the device-dependent colour space to the device-independent colour space. The accuracy of the transform depends mainly on two things: the number of patches used, and the mathematical algorithm used to derive the transformation.

Matrices with only 9 numbers are only a crude description of this transform; look-up tables do much better. If you take a look at Argyll, it provides various algorithms to calculate the transformation. I get quite different colours if I use a profile created with SilverFast (algorithm unknown) compared to those I obtain with an Argyll profile from the very same target scan.

The device-dependent colour space is a cube with axes R, G and B, all running from 0 to 1 (normalized, 1 corresponding to 255 in the case of 8 bits); the cube is completely filled by the device, since all three values can take any number between 0 and 1. This space is mapped by the profile to a complicated volume (see the examples above) in XYZ or Lab space. So it is evident from this consideration that the number of patches used to derive the transform and the mathematical algorithm employed account for the differences among the various profiles in use.
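As a sketch of that cube picture, map the corners of the [0,1]³ device cube through one of the matrices quoted earlier in the thread. A 3×3 matrix can only produce a parallelepiped; the “complicated volume” is precisely what look-up tables exist to capture.

```python
import numpy as np
from itertools import product

# the CC24 matrix quoted earlier in the thread; rows are X, Y, Z
M = np.array([[0.78865,  0.07399,  0.09937],
              [0.34801,  0.82153, -0.17870],
              [0.05725, -0.28322,  1.04230]])

corners = np.array(list(product([0.0, 1.0], repeat=3)))   # 8 cube corners
for rgb in corners:
    print(rgb, '->', np.round(M @ rgb, 4))
# a linear map sends the cube to a parallelepiped; only a LUT can bend
# the interior into the irregular gamut volumes described above
```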

Hermann-Josef

PS: @bastibe By the way, some of the issues discussed here were also discussed previously in another thread.
