Difference between cameras (raw)?

Sorry, but off the top of my head I would say no.

Just utterly fascinating! I must have subconsciously had your posts in mind when I started thinking about this issue yesterday. I think my question indeed boils down to sort of the interpretation of your response curves: what does it mean that the response curves are different? How do these differences manifest photographically?

I think both the theory and my haphazard measurements bear out that it doesn’t mean a lot, at least in terms of lightness response. But if I understand the math correctly, color response should be at least somewhat affected, right? There should be some remnants of differences that can’t be compensated by color space transformations, I think.

Thank you for these additional resources!

I think I don’t understand your sRGB plots. Where did you get that sRGB from? My cameras do not produce sRGB raw files, but something with a markedly larger gamut. And as @ggbutcher showed, color primaries of different cameras are at the very least different, and therefore can’t all be a representation of sRGB, right? Or am I fundamentally misunderstanding something?

Please don’t take my words as criticism; I am genuinely curious about what your sRGB graphs are showing, and what this means photographically and in terms of signal processing.

What I showed was the spectral sensitivity, and that is indeed unique to each camera. Primaries, however, while usually particular to a camera, are not the singular representation of that camera’s color performance. You can get different sets of primaries for the same camera, depending on how you create them. If you shoot a 24-patch ColorChecker for your primaries, they will be different from those you’d get shooting a 128-patch IT8 target. That’s why it’s not really right to say raw data has A Single Colorspace; there are different matrices (or LUTs, if you’re that ate up about it, like I’ve become… :laughing: ) that’ll do a decent job of enabling that first transform to a gamut-bound colorspace.

But they are only different in the sense that the mathematics is different!

Physically a camera can only have one well defined colour space.

The solid body is the standard sRGB colour space. The mesh is the colour space of the camera. If the camera delivered colours in strict sRGB, the two would be identical. If they differ, it means that the camera does not deliver genuine sRGB colours.

I am not an expert in digital cameras. But from what I read, even in raw data you do not get the signal as delivered by the CCD or CMOS detectors; it has already been modified by the firmware (see the remark by Hunt, cited above).

How do you know? Do you have an ICC profile for them? Normally, cameras offer a choice between sRGB and AdobeRGB, if they let you choose at all. Otherwise, sRGB is assumed as the default.

Hermann-Josef

You made a few very interesting points, thank you for that. But this particular remark I think (!) is wrong. Unless I am very much mistaken, the raw files are not sRGB. The JPEGs are either sRGB or AdobeRGB, but not the raw files. If the raw files were sRGB, it would be entirely pointless for the camera to offer AdobeRGB JPEGs (AdobeRGB being a more-or-less superset of sRGB).

Right?

Right, that makes sense. I think (hope) this comment made me understand something. So the sensor outputs are mapped to certain primaries, but that mapping is necessarily lossy (trading spectral samples for colors), and that is sort of the crux of the problem. If I understand this correctly, the mapping can only ever be “correct” in a metameric sense, and therefore only for a single, well-defined illumination spectrum and an assumed “standard” viewer.

But in a broader, all-light sense, there are always bound to be errors, both because of varying illumination spectra, and because neither the screen/print spectra nor my eyes’ sensitivities will perfectly adhere to the standard, resulting in less than perfect (metameric) color space transformations. (Am I making sense?)

I think this answers the question that prompted this thread for me. I had missed that spectral sensitivities are mapped to primaries, but do not define them.
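To make that concrete for myself, I tried a minimal numpy sketch (Gaussian stand-ins for the camera SSFs and observer CMFs, random reflectances; nothing from a real camera): fit a 3x3 matrix under one illuminant, then watch the residuals grow when the same matrix is applied under another illuminant.

```python
import numpy as np

wl = np.arange(400, 701, 5)                  # wavelengths in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Invented Gaussian stand-ins for camera SSFs and observer CMFs (3 x N).
cam = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 35)])
obs = np.stack([gauss(595, 45), gauss(555, 45), gauss(450, 40)])

# Two illuminant spectra: flat, and a tilted "warmer" one.
ill_flat = np.ones(wl.size)
ill_warm = np.linspace(0.4, 1.6, wl.size)

# Random reflectances standing in for training patches (24 x N).
rng = np.random.default_rng(0)
patches = np.clip(rng.normal(0.5, 0.2, (24, wl.size)), 0.0, 1.0)

def respond(sens, ill):
    # Integrate sensitivity * illuminant * reflectance for each patch.
    return (patches * ill) @ sens.T          # -> 24 x 3

# Fit a 3x3 matrix (camera -> observer) under the flat illuminant only...
rgb, xyz = respond(cam, ill_flat), respond(obs, ill_flat)
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

# ...then check how well it holds up under the other illuminant.
for name, ill in (("training", ill_flat), ("other", ill_warm)):
    r, x = respond(cam, ill), respond(obs, ill)
    print(f"{name:8s} illuminant, mean abs error: {np.abs(r @ M - x).mean():.4f}")
```

Even in this toy setup, the matrix is only a compromise, and the compromise shifts with the light.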

Thank you (all) so much for helping me understand stuff! I know no other forum that allows for these kinds of discussions!

Interesting that the Arri Alexa has the green channel with a low sensitivity.

That doesn’t give it very much white balance flexibility, but presumably cinematographers are controlling the light going into the camera better (whether with artificial lighting, gelling of windows, or filters on camera).

It never ceases to amaze me the wealth of information Anders Torger has put into the dcamprof documentation. This has relevance to the discussion:

http://rawtherapee.com/mirror/dcamprof/camera-profiling.html#camera_colors


@ggbutcher
thanks Glenn for the above link! It clearly states that to go from raw RGB triples to a well-defined colour space like sRGB you do need a profile, either DNG or ICC. Otherwise there is no way to calculate the Lab values corresponding to the RGB triples in the raw data.
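In code terms, the chain looks roughly like this (the forward matrix here is an illustrative placeholder, not any particular camera’s profile; the white point is the ICC D50):

```python
import numpy as np

# Placeholder forward matrix (camera RGB -> XYZ); a real one would come
# from the DNG or ICC profile of the specific camera.
M = np.array([[0.75,  0.12,  0.08],
              [0.30,  0.85, -0.15],
              [0.05, -0.20,  1.00]])

def xyz_to_lab(xyz, white=(0.9642, 1.0, 0.8249)):    # D50 white point
    t = xyz / np.asarray(white)
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    return 116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])

raw_rgb = np.array([0.40, 0.35, 0.20])   # demosaiced, white-balanced triple
xyz = M @ raw_rgb
print("XYZ:", xyz)
print("Lab: L=%.1f a=%.1f b=%.1f" % xyz_to_lab(xyz))
```

Without the matrix supplied by a profile, the first step of this chain simply does not exist.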

Hermann-Josef

Yes, but does that profile represent the raw data’s actual colorspace?

I have quite a few profiles for my Nikon D7000 now, including two matrix profiles made from different targets: one from a ColorChecker Passport with 24 patches, another from a Wolf Faust IT8 target with 128 patches. Here are their primaries, extracted with exiftool:

```
CC24:
Red Matrix Column   : 0.78865  0.34801  0.05725
Green Matrix Column : 0.07399  0.82153 -0.28322
Blue Matrix Column  : 0.09937 -0.17870  1.04230

IT8:
Red Matrix Column   : 0.74454  0.30803  0.03740
Green Matrix Column : 0.14598  0.86247 -0.19815
Blue Matrix Column  : 0.07367 -0.17049  0.98566
```

Similar, but not identical. So, which one represents the raw data ‘colorspace’?
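To see what that difference amounts to, push the same raw triple through both matrices (columns exactly as listed above) and compare the XYZ results; a quick numpy check:

```python
import numpy as np

# Matrices built from the Red/Green/Blue Matrix Columns listed above.
cc24 = np.array([[0.78865,  0.07399,  0.09937],
                 [0.34801,  0.82153, -0.17870],
                 [0.05725, -0.28322,  1.04230]])

it8 = np.array([[0.74454,  0.14598,  0.07367],
                [0.30803,  0.86247, -0.17049],
                [0.03740, -0.19815,  0.98566]])

raw = np.array([0.5, 0.4, 0.3])          # an arbitrary camera RGB triple
print("XYZ via CC24 profile:", cc24 @ raw)
print("XYZ via IT8 profile: ", it8 @ raw)
```

The two answers differ by a few percent; both are defensible, and neither is ‘the’ colorspace.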

A target-sourced camera profile represents a transform that maps colors from a particular camera so they are consistent with the reference values for that target. For other images, linear interpolation fills in the cracks, but anchored to those 24 or 128 reference values. In the writings of the ‘big heads’ on this topic, I’ve seen such matrices referred to as ‘compromise matrices’, for this very reason.

Camera profiles are a well-misunderstood mechanism, with all the various devices they contain (don’t get me going about ‘look’ profiles… :scream: ). What is not well recognized is their fundamental role in providing the starting point for the color-management chain of transforms, eventually arriving at something that can be rendered the way we like it. But that starting point is a contrivance, not based on some intrinsic aspect of the light measurements but instead anchored to some external reference like a target. Even spectral sensitivity-based profiles involve the creation of a ‘virtual target’ to feed the profile generation logic.

So, while a camera profile represents a notion of a colorspace that can be used to transform raw data to something anchored in a colorimetrically-representative reference like CIE 1931 XYZ, none of them can be considered the definition of the raw data’s One True Colorspace, in the same manner that sRGB primaries represent the bounds of an sRGB-encoded image…
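For the curious, a ‘compromise matrix’ of this kind is, in its simplest form, just a least-squares fit over the target patches. A sketch with invented patch data (the ‘true’ transform and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_matrix(n_patches):
    # Invented patch data: camera RGB and the corresponding reference XYZ,
    # related by a made-up 'true' transform plus measurement noise.
    rgb = rng.uniform(0.05, 0.95, (n_patches, 3))
    xyz = rgb @ np.array([[0.77,  0.10,  0.08],
                          [0.33,  0.84, -0.19],
                          [0.05, -0.24,  1.01]]).T
    xyz += rng.normal(0.0, 0.01, xyz.shape)
    # One 3x3 matrix minimizing squared error over all patches.
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

print("24-patch fit:\n", fit_matrix(24))
print("128-patch fit:\n", fit_matrix(128))
```

Fit it to 24 patches and then to 128 and you get two slightly different matrices, much like the CC24/IT8 pair above.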

That dcamprof link of yours was such a good read! My head is still spinning from it.

It now makes sense to me why camera profiles are both hugely important, and necessarily inadequate in any non-standardized light. The tri-stimulus world with its metamerisms and color spaces is a true mind-bender.

I have two years of reading it now, and I still get dizzy…

Perhaps another visualization of the camera raw gamut (not color space) helps; see e.g. half-way through this video. The camera raw gamut is a slightly odd-looking volume whose shape cannot be mathematically fitted to a tristimulus-modelled color space, which has a regular shape. You can only approximate it and choose which compromises to make during profiling.


Hello Glenn,

None of these, I would say, since each profile is only an approximation of the transformation from the device-dependent colour space to the device-independent colour space. The accuracy of the transform depends mainly on two things: the number of patches used, and the mathematical algorithm used to derive the transformation.

Matrices, with only nine numbers, are only a crude description of this transform; look-up tables do much better. If you take a look at Argyll, it provides various algorithms to calculate the transformation. I get quite different colours from a profile created with SilverFast (algorithm unknown) compared to those I obtain with an Argyll profile from the very same target scan.

The device-dependent colour space is a cube with axes R, G and B, all running from 0 to 1 (normalized, 1 corresponding to 255 in the case of 8 bits), so the cube is completely filled by the device, since all three values can take any number between 0 and 1. This space is mapped by the profile to a complicated volume (see examples above) in XYZ or Lab space. So it is evident from this consideration that the number of patches used to derive the transform and the mathematical algorithm employed account for the differences among the various profiles in use.
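To illustrate: sample the RGB cube, push the samples through a matrix, and the tidy cube is no longer a cube in XYZ. A rough numpy sketch (the matrix is a placeholder, not a real profile; note that a plain matrix yields a parallelepiped, and it takes a LUT profile to bend it into the more complicated volumes shown above):

```python
import numpy as np

# Placeholder camera matrix (RGB -> XYZ); real values come from a profile.
M = np.array([[0.75,  0.12,  0.08],
              [0.30,  0.85, -0.15],
              [0.05, -0.20,  1.00]])

# Sample the device-dependent cube [0,1]^3 on a regular grid.
g = np.linspace(0.0, 1.0, 11)
rgb = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
xyz = rgb @ M.T

# Project to xy chromaticity to see the non-cubic footprint.
s = xyz.sum(axis=1, keepdims=True)
xy = xyz[:, :2] / np.where(s == 0, 1.0, s)
print("x range: %.3f .. %.3f" % (xy[:, 0].min(), xy[:, 0].max()))
print("y range: %.3f .. %.3f" % (xy[:, 1].min(), xy[:, 1].max()))
```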

Hermann-Josef

PS: @bastibe By the way, some of the issues discussed here were also discussed previously in another thread.


Just thought I’d point out that, technically, the Alexa doesn’t have less green sensitivity, but (if the chart is taken at face value) more blue sensitivity. Green is still normalized to 1.0. That said, based on the characteristic response of silicon sensors and how similar the shape of the blue channel is to Nikon/Sony, I’d be willing to bet that the blue response reflects additional gain added to the blue channel in A/D conversion, based on the white balance selected in camera.
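A trivial numerical illustration of that normalization point (peak values invented): whether a channel looks ‘less sensitive’ depends entirely on what the curves are scaled against.

```python
import numpy as np

# Invented per-channel peak responses, arbitrary linear units.
peaks = np.array([0.6, 1.0, 1.4])        # R, G, B; blue boosted by gain

print("each to its own peak:", peaks / peaks)        # all read 1.0
print("pinned to green:     ", peaks / peaks[1])     # blue rises above 1.0
print("pinned to the max:   ", peaks / peaks.max())  # green looks 'low'
```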

Cripes, I confused datasets: the SSF plots I linked to are actually a collection assembled by the Open Film Tools folks. The link to their page:

https://www.hdm-stuttgart.de/open-film-tools/english/camera_responsivities

The revised link to my page:

https://glenn.pulpitrock.net/openfilmtools_ssf_plots/

I’m still trying to find their license for it; if I can’t find one, I’ll probably take it down.

And most important of all, SSF shapes: if they were the same as the physiological cone fundamental responses, we might not need a transform at all; see for instance here. As it is, matrix definition is an overdetermined problem with no exact solution, so the best we can do is choose one based on reasonable criteria.
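That condition (the camera curves being a linear combination of the observer curves, often called the Luther-Ives condition) can be checked directly: regress the SSFs on the CMFs and look at the residual. A sketch with made-up Gaussian curves:

```python
import numpy as np

wl = np.arange(400, 701, 5)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Made-up Gaussian stand-ins for observer CMFs and camera SSFs (N x 3).
cmf = np.stack([gauss(595, 45), gauss(555, 45), gauss(450, 40)], axis=1)
ssf = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 35)], axis=1)

# Best linear combination of the CMFs reproducing the SSFs.
A, *_ = np.linalg.lstsq(cmf, ssf, rcond=None)
residual = np.abs(cmf @ A - ssf).max()
print(f"max residual: {residual:.3f} (0 would mean no transform needed)")
```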


I listed only the things the user has to deal with. The spectral response functions are hidden in the transform.

e.g. least squares fit to minimize deltaEs

Hermann-Josef


It seems to me that this is where things get interesting. Which is a better starting point for a profile: a matrix that results in the smallest possible average deltaE for a wide gamut of patches, or a matrix that results in a nearly perfect response for some patches, but at the cost of a higher deltaE for others? Which colors to prioritize? As I’ve gotten into making my own custom profiles I’ve found that there is really a lot more to it than I ever suspected!
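That prioritization can be written directly into the fit as per-patch weights: the patches you care about pull the matrix toward themselves at the expense of the rest. A sketch of the weighted least-squares variant (all data invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented patch data: camera RGB and reference XYZ for 24 patches.
rgb = rng.uniform(0.05, 0.95, (24, 3))
xyz = rgb @ np.array([[0.77,  0.10,  0.08],
                      [0.33,  0.84, -0.19],
                      [0.05, -0.24,  1.01]]).T
xyz += rng.normal(0.0, 0.02, xyz.shape)

w = np.ones(24)
w[:4] = 10.0     # pretend the first four patches are the priority colors

# Weighted least squares: scale rows by sqrt(weight) before fitting.
sw = np.sqrt(w)[:, None]
M, *_ = np.linalg.lstsq(rgb * sw, xyz * sw, rcond=None)

err = np.linalg.norm(rgb @ M - xyz, axis=1)
print("error on prioritized patches:", err[:4].mean())
print("error on the rest:           ", err[4:].mean())
```

Crank the weights up and the prioritized patches get nearly perfect, while the average over everything else worsens; that is exactly the trade-off being described.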

Again, this only applies when the camera produces its JPEG… raw data is just the sensor data, demosaiced and then converted by a matrix into usable RGB. Here is ground zero: Developing a RAW photo file 'by hand' - Part 1. Part two of it gets closer to this discussion…

Neat! Thank you! This looks like an amazing resource that I will add to my reading list immediately.