You made a few very interesting points, thank you for that. But this particular remark I think (!) is wrong. Unless I am very much mistaken, the raw files are not sRGB. The JPEGs are either sRGB or Adobe RGB, but not the raw files. If they were, it would be entirely pointless for the camera to offer Adobe RGB JPEGs (Adobe RGB being, more or less, a superset of sRGB).
Right?
Primaries, however, while usually particular to a camera, are not the singular representation of that camera’s color performance.
Right, that makes sense. I think (hope) this comment made me understand something. So the sensor outputs are mapped to certain primaries, but that mapping is necessarily lossy (it trades spectral sampling for colors), and is sort of the crux of the problem. If I understand this correctly, the mapping can only ever be "correct" in a metameric sense, and therefore only for a single, well-defined illumination spectrum and an assumed "standard" observer.
But in a broader, all-light sense, there are always bound to be errors, both because illumination spectra vary and because neither the screen/print spectra nor my eyes' sensitivities will perfectly match the standard, so the (metameric) color space transformations end up less than perfect. (Am I making sense?)
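To make that concrete for myself, here is a little NumPy sketch of how I understand it. Everything in it is made up for illustration: Gaussian "camera" sensitivities, Gaussian "observer" curves, two synthetic illuminants, and random smooth reflectances standing in for real surfaces. It fits a 3x3 camera-to-observer matrix under one illuminant and then applies the same matrix under another; the residual error is typically larger under the illuminant the matrix was not fitted for, which is the "only metamerically correct for one illuminant and one observer" point as I understand it.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 61)            # wavelengths, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Toy camera spectral sensitivities and toy "standard observer" curves.
# These shapes are invented for illustration; real curves look different.
cam = np.stack([gauss(610, 40), gauss(540, 40), gauss(460, 35)])      # 3 x N
obs = np.stack([gauss(600, 45) + 0.3 * gauss(450, 25),                # x-bar-ish
                gauss(550, 45),                                       # y-bar-ish
                gauss(450, 30)])                                      # z-bar-ish

# Two illuminant spectra: a flat one (used for fitting) and a tilted, warmer one.
illum_fit = np.ones_like(wl)
illum_test = np.linspace(0.5, 1.5, wl.size)

# Random, lightly smoothed reflectance spectra as stand-ins for real-world surfaces.
refl = np.clip(rng.normal(0.5, 0.2, (200, wl.size)), 0, 1)
for _ in range(3):
    refl = 0.25 * np.roll(refl, 1, 1) + 0.5 * refl + 0.25 * np.roll(refl, -1, 1)

def responses(sens, illum):
    """Integrate sensitivity * illuminant * reflectance -> per-sample tristimulus-like values."""
    return refl @ (sens * illum).T        # shape (n_samples, 3)

# Fit a 3x3 matrix mapping camera RGB to observer values under the fitting illuminant.
rgb_fit, xyz_fit = responses(cam, illum_fit), responses(obs, illum_fit)
M, *_ = np.linalg.lstsq(rgb_fit, xyz_fit, rcond=None)

err_fit = np.abs(rgb_fit @ M - xyz_fit).mean()
rgb_test, xyz_test = responses(cam, illum_test), responses(obs, illum_test)
err_test = np.abs(rgb_test @ M - xyz_test).mean()

print(f"mean error under the fitting illuminant:   {err_fit:.4f}")
print(f"mean error under a different illuminant:   {err_test:.4f}")
```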
I think this answers the question that prompted this thread for me. I had missed that spectral sensitivities are mapped to primaries, but do not define them.
Thank you (all) so much for helping me understand stuff! I know no other forum that allows for these kinds of discussions!