Difference between cameras (raw)?

Interesting that the Arri Alexa has a green channel with low sensitivity.

That doesn’t give it very much white balance flexibility, but presumably cinematographers are controlling the light going into the camera better (whether with artificial lighting, gelling of windows, or filters on camera).

It never ceases to amaze me the wealth of information Anders Torger has put into the dcamprof documentation. This has relevance to the discussion:

http://rawtherapee.com/mirror/dcamprof/camera-profiling.html#camera_colors


@ggbutcher
Thanks, Glenn, for the above link! It clearly states that to go from raw RGB triple values to a well-defined colour space like sRGB you do need a profile – either DNG or ICC. Otherwise there is no way to calculate the Lab values corresponding to the RGB triple in the raw data.

Hermann-Josef

Yes, but does that profile represent the raw data’s actual colorspace?

I have quite a few profiles for my Nikon D7000 now, including two matrix profiles made from different targets: one from a ColorChecker Passport with 24 patches, the other from a Wolf Faust IT8 target with 128 patches. Here are their primaries, extracted with exiftool:

CC24:
Red Matrix Column : 0.78865 0.34801 0.05725
Green Matrix Column : 0.07399 0.82153 -0.28322
Blue Matrix Column : 0.09937 -0.1787 1.0423

IT8:
Red Matrix Column : 0.74454 0.30803 0.0374
Green Matrix Column : 0.14598 0.86247 -0.19815
Blue Matrix Column : 0.07367 -0.17049 0.98566

Similar, but not identical. So, which one represents the raw data ‘colorspace’?
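As a quick sketch of how much the two matrices disagree in practice (Python with numpy; the raw triple here is arbitrary, purely for illustration):

```python
import numpy as np

# The two matrices quoted above; exiftool prints each *column*
# (the XYZ of one camera primary), so they are arranged row-wise here.
cc24 = np.array([[0.78865,  0.07399,  0.09937],
                 [0.34801,  0.82153, -0.1787],
                 [0.05725, -0.28322,  1.0423]])
it8  = np.array([[0.74454,  0.14598,  0.07367],
                 [0.30803,  0.86247, -0.17049],
                 [0.0374,  -0.19815,  0.98566]])

rgb = np.array([0.5, 0.4, 0.3])  # an arbitrary white-balanced raw triple

xyz_cc24 = cc24 @ rgb
xyz_it8  = it8 @ rgb
print(xyz_cc24)
print(xyz_it8)
print(np.abs(xyz_cc24 - xyz_it8))  # small, but not zero
```

For this triple the two profiles land within about 0.01 of each other in XYZ – close, but measurably different, which is rather the point.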

A target-sourced camera profile represents a transform that makes colors from a particular camera consistent with the reference values for that target. For colors not on the target, linear interpolation fills in the cracks, anchored to those 24 or 128 reference values. In the writings of the ‘big heads’ on this topic, I’ve seen such matrices referred to as ‘compromise matrices’, for this very reason.

Camera profiles are a well-misunderstood mechanism, with all the various devices they contain (don’t get me going about ‘look’ profiles… :scream: ). What is not well recognized is their fundamental role: to provide the starting point for the chain of color-management transforms that eventually yields something that can be rendered the way we like it. But that starting point is a contrivance, based not on some intrinsic aspect of the light measurements but on an external reference such as a target. Even spectral-sensitivity-based profiles involve the creation of a ‘virtual target’ to feed the profile-generation logic.

So, while a camera profile represents a notion of a colorspace that can be used to transform raw data into something anchored to a colorimetrically representative reference like CIE 1931 XYZ, none of them can be considered the definition of the raw data’s One True Colorspace, in the way that the sRGB primaries represent the bounds of an sRGB-encoded image…

That dcamprof link of yours was such a good read! My head is still spinning from it.

It now makes sense to me why camera profiles are both hugely important, and necessarily inadequate in any non-standardized light. The tri-stimulus world with its metamerisms and color spaces is a true mind-bender.

I’ve been reading it for two years now, and I still get dizzy…

Perhaps another visualization of the camera raw gamut (not color space) helps; see e.g. half-way through this video. The camera raw gamut is a slightly odd-looking volume whose shape cannot be mathematically fitted to a tristimulus-modelled color space, which has a regular shape. You can only approximate it and choose what compromises to make during profiling.


Hello Glenn,

None of these, I would say, since each profile is only an approximation of the transformation from the device-dependent colour space to the device-independent colour space. The accuracy of the transform depends mainly on two things: the number of patches used, and the mathematical algorithm used to derive the transformation.

Matrices with only 9 numbers are only a crude description of this transform. Look-up tables do much better. If you take a look at Argyll, it provides various algorithms to calculate the transformation. I get quite different colours if I use a profile created with SilverFast (algorithm unknown) compared to those I obtain with an Argyll profile from the very same target scan.

The device-dependent colour space is a cube with axes R, G and B, all running from 0 to 1 (normalized, with 1 corresponding to 255 in the case of 8 bits), so the cube is completely filled by the device, since all three values can take any number between 0 and 1. This space is mapped by the profile to a complicated volume (see examples above) in XYZ or Lab space. So it is evident from this consideration that the number of patches used to derive the transform and the mathematical algorithm employed account for the differences among the various profiles in use.
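To make that mapping concrete, here is a minimal sketch (Python with numpy) that pushes the eight corners of the normalized device cube through the CC24 matrix quoted earlier in the thread:

```python
import itertools
import numpy as np

# CC24 matrix from earlier in the thread (exiftool prints columns;
# the rows here are the X, Y and Z rows of the transform).
m = np.array([[0.78865,  0.07399,  0.09937],
              [0.34801,  0.82153, -0.1787],
              [0.05725, -0.28322,  1.0423]])

# The eight corners of the normalized RGB cube, (0,0,0) through (1,1,1).
corners = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
xyz = corners @ m.T  # map every corner through the matrix

for rgb, out in zip(corners, xyz):
    print(rgb, "->", np.round(out, 4))
```

Note that a pure matrix can only map the cube to a parallelepiped ((1,1,1) lands at Y ≈ 0.99 here, near the white point); it takes a LUT-based profile to describe a genuinely irregular volume.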

Hermann-Josef

PS: @bastibe By the way, some of the issues discussed here were also discussed previously in another thread.


Just thought I’d point out that, technically, the Alexa doesn’t have less green sensitivity, but (if the chart is taken at face value) more blue sensitivity. Green is still normalized to 1.0. That said, based on the characteristic response of silicon sensors and how similar the shape of the blue channel is to Nikon/Sony, I’d be willing to bet that the blue response reflects additional gain applied to the blue channel in A/D conversion, based on the white balance selected in camera.

Cripes, I confused datasets, the SSF plots to which I posted the link are actually a collection assembled by the Open Film Tools folks. The link to their page:

https://www.hdm-stuttgart.de/open-film-tools/english/camera_responsivities

The revised link to my page:

https://glenn.pulpitrock.net/openfilmtools_ssf_plots/

I’m still trying to find their license for it; if I can’t find one, I’ll probably take it down.

And most important of all, SSF shapes: if they were the same as the physiological cone fundamental responses we might not need a transform at all, see for instance here. As it is, matrix definition is an overdetermined problem with no exact solution, so the best we can do is choose a compromise based on reasonable criteria.


I listed only the things the user has to deal with. The spectral response functions are hidden in the transform.

e.g. least squares fit to minimize deltaEs
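A least-squares fit of this kind can be sketched in a few lines. One simplification to note: this minimizes error in linear XYZ, whereas minimizing deltaE proper requires a nonlinear optimization in Lab. The patch data below is made up purely for illustration; a real fit uses the target's measured reference values:

```python
import numpy as np

# Made-up patch set: raw camera RGB per patch, and the reference XYZ.
raw = np.array([[0.9, 0.1, 0.1],
                [0.1, 0.8, 0.2],
                [0.1, 0.1, 0.9],
                [0.5, 0.5, 0.5],
                [0.7, 0.6, 0.2],
                [0.2, 0.3, 0.8]])
xyz_ref = np.array([[0.45, 0.25, 0.05],
                    [0.15, 0.55, 0.10],
                    [0.20, 0.10, 0.95],
                    [0.48, 0.50, 0.52],
                    [0.50, 0.55, 0.20],
                    [0.25, 0.28, 0.80]])

# Solve raw @ M.T ≈ xyz_ref for the 3x3 matrix M in the
# least-squares sense.
M_T, *_ = np.linalg.lstsq(raw, xyz_ref, rcond=None)
M = M_T.T
print(M)

# Per-patch RMS error in XYZ (a crude stand-in for deltaE).
err = np.sqrt(((raw @ M.T - xyz_ref) ** 2).mean(axis=1))
print(err)
```

With only nine free numbers against dozens (or hundreds) of patches, some residual error is unavoidable – which is exactly why these are compromise matrices.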

Hermann-Josef


It seems to me that this is where things get interesting. Which is a better starting point for a profile: a matrix that results in the smallest possible average deltaE for a wide gamut of patches, or a matrix that results in a nearly perfect response for some patches, but at the cost of a higher deltaE for others? Which colors to prioritize? As I’ve gotten into making my own custom profiles I’ve found that there is really a lot more to it than I ever suspected!
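One way to express that prioritization is per-patch weighting of the same least-squares fit: upweight the patches you care about and accept larger errors elsewhere. A sketch with made-up data (again minimizing XYZ error as a simplification for deltaE):

```python
import numpy as np

# Made-up patch data: raw camera RGB rows and reference XYZ rows.
raw = np.array([[0.9, 0.1, 0.1],
                [0.1, 0.8, 0.2],
                [0.1, 0.1, 0.9],
                [0.5, 0.5, 0.5],
                [0.7, 0.6, 0.2],
                [0.2, 0.3, 0.8]])
xyz_ref = np.array([[0.45, 0.25, 0.05],
                    [0.15, 0.55, 0.10],
                    [0.20, 0.10, 0.95],
                    [0.48, 0.50, 0.52],
                    [0.50, 0.55, 0.20],
                    [0.25, 0.28, 0.80]])

def fit(weights):
    """Weighted least-squares fit of a 3x3 matrix: scale each
    patch row by sqrt(weight) before solving."""
    w = np.sqrt(np.asarray(weights, float))[:, None]
    M_T, *_ = np.linalg.lstsq(w * raw, w * xyz_ref, rcond=None)
    return M_T.T

M_flat = fit([1, 1, 1, 1, 1, 1])      # treat all patches equally
M_skew = fit([10, 10, 10, 1, 1, 1])   # prioritize the first three

def rms(M):
    # Per-patch RMS error in XYZ under a given matrix.
    return np.sqrt(((raw @ M.T - xyz_ref) ** 2).mean(axis=1))

print(rms(M_flat))
print(rms(M_skew))
```

The skewed fit trades lower error on the prioritized patches for higher error on the rest; which trade is "better" is exactly the judgment call the profiling tools make for you.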

Again, this is only for the camera to produce its JPEG… raw data is just the sensor data, demosaiced and then converted by a matrix into usable RGB… here is ground zero: Developing a RAW photo file 'by hand' - Part 1. Part two of that series gets closer to this discussion…

Neat! Thank you! This looks like an amazing resource that I will add to my reading list immediately.