Question about raw data from digital cameras

Hello,
the raw data from a digital camera are in the camera’s colour space, i.e. device-dependent colours. If I set the camera to sRGB or AdobeRGB, the JPG images supposedly come out in these colour spaces. How is that done? Somewhere there must be an ICC profile to transform the device colours to the standard colour spaces. The same question holds for demosaicing in RT: here, too, an ICC profile must come into play. Where does it come from? I looked in RawPedia but did not find the answer.

Hermann-Josef

Yes, that transform has to be done, but not necessarily with a full ICC profile.

dcraw contains hard-coded D65-anchored matrices for sRGB, Adobe RGB, ProPhoto, WideGamut, and ACES, plus the option to just deliver XYZ, and the gamut transform is done in-code. The color primaries for each of these spaces are well-disseminated. The tone part of the transform is a separate parameter, and can be nulled out if desired. With those and the camera primaries, contained in a humongous table named adobe_coeff (named so because most of the information comes from the Adobe DNG Converter), dcraw does the appropriate color transform prior to output.
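To make those mechanics concrete, here’s a toy Python sketch (my own illustration, not dcraw code, and the matrix is invented - the real per-camera numbers come from adobe_coeff) of the linear-matrix-plus-separate-tone-step pipeline described above:

```python
# Toy sketch of a dcraw-style output conversion: a linear 3x3 matrix from
# camera space to the output primaries, followed by a separate tone step.
# The matrix below is invented for illustration; dcraw derives the real
# one per camera from its adobe_coeff table.

CAM_TO_SRGB = [
    [ 1.80, -0.70, -0.10],
    [-0.20,  1.40, -0.20],
    [ 0.05, -0.55,  1.50],
]  # each row sums to 1.0, so a neutral (g, g, g) input stays neutral

def apply_matrix(m, rgb):
    """Multiply a 3x3 matrix by a linear RGB triple."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

def srgb_tone(v):
    """sRGB transfer curve; replace with identity to 'null out' the tone part."""
    v = min(max(v, 0.0), 1.0)
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def camera_to_srgb(raw_rgb):
    return [srgb_tone(c) for c in apply_matrix(CAM_TO_SRGB, raw_rgb)]
```

A neutral camera value like (0.5, 0.5, 0.5) passes through the matrix unchanged and only picks up the tone curve - which is why the tone part can be treated as an independent, nullable parameter.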

In rawproc, I use dcraw-style camera data, but I use the oh-so-well-implemented LittleCMS to build internal ICC profiles from them, as well as file-based output profiles (Adobe, Prophoto, sRGB, Rec2020, etc., from @elle’s collection) and the LCMS cmsTransform() routine to do the color transform.
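For anyone curious what “building a profile from camera data” boils down to mathematically, here’s a pure-Python sketch (not LittleCMS code) of deriving the linear RGB→XYZ matrix from the chromaticities of the primaries and the white point - essentially the matrix a matrix-shaper ICC profile stores. sRGB/D65 numbers are used so the result can be checked against the published sRGB matrix:

```python
# Sketch of the math behind a matrix-shaper RGB profile: derive the
# linear RGB -> XYZ matrix from the chromaticities of the primaries and
# the white point. This is conceptually what profile-building code does
# internally; sRGB primaries with a D65 white point are used as a check.

def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to an XYZ triple with Y = 1."""
    return [x / y, 1.0, (1 - x - y) / y]

def mat_inv(m):
    """Invert a 3x3 matrix via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rgb_to_xyz_matrix(r_xy, g_xy, b_xy, w_xy):
    cols = [xy_to_XYZ(*r_xy), xy_to_XYZ(*g_xy), xy_to_XYZ(*b_xy)]
    m = [[cols[j][i] for j in range(3)] for i in range(3)]  # primaries as columns
    s = mat_vec(mat_inv(m), xy_to_XYZ(*w_xy))  # scale so R=G=B=1 maps to white
    return [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

# sRGB primaries and D65 white point:
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
```

The first row of M comes out as roughly 0.4124, 0.3576, 0.1805, matching the well-known sRGB-to-XYZ matrix. Swap in measured camera primaries and the same procedure gives a camera matrix.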

Just to give you a general idea of the variation in mechanics…

Glenn, thanks for clarification.

So the manufacturer of the camera also has to build into the firmware a sort of ICC profile to transform the raw data into, e.g., a JPG in sRGB. Presumably this is not done individually for each camera unit, but globally for the whole model series? However, a flatfield would have to be supplied specifically for each detector…

Hermann-Josef

I wish they did. But I don’t know of a single camera that includes camera primaries in its metadata. If someone knows of one, I’d be interested in hearing…

Nor have I seen anything related to flat-field correction in camera metadata. The black point and its subtraction that I’ve discussed in the linear thread might relate, as it effectively removes from consideration pixels in that regime. Some cameras do have a sort of “High-ISO Noise Reduction” that will shoot a flatfield after the normal exposure and apply it somewhere (hopefully in the JPEG processing), but I don’t think that second image is supplied in the metadata. Again, others’ experiences?
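Since flat fields and black subtraction both came up, here’s a toy sketch (made-up numbers, 1-D “images” instead of real sensor data) of what the two corrections do arithmetically:

```python
# Toy sketch of dark/black subtraction and flat-field correction on raw
# sensor values. Real pipelines work per color channel on 2-D data, but
# the arithmetic is the same. All numbers are invented for illustration.

def dark_subtract(raw, dark):
    """Subtract a dark frame (or a scalar black point), clipping at 0."""
    return [max(r - d, 0) for r, d in zip(raw, dark)]

def flat_field(raw, flat):
    """Divide by a normalized flat field to even out pixel response."""
    mean_flat = sum(flat) / len(flat)
    return [r * mean_flat / f for r, f in zip(raw, flat)]

raw  = [110, 210, 160, 260]   # measured values for a uniform scene
dark = [10,  10,  10,  10]    # black level / dark frame
flat = [100, 200, 150, 250]   # uneven pixel response to a uniform target

corrected = flat_field(dark_subtract(raw, dark), flat)
```

After both corrections, the uniform scene comes out uniform (175 everywhere in this toy case) - which is exactly why a flat field has to be measured per detector, not per model.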

I’m not sure, but I THINK Pentax’s forced auto-DFS in older cameras did apply to the raw data - you didn’t get the dark frame and the exposure as separate files.

That auto-DFS algorithm is one of the things that caused me to eventually leave Pentax despite a heavy investment in glass.

Cameras that have native DNG support should, in theory, include the color matrix in their metadata, since it’s required by the spec. But native DNG happens to be seen most often in cheap Chinesium (not always, but very frequently), and it seems these manufacturers regularly botch their implementation of the DNG spec by embedding a vastly wrong matrix - for example the Xiaomi Mi Sphere (see the “Better Color Representation” section of https://sites.google.com/view/h360/misphere-converter ). At least the earlier firmwares of my DJI Phantom 2 Vision+ quadcopter had similarly broken color-matrix metadata in their DNGs.

I don’t know of any cameras with proprietary raw formats that embed color matrix data (although maybe I just haven’t noticed). As a result, many open-source programs default to the color matrix that Adobe DNG Converter spits out for that camera model - see https://github.com/darktable-org/darktable/blob/master/src/external/adobe_coeff.c for example.

I don’t know of any camera manufacturer that does per-unit profiling - unless you profile your own camera, you’ll be using a color matrix intended to be “close enough” for all units of that particular model. (The color matrix may change a little bit due to manufacturing tolerances, but it will change more significantly as a result of design changes to the CFA or other aspects of the sensor)
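For context on those DNG matrices: a DNG ColorMatrix maps XYZ to camera space, so raw converters have to invert it to go from camera to XYZ. A sketch with an invented matrix (not any real camera’s - a botched matrix like the ones mentioned above would invert just as happily, it would simply map colors to the wrong places):

```python
# A DNG ColorMatrix maps XYZ -> camera space; raw converters invert it
# to get camera -> XYZ. The matrix below is invented for illustration
# and does not correspond to any real camera.

XYZ_TO_CAM = [
    [ 0.90, -0.30,  0.05],
    [-0.25,  1.10,  0.10],
    [ 0.02, -0.15,  0.80],
]

def mat_inv(m):
    """Invert a 3x3 matrix via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

CAM_TO_XYZ = mat_inv(XYZ_TO_CAM)
```

Multiplying the two matrices back together gives the identity, which is an easy sanity check - but it says nothing about whether the matrix itself was measured correctly in the first place.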

From what I understand, the situation is somewhat messy. The colour transformation need not be stored in the metadata. The firmware could use it internally without reporting it to the outside…

There also seems to be a great difference from camera model to camera model. After I profiled my Sony F828, the colours did not change noticeably. However, after profiling still shots from my video camera, the colours improved a lot.

Hermann-Josef

PS: Could you please explain the acronyms DFS and CFA?

I think this RawPedia page (Color Management) is what you’re looking for.

As I see it, each manufacturer knows its sensors, and thus knows what the primaries are for a certain camera model. Once hardcoded in the electronics, the camera knows exactly how to convert the image to sRGB or AdobeRGB. But as @Entropy512 said, unless you profile your camera, you will be using «close enough» primaries.

DFS: dark-field (or frame) subtraction https://en.wikipedia.org/wiki/Dark-field_subtraction

CFA: color filter array https://en.wikipedia.org/wiki/Color_filter_array
