Question about raw data from digital cameras

Hello,
The raw data from a digital camera are in the camera’s colour space, i.e. device-dependent colours. If I set the camera to sRGB or AdobeRGB, the JPG images supposedly come out in these colour spaces. How is that done? Somewhere there must be an ICC profile to transform the device colours to the standard colour spaces. The same question holds for demosaicing in RT: here, too, an ICC profile must come into play. Where does it come from? I looked in RawPedia but did not find the answer.

Hermann-Josef

Yes, that transform has to be done, but not necessarily with a full ICC profile.

dcraw contains hard-coded, D65-anchored matrices for sRGB, Adobe, ProPhoto, WideGamut and ACES, plus the option to just deliver the data in XYZ; the gamut transform is done in-code. The color primaries for each are well-disseminated. The tone part of the transform is a separate parameter and can be nulled out if desired. With those and the camera primaries, contained in a humongous table named adobe_coeff (so named because most of the information comes from the Adobe DNG Converter), dcraw does the appropriate color transform prior to output.
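To make the mechanics concrete, here is a minimal sketch (not dcraw’s actual code) of what that in-code transform amounts to: after white balance and demosaicing, every pixel is multiplied by a 3×3 camera-to-output matrix, which dcraw builds from the adobe_coeff entry for the camera and the chosen output primaries.

```c
/* Minimal sketch, not dcraw's actual code: apply a 3x3 camera-to-output
 * matrix to an interleaved RGB float buffer. dcraw derives such a matrix
 * from the camera's adobe_coeff entry and the selected output primaries. */
#include <stddef.h>

static void apply_color_matrix(float *pix, size_t npixels, const float m[3][3])
{
    for (size_t i = 0; i < npixels; i++, pix += 3) {
        const float r = pix[0], g = pix[1], b = pix[2];
        pix[0] = m[0][0] * r + m[0][1] * g + m[0][2] * b;
        pix[1] = m[1][0] * r + m[1][1] * g + m[1][2] * b;
        pix[2] = m[2][0] * r + m[2][1] * g + m[2][2] * b;
    }
}
```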

In rawproc, I use dcraw-style camera data, but I use the oh-so-well-implemented LittleCMS to build internal ICC profiles from them, together with file-based output profiles (Adobe, ProPhoto, sRGB, Rec2020, etc., from @elle’s collection), and the LCMS cmsDoTransform() routine to do the color transform.
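For the curious, here’s a rough sketch of that approach, assuming LittleCMS 2 (lcms2.h). The camera primaries and white point below are illustrative placeholders, not measured data for any real sensor, and rawproc’s real code differs in detail:

```c
#include <lcms2.h>

/* Build an on-the-fly "camera" profile from primaries + white point and
 * transform a float RGB buffer to sRGB. Placeholder numbers throughout. */
void camera_to_srgb(const float *in, float *out, cmsUInt32Number npixels)
{
    cmsCIExyY       d65 = { 0.3127, 0.3290, 1.0 };        /* white point */
    cmsCIExyYTRIPLE pri = { { 0.68, 0.32, 1.0 },          /* "camera" red   */
                            { 0.26, 0.68, 1.0 },          /* "camera" green */
                            { 0.15, 0.06, 1.0 } };        /* "camera" blue  */
    cmsToneCurve *linear    = cmsBuildGamma(NULL, 1.0);   /* raw data is linear */
    cmsToneCurve *curves[3] = { linear, linear, linear };

    cmsHPROFILE cam  = cmsCreateRGBProfile(&d65, &pri, curves);
    cmsHPROFILE srgb = cmsCreate_sRGBProfile();
    cmsHTRANSFORM xf = cmsCreateTransform(cam, TYPE_RGB_FLT,
                                          srgb, TYPE_RGB_FLT,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);

    cmsDoTransform(xf, in, out, npixels);

    cmsDeleteTransform(xf);
    cmsCloseProfile(cam);
    cmsCloseProfile(srgb);
    cmsFreeToneCurve(linear);
}
```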

Just to give you a general idea of the variation in mechanics…

Glenn, thanks for clarification.

So the manufacturer of the camera also has to build a sort of ICC profile into the firmware to transform the raw data into, e.g., sRGB JPGs. Presumably this is not done specifically for each individual camera but globally for the whole model series? However, a flatfield would have to be supplied specifically for each detector…

Hermann-Josef

I wish they did. But I don’t know of a single camera that includes camera primaries in its metadata. If someone does, I’d be interested in hearing…

Nor have I seen anything related to flat-field correction in camera metadata. The blackpoint and its subtraction, which I’ve discussed in the linear thread, might relate, as it effectively removes from consideration pixels in that regime. Some cameras do have a sort of “High-ISO Noise Reduction” that will shoot a dark frame after the normal exposure and apply it somewhere (hopefully in the JPEG processing), but I don’t think that second image is supplied in the metadata. Again, others’ experiences?

I’m not sure, but I THINK Pentax’s forced auto-DFS in older cameras did apply to the RAW data - you didn’t get the darkframe and the exposure separately.

That auto-DFS algorithm is one of the things that caused me to eventually leave Pentax despite a heavy investment in glass.

Cameras that have native DNG support in theory include the color matrix in their metadata, since it’s required by the spec. But native DNG happens to be seen more often in cheap Chinesium (not always, but very frequently), and it seems like these manufacturers always botch their implementation of the DNG spec by embedding a vastly wrong matrix - the Xiaomi Mi Sphere, for example (see Better Color Representation of H360 - MiSphere Converter for Android). Also, at least earlier firmwares of my DJI Phantom 2 Vision+ quadcopter had a similarly broken color matrix in the metadata of their DNGs.

I don’t know of any cameras with proprietary raw formats that embed color matrix data (although maybe I just haven’t noticed). As a result, many open-source programs default to the color matrix that Adobe DNG Converter spits out for that camera model - see darktable/adobe_coeff.c at master · darktable-org/darktable · GitHub for example.

I don’t know of any camera manufacturer that does per-unit profiling - unless you profile your own camera, you’ll be using a color matrix intended to be “close enough” for all units of that particular model. (The color matrix may change a little bit due to manufacturing tolerances, but it will change more significantly as a result of design changes to the CFA or other aspects of the sensor.)

From what I understand, the situation is somewhat messy. The colour transformation need not be stored in the metadata. The firmware could use it internally without reporting it to the outside…

There also seems to be a great difference from camera model to camera model. After I profiled my Sony F828, the colours did not change noticeably. However, after profiling the still shots of my video camera, the colours improved a lot.

Hermann-Josef

PS: Could you please explain the acronyms DFS and CFA?

I think this Rawpedia page (Color Management) is what you’re looking for.

As I see it, each manufacturer knows its sensors, and thus knows the primaries for a certain camera model. Once these are hardcoded in the electronics, the camera will know exactly how to convert the image to sRGB or AdobeRGB. But as @Entropy512 said, unless you profile your camera, you will be using «close enough» primaries.

DFS: dark-field (or frame) subtraction (Dark-frame subtraction - Wikipedia)

CFA: color filter array (Color filter array - Wikipedia)

A color matrix is baked into the camera firmware; many are included in dcraw and in Adobe DNG Converter, ICC and DCP input profiles contain them, and camconst.json has them too.

The input color matrix converts from the camera’s device-dependent space to a device-independent XYZ space. From there it can be converted to sRGB, AdobeRGB, or anything else.
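Only the first matrix is camera-specific; the second step uses well-known published constants. As an illustration, composing a hypothetical camera-to-XYZ matrix with the standard D65 XYZ-to-linear-sRGB matrix gives a single camera-to-sRGB matrix:

```c
/* The standard D65 XYZ -> linear sRGB matrix (published constants). */
static const float xyz_to_srgb[3][3] = {
    {  3.2406f, -1.5372f, -0.4986f },
    { -0.9689f,  1.8758f,  0.0415f },
    {  0.0557f, -0.2040f,  1.0570f },
};

/* out = a * b; a hypothetical cam_to_xyz matrix (from a DCP/ICC input
 * profile or an adobe_coeff-style table) would be the 'b' argument:
 *   compose(xyz_to_srgb, cam_to_xyz, cam_to_srgb);                     */
static void compose(const float a[3][3], const float b[3][3], float out[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j] + a[i][2] * b[2][j];
}
```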

It’s done per camera model.

No.

That would be a dark frame.

@Morgan_Hardwood

So how is the flatfield correction done in practice?

Hermann-Josef

Does this help? Flat-Field - RawPedia

@Thanatomanic
Not really, since I am wondering how the pixel-to-pixel variations are corrected for in a digital camera without post-processing. That one can do this in post-processing, as in RT, is obvious. However, I would like to differentiate between sensor characteristics and optical effects like vignetting.

Hermann-Josef

They aren’t. Do you have any information to the contrary?

I’m not sure what @Jossie is asking. Some cameras (e.g. the Nikon D800) have in-camera vignette correction, which can be set to high, normal, low or off. I assume this only affects JPEGs, not raw, but I haven’t tested that. I also assume it uses a mathematical model of vignetting (per lens model, possibly per channel, possibly per aperture setting) rather than proper flat-field images.
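Purely to illustrate what such a model-based correction could look like (the actual in-camera algorithm isn’t public, so this is a hypothetical sketch), a radial polynomial gain is one common form:

```c
/* Hypothetical radial vignette-correction gain, not any camera's actual
 * model. r is the distance from the optical centre, normalised to 1.0 at
 * the image corner; k1 and k2 would come from a per-lens (and possibly
 * per-aperture, per-channel) table. The pixel value is multiplied by it. */
static float vignette_gain(float r, float k1, float k2)
{
    const float r2 = r * r;
    return 1.0f + k1 * r2 + k2 * r2 * r2;
}
```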

@Morgan_Hardwood
This is strange. With astronomical cameras, one of the first corrections is for flatfield, i.e. the pixel-to-pixel variations. Are these variations negligibly small in digital cameras?

Hermann-Josef

Perhaps there was some confusion in terminology here - what RT calls “flatfield correction” includes potential lens effects.

It’s similar to, but not exactly the same as, darkfield correction (and I apologize for mixing the two up myself) - which can be considered a subset of flatfield correction, focusing ONLY on sensor behaviors that can be determined by taking an exposure with the shutter closed (thus eliminating anything involving the lens).

Some cameras definitely do perform automatic dark field subtraction - Pentax forced this in many of their cameras if the exposure was longer than a particular threshold.

Sony has been identified as doing some form of RAW scaling/correction (theorized to be a compensation for microlens-induced vignetting) with lenses that report optical profile data to the body. (The trigger is theorized to be the same optical-formula data used to correct off-center PDAF sites, but no one has publicly reverse-engineered the optical-formula reporting protocol to the point where someone could run experiments by feeding a body bogus data.)

So ideally, raw is actually raw - but there are plenty of examples where camera manufacturers have been caught “cooking” the raw data. Two examples were given above; Sony’s “star eater” is another. (Star Eater appears to be an alternative method of attempting to correct for hot pixels in long exposures without taking a darkframe and subtracting it out.)

@Entropy512

This is not correct. Dark subtraction and flatfield correction are two very different things: dark subtraction corrects the dark current in an additive way, whereas flatfield is a multiplicative correction which, in the strict sense, corrects sensitivity variations from pixel to pixel and across the detector. Mathematically it is the same as vignetting correction, since both are multiplicative operations. But physically, flatfield and vignetting have nothing in common: the first is a detector property, the second an optical property.
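In code, the distinction is simply additive versus multiplicative; a generic sketch (not any camera’s firmware) of the two corrections:

```c
#include <stddef.h>

/* Dark subtraction is additive, flat-field correction is multiplicative.
 * flat_mean is the mean of the (already dark-subtracted) flat frame, so
 * the overall signal level is preserved. */
static void calibrate(float *img, const float *dark, const float *flat,
                      float flat_mean, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        const float signal = img[i] - dark[i];      /* additive: dark current      */
        img[i] = signal * (flat_mean / flat[i]);    /* multiplicative: sensitivity */
    }
}
```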

My Sony F828 definitely does a dark correction once the exposure time is above a certain limit: after the shutter is closed, it takes a second exposure of the same length, which is subtracted from the data. I would assume that it is also subtracted by the firmware from the raw data.

It may be that the detectors in digital cameras are quite homogeneous in their sensitivity across the detector (CCD or CMOS does not make a fundamental difference here). They presumably are bulk chips, i.e. not thinned for enhanced blue sensitivity as are CCDs for astronomical applications. Since astronomical CCDs are cooled, dark current is small and usually need not be corrected for (except maybe for spectroscopy, where the background is low). Just for completeness, here is an example of what I mean: an astronomical image before and after flatfield correction.


Above left is the raw science image; above right is the flatfield exposure, which shows a gradient across the field of view due to sensitivity variations. Below left is the flatfield-corrected image, showing interference fringes originating in the thinned CCD due to the night-sky emission lines. Below right is the final image, where the interference fringes have been modelled and subtracted.

All the gory details are described in the EMVA standard 1288, “Standard for Characterization of Image Sensors and Cameras”.

Hermann-Josef

Thanks for the clarification.

Yup. Pentax did the exact same thing. I think newer Sonys do perform it if you turn it on; they call it Long Exposure Noise Reduction. I’ve made a point of turning LENR off in all of my cameras (if I want to do DFS, I’ll do it manually), so I may be wrong about exactly how LENR behaves.

There’s been a long-running controversy because even when LENR is off, if the exposure is longer than 3.2 seconds, a lot of Sonys will “cook” the RAW in such a way as to filter out likely hot pixels by applying a spatial filter. This can also “eat” a star in an astrophotography image, hence “Star Eater”. Apparently Nikon did something similar in some of their cameras.
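To illustrate why such a filter can “eat” stars (a hypothetical sketch; Sony’s actual algorithm has not been published): any neighbour-based hot-pixel suppression will clamp an isolated bright pixel, and a small, sharp star looks exactly like one.

```c
/* Hypothetical hot-pixel suppression: if a pixel is much brighter than all
 * of its same-colour neighbours, assume it is a hot pixel and clamp it.
 * A tiny, sharp star meets the same criterion and gets "eaten". */
static float suppress_hot_pixel(float centre, const float *neighbours,
                                int n, float threshold)
{
    float max_n = neighbours[0];
    for (int i = 1; i < n; i++)
        if (neighbours[i] > max_n)
            max_n = neighbours[i];
    return (centre > max_n * threshold) ? max_n : centre;
}
```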

I do recall a discussion I saw a year or two ago regarding the various ways in which manufacturers have been caught “cooking” their RAW data, and some of that cooking was effectively applying sensor calibration corrections. It may be that many cameras ARE doing something like this internally; it’s just hidden from you.

I will say that “RAW” has been definitively proven on many cameras to be at least partially cooked and not quite that raw.

@Entropy512

This is exactly what I wanted to find out in this thread. :slight_smile:

Hermann-Josef

At DPReview, Iliah Borg has intimated his suspicions regarding white balance pre-scaling in some cameras.

Thing is, without some kind of knowledge of the real ADUs captured at the ADC output (geesh, or even the raw analog values from the photosite, if the shenanigans are occurring before digital conversion), we’ll never know unless the vendors “give it up”. And I’m not ready to void my Z6 warranty and disassemble it enough to stick probes into the circuitry, yet… :smile:
