Question about raw data from digital cameras

I’m not sure, but I THINK Pentax’s forced auto-DFS in older cameras did apply to the RAW data - you got only the pre-subtracted result, not the dark frame and the exposure separately.

That auto-DFS algorithm is one of the things that caused me to eventually leave Pentax despite a heavy investment in glass.

Cameras with native DNG support should, in theory, include the color matrix in their metadata, since the spec requires it. However, native DNG happens to be seen most often in cheap Chinesium (not always, but very frequently), and these manufacturers seem to always botch their implementation of the DNG spec by embedding a vastly wrong matrix - the Xiaomi Mi Sphere is one example (see Better Color Representation of H360 - MiSphere Converter for Android), and at least the earlier firmwares of my DJI Phantom 2 Vision+ quadcopter had similarly broken color matrix metadata in their DNGs.
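Checking what a DNG actually claims is straightforward. Here is a minimal sketch, assuming exiftool is installed and on your PATH; the file name is hypothetical:

```python
# Sketch: inspect the color matrix embedded in a DNG.
# Assumes exiftool is on PATH; "suspect.dng" is a hypothetical file name.
import json
import subprocess

out = subprocess.run(
    ["exiftool", "-j", "-ColorMatrix1", "-ColorMatrix2", "suspect.dng"],
    capture_output=True, text=True, check=True,
)
tags = json.loads(out.stdout)[0]

# Per the DNG spec, ColorMatrix1/2 map XYZ to camera space; a botched
# implementation shows up as implausible coefficients here.
print(tags.get("ColorMatrix1"))
print(tags.get("ColorMatrix2"))
```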

I don’t know of any cameras with proprietary raw formats that embed color matrix data (although maybe I just haven’t noticed). As a result, many open-source programs default to the color matrix that Adobe DNG Converter spits out for that camera model - see darktable/adobe_coeff.c at master · darktable-org/darktable · GitHub for example.

I don’t know of any camera manufacturer that does per-unit profiling - unless you profile your own camera, you’ll be using a color matrix intended to be “close enough” for all units of that particular model. (The color matrix may change a little bit due to manufacturing tolerances, but it will change more significantly as a result of design changes to the CFA or other aspects of the sensor)

From what I understand, the situation is somewhat messy. The colour transformation need not be stored in the metadata. The firmware could use it internally without reporting it to the outside…

There also seems to be a great difference from camera model to camera model. After I profiled my Sony F828, the colours did not change noticeably. However, after profiling the still shots of my video camera, the colours improved a lot.

Hermann-Josef

PS: Could you please explain the acronyms DFS and CFA?

I think this Rawpedia page (Color Management) is what you’re looking for.

As I see it, each manufacturer knows its sensors, and thus knows what the primaries are for a certain camera model. Once hardcoded in the electronics, the camera knows exactly how to convert the image to sRGB or AdobeRGB. But as @Entropy512 said, unless you profile your camera, you will be using «close enough» primaries.

DFS: dark-field (or dark-frame) subtraction (Dark-frame subtraction - Wikipedia)

CFA: color filter array (Color filter array - Wikipedia)


A color matrix is baked into the camera firmware, many are included in dcraw and in Adobe DNG Converter, ICC and DCP input profiles contain them, and camconst.json has them.

The input color matrix converts from the camera’s device-dependent space to a device-independent XYZ space. From there it can be converted to sRGB, AdobeRGB, or anything else.
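For anyone wondering what that looks like in practice, here is a minimal numpy sketch of the two-step conversion. The camera matrix is a placeholder rather than a real calibration, and a real pipeline also handles white balance, demosaicing and gamma encoding:

```python
# Sketch: device-dependent camera RGB -> XYZ -> linear sRGB.
# cam_to_xyz is a PLACEHOLDER matrix, not a real camera calibration.
import numpy as np

cam_to_xyz = np.array([
    [0.6, 0.3, 0.1],   # placeholder coefficients
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])

# Standard XYZ (D65) -> linear sRGB matrix from the sRGB specification.
xyz_to_srgb = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def camera_rgb_to_srgb(rgb):
    """Map a demosaiced, white-balanced camera RGB triple to linear sRGB."""
    xyz = cam_to_xyz @ np.asarray(rgb, dtype=float)
    return np.clip(xyz_to_srgb @ xyz, 0.0, 1.0)

print(camera_rgb_to_srgb([0.5, 0.4, 0.3]))
```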

It’s done per camera model.

No.

That would be a dark frame.


@Morgan_Hardwood

So how is the flatfield correction done in practice?

Hermann-Josef

Does this help? Flat-Field - RawPedia

@Thanatomanic
Not really, since I am wondering how the pixel-to-pixel variations are corrected in a digital camera without post-processing. That one can do this in post-processing, as in RT, is obvious. However, I would like to differentiate between sensor characteristics and optical effects like vignetting.

Hermann-Josef

They aren’t. Do you have any information to the contrary?

I’m not sure what @Jossie is asking. Some cameras (e.g. the Nikon D800) have in-camera vignette correction, which can be set to high, normal, low or off. I assume this only affects JPEGs, not raw, but I haven’t tested that. I also assume it uses a mathematical model of vignetting (per lens model, possibly per channel, possibly per aperture setting) rather than proper flat-field images.
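For what it’s worth, lens-correction software commonly models vignetting as a radial polynomial; lensfun, for example, uses falloff = 1 + k1·r² + k2·r⁴ + k3·r⁶. A rough sketch with made-up coefficients:

```python
# Sketch: radial polynomial vignetting correction of the kind
# lens-correction libraries (e.g. lensfun) use. k1..k3 below are
# made-up placeholders, not coefficients for any real lens.
import numpy as np

def devignette(img, k1=-0.3, k2=0.05, k3=0.0):
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # r^2 normalized so that r = 1 at the image corners.
    r2 = (((x - w / 2) ** 2 + (y - h / 2) ** 2)
          / ((w / 2) ** 2 + (h / 2) ** 2))
    falloff = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return img / falloff[..., None] if img.ndim == 3 else img / falloff
```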

@Morgan_Hardwood
This is strange. With astronomical cameras, one of the first corrections is the flatfield, i.e. the pixel-to-pixel variations. Are these variations negligibly small in digital cameras?

Hermann-Josef

Perhaps there was some confusion in terminology here - what RT calls “flatfield correction” includes potential lens effects.

It’s similar to, but not exactly the same as, darkfield correction (and I apologize for mixing the two up myself) - darkfield correction can be considered a subset of flatfield correction, focusing ONLY on sensor behaviors that can be determined by taking an exposure with the shutter closed (thus eliminating anything involving the lens).

Some cameras definitely do perform automatic dark field subtraction - Pentax forced this in many of their cameras if the exposure was longer than a particular threshold.

Sony has been identified as doing some form of RAW scaling/correction (theorized to be a compensation for microlens-induced vignetting) with lenses that report optical profile data to the body. (The trigger is theorized to be the same optical-formula data used to correct off-center PDAF sites, but no one has publicly reverse engineered the optical-formula reporting protocol to the point where experiments could be run by feeding a body bogus data.)

So ideally, raw is actually raw - but there are plenty of examples where camera manufacturers have been caught “cooking” the raw data. Two examples were given above; Sony’s “star eater” is another. (Star Eater appears to be an alternative method of correcting hot pixels in long exposures without taking a darkframe and subtracting it out.)

@Entropy512

This is not correct. Dark subtraction and flatfield correction are two very different things: dark subtraction corrects the dark current additively, whereas flatfield is a multiplicative correction which, in the strict sense, corrects sensitivity variations from pixel to pixel and across the detector. Mathematically it is the same as vignetting correction, since both are multiplicative operations, but physically flatfield and vignetting have nothing in common: the first is a detector property, the second is an optical property.
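In code, the difference really is just one subtraction versus one division. A minimal numpy sketch of the standard calibration, assuming raw, dark and flat are hypothetical same-shaped float arrays:

```python
# Sketch: additive dark subtraction vs. multiplicative flatfield
# correction, as in standard astronomical calibration. dark is an
# exposure with the shutter closed, flat an exposure of a uniformly
# illuminated target.
import numpy as np

def calibrate(raw, dark, flat):
    # Additive correction: subtract the dark current / bias signal.
    science = raw - dark
    # Multiplicative correction: divide by the flat, normalized to a
    # mean of 1 so the overall exposure level is preserved.
    flat_corr = flat - dark          # the flat contains dark current too
    flat_norm = flat_corr / flat_corr.mean()
    return science / flat_norm
```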

My Sony F828 definitely does a dark correction once the exposure time is above a certain limit: after the shutter has closed, it takes a second exposure of the same length, which is subtracted from the data. I would assume that it is also subtracted from the raw data by the firmware.

It may be that the detectors in digital cameras are quite homogeneous in their sensitivity across the detector (CCD or CMOS makes no fundamental difference here). They are presumably bulk chips, i.e. not thinned for enhanced blue sensitivity as CCDs for astronomical applications are. Since astronomical CCDs are cooled, dark current is small and usually need not be corrected for (except maybe in spectroscopy, where the background is low). Just for completeness, here is an example of an astronomical image before and after flatfield correction.


Above left is the raw science image; above right is the flatfield exposure, which shows a gradient across the field of view due to sensitivity variations. Below left is the flatfield-corrected image, showing interference fringes originating in the thinned CCD due to the night-sky emission lines. Below right is the final image, where the interference fringes have been modelled and subtracted.

All the gory details are described in the EMVA standard 1288, “Standards for characterization of image sensors and cameras”.

Hermann-Josef

Thanks for the clarification.

Yup, Pentax did the exact same thing. I think newer Sonys perform it if you turn it on; they call it Long Exposure Noise Reduction. I’ve made a point of turning LENR off in all of my cameras (if I want to do DFS, I’ll do it manually), so I may be wrong about exactly how LENR behaves.

There’s been a long-running controversy because even when LENR is off, if the exposure is longer than 3.2 seconds, a lot of Sonys will “cook” the RAW by applying a spatial filter that removes likely hot pixels. This can also “eat” a star in an astrophotography image, hence “Star Eater”. Apparently Nikon did something similar in some of their cameras.
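Nobody outside Sony knows the actual filter, but to make the idea concrete, here is a sketch of a generic spatial hot-pixel filter of the same flavour - my own illustration, definitely not Sony’s algorithm:

```python
# Sketch of a generic spatial hot-pixel filter, NOT Sony's actual
# algorithm: clamp any pixel that towers over its brightest
# same-channel neighbour. A dim star landing on a single photosite is
# indistinguishable from a hot pixel to such a filter, hence "Star Eater".
import numpy as np

def suppress_hot_pixels(plane, ratio=4.0):
    """plane: one CFA colour plane as a 2-D float array."""
    padded = np.pad(plane, 1, mode="edge")
    # Collect the 8 neighbours of every pixel via shifted views.
    shifts = [padded[1 + dy:padded.shape[0] - 1 + dy,
                     1 + dx:padded.shape[1] - 1 + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    neighbour_max = np.max(shifts, axis=0)
    # Replace suspected hot pixels with their brightest neighbour.
    hot = plane > ratio * neighbour_max
    return np.where(hot, neighbour_max, plane)
```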

I do recall a discussion I saw a year or two ago regarding the various ways in which manufacturers have been caught “cooking” their RAW data, and some of that was effectively applying sensor calibration corrections. It may be that many cameras ARE doing something like this internally; it’s just hidden from you.

I will say that “RAW” has been definitively proven on many cameras to be at least partially cooked and not quite that raw.

@Entropy512

This is exactly what I wanted to find out in this thread. :slight_smile:

Hermann-Josef

At DPReview, Iliah Borg has intimated his suspicions regarding white balance pre-scaling in some cameras.

Thing is, without some kind of knowledge of the real ADUs captured at the ADC output (geesh, or even the raw analog values from the photosite, if the shenanigans are occurring before digital conversion), we’ll never know unless the vendors “give it up”. And I’m not ready to void my Z6 warranty by disassembling it far enough to stick probes into the circuitry, yet… :smile:


Even if you do that, without decapping the ISP and microprobing it, you might have issues.

(You’re better off figuring out how to crack the firmware decryption, decrypt it, and feed the results to IDA, assuming it’s a “known” CPU architecture. That’s beyond my skillset, but not necessarily beyond the skillset of the likes of leegong… To explain who he is, it’s probably best to just say “google his name”, or find any thread related to camera reverse engineering: if it’s related to REing a camera-related component, leegong has probably stopped by. He provided some INCREDIBLY useful insight on the Sony E-mount protocol, obtained via firmware disassembly, that corrected some rather “how the hell did I get that one wrong?” bogus assumptions/observations I had made…)

In an ideal situation, you could feed exact numbers into the camera’s image processing pipeline, but in the real world you can’t, for example, easily feed a camera a single fake hot pixel.

While it would be nice to assume that RAW is actually RAW, there’s no shortage of evidence that RAW cooking happens on a regular basis, such as:

  1. The white balance prescaling Iliah is fairly certain is happening, based on certain statistical anomalies in raw histogram analysis (see the sketch after this list)
  2. Effectively “smoking gun” evidence of spatial noise reduction occurring - basically universal for any camera that advertises “extended ISO” modes; I also recall that one of the DPR gurus recently caught a camera manufacturer doing it at shockingly low ISO settings.
  3. Spatial hotpixel rejection
  4. Automatic dark frame subtraction
  5. Some Sony cameras appear to perform part of their vignetting correction on RAW data. The current prevailing theory is that they compensate for microlens interaction with the lens based on the exit pupil distance reported by the lens, leaving the more lens-specific (as opposed to sensor-dependent) behaviors uncorrected.
  6. Sony replaces OSPDAF sensel data with data interpolated from adjacent pixels - good for analysts who want to figure out what OSPDAF sensel structure a new Sony camera has.
  7. Someone who was (if I recall correctly) a retired camera firmware engineer indicated that some of the flatfield correction behaviors described by @Jossie may actually be fairly routine/common. This is from the unfortunate category of “I’m positive I saw someone say this, I even have a guess as to which of 4-5 people it was, but DPR’s search function sucks so badly I’ll never be able to find it again.”
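Regarding item 1: the statistical anomaly behind the white-balance-prescaling suspicion is easy to picture - multiplying integer raw values by a per-channel factor and re-quantizing leaves periodic gaps or pile-ups (“combing”) in the histogram. A rough sketch of how one might quantify that, illustrating the idea only (this is not Iliah’s actual method):

```python
# Sketch: quantify "combing" in a raw histogram, a telltale of integer
# values that were scaled and re-quantized in-camera. This illustrates
# the idea only; it is NOT the actual analysis used at DPR/RawDigger.
import numpy as np

def combing_score(raw_values, bits=14):
    """raw_values: integer array of raw ADUs from one channel."""
    counts = np.bincount(raw_values.ravel(), minlength=2 ** bits)
    # Look at a well-populated mid-range of the histogram.
    lo, hi = 2 ** (bits - 4), 2 ** (bits - 1)
    mid = counts[lo:hi].astype(float)
    # Pristine data has locally smooth bin counts; prescaling leaves
    # some bins systematically empty or doubled relative to neighbours.
    local_mean = np.convolve(mid, np.ones(5) / 5, mode="same")
    with np.errstate(divide="ignore", invalid="ignore"):
        rel = np.where(local_mean > 0, mid / local_mean, 1.0)
    return float(np.std(rel))  # higher = more comb-like
```

A score like this only means something relative to a camera known not to prescale, so you would compare files rather than read the number in isolation.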

I know of one case where I know EXACTLY who said something on DPR but can’t find the post in question… which would be nice to find again, because that post debunks a commonly regurgitated marketing-department-originated myth regarding how the Sigma MC-11 works when a Sigma Global Vision lens is attached.

@Entropy512
ISP = ?
DPR = ?
OSPDAF = ?

OnSensorPhaseDetectionAutoFocus ?


Sorry.
Image Signal Processor (often referred to by marketing names like EXPEED, DIGIC, BIONZ, etc. - although in cameras those names frequently refer to the combination of the ISP and an attached applications processor; for mobile phones/tablets, the ISP is almost always referred to by the CPU manufacturer as a separate subcomponent of the System-on-Chip - same chip, different intellectual property block/core)
DPR - DPReview.com
OSPDAF - @heckflosse got that one already
