Roles of camera and raw developer in determining color

I suspect that most here are already aware of this, but it’s a good, easy to understand, explanation.

It’s also a good virtual “bludgeon” if someone starts blathering about color science and camera “looks” :laughing:

3 Likes

(Aside: ever wonder why there’s an underscore character in front of some raw filenames? If it’s there, it indicates that the preview image is in the 1998 Adobe RGB color space; otherwise it’s in sRGB.)

I had no idea… the more you know. Thanks for sharing.

Edit: Those color deficiency stats are crazy. I had no idea 8% of men were color deficient.

1 Like

Good link and nice to see a Foveon mention!

Here’s an image using raw unconverted data from one of mine:

None of that pixelated Bayer CFA stuff …

I am not sure I would be convinced by this post if I did not know these things already. The article is very thin on actual details, contains no image comparisons, and just glosses over key parts.

Also, I don’t know why he mentions hot mirrors / IR filters at all. AFAIK most of them cut off the spectrum quite sharply around 700 nm. For practical purposes, they are as “perfect” as they need to be.

From the spectral responses I have seen, most camera dyes appear to be pretty decent approximations of S and M (“blue”, “green”), but the L response is not matched very well: there is an abrupt cutoff for “red” somewhere between 550 nm and 600 nm in camera sensors. But this is not problematic and can be corrected using a linear model in most cases.

In practice, almost the same “neutral” image can be recovered from raw files. And from then on, it is indeed just processing.
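
To make the “linear model” above concrete, here is a minimal sketch of fitting a 3×3 camera-RGB-to-XYZ correction with least squares. All patch values are invented for illustration; a real calibration would use measured chart patches under a known illuminant.

```python
# Minimal sketch: fit a 3x3 "camera RGB -> XYZ" correction with least squares.
# The patch data below is made up for illustration only.
import numpy as np

# Hypothetical linear camera RGB for a handful of chart patches (one row each).
camera_rgb = np.array([
    [0.42, 0.31, 0.20],
    [0.18, 0.20, 0.33],
    [0.55, 0.44, 0.12],
    [0.09, 0.13, 0.08],
    [0.70, 0.68, 0.61],
    [0.25, 0.10, 0.30],
])

# Reference XYZ values for the same patches (also invented here).
reference_xyz = np.array([
    [0.38, 0.35, 0.22],
    [0.19, 0.18, 0.40],
    [0.45, 0.47, 0.10],
    [0.10, 0.11, 0.09],
    [0.65, 0.69, 0.66],
    [0.22, 0.14, 0.35],
])

# Solve camera_rgb @ M ≈ reference_xyz in the least-squares sense.
M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)

# Applying M to any raw pixel lifts it into XYZ; the residuals on the chart
# show how well a purely linear model can do for this sensor.
residuals = camera_rgb @ M - reference_xyz
print("fitted matrix:\n", M)
print("max patch error:", np.abs(residuals).max())
```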

2 Likes

There’s a saying in academic writing, that out of a panel of three (male) reviewers, you’ve got a one-in-four chance that at least one of them has color vision deficiencies. So you better make sure not to graph results in red vs green, and always differentiate things by more than just hue.

(IIRC I read this first in Kovesi’s beautiful paper Good Colour Maps: How to Design Them)

4 Likes

From 1974-1990 I worked for a publisher of pictures, mostly greeting cards. It was part of my work (until I sidestepped into teach-yourself computer systems management) to order colour separation (for litho printing) and to accept/reject/comment on the proofs.

I think I was once quite good at colour matching and assessment. Maybe I was always wrong about that, or maybe these things just change with the times. But I was aware that, over the years, I was finding it harder, and feedback from respected colleagues said I was getting worse, rather than better, at it. And I hate those graphs with similar-hue lines!

Now I am a keen photographer, working with darktable and editing raws. I try very hard to get people’s skin tones something like right. But I don’t even have “the original” by my side when I am doing it. I try to eliminate the peculiar purple or orange tones that weird light can do to dark skin [as seen by camera]. I absolutely know that I do not have the colour memory to get their clothes “right.”

But hey, I am not running a portrait studio. I’m photographing musicians on stage. I aim for “realishness” and a nice picture, not realism. And I don’t get many complaints (and also-hey, the pics are freely given anyway).

Back in my publishing days I learned how many people think they can remember and judge colour accuracy. And no, if you are not looking at the original, under the same light, mostly we cannot.

4 Likes

Kasson’s main point is that the raw converter is more responsible than sensors for color accuracy.

There’s a similar misconception in the Foveon world from when they introduced a new sensor model, the F20 “Merrill”. At first, folks marveled about the detail and/or microcontrast, but that slowly morphed into “too sharp”, “halos”, na-ni-na, with the complaints mainly being about “the sensor”.

Now it is generally acknowledged that the problem lies with the proprietary raw converter Sigma Photo Pro v5, which for some reason applied much more sharpening than previous versions. I have tested that sensor’s raw output edge spread response and, like most sensors, it’s slightly soft.

As to color, the Foveon needs far more color correction than Bayer CFA sensors and probably X-trans, so Kasson’s POV is certainly true for Foveon sensors.

1 Like

Conflating the camera and the raw development is a fundamental error here, albeit one that is rarely stated explicitly. I think many photographers unconsciously perform this pernicious elision.

While this is true, for the end user in darktable (and other raw converters too) it will very much be a case of one camera looking better or worse (or at least different) than another. The biggest problem seems to be that no raw converter (that I know of) has succeeded in normalizing the output of all sensors/raw file formats/cameras.

The Adobe Standard DCPs give photographers a common, baseline interpretation of color that is consistent from camera to camera.

And if Adobe, with their budget and their calibration lab, has tried but not succeeded, why is this such a difficult problem to solve?

Support for dual-illuminant calibration would go a long way to mitigate this problem.

Since sensors have different spectral responses, this is an impossible problem to solve.

The spectrum of light entering the camera is an infinite-dimensional object (a function of the wavelength), which is compressed to 3 dimensions. No matter how you map this information, there will be scenarios where the same image will show up differently using different sensors; it cannot be otherwise.
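
A toy way to see this compression (using made-up Gaussian sensitivities, not real camera data): construct two spectra that differ only within one sensor’s null space, so that sensor cannot distinguish them while another one can.

```python
# Toy demonstration: a spectrum is a high-dimensional object, and each sensor
# projects it down to 3 numbers. Two spectra can be identical for sensor A
# yet differ for sensor B. All sensitivities here are invented.
import numpy as np

wavelengths = np.linspace(400, 700, 61)          # nm, 5 nm steps

def gaussians(centers, width=40.0):
    """Stack of Gaussian spectral sensitivities, one row per channel."""
    return np.exp(-0.5 * ((wavelengths[None, :] - np.array(centers)[:, None]) / width) ** 2)

sensor_a = gaussians([460, 540, 610])            # made-up "camera A" dyes
sensor_b = gaussians([450, 555, 600], width=35)  # made-up "camera B" dyes

# A smooth test spectrum, plus a perturbation taken from the null space of
# sensor A's response matrix: sensor A cannot see the difference at all.
spectrum_1 = 1.0 + 0.5 * np.sin(wavelengths / 40.0)
null_basis = np.linalg.svd(sensor_a)[2][3:]      # rows spanning A's null space
spectrum_2 = spectrum_1 + 0.2 * null_basis[0]

print("sensor A, spectrum 1:", sensor_a @ spectrum_1)
print("sensor A, spectrum 2:", sensor_a @ spectrum_2)   # same response (up to rounding)
print("sensor B, spectrum 1:", sensor_b @ spectrum_1)
print("sensor B, spectrum 2:", sensor_b @ spectrum_2)   # generally differs
```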

Camera bodies employ various heuristics to map colors to the OOC JPEG; they are not a simple LUT. I suspect that some may even involve “recognizing” objects (eg people, so that they can do skin colors well).

But once you are developing raw files, this becomes your job. A lot can be done using presets, but that still involves recognizing what is in the photo and aesthetic decisions about how colors should look.

3 Likes

I am ignorant in these matters but I have a question: Doesn’t the lens also impact this calibration? I have some lenses that have very obvious color shifts (with the same white balance).

2 Likes

This is a fair question. Generally, for the same sensor, you can get away with a diagonal matrix correction (eg the RR, GG, BB sliders, which have a default value of 1.0 in color calibration), which you can store as a preset, to be applied either before or after the main instance of that module (mathematically either works, but if you are doing auto color calibration, do this before).

I find, though, that just fiddling with white balance (custom illuminant in color calibration) works as a quick fix in most cases.
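
For illustration, a minimal sketch of such a diagonal correction outside any particular raw converter, with invented multipliers standing in for a lens preset:

```python
# Minimal sketch of a diagonal (per-channel) correction: one multiplier per
# channel, nothing cross-channel. The numbers are invented; a real preset
# would come from comparing shots of the same target with different lenses.
import numpy as np

# Hypothetical correction for a lens with a slight warm cast.
lens_preset = np.diag([0.97, 1.00, 1.04])   # R, G, B multipliers

def apply_preset(linear_rgb, preset):
    """Apply a diagonal correction to an (H, W, 3) linear RGB image."""
    return linear_rgb @ preset

image = np.random.rand(4, 6, 3)              # stand-in for a demosaiced raw
corrected = apply_preset(image, lens_preset)
print(corrected.shape)
```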

2 Likes

Indeed it can. S’Why the gold standard for spectral sensitivity measurement involves taking a series of “pictures” of patches of narrow-band light without the lens.

That said, I’ve gotten some good data doing it on the cheap, single shots of the full spectrum, where you need the lens to focus it. Manufacturers are producing as achromatic a lens as they can, but there’s still slight band-specific attenuation in most any glass.

3 Likes

So what we’re saying is that the talk about sensor “looks”, as @Donatzsky mentioned in his first post, is real? :slightly_smiling_face:

I do understand they will never be identical, but as it stands now the difference is often big, leading to photographers drawing very logical conclusions about the magical properties of some sensors (as discussed in this thread). I think there’s definitely room for improvement when it comes to normalizing color. Just a simple calibration with two cameras using a color checker proves that.

Other properties inherent to the sensor, such as dynamic range, need massaging with other tools. My experience is that a sensor with higher dynamic range (such as my a73) needs more contrast in sigmoid than one with a lower dynamic range (such as my eos m200). I use 0.3 less for my m200 as a baseline.

Yes, and at least historically different manufacturers had different ideals, so the colour difference between photographs taken with different brands of lenses could be perceptible. Historically no one achieved total consistency, but they did try. Even now you have to prioritize, and different designers and even companies do it differently: Zeiss allowing vignetting, etc.

I do think lenses play a larger part of perceived brand colour science than most acknowledge.

2 Likes

It is not as simple as that, since the sensor does not generate a “look”; it just produces a bunch of numbers. The raw processor generates the look.

Consider the same image x, shot with two cameras having sensors A and B with different spectral responses. The raw processor can provide mappings f and g so that the results are more or less the same for practical purposes, ie f(A(x)) \approx g(B(x)).

But then consider a different image y. f(A(y)) and g(B(y)) may be different. This is more pronounced if the scene has elements with very narrow spectral bands.

In practice, everyday objects rarely have narrow spectra. The exception is LED light, which requires correction. An experienced user should be able to get the colors they want on any kind of sensor.
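
A toy numerical illustration of the above (the Gaussian sensitivities and the “observer” are invented): fit one matrix per camera against shared reference values for broadband patches, then feed both pipelines a narrow-band, LED-like spectrum.

```python
# Toy illustration of f(A(x)) ≈ g(B(x)): fit one matrix per "camera" against
# shared reference responses for broadband patches, then compare the two
# pipelines on a narrow-band stimulus. Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 61)

def gaussians(centers, width):
    return np.exp(-0.5 * ((wavelengths[None, :] - np.array(centers)[:, None]) / width) ** 2)

camera_a = gaussians([460, 540, 610], 40.0)
camera_b = gaussians([450, 555, 600], 35.0)
observer = gaussians([450, 550, 600], 50.0)      # stand-in for XYZ matching functions

# Smooth, broadband "chart patch" spectra used to fit the per-camera matrices.
patches = rng.uniform(0.2, 1.0, size=(24, 3)) @ gaussians([440, 540, 640], 80.0)

def fit(camera):
    """Least-squares matrix mapping this camera's responses to the observer's."""
    M, *_ = np.linalg.lstsq(patches @ camera.T, patches @ observer.T, rcond=None)
    return M

f, g = fit(camera_a), fit(camera_b)

# On the broadband patches, the two pipelines agree closely...
print(np.max(np.abs(patches @ camera_a.T @ f - patches @ camera_b.T @ g)))

# ...but a narrow-band (LED-like) spectrum can land in different places.
led = np.exp(-0.5 * ((wavelengths - 630) / 8.0) ** 2)
print(led @ camera_a.T @ f, led @ camera_b.T @ g)
```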

Sorry, I did not write an exact quote. @Donatzsky wrote “camera looks” and “color science”, and I think that’s mostly what I hear when photographers talk about these matters: certain cameras (or camera brands) have certain looks that can be attributed to color science.

If the raw processor does some kind of interpretation, I guess that’s where some of the look is shaped as a result of the “color science”.

Seems like this thread boils down to semantics, and that we have concluded that certain cameras actually produce different looks that are sometimes hard to adjust for by just profiling.

An experienced user should be able to get the colors they want on any kind of sensor.

Yes; but at what cost (in terms of both time and money)? Matching colors between cameras can be extremely time-consuming without some sort of profile or LUT. As you hint, without a profile you can make some adjustments (with, for example, the color eq module) that may match the other camera in one lighting condition, but not in the next.

A wedding photographer who delivers 100 photos from two different camera models needs these things to “just work”. I myself would rather spend my time working creatively in post than correcting for technical differences between camera models.

When working with video there are technical LUTs and software like CineMatch that do the boring work. Granted, this is mostly for log video, not actual raw video footage.

“Color science”, in the context of lifting the raw data into a standard color space like XYZ, is an attempt to glorify something that is actually a very simple transformation.

Again, this happens but in real life the differences are either tiny, or happen under very special circumstances (narrow-band light).

Very little cost. Eg in darktable this requires learning color calibration and color equalizer. They are such powerful tools that learning them makes sense anyway.

Once these tools are understood, the actual adjustment is quick, and can be copied as a preset.

Also, color calibration gets you there 80% of the time.

2 Likes

All this verbiage with no actual examples of camera-to-color conversion.

Early Foveon sensor to XYZ:

Once we get to XYZ on my Sigma SD9 camera, XYZ is the profile connection space to any subsequent color appearance, right or wrong.

1 Like

Indeed.

The camera’s “discretion” with regard to color encoding is all in the performance of the color separation of the sensor’s photosites. In a Bayer array, it’s done with dye layers on individual photosites which are then demosaiced, so the dye selection is the determinant. In Foveon sensors, the color separation is done via the photon “penetration depth” within the individual photosite, no dye. Either way, the camera is encoding an RGB triple that represents the color presented at a photosite, as referenced to a color-matching standard like the CIE 1931 experiments.

2 Likes

I am not sure why you think this is helpful; most people have no intuitive understanding of XYZ as numbers, and in any case the “RGB” for each camera is different.

These are just numbers calibrated in a lab to match a color chart under some standardized illuminant, with the constraint that the Y row sums to 1 (linear in luminosity).

The interesting question is: why is everyone using linear transforms? In theory, any function f: \mathbb{R}^3 \to \mathbb{R}^3 should be allowed, with the constraint that

f(\lambda R, \lambda G, \lambda B) = \lambda \, f(R, G, B)

Yes, I understand that a linear approximation may be a good starting point, and how it made sense in 1931 when people used slide rules etc. But in practice it is unclear why people stick to it.
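
As a quick numerical check of the constraint above (all numbers invented): a linear map satisfies it automatically, while a made-up per-channel power curve followed by the same matrix does not.

```python
# Check the exposure constraint f(λR, λG, λB) = λ f(R, G, B): any linear map
# satisfies it automatically; a hypothetical nonlinear alternative (here a
# per-channel power curve followed by a matrix) does not.
import numpy as np

M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])   # invented camera-to-XYZ-like matrix

def linear_f(rgb):
    return M @ rgb

def nonlinear_f(rgb):
    return M @ np.power(rgb, 0.8)   # made-up nonlinear mapping

rgb = np.array([0.30, 0.45, 0.20])
lam = 2.0   # one stop brighter

print(linear_f(lam * rgb), lam * linear_f(rgb))        # identical
print(nonlinear_f(lam * rgb), lam * nonlinear_f(rgb))  # differ
```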