GIMP 2.10.6 out-of-range RGB values from .CR2 file

Except it isn’t green, but rather analogous to staring at XYZ or any other arbitrary set of primary light colour intensities dumped through sRGB lights. It’s the way the camera sees spectral intensities, distilled down into three discrete intensity values.

Indeed. It is darker because the linear data is being pushed down by the display’s roughly 2.2 power function, with no compensating encoding applied first.

I copied the RGB values for a pixel in the cloud to the left of the headlight, where one wants it to be white:

0.448171,0.977346,0.732921

Definitely not white in the red and blue channels, but not a straightforward green dominance either. Yes, I know these values are scaled for visualization, but R=G=B needs to hold here, I think…
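As an illustration only (not the poster’s actual pipeline): multiplying that raw triplet by the Canon 400D daylight white-balance multipliers quoted later in this thread (R=2, G=1, B=1.58) pulls the channels much closer to R=G=B, which is exactly the green bias under discussion.

```python
import numpy as np

# Raw pixel from the cloud, as quoted above (R, G, B).
pixel = np.array([0.448171, 0.977346, 0.732921])

# Daylight white-balance multipliers for the Canon 400D, as quoted
# later in this thread (illustrative values, not a measured fact here).
wb = np.array([2.0, 1.0, 1.58])

balanced = pixel * wb
print(balanced)  # roughly [0.896, 0.977, 1.158] -- much closer to R=G=B
```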

The sensor is roughly an accumulation of photons after spectral filtering, low level hardware and software engineering notwithstanding.

Should precisely the same number of photons be captured per photosite?

Here is a target chart assigned the Identity ICC profile color space, which is basically XYZ. The Identity ICC profile color space also is used as the output space, so absolutely no color space transform is involved except to display the resulting colors on the calibrated and profiled monitor. The “white balance” is R=G=B=1, passthrough, no white balance multipliers:

In the above screenshot, the colors are green. The background cloth behind the target is actually neutral gray. The background of the target chart itself is neutral gray. The bottom row of patches are neutral gray. But the whole image obviously has a decidedly pronounced green color cast. @anon11264400 - where does this green color cast come from?

The following screenshot is exactly as above, except I used the white balance scaling multipliers for the Canon 400D to scale the colors, R=2, G=1, B=1.58. The resulting colors are painfully awful to look at, but the stuff that’s supposed to be neutral gray now is neutral gray instead of green:

So @anon11264400 - or anyone who has an answer other than handwaving at sRGB “lights” - why are the colors in the first target chart screenshot green?

It isn’t green.

If we consider the sensor is capturing photons under spectral filters, we can reverse this process and create a projector. Project the ratios of light captured per plane, under the exact same filters the camera sensor has, and we roughly project what the camera captured.

Conversely, projecting the camera sensor quantities of light through sRGB filters yields…

To make sure this triplet is understood, these are the RGB values of a pixel that should be approximately white, taken from the internal working image, not the displayed image. No sRGB here…

Have a look at the spectral composition for a fluorescent lamp, an LED, a tungsten lamp, or anything else. Are the spectral distributions equal? Do you expect them to be? Why or why not? Would you expect some magical ratio between an arbitrary greenish lamp, some other arbitrary reddish lamp, and some random blueish lamp?

Except on your particular display, and via the mental model through which you are suggesting the image has a green bias. :wink:

For this image, the illuminant is daylight, at about 10:00 am, with some cloud cover. So, I’d expect a rather “normal” color composition, somewhere around 6500K. And, if the sensor’s sensitivity were calibrated to a different white, I’d not expect the color cast to be green/cyan, although this may be an unreasonable expectation grounded in my film days. And to make sure, know that these numbers are the 16-bit integers of the 14-bit camera data from the raw file 1) converted to 0.0-1.0 floating point, and 2) ‘half’ demosaiced according to the algorithm described in the previous post. No colorspace conversion of any sort.
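Step 1 above is just a scale by the 14-bit maximum. A minimal sketch (assuming no black-level offset; real raw pipelines usually subtract a per-channel black level first):

```python
def raw14_to_float(value):
    """Scale a 14-bit raw sample (0..16383) to 0.0-1.0 floating point.

    Assumes black level is zero; a real pipeline would subtract the
    camera's black level before scaling.
    """
    return value / 16383.0

print(raw14_to_float(16383))  # 1.0
print(raw14_to_float(8191))   # ~0.5
```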

I surely didn’t see this when I stood by the tracks looking at the real light and reflectance, so I think it has to do with some combination of the camera’s spectral sensitivity and the mosaic mechanism. I just don’t know enough to tease them apart…

It’s not.

[image: irregular D65 spectral power distribution curve]

Now imagine if I distilled down that irregular D65 curve into three intensities, based on arbitrary filtering of that shape[1].

You wouldn’t take those intensities that were distilled down into three emissions, project them either mentally in your head or physically out of a screen and say “See! See how the A channel is lower than the other two? It’s not white!”

You wouldn’t be saying “It looks yellowy greeny when I look at the ABC ratios!!!” because you would understand that there is a mechanic to take us from the distilled spectral into the psychophysical domain of colour.

[1] For arbitrary filtering roughly congruent with reddish, greenish, and blueish filters, that would mean sampling the right side, then the middle, then the left side of the curve, with potential for strange spikes / irregularities. Also note that as a camera vendor, I could just as easily make the coloured filters out of any set of colours. Maybe my second filter looks yellowy and I store it in the third plane of the raw file.

If I don’t say that, and do something about it, my wife will think I’m a crappy photographer and won’t let me buy any more gear… :smile:

So, I’m going to assume it’s all about the filters, and the varying spectral sensitivity of the human eye. Particularly, it’s not about the preponderance of green photosites (I’ve seen that assertion in some writings), and my averaging of them takes that out of the half demosaic equation.
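The “half demosaic” averaging mentioned above can be sketched roughly like this: each 2x2 quad of an RGGB Bayer mosaic becomes one RGB pixel, with the two green photosites averaged so the 2:1 green site count drops out (a toy sketch, not the poster’s actual code):

```python
import numpy as np

# Toy 4x4 Bayer mosaic, RGGB pattern (one intensity value per photosite).
# Layout of each 2x2 quad:  R  G
#                           G  B
bayer = np.arange(16, dtype=float).reshape(4, 4)

R = bayer[0::2, 0::2]                               # top-left of each quad
G = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0   # average the two greens
B = bayer[1::2, 1::2]                               # bottom-right of each quad

half = np.stack([R, G, B], axis=-1)  # half-resolution RGB image
print(half.shape)  # (2, 2, 3)
```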

Light of a given wavelength and intensity hits a photosite through a filter of either R, G, or B transmissiveness. If I assume each filter material passes the same energy, then the difference in how a red, green, or blue photosite looks to me depends on my own sensitivity, which according to the literature peaks at about 550nm, smack dab in the middle of the green part of the rainbow. For the same intensity in the blue part of the spectrum, I’m not as sensitive and it looks dimmer. Am I getting warm?
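The size of that effect is striking. Using rounded values of the CIE 1931 photopic luminous efficiency function V(λ) (the exact figures are in the CIE tables), equal radiant power at 450nm appears more than twenty times dimmer than at 550nm:

```python
# Approximate CIE 1931 photopic luminous efficiency V(lambda) values
# (rounded from the standard tables).
V = {450: 0.038, 550: 0.995, 650: 0.107}

radiant_power = 1.0  # identical radiant power at each wavelength
for wl in (450, 550, 650):
    # Perceived brightness is proportional to radiant power times V(lambda).
    print(wl, "nm ->", radiant_power * V[wl])
```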

This probably goes back to your lament of display technology, where we have to pump different things into it to replicate the scene we would just see linearly, in-person…

My day job is rooted in the phenomenon of gravity, 9.8 m/s². So much simpler to deal with… :smile:


That’s the only intelligent comment in this whole “is it green” discussion. It looks green on the screen. If uploaded to the web, it will look green on other people’s screens. If sent out to be printed, the print will come back green. And when color-picked, the picked color is green. What the image might look like on @anon11264400 's hypothetical projector has no bearing on what needs to be done in the digital darkroom to make the image look (and be) “not green”.

It’s green because you are looking at a snapshot of raw sensor output and interpreting it as sRGB space. Treat it as camera space, profile it, and then convert to sRGB, and it will no longer look green.

Scale the raw sensor output, and you’ll have to re-profile it, but you will end up with the same result. Of course in a real photo processing pipeline, one may well scale the raw sensor output to make the ranges easier to deal with in latter processing, as well as deal with all the other foibles of sensor data.


I don’t know if you are responding to @ggbutcher or to me or to both of us. But see comment 34 above:

A camera input profile was applied to the raw file. That camera input profile was made by white balancing the target chart to D50 in the usual fashion of making camera input profiles. The raw file was not white balanced but rather left at “uniwb”.

The raw file looks green after applying the camera input profile. There is no sRGB anywhere in the processing or display chain, not even my monitor profile, except for the conversion from the monitor profile to sRGB when I made the screenshot.

Sorry, without understanding the details of what you mean by that, it’s impossible to analyze why you get that result.

Note though, that there is a lot of white point manipulation and normalization going on in a typical test chart based ICC input profile, so it’s hard to reason about what’s happening when you apply such a profile. i.e. typically the XYZ reference values are not those of the chart under the illuminant used to shoot the photo, but are those for a different (typically artificial) D50 illuminant. By default such a profile will be scaled and chromatically adapted to make the chart white ICC PCS white. And I’m not familiar with exactly what the tool you are using is doing under the hood. Only by stripping all this down to understood processes and mathematics, can it be properly analyzed.

Then something is not right in the workflow. A properly made (absolute colorimetric) profile applied to the correct workflow should result in XYZ values of the patches matching those in the chart reference file to some level of accuracy. (Note that ICC input profiles don’t naturally support absolute colorimetric intent for cLUT profiles, because the table data is scaled to the white point, and clipped.)

Open the raw file with a raw processor such as darktable. A camera input profile is chosen automatically. Either accept that camera input profile or choose another camera input profile. This is the normal way people assign/apply (pick your term) a camera input profile. My apologies for not expressing myself more clearly.

I think the point you are missing is that the raw file is not white balanced in the discussion above. That’s the whole point of this “it looks green” discussion - the raw file looks green before white balancing. That is, it looks green when using the white balance multipliers 1,1,1, “uniwb”. The way to make the green go away is to apply appropriate white balance multipliers, such as by shooting a neutral target under the same light, or by choosing the appropriate light source in the raw processor, or by picking some presumably neutral area in the raw file, or etc.

I rather suspect that this “looks green” is true of almost all cameras that shoot raw. The raw file looks green, even after applying the camera input profile, unless/until you white balance the raw file. Or else if you shoot through a suitable magenta filter. Or else if you don’t white balance the target chart when making the input profile and then let the camera input profile do the white balancing. But that leads to other problems - putting the way Adobe dng works to one side, please, that just complicates things even further.
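One of the white-balancing routes mentioned above, picking a presumably neutral area, reduces to a simple calculation: average the raw RGB over the neutral patch and scale the other channels up to match green (a sketch with made-up patch numbers, roughly matching the green-heavy balance discussed in this thread):

```python
import numpy as np

# Average raw RGB over a patch known to be neutral in the scene
# (hypothetical numbers, chosen for illustration only).
patch = np.array([0.25, 0.50, 0.32])

# Keep green at 1 and lift the other channels to match it,
# which is how multipliers like R=2, G=1, B=1.58 arise.
wb = patch[1] / patch
print(wb)  # roughly [2.0, 1.0, 1.56]
```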

Right, but since you don’t have a display that matches the camera colorspace, you can’t actually say anything about what the raw file looks like, without converting or interpreting it in some other display colorspace. If the raw sensor values have different basic sensitivities to a white (i.e. uniformly reflective sample illuminated by a reasonable daylight like illuminant) test patch, then of course it won’t look color balanced when interpreted as a display RGB colorspace where white has R=G=B.

Profiled correctly, the camera raw, with any sort of individual scaling of the raw channels, should result in reasonable XYZ values because the profiling process should account for such details. i.e. the raw image doesn’t have to look color balanced when interpreted in some sort of RGB space.

To put it concretely, say the profile is a simple 3x3 matrix. Then if the scene white has a raw green channel value 2 x R & B, then the 3x3 matrix will have green multipliers that are roughly half the other values.
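A hypothetical numeric check of that point: scale one raw channel by any factor and inverse-scale the corresponding column of the 3x3 matrix, and the resulting XYZ is unchanged, which is why the raw image doesn’t have to look balanced for the profiled result to be correct (the matrix values here are invented for illustration):

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix (columns act on R, G, B).
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.6, 0.1],
              [0.1, 0.2, 0.7]])

raw = np.array([0.4, 0.8, 0.5])   # green-heavy raw triplet
xyz = M @ raw

# Halve the raw green channel and compensate by doubling the
# green column of the matrix (profiling would find M2 directly):
S = np.diag([1.0, 0.5, 1.0])
M2 = M @ np.linalg.inv(S)
xyz2 = M2 @ (S @ raw)

print(np.allclose(xyz, xyz2))  # True -- same XYZ either way
```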

Please read this post by @anon11264400:

Especially the part where he says this in response to @ggbutcher

What does @anon11264400 mean by “there is no green bias”? What does he mean by sRGB lights as the source of the green that’s somehow not really there?

I don’t understand what you might mean. I use ICC profile color-managed software. I profile my monitor. I assign an appropriate camera input profile to the interpolated raw file. Usually the next step is to convert to some other ICC profile color space such as Rec.2020 or sRGB. But whether this conversion is done or not, the image looks basically the same, because the color values are being sent to the monitor profile via ICC profile color management.

So what you just said, that I quoted just now, are you saying that even with ICC profile color management, I “can’t actually say anything about what the raw file looks like”? I’m not actually looking at the raw file. I’m looking at the image file that results from interpolating the raw file. If I don’t white balance the raw file during interpolation - if I leave the white balance multipliers as “1,1,1” the interpolated image looks green. Why is this simple statement generating such resistance? And why does @anon11264400 insist that there’s no green bias in a raw file? And how in the world did “sRGB lights” get into this discussion?

“Green multipliers that are roughly half the other values”. This is the green bias that @ggbutcher and I have been trying to get @anon11264400 to understand.

Without appropriate white balance multipliers, the interpolated raw file looks green even when a suitable camera input profile is assigned to the interpolated raw file and even when using ICC profile color management to view the resulting image.

This was the whole point of trying various ways to show @anon11264400 that the file really is green if it’s not properly white balanced, if “uniwb” is used without some compensating mechanism such as a magenta filter.

Nobody was suggesting that it’s somehow the right thing to do to assign some profile other than an actual camera input profile to an interpolated raw file - this was only a way to try to get around some objection or other that @anon11264400 raised to something or other.

But I give up now and I should have given up about 20 posts ago.

My apologies to the forum, I’ll try in the future to not get trapped into such stupid, ridiculous, totally pointless exchanges.

You can see that there is a difference to scaling in the RGB domain versus the spectral / XYZ domain.

But I am sure you understood this already @Elle.

https://pdfs.semanticscholar.org/1603/5e2125f5d3f0e937fc05a1e925b6ff320d4c.pdf

This is also wrong, not that you will listen.

You can profile just fine and take RGB data directly into the spectral response domain. For example, when working as close to the metal as possible with a CMOSIS CMV12000, engineered from the ground up, the raw sensor data looks something like the following, which will be coloured according to whatever display type you are viewing this on:

[image: unprocessed CMV12000 raw sensor data]

Then something is wrong with the profiling or the way it is applied.

i.e. if what you were doing was white balancing the raw then creating the profile, and then applying that profile to a raw file that hadn’t been processed in exactly the same way, then of course the profile converted result looks wrong.

All that is to say that it doesn’t matter what the relative levels are between raw channels in terms of their colorimetric interpretation, because (by definition) a colorimetric interpretation defines white as white :slight_smile:
