GIMP 2.10.6 out-of-range RGB values from .CR2 file

In raw processing, white balance multipliers also remove the green bias of the Bayer mosaic. This is shown in the second screenshot here:

http://glenn.pulpitrock.net/Individual_Pictures/raw_processing/

In a raw processing transform chain, one would apply the camera multipliers prior to demosaic; I reversed the order in the article** to show the bias.

So this bear-of-little-brain (me) also thinks that cameras have a fixed spectral response, and that white balance multiplication also removes the illumination temperature bias. Both removals appear to be baked into the ‘as shot’ camera multipliers one finds in the raw file’s metadata.

** I’ll eventually move this article to pixls.us; I need to work on the rawproc logic of applying this particular operation either to the monochrome Bayer array prior to demosaic, or to the RGB array after demosaic. I intend the article and a special rawproc AppImage, preconfigured to support the article, to go hand-in-hand.

A better choice than xy for chromaticity plot/selection is u’v’, since it is much more perceptually uniform, yet still a linear transform of XYZ. If you don’t need the latter, then a*b* is a better choice again.
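
For reference, a minimal sketch of the standard u’v’ formulas; like xy, they are ratios of linear combinations of XYZ:

```python
# CIE 1976 u'v' chromaticity from XYZ (standard UCS formulas).
def xyz_to_uv_prime(X, Y, Z):
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom
```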

Yep - and it doesn’t accord with best-practice color science. The first step in a color-science-based workflow is to convert to a calibrated (i.e. human visual system based) colorspace rather than staying in sensor space, then apply known, color-science-validated chromatic adaptations such as Bradford. Scaling (i.e. Von Kries type) chromatic adaptations in sensor space may be palatable or not, depending on how close the sensor primaries are to cone space/sharpened cone space primaries, but doing so isn’t based on the best color science.
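
As a concrete (and hedged) sketch of that kind of step, here is a linearized Bradford adaptation in Python/numpy; the matrix values are the standard published ones, and the function name is mine:

```python
import numpy as np

# Bradford "sharpened cone" response matrix (standard published values).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_adapt(xyz, src_white, dst_white):
    """Von Kries-style scaling in Bradford cone space; all inputs are XYZ."""
    gain = (BRADFORD @ dst_white) / (BRADFORD @ src_white)
    M = np.linalg.inv(BRADFORD) @ np.diag(gain) @ BRADFORD
    return M @ xyz
```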

[ One of the cute things about the Adobe DNG idea is that they combine multiple calibrations in a way that improves the possible white point adaptation accuracy. ]

I don’t believe this is correct, as there is no inherent “green bias” other than the green double sampling to increase luminance precision. If you are seeing a literal “green bias”, this is likely because of a projection of incorrect camera ratios through sRGB or other lights.

Correct me if I am wrong, Graeme, but scaling along RGB isn’t even the worst-case XYZ scaling in the Von Kries sense, is it? It’s even worse than XYZ scaling, given it is done in a non-orthogonal RGB encoding?

@gwgill - do you know of any currently available raw processor that already does white balancing as you describe? I’m aware of what Adobe DNG does, with the two calibrations at different white points and scaling between the two, but that’s not what you are describing.

Regardless of whether there is already such a raw processor, what is the actual sequence of equations? Would you start by using the camera input profile to convert to XYZ - without first applying any white balance multipliers - and then convert to LMS?

This is green:

In the above screenshot no white balance multipliers have been applied to the target chart shot. In fact I disabled the white balance module. The same green result is obtained by setting the white balance multipliers to 1,1,1. I assigned the camera input profile and also asked for the file to be output still in the camera input profile. The screenshot (taken using GIMP) has been converted from the monitor profile to sRGB, but what you see on the screen is what the file looks like after the camera input profile has been applied.

The image in darktable - which is ICC profile color-managed - is using the same monitor profile to show the image as GIMP assigned to the screenshot before converting the screenshot to sRGB. The screenshot and the image in darktable look the same. This is green. There is no trickery here, no sleight of hand using “lights” - it really is green.

On the other hand, this is not green, even though I used the same darktable parameters as for the above target chart screenshot, that is, the white balance module is disabled:

The reason the image in the above screenshot shows no green bias is because I removed the green bias by putting a heavy magenta filter - a real, actual physical filter - in front of the lens before taking the photograph.

If there is no green bias in a raw file, how is it that putting a physical magenta filter in front of the lens does in fact remove a green bias?


I don’t have any quantitative information on XYZ vs. RGB chromatic adaptation, but I would suspect that XYZ is probably worse, since the primaries are broad, while cone space, sharpened cone space and RGB are typically narrower.

Sorry no, I’m not aware of such details.

DNG is kind of cleverer, since the different white point/illuminant calibration matrices inherently make a spectral allowance as well as a chromatic one.

Lots of workflows will work; the main point is to transform into a best-practice chromatic adaptation space to set the white point/apply chromatic adaptation, and to do that you have to know or assume a profile. As with any Von Kries type chromatic adaptation, if your profile is a 3x3 matrix you don’t have to convert to XYZ space; you just need to concatenate the profile, Von Kries, and un-profile matrices.
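
A sketch of that concatenation, assuming a hypothetical 3x3 camera matrix P (camera RGB to XYZ) and a Von Kries-type CAT matrix M_cat such as the Bradford one built above:

```python
import numpy as np

# One matrix that takes camera RGB straight to adapted camera RGB:
# profile into XYZ, apply the CAT, un-profile back out.
def concat_adaptation(P, M_cat):
    return np.linalg.inv(P) @ M_cat @ P

# usage: adapted_cam_rgb = concat_adaptation(P, M_cat) @ cam_rgb
```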

Of course more sophisticated camera profiles (i.e. camera spectral sensitivities) could possibly take more sophisticated approaches, since a photo’s white point is often connected with the illuminant spectrum.

It’s been a long time since I did any reading on the topic, but I’m pretty sure the “green bias” isn’t because there are twice as many green sites as red or blue sites on the sensor. The problem is two-fold: sensors are inherently more sensitive to green light, and the little red, green, and blue filter caps that allow the sensor to capture color aren’t “single wavelength pass filters” - my apologies, I’m guessing this is the wrong vocabulary here! The red filters let green and blue in, the blue filters let green and probably a very small amount of red in, and the green filters also let red and blue in.

For example, I have raw files from two cameras, a Canon 400d and a Sony A7 mk 1. Here are the DXOMark pages showing how much red, green, and blue is passed through each filter cap color:

Note the white balance scales for each camera: the white balance red scale is 2.46 for the Sony, vs. 2.00 for the Canon. I don’t know how to interpret these numbers, but more green gets into the red filter for the Sony, compared to the Canon.

Also note the “Sensitivity metamerism index” - for the Sony it’s 82, and for the Canon it’s 81. For what this index is, see this page:

https://www.dxomark.com/About/In-depth-measurements/Measurements/Color-sensitivity

A high level summary of the above page is that digital camera sensors don’t respond to color information the same as humans, and the SMI is a measure of “how close” for any given camera.

This is an interesting discussion of camera SMI over time and across brands, implying that some brands over time have sacrificed color accuracy for other things like higher dynamic range.

The above thread also discusses “fudging” of measurements to get artificially higher marks.

Regarding “you have to know or assume a [camera input] profile”, this is something that I’ve wondered about. The only way I have to make a custom camera input profile is to use ArgyllCMS.

To make the profile I white-balance the target chart shot to D50. It’s possible to make a profile without white balancing the target chart shot, but using the resulting profile produces edge artifacts, at least in RawTherapee and I think generally speaking - I’ll have to double-check this.

Would it be possible/worthwhile to leverage the DXOMark white scale information - whatever that actually might be based on - to feed into the profiling process, and then let the camera profile making process take the white balance the rest of the way to D50, and then use this profile as the starting point for scaling in whatever cone space seems appropriate?

And/or just doing a Bradford transform from D50 to the desired color temperature? RawTherapee’s CIECAM02 module has a provision for changing the color temperature of an image file, and it just seems like there’s great potential there for making better white balances than the standard RGB scaling.
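
For what it’s worth, a self-contained sketch of that D50-to-target Bradford step, with D65 standing in for “the desired color temperature”; the matrix and the whitepoint XYZ values are standard published numbers:

```python
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])
D50 = np.array([0.9642, 1.0000, 0.8249])
D65 = np.array([0.9504, 1.0000, 1.0888])

# Build the single 3x3 matrix that re-renders D50-referenced XYZ for D65.
gain = (BRADFORD @ D65) / (BRADFORD @ D50)
M_d50_to_d65 = np.linalg.inv(BRADFORD) @ np.diag(gain) @ BRADFORD
# adapted_xyz = M_d50_to_d65 @ xyz  for each D50-referenced pixel
```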


It’s not well documented, from what I’ve been able to find in the literature, but that may also be because it’s something well-understood in the imaging world and I’m just a goofball…

So, I’ve been working on my next version of rawproc to, among other things, allow folks to explore the mechanics and ramifications of “real” raw processing, that is, starting with the camera-delivered array and working from there:

  • Libraw provides a means to get the image array directly from the raw file, so I provided a property to enable that.

  • I needed a demosaic tool, so I cobbled together a ‘half’ implementation; it took about 15 minutes to implement a walk of the array that takes each RGGB quad q and populates an RGB pixel p with Rp=Rq, Bp=Bq, and Gp=(G1q+G2q)/2 (see the sketch after this list). It took a bit more thinking to make it work with all the possible quad arrangements, which is why what is in GitHub looks more convoluted.

  • I was already working on a whitebalance tool that did the regular stuff: patch average, gray world, and manual entry of multipliers, so I added an option to use the camera multipliers from the raw metadata (also sketched below). That’s where the bias became evident: you can’t just use these multipliers anywhere, as I’ll explain below.
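
Here’s a hedged sketch of both tools in Python/numpy (not rawproc’s actual C++ code; the names and the layout assumptions are mine). It assumes a plain RGGB mosaic with even dimensions, stored as a 2-D float array:

```python
import numpy as np

# 'Half' demosaic for an RGGB mosaic: each 2x2 quad q becomes one RGB
# pixel p with Rp=Rq, Gp=(G1q+G2q)/2, Bp=Bq; output is half-size.
def half_demosaic_rggb(bayer):
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    return np.dstack((r, (g1 + g2) / 2.0, b))

# White balance as plain per-channel scaling, e.g. with the 'as shot'
# multipliers from the raw metadata (applied after demosaic for simplicity).
def white_balance(rgb, mults):
    return rgb * np.asarray(mults, dtype=rgb.dtype)
```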

Armed with all this power, I turned off color management (I didn’t want the display profile to do something additional), opened my reference train image with the rawdata=1 setting, and got the expected dark monochrome image with the checkerboard pattern. To that I applied the demosaic tool, and the histogram opened up to three channels (I load all images, color and monochrome, into three-channel arrays of float 0.0-1.0) - all well and good. Curious to see how it was going (a linear, underexposed image is still dark), I applied a black/white-point stretch to make the image regardable, and this is what I got:

Just to see what it would do, I applied the whitebalance tool with the camera multipliers, here’s what I got:

Now, this is not how one would do proper raw processing: it’s still missing the color conversion from camera space to a working or output space, the black/white-point scaling is just for illustration, and I guess you’d want to do the whitebalance before demosaic. But my point is illustrated: an array of camera measurements, demosaiced, has a green bias.

I really want to understand this, as I don’t feel fully qualified to write an article on something until I can develop an explanation rooted in the specific cause, and not just hand-wave it like a lot of the internet writings do.

Correct me if I am wrong, but in the first instance are you not dumping raw camera light out through assumed sRGB projected lights? I.e., your first image hasn’t been converted from camera primaries to sRGB?

That is correct. I know my display is close to sRGB in chromaticity, so I’m just trying to eliminate as many variables as possible. The image definitely shows the signs of clamping out-of-gamut colors, and there may be some influence there in contribution to the green bias. But I’m not sure what I’d look at instead… :smiley:

Edit: Yes, and displayed through that ‘secret DAC’ of an LCD… I think I get that now, thanks!

Except it isn’t green, but rather analogous to staring at XYZ or any other arbitrary set of primary light colour intensities dumped through sRGB lights. It’s the way the camera sees spectral intensities, distilled down into three discrete intensity values.

Indeed. The reason it is darker is due to the linear data being power functioned down by 2.2.
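
A quick worked example of that, treating the display as a pure 2.2 power law:

```python
# Feeding linear data straight to a ~2.2-gamma display darkens it.
linear = 0.18                # linear mid-gray
displayed = linear ** 2.2    # ~0.023, hence the dark look of linear data
```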

I copied the RGB values for a pixel in the cloud to the left of the headlight, where one wants it to be white:

0.448171,0.977346,0.732921

Definitely not white in the red and blue channels, but not a straightforward green dominance either. Yes, I know these values are scaled for visualization, but R=G=B needs to hold here, I think…
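
A quick arithmetic check on that triplet, using green as the reference channel:

```python
# Per-channel gains that would make this "should be white" pixel neutral.
r, g, b = 0.448171, 0.977346, 0.732921
r_mul = g / r   # ~2.18
b_mul = g / b   # ~1.33, roughly the ballpark of the daylight camera
                # multipliers quoted elsewhere in the thread
```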

The sensor is roughly an accumulation of photons after spectral filtering, low level hardware and software engineering notwithstanding.

Should precisely the same number of photons be captured per photosite?

Here is a target chart assigned the Identity ICC profile color space, which is basically XYZ. The Identity ICC profile color space also is used as the output space, so absolutely no color space transform is involved except to display the resulting colors on the calibrated and profiled monitor. The “white balance” is R=G=B=1, passthrough, no white balance multipliers:

In the above screenshot, the colors are green. The background cloth behind the target is actually neutral gray. The background of the target chart itself is neutral gray. The bottom row of patches are neutral gray. But the whole image obviously has a decidedly pronounced green color cast. @anon11264400 - where does this green color cast come from?

The following screenshot is exactly as above, except I used the white balance scaling multipliers for the Canon 400D to scale the colors, R=2, G=1, B=1.58. The resulting colors are painfully awful to look at, but the stuff that’s supposed to be neutral gray now is neutral gray instead of green:

So @anon11264400 - or anyone who has an answer other than handwaving at sRGB “lights” - why are the colors in the first target chart screenshot green?

It isn’t green.

If we consider the sensor is capturing photons under spectral filters, we can reverse this process and create a projector. Project the ratios of light captured per plane, under the exact same filters the camera sensor has, and we roughly project what the camera captured.

Conversely, projecting the camera sensor quantities of light through sRGB filters yields…

To make sure this triplet is understood, these are the RGB values of a pixel that should be approximately white, taken from the internal working image, not the displayed image. No sRGB here…

Have a look at the spectral composition for a fluorescent lamp, an LED, a tungsten lamp, or anything else. Are the spectral distributions equal? Do you expect them to be? Why or why not? Would you expect some magical ratio between an arbitrary greenish lamp, some other arbitrary reddish lamp, and some random blueish lamp?

Except your particular display and mental model that you are suggesting the image has a green bias via. :wink:

For this image, the illuminant is daylight at about 10 am, with some cloud cover. So I’d expect a rather “normal” color composition, somewhere around 6500K. And if the sensor’s sensitivity were calibrated to a different white, I’d not expect the color cast to be a green/cyan, although this may be an unreasonable expectation grounded in my film days. And to make sure: these numbers are the 16-bit integers of the 14-bit camera data from the raw file, 1) converted to 0.0-1.0 floating point, and 2) ‘half’ demosaiced according to the algorithm described in the previous post. No colorspace conversion of any sort.
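
For completeness, a sketch of step 1), with hypothetical black/white levels (the real ones come from the raw metadata):

```python
import numpy as np

# 14-bit counts held in 16-bit integers, scaled to 0.0-1.0 float.
BLACK, WHITE = 0, 16383   # assumed levels, not necessarily this camera's

def normalize(raw_u16):
    x = raw_u16.astype(np.float32)
    return np.clip((x - BLACK) / (WHITE - BLACK), 0.0, 1.0)
```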

I surely didn’t see this when I stood by the tracks looking at the real light and reflectance, so I think it has to do with some combination of the camera’s spectral sensitivity and the mosaic mechanism. I just don’t know enough to tease them apart…