GIMP 2.10.6 out-of-range RGB values from .CR2 file

Discussing these things with you becomes exhausting as you are so focused on ICCs that it turns into madness.

Simply put, you realize chromatic adaptation in this context has nothing to do with ICCs and that the technique I provided as an example is legitimate?

A camera data file is always normalized: the values are encoded as integers, which are in turn normalized against a maximum value. Hence there is no “clipping”, except as per the aforementioned hardware level for noise etc.

What is the goal of a white balance multiplier?

please be polite!

@anon11264400 - what do you mean by “normalize”? Do you mean perhaps the equivalent of using the “-H 1” switch in dcraw? That is, if the RGBG white balance multipliers are, for example, 2.5, 1.0, 1.7, 1.0, then divide all four multipliers by 2.5 (the largest of the multipliers) and then multiply the channel data by these “maximum of 1.0” multipliers to set the white balance?

What do you mean by “which are in turn normalized against a maximum value”? Do you mean that the result of multiplying the channel values by the “maximum of 1.0” multipliers that result from using something like the dcraw “-H 1” switch is then in turn multiplied or divided by something else? Perhaps to bring the maximum channel value in the interpolated and now white balanced image file up to the full well value for the camera? Or to 65535? Or something else entirely?
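Just so we’re looking at the same arithmetic, here’s a minimal sketch of what I understand that first step to be - the multiplier values are just the made-up ones from my example above:

```python
import numpy as np

# Hypothetical RGBG white balance multipliers, for illustration only
mults = np.array([2.5, 1.0, 1.7, 1.0])

# dcraw "-H 1" style: divide by the largest multiplier, so the biggest
# multiplier becomes 1.0 and white balancing alone can never push a
# channel above the encoding maximum
mults_h1 = mults / mults.max()          # -> [1.0, 0.4, 0.68, 0.4]

# channel_data: interpolated channel values already scaled to 0.0-1.0
channel_data = np.array([0.9, 0.8, 0.7, 0.8])
white_balanced = channel_data * mults_h1
print(mults_h1, white_balanced)
```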

My apologies, I realize you’ve been using this word “normalize” - possibly to mean the result of two different steps in raw processing - and I’ve been completely overlooking how you might be using this word, but I think your word “normalize” might be the crux of some misunderstandings.

Here is what the word normally means when talking about algebra. For example, from the top result on Google, from Quora:

Usually when mathematicians say that something is normalized, it means that some important property of that thing is equal to one. For instance, a normalized linear functional on an operator algebra is a linear functional which takes the identity to 1.

Most camera data files are encoded using device referred encodings, where the minimum and maximum are on a scale of integers in accordance with the bit depth. EG: At 16 bit, integer values range from 0 to 65535, so the float equivalent is Value/65535, giving a range of 0.0 to 1.0.
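In symbols, for a b-bit encoding:

$$ v_{\mathrm{float}} = \frac{v_{\mathrm{int}}}{2^{b}-1}, \qquad 0.0 \le v_{\mathrm{float}} \le 1.0 \qquad (\text{e.g. } b=16:\; v_{\mathrm{int}}/65535) $$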

You know this.

No!

White balancing is a psychophysical goal. Using multipliers is rubbish in this domain, as the values are not based on spectral data in conjunction with the Standard Observer. Hence the values are not terribly accurate when using multipliers.

This is why I asked you what the goal of white balance multipliers was. They are a very poor implementation of chromatic adaptation that isn’t accurate and introduces bias.

Yes, if the camera data is normalized against sixteen bit integers.

I try not to get lost in software when talking about colour issues, unless it is unavoidable.

What software do you use to process raw files?

Doesn’t relate to the OP’s question nor any of the statements I have made.

The reason I ask what software you are using is because, as far as I know, white balance multipliers are how all the free/libre softwares I’ve ever used for processing raw files actually set the white balance. This includes darktable, dcraw (-r m1 m2 m3 m4), RawTherapee, rawproc, PhotoFlow, UFRaw, and Photivo (if Photivo is still being developed - I don’t know if it is).

Or else I still don’t know what you are talking about. By “white balance multipliers” I mean for example the multipliers that dcraw prints to the screen if you use the “-v” switch, as for example this:

```
dcraw -v 090916-1718-109-1543.cr2
Loading Canon EOS DIGITAL REBEL XTi image from 090916-1718-109-1543.cr2 ...
Scaling with darkness 255, saturation 3726, and
multipliers 2.630775 1.000000 1.249379 1.000000
```

I’ve looked at the dcraw code very carefully; all it does is multiply to set the white balance. There’s no chromatic adaptation going on.
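To spell out what those printed numbers do, here is a simplified sketch of the scaling as I read it - this is not dcraw’s actual code, and it ignores per-channel black levels and the highlight handling switches:

```python
import numpy as np

# The values dcraw printed for this particular file
darkness, saturation = 255, 3726
mults = np.array([2.630775, 1.000000, 1.249379, 1.000000])

def scale_colors(raw_rgbg):
    """raw_rgbg: integer RGBG values straight from the mosaic."""
    # subtract the black level and normalize against the usable range
    data = (raw_rgbg.astype(float) - darkness) / (saturation - darkness)
    # white balance is a plain per-channel multiply - nothing more
    return np.clip(data, 0.0, None) * mults
```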

Well, it does relate, in that if you are using a raw processor that does one thing, and the OP is using a raw processor that does something else - such that the raw processor he’s using can easily output channel values >1.0 just from setting the white balance - then your advice to him to just clip any RGB channel values >1.0 is based on a completely different situation than the one he’s actually facing.

@Pixelator said he’s using darktable. darktable can very easily output interpolated raw files that have channel values > 1.0 just from setting the white balance. There’s no automatic normalization such as you describe.

In raw processing, white balance multipliers also remove the green bias of the bayer mosaic. This is shown in the second screenshot here:

http://glenn.pulpitrock.net/Individual_Pictures/raw_processing/

In a raw processing transform chain, one would apply the camera multipliers prior to demosaic; I reversed the order in the article** to show the bias.

So bear-of-little-brain (me) also thinks that cameras have a fixed spectral response, and that white balance multiplication removes illumination temperature bias as well. Both removals appear to be baked into the ‘as shot’ camera multipliers one finds in the raw’s metadata.

** I’ll eventually move this article to pixls.us, I need to work on the rawproc logic of applying this particular thing, to the monochrome bayer array prior to demosaic, to the RGB array after demosaic. I intend the article and a special rawproc AppImage preconfigured to support the article to go hand-in-hand.

A better choice than xy for chromaticity plot/selection is u’v’, since it is much more perceptually uniform, yet still a linear transform of XYZ. If you don’t need the latter, then a*b* is a better choice again.
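For reference, u’v’ (the CIE 1976 UCS diagram) is computed from XYZ as:

$$ u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z} $$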

Yep - and it doesn’t accord with best practice color science. First step in a color science based workflow is to convert to a calibrated (i.e. human visual system based) colorspace rather than sensor space, then apply known color science validated chromatic adaptations such as Bradford. Scaling (i.e. Von Kries type) chromatic adaptations in sensor space may be palatable or not, depending on how close to cone space/sharpened cone space primaries the sensors are, but doing so isn’t based on the best color science.
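To make the Bradford case concrete, here is a minimal sketch; the Bradford matrix and the D50/D65 white points are the standard published values, everything else is illustrative:

```python
import numpy as np

# Bradford cone response matrix (XYZ -> "sharpened" cone space)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt_matrix(src_white, dst_white):
    """Build the XYZ-to-XYZ matrix adapting src_white to dst_white."""
    src_lms = BRADFORD @ src_white
    dst_lms = BRADFORD @ dst_white
    # Von Kries: a diagonal scaling in the cone space, then back to XYZ
    return np.linalg.inv(BRADFORD) @ np.diag(dst_lms / src_lms) @ BRADFORD

D65 = np.array([0.95047, 1.0, 1.08883])
D50 = np.array([0.96422, 1.0, 0.82521])
print(bradford_adapt_matrix(D65, D50) @ np.array([0.5, 0.5, 0.5]))
```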

[ One of the cute things about the Adobe DNG idea is that they combine multiple calibrations in a way that improves the possible white point adaptation accuracy. ]

I don’t believe this is correct, as there is no inherent “green bias” other than the green double sampling to increase luminance precision. If you are seeing a literal “green bias”, this is likely because of a projection of incorrect camera ratios through sRGB or other lights.

Correct me if I am wrong Graeme, but scaling along RGB isn’t even the worst case XYZ scaling in the Von Kries sense, is it? It’s even worse than XYZ scaling given it is in a non-orthogonal RGB encoding?

@gwgill - do you know of any currently available raw processor that already does white balancing as you describe? I’m aware of what Adobe DNG does, with the two calibrations at different white points and scaling between the two, but that’s not what you are describing.

Regardless of whether there is already such a raw processor, what is the actual sequence of equations? Would you start with using the camera input profile to convert to XYZ - without first applying any white balance multipliers - and then convert to LMS?

This is green:

In the above screenshot no white balance multipliers have been applied to the target chart shot. In fact I disabled the white balance module. The same green result is obtained by setting the white balance multipliers to 1,1,1. I assigned the camera input profile and also asked for the file to be output still in the camera input profile. The screenshot (taken using GIMP) has been converted from the monitor profile to sRGB, but what you see on the screen is what the file looks like after the camera input profile has been applied.

The image in darktable - which is ICC profile color-managed - is using the same monitor profile to show the image as GIMP assigned to the screenshot before converting the screenshot to sRGB. The screenshot and the image in darktable look the same. This is green. There is no trickery here, no sleight of hand using “lights” - it really is green.

On the other hand, this is not green, even though I used the same darktable parameters as for the above target chart screenshot, that is, the white balance module is disabled:

The reason the image in the above screenshot shows no green bias is because I removed the green bias by putting a heavy magenta filter - a real, actual physical filter - in front of the lens before taking the photograph.

If there is no green bias in a raw file, how is it that putting a physical magenta filter in front of the lens does in fact remove a green bias?


I don’t have any quantitative information on XYZ vs. RGB chromatic adaptation, but I would suspect that XYZ is probably worse, since the primaries are broad, while cone space, sharpened cone space and RGB are typically narrower.

Sorry no, I’m not aware of such details.

DNG is kind of cleverer, since the different white point/illuminant calibration matrices inherently make a spectral allowance, as well as a chromatic one.

Lots of workflows will work; the main point is to transform into a best practice chromatic adaptation space to set the white point/apply chromatic adaptation, and to do that you have to know or assume a profile. Like any Von Kries type chromatic adaptation, if your profile is a 3x3 matrix, you don’t have to convert to XYZ space - you just need to concatenate the profile/Von Kries/un-profile matrices.
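A sketch of that concatenation, assuming `cam_to_xyz` is the 3x3 matrix profile (camera RGB to XYZ) and `adapt` is an XYZ-to-XYZ Von Kries/Bradford matrix like the one built in the sketch above:

```python
import numpy as np

def von_kries_concat(cam_to_xyz, adapt):
    # profile -> chromatic adaptation -> un-profile, collapsed into a
    # single 3x3 matrix applied directly to camera RGB (right-to-left)
    return np.linalg.inv(cam_to_xyz) @ adapt @ cam_to_xyz
```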

Of course more sophisticated camera profiles (i.e. camera spectral sensitivities) could possibly take more sophisticated approaches, since a photo’s white point is often connected with the illuminant spectrum.

It’s been a long time since I did any reading on the topic. But I’m pretty sure the “green bias” isn’t because there are twice as many green sites as red or blue sites on the sensor. The problem is two-fold: Sensors are inherently more sensitive to green light. And the little red, green, and blue filter caps that allow the sensor to capture color aren’t “single wavelength pass filters” - my apologies, I’m guessing this is the wrong vocabulary here! But the red filters let green and blue in, and the blue filters let green and probably a very small amount of red in. And the green filters also let red and blue in.

For example, I have raw files from two cameras, a Canon 400d and a Sony A7 mk 1. Here are the DXOMark pages showing how much red, green, and blue is passed through each filter cap color:

Note the white balance scales for each camera: the white balance red scale is 2.46 for the Sony, vs. 2.00 for the Canon. I don’t know how to interpret these numbers, but more green gets into the red filter for the Sony, compared to the Canon.

Also note the “Sensitivity metamerism index” - for the Sony it’s 82, and for the Canon it’s 81. For what this index is, see this page:

https://www.dxomark.com/About/In-depth-measurements/Measurements/Color-sensitivity

A high level summary of the above page is that digital camera sensors don’t respond to color information the same as humans, and the SMI is a measure of “how close” for any given camera.

This is an interesting discussion of camera SMI over time and across brands, implying that some brands over time have sacrificed color accuracy for other things like higher dynamic range.

The above thread also discusses “fudging” of measurements to get artificially higher marks.

Regarding “you have to know or assume a [camera input] profile”, this is something that I’ve wondered about. The only way I have to make a custom camera input profile is to use ArgyllCMS.

To make the profile I white-balance the target chart shot to D50. It’s possible to make a profile without white balancing the target chart shot, but using the resulting profile produces edge artifacts, at least in RawTherapee and I think generally speaking - I’ll have to double-check this.

Would it be possible/worthwhile to leverage the DXOMark white scale information - whatever that actually might be based on - to feed into the profiling process, and then let the camera profile making process take the white balance the rest of the way to D50, and then use this profile as the starting point for scaling in whatever cone space seems appropriate?

And/or just doing a Bradford transform from D50 to the desired color temperature? RawTherapee CIECAM02 module has provision for changing the color temperature of an image file, and it just seems like there’s great potential there for making better white balances than the standard RGB scaling.


It’s not well documented, from what I’ve been able to find in the literature, but that also may be that it’s something well-understood in the imaging world and I’m just a goofball…

So, I’ve been working on my next version of rawproc to, among other things, allow folk to explore the mechanics and ramifications of “real” raw processing, that is, starting with the camera-delivered array and working from there:

  • Libraw provides a means to get the image array directly from the raw file, so I provided a property to enable that.

  • I needed a demosaic tool, so I cobbled together a ‘half’ implementation; it took about 15 minutes to implement a walk of the array that takes each RGGB quad q and populates an RGB pixel p with Rp=Rq, Bp=Bq, and Gp=(G1q+G2q)/2 (a sketch of this follows the list). It took a bit more thinking to make it work with the possible quad combinations, so that’s why what is in github looks more convoluted.

  • I was already working on a whitebalance tool that did the regular stuff: patch average, gray world, and manual entry of multipliers, so I added an option to use the camera multipliers from the raw metadata. That’s where the bias became evident, as you can’t just use these multipliers anywhere, as I’ll explain below.
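Here is roughly what that ‘half’ walk amounts to, sketched in Python/numpy rather than rawproc’s actual C++ - it assumes a plain RGGB layout with even dimensions:

```python
import numpy as np

def half_demosaic(mosaic):
    """Collapse each RGGB quad into one RGB pixel (assumes RGGB layout)."""
    r  = mosaic[0::2, 0::2]     # top-left site of each quad
    g1 = mosaic[0::2, 1::2]     # top-right (first green)
    g2 = mosaic[1::2, 0::2]     # bottom-left (second green)
    b  = mosaic[1::2, 1::2]     # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])
```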

Armed with all this power, I turned off color management (didn’t want the display profile to do something additional), opened my reference train image with the rawdata=1 setting, and got the expected dark monochrome image with the checkerboard pattern. To that, I applied the demosaic tool, and the histogram opened up to the three channels (I load all images, color and monochrome, to three-channel arrays of float 0.0-1.0), all well and good. Curious to see how it was going (linear, underexposed image is still dark), I applied a black-white point stretch to make the image regardable, and this is what I got:

Just to see what it would do, I applied the whitebalance tool with the camera multipliers, here’s what I got:

Now, this is not how one would do proper raw processing: it’s still missing the color conversion from camera space to a working or output space, that blackwhitepoint scaling is just for illustration, and I guess you’d want to do the whitebalance before demosaic. But my point is illustrated: an array of camera measurements, demosaiced, has a green bias.
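By the way, one quick way to put a number on that bias - a hypothetical snippet reusing the half_demosaic sketch from earlier in the thread, where `mosaic` stands for the camera-delivered array normalized to 0.0-1.0 floats:

```python
rgb = half_demosaic(mosaic)
means = rgb.reshape(-1, 3).mean(axis=0)
print("R/G/B channel means:", means)   # G should stand out if the bias is real
```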

I really want to understand this, as I don’t feel fully qualified to write an article on something until I can develop an explanation rooted in the specific cause, and not just hand-wave it like a lot of the internet writings do.

Correct me if I am wrong, but in the first instance are you not dumping raw camera light out through assumed sRGB projected lights? IE: Your first image hasn’t been converted from camera primaries to sRGB?

That is correct. I know my display is close to sRGB in chromaticity, so I’m just trying to eliminate as many variables as possible. The image definitely shows the signs of clamping out-of-gamut colors, and that may contribute to the green bias. But I’m not sure what I’d look at instead… :smiley:

Edit: Yes, and displayed through that ‘secret DAC’ of an LCD… I think I get that now, thanks!