GIMP 2.10.6 out-of-range RGB values from .CR2 file

I think it’s “instead of” instead of “yet another”.

Now, by way of disclosure, my total exposure to mathematics in the pursuit of three college degrees is four. Four math courses. Early on, when I was young and stupid, I didn’t care much for math and tried to avoid it. Now that I’m only stupid, I at least see the error of those ways… @shreedhar, the following is my take on all this, kinda in response to your take.

Anyhow, when light is captured, it’s done through the R, G, and B filters of the Bayer array, so what’s measured is the intensity of a jumble of wavelengths necked down by a bandpass filter. That’s what the camera knows: a set of light intensity measurements. The ‘color’ is about the bandpass filter. So, these measurements aren’t XYZ or even RGB, they’re energy intensities.

Demosaicing is (I think) a statistical assertion: what a thing called color might be at a pixel location, based on the intensity measurements made there and in the surrounding locations. At the end of this dance you now have an array of statistical assertions, or as we usually refer to them, RGB triplets called pixels.
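
To make the “surrounding locations” bit concrete, here’s a toy sketch of the simplest possible (bilinear) estimate of green at a red photosite in an RGGB mosaic. The numbers are invented, and real demosaicers are far cleverer than this:

```python
import numpy as np

# Toy RGGB mosaic: one linear intensity measurement per photosite, no colour yet.
mosaic = np.array([
    [0.8, 0.5, 0.7, 0.4],
    [0.5, 0.2, 0.6, 0.3],
    [0.9, 0.6, 0.8, 0.5],
    [0.4, 0.1, 0.5, 0.2],
])

def green_at(y, x):
    """Bilinear guess for G at a non-green photosite: average its 4 green neighbours."""
    neighbours = []
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < mosaic.shape[0] and 0 <= nx < mosaic.shape[1]:
            neighbours.append(mosaic[ny, nx])
    return float(np.mean(neighbours))

# Estimate the missing green value at the red photosite at row 2, column 2.
print(green_at(2, 2))   # 0.55
```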

The camera’s ability to resolve light intensities determines its “spectral response”: how reliable its measurements of light intensity are across the spectrum we’re interested in, visible light. This is communicated in the matrix we’ve been discussing, along with a triplet that expresses the camera’s white point reference. The 3x3 matrix holds the reddest red, greenest green, and bluest blue colors we can expect the camera to resolve through the measurement-and-statistics gonkulator (look it up) described above. You’ll see them referred to as the “RGB primaries”. Those are usually expressed in that XYZ system; I’ll leave that to a subsequent discussion. This matrix and triplet are calculated from an image made by the camera of a specifically designed target; in my previous post this work was done for me by scanin and colprof, tools that are part of the excellent Argyll CMS package written by @gwgill.
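
To make that concrete, here’s a minimal numpy sketch of what the matrix does. The numbers are entirely made up for illustration; a real profile’s matrix comes from the target shot:

```python
import numpy as np

# Hypothetical camera-RGB-to-XYZ matrix; real values come from profiling your
# own camera (e.g. via scanin/colprof). These are placeholders only.
cam_to_xyz = np.array([
    [0.70, 0.20, 0.10],
    [0.30, 0.60, 0.10],
    [0.00, 0.10, 0.80],
])

# One demosaiced, linear camera-space pixel (R, G, B).
cam_rgb = np.array([0.25, 0.40, 0.15])

# The gonkulator step: a single matrix multiply per pixel takes the camera's
# statistical assertion about colour into device-independent XYZ.
xyz = cam_to_xyz @ cam_rgb
print(xyz)
```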

Once the RGB image is produced, you can attempt to look at these numbers on your screen, but they won’t look like what you regarded in the scene because the gamut of the display is so much smaller than the camera’s spectral response. It’s like listening to Rachmaninoff on a transistor radio; you know there’s a lot of aural richness in the room, but all you’re getting is “plink-plink-bong” through the radio’s 1-inch speaker. The purpose of that matrix and triplet is to provide the information to the gonkulator that will eventually map the richness captured by the camera to the oh-so-limited display medium.

If you examine the chromaticity chart posted by @anon11264400, you’ll see dots plotted along the routes between the red, green, and blue primaries (which are for a particular, non-specified camera) and that camera’s white reference. This is a good illustration to describe how the mapping of color is done; it’s essentially (and maybe too simplistically) a lookup of the appropriate number along the line radiating from the reference white through the R, G, or B primary coordinate. The white coordinate anchors all these lookups; if white isn’t properly characterized, these lookups will produce different numbers.

(Now, the rest is a bit of speculation on my part, based on the discourse of this thread…)

So, the average photographer doesn’t want to shoot a target at each location, which would be the most reliable way to capture the camera’s response to the light at that location. Accordingly, the prevailing convention for characterizing camera responses is to shoot the target in daylight, and assume the photographer will do a separate correction if they don’t like the colors they see. That’s the white balance, usually expressed as multipliers to be applied to the R and B channels (G is usually used as the anchor to which the other channels are moved). @gwgill’s point is (I believe) that it’s better to use the white point for what the camera actually sees, rather than munging the data after the fact.
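
For anyone who hasn’t poked at these multipliers directly, a minimal sketch (the multiplier and pixel values are invented, but in the right ballpark for a daylight shot):

```python
import numpy as np

# Illustrative white balance multipliers: green is the anchor (1.0), red and
# blue are scaled so a neutral patch comes out with R = G = B.
wb = np.array([2.1, 1.0, 1.6])   # R, G, B multipliers (made-up values)

# A linear camera-space pixel that was neutral under the scene illuminant.
neutral = np.array([0.20, 0.42, 0.26])

balanced = neutral * wb
print(balanced)   # roughly equal channels after scaling
```

Note that this is a straight per-channel scale in camera space, which is exactly the operation being questioned later in this thread.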

So if I understand you, the image looks green because of the disparity between the white balance multipliers given to the target chart and the uniwb white balance multipliers given to the green-looking interpolated raw file. And the image would look green if printed, and also when displayed on any properly calibrated and profiled monitor, assuming the use of ICC profile color management. If I’m wrong here, please let me know, because then I need to throw out a whole lot of what I understand about ICC profile color management!

My understanding is that the act of assigning a camera input profile does give meaning to the camera-produced colors simply because the camera input profile specifies the matrix for correlating the camera RGB values to specific XYZ values. Is this correct? - I’m assuming a matrix input profile throughout this discussion as that seems to be the most commonly used type of input profile, with good reason given the small number of color patches on commercially available target charts.

That’s an interesting point - putting a physical filter in front of the camera also changes the device, and even just the lens has its own usually very slight color cast, though I don’t know if this applies to all lenses. But I hadn’t thought about white balancing the raw file as effectively changing the device.

I’m assuming that “whatever raw values correspond to scene white” will depend on the scene’s actual lighting, yes? Which, for the general-purpose camera input profiles that many (perhaps most) photographers use, isn’t all that likely to be the same lighting as was used to shoot the target chart. Hence the need to somehow white balance the image during raw processing.

I think you are saying that raw processors that white balance by using RGB multipliers aren’t using a color managed way to white balance during raw processing?

And my special case of using uniwb, when the actual target chart shot was white balanced to D50 prior to making the input profile, is just an extreme example of the way it seems most raw processors actually do white balancing, assuming the user’s goal is artistic rather than a total misunderstanding of how to use the camera input profile?

[ And yes, this is where we started - a lot of camera workflows seem to be non-color accurate in applying white balance to the raw encoding values, rather than applying it in a cone sharpened device independent colorspace. ]

If you don’t mind, I have some questions regarding actually implementing a better white balance, my apologies if the questions are just plain dumb!

  1. Is there general agreement that white balancing by using RGB multipliers in “camera space” gives better results than white balancing in other color spaces such as XYZ, sRGB, Rec.2020, etc? I’ve read this and it seems true in my own experience, but maybe it’s not really true?

  2. When making a target chart shot, is the initial white balancing of the target chart using RGB multipliers already a problem?

  3. Or is this “multiply the RGB values to change the white balance of the image” only a problem when trying to change the white balance from that used to make the camera input profile? But using the same multipliers isn’t a problem?

  4. If using RGB multipliers in camera space is already a problem when white balancing the target chart shot that’s used to make the camera input profile, is the/a solution to use uniwb for the target chart when sending it to scanin, and then allow colprof to make an input profile that not only correlates the target chart color patch RGB values with XYZ values, but also and simultaneously white balances the target chart shot?

  5. Is applying the RGB multipliers in sharpened cone space in any way “the same” as doing a chromatic adaptation from one color temperature to another?
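
To make question 5 concrete, here’s my understanding of what chromatic adaptation via something like Bradford involves: a matrix into a “sharpened” cone-ish space, a per-channel scale there, and a matrix back out. A minimal numpy sketch (standard Bradford and D50/D65 numbers; treat it as an illustration, not a reference implementation):

```python
import numpy as np

# Bradford matrix: XYZ -> "sharpened" cone response domain.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_cat(xyz, src_white, dst_white):
    """Chromatically adapt an XYZ colour from src_white to dst_white.

    The per-channel scaling happens in the sharpened cone domain, not on the
    camera or XYZ axes directly, which is the point of question 5.
    """
    src_lms = BRADFORD @ src_white
    dst_lms = BRADFORD @ dst_white
    scale = np.diag(dst_lms / src_lms)
    cat = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return cat @ xyz

# Example: adapt a colour from a D65-lit scene to the D50 profile connection space.
d65 = np.array([0.9504, 1.0000, 1.0888])
d50 = np.array([0.9642, 1.0000, 0.8249])
print(bradford_cat(np.array([0.30, 0.40, 0.50]), d65, d50))
```

The per-channel scale itself is the same kind of operation as the raw multipliers; what changes is the space in which it happens.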

Well, I think that’s all my questions about a better white balance, at least for now!

Read that paper. In most instances it is agreed that scaling RGB is not ideal, a view reflected in every CAM designed to date, etc.

Scaling RGB isn’t awful, and bests CAM02 under the paper’s metric. However, it is worse than sharpened spectral in the XYZ domain. It can obviously depend on the metrics utilized.

There was another paper out there contending that Bradford was more ideal for manipulations, but for the life of me I can’t find it again in my bookmarks.

You’ve changed the fundamentals, so the original profile is completely the wrong colour space transform for the image.

Quite close within the limits of my understanding.

A couple of additional relevant points:

  • In addition to several resolutions the CIE tabled, the basis vectors for XYZ were calculated around luminous flux, and as such they are non-orthogonal. Y is purely isolated as “perceptual energy” via the luminous efficiency function (see the formula just after this list). This means multiplication (indirect lighting) is colour space dependent.
  • As per @garagecoder, the spectral responses of the rods and cones are not narrow band, producing many to one metameric combinations. Deriving XYZ requires a fit.
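
To spell out the first point (my paraphrase, not the CIE’s wording): the ȳ colour matching function was deliberately chosen to coincide with the photopic luminous efficiency function V(λ), so for a stimulus with spectral power distribution S(λ):

Y = k ∫ S(λ) · ȳ(λ) dλ,  with ȳ(λ) ≡ V(λ)

which is why Y alone carries the “luminance” meaning, while X and Z don’t correspond to physically realizable primaries.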

It might be closer to say “playing back” via a piano with an entirely different tuning. That is where this “green” madness comes in; the chords are composed of entirely different, almost randomized positions, and using the original key positions from the camera results in wonky output.

You can’t look at the key position and shout out “THAT’S A G AND THAT’S ALWAYS A C!!!1!1”; rather, you have to sample the note the key produces and translate it to the song.

@shreedhar these links will be of interest to you, and will flesh things out mathematically better than I ever could:

Roughly correct. It is agreed upon by colour scientists that chromatic adaptation must be achieved within the LMS domain with sharpened spectral responses. There has been some further research into pure spectral approaches that may yield better results[2].

Some contend that “white balance” and “chromatic adaptation” are two different goals[1]. With that said, chromatic adaptation is traditionally accomplished in LMS sharpened spectral, which is indeed another matrix transform away from XYZ.

@gwgill I believe the DNG spec, and in turn Adobe, suggests a minimum of two XYZ transforms for differing illuminants, correct? Hence your reference above to DNG offering better handling?

From the DNG specification:

To find the interpolation weighting factor between the two tag sets, find the correlated color temperature for the user-selected white balance and the two calibration illuminants. If the white balance temperature is between two calibration illuminant temperatures, then invert all the temperatures and use linear interpolation. Otherwise, use the closest calibration tag set.
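
A rough Python sketch of what that quoted passage describes: interpolation in inverse correlated colour temperature space. The function and variable names are mine, not the spec’s:

```python
def dng_interp_weight(cct_user, cct_cal1, cct_cal2):
    """Interpolation weight for DNG calibration tag set 1.

    Follows the passage quoted above: use the closest tag set when the user
    white balance falls outside the calibration range, otherwise interpolate
    linearly in inverse correlated colour temperature (1/T).
    """
    lo, hi = sorted((cct_cal1, cct_cal2))
    if cct_user <= lo:
        return 1.0 if cct_cal1 == lo else 0.0
    if cct_user >= hi:
        return 1.0 if cct_cal1 == hi else 0.0
    # Linear interpolation in 1/T space; returns the weight for tag set 1.
    return (1.0 / cct_user - 1.0 / cct_cal2) / (1.0 / cct_cal1 - 1.0 / cct_cal2)

# Example: user white balance at 5000 K, calibration illuminants A (~2856 K)
# and D65 (~6504 K).
w1 = dng_interp_weight(5000.0, 2856.0, 6504.0)
print(w1, 1.0 - w1)
```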

[1] An entire tome is dedicated to colour constancy: https://www.wiley.com/en-us/Color+Constancy-p-9780470058299 (as a side note, Ebner worked with Fairchild on the development of IPT).
[2] http://rit-mcsl.org/fairchild/PDFs/PRO28.pdf

Doesn’t that depend on whether I actually want the green color cast? And maybe it depends on what I intend to do next with the image?

For example, if the image is still in the camera input color space when it’s saved to disk from the raw processor, then whether you apply the appropriate white balance multipliers during raw processing or after opening the interpolated image file in GIMP, the results are the same. Which allows for convenient multiple layers with masks, applying different white balances to different parts of the image that are illuminated by different light sources.

Now go ahead, have a field day, tell me all the ways I’m wrong and doing everything wrong, and I’ll still continue editing my images as suits my purposes. At least until the day we have demonstrably better white balancing in our raw processors from using something other than scaling in camera space. Which day will come faster when we have a clear plan of implementation - white balance target chart shot this way, convert to sharpened cone using this matrix, etc.

Edit: In case anyone forgot, the whole reason I brought up the uniwb possibility when using an input profile that was white-balanced to D50 was as a way to show that the green color has nothing to do with sRGB or with my particular monitor.

Also, can someone explain why, if there is no green bias in the sensor itself, the green multiplier - whether in the profile itself or in the white balancing of the target chart - is usually about half the value of the red and blue multipliers for a whole lot of cameras? Is this not precisely a green bias?

Some already do it in the XYZ domain. Adobe does, I believe, in accordance with the DNG specification.

This all started regarding discussions of out-of-range values. If you do proper chromatic adaptation, the resulting transform can move RGB values out of gamut in the destination reference space. They must be clipped or they will cause problems during compositing and manipulation.

No one forgot. You are using the wrong transform.

There is no green.

It’s been explained dozens of times. Perhaps unbounded mode solves it.

Here is a question about these out of gamut values. I’m excluding negative channel values at this point. So here’s the question: If I open up a floating point exr file with Blender or Nuke, and the floating point exr file has some channel values that are greater than 1.0f, should these channel values be clipped summarily upon opening the file so that future compositing and manipulation problems can be avoided?

Is this a serious question? Maybe it’s meant as sarcasm? Or a put-down?

Depends on the context:

  • Are the values display linear or scene linear?
  • Are the values a byproduct of sampling via something like a sinc filter?
  • Are the values a byproduct of a colourspace transform to a smaller gamut?

In short, are the values data or non data?

If the values are negative, yes you’ll need to clip them for manipulation in a linear light model.
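
As a trivial illustration of that last point (made-up numbers): clip the negative components, but don’t assume values above 1.0 are junk, as they may be perfectly good scene-referred data:

```python
import numpy as np

# Two scene-linear pixels: some channels are negative (non-data for linear
# light math), one is above 1.0 (potentially legitimate scene-referred data).
img = np.array([[-0.02,  0.45, 1.73],
                [ 0.10, -0.30, 0.95]])

clipped = np.maximum(img, 0.0)   # clip only the negatives, keep the highlights
print(clipped)
```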

Entirely sarcasm. I can’t fathom someone explaining it yet again.

Go here and download the tar.gz file.
http://download.savannah.nongnu.org/releases/openexr/openexr-images-1.7.0.tar.gz

Unpack it and navigate to the Chromaticities folder and open the file named Rec709.exr. That file. Should it be clipped above 1.0f in Blender or Nuke?

I would encourage you to think about what the intention of the light values are in the encoded file.

If they represent colour, then as @gwgill has pointed out several times, those colours are only given meaning by attaching them to the standard observer model.

If you do that, then adjust the colour values, and some end up out of gamut, should they be clipped for manipulation?

I.e.: the crux of this entire question is whether RGB scaling is generating appropriate colour values in the first place. The point I’ve tried to make is that it is questionable, given the nature of chromatic adaptation, whether RGB scaling is delivering actual adapted values.

TL;DR I personally don’t believe that RGB axis scaling delivers the values that the device would actually capture the illuminant as, at the sensor/photon level; hence a portion of that transform is invalid data.

Addendum: Sorry, I didn’t realize you were referencing the EXR samples. I haven’t looked at that file, so it depends on the encoding. If that is the Digital LAD image, it is scene-referred, with highlights in Marcie’s hair extending up to an arbitrary value that I can’t remember.

Well, could you take a look and let me know whether that exr file should be clipped upon opening it with Blender or Nuke?

That’s the flower. It’s scene referred, so no clip. You’ll need a proper camera view transform to view the values appropriately as they extend up to 2.0+ in some instances.

OK, what about the file “WideColorGamut.exr” in the folder “TestImages”? That file has a lot of negative channel values. What should be done with it when opening it with Nuke or Blender?

Some pixels in this RGB image have extremely saturated colors, outside the gamut that can be displayed on a video monitor whose primaries match Rec. ITU-R BT.709. All RGB triples in the image correspond to CIE xyY triples with xy chromaticities that represent real colors.

So convert to your reference space, and clip.
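
A minimal numpy sketch of what that looks like. The matrix here is approximately the BT.2020-to-BT.709 conversion, quoted from memory as a stand-in for “wide encoding space to chosen reference space”; in practice derive it from the two sets of primaries and white points, or let a CMS do it:

```python
import numpy as np

# Approximate BT.2020 -> BT.709 conversion, values from memory; illustration only.
wide_to_reference = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

# A very saturated wide-gamut green.
wide_rgb = np.array([0.02, 0.95, 0.02])

ref_rgb = wide_to_reference @ wide_rgb
print(ref_rgb)   # negative R and B: outside the reference gamut

# Clip the negatives (non-data for linear-light compositing); whether values
# above 1.0 should also go depends on display- vs scene-referred intent.
print(np.clip(ref_rgb, 0.0, None))
```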

My apologies, I don’t use the same terminology as you use. What do you mean by “reference space”?

The space you are going to align all of your imagery to. It has a fixed gamut if in additive RGB.

Are there criteria for choosing a reference space? In particular, can the reference space color gamut be different from the monitor color gamut?

The reason I ask is that, given the name and description of the file, I’m guessing it was produced in WideGamutRGB and converted to sRGB to make the test image. There are some negative channel values in the extreme green and blue corners, but that might be from a difference in assumed primaries for WideGamutRGB - AFAIK Adobe never published a standard for this particular color space. So I’m guessing the entire image is supposed to be in gamut with respect to WideGamutRGB.

Also, what does “it has a fixed gamut if in additive RGB” mean? What would a “not fixed” gamut look like, or how would it be different from a fixed-gamut color space? And what is “additive RGB” as opposed to what? I wasn’t aware of a “not additive” RGB color space.

Why yes I did. What makes you think I didn’t? Did you see the words “given the name and description of the file”?

Why yes I did, if you mean the description file that accompanies the image file. What makes you think I didn’t read it? What meaning did you extract from the description file that seems to have escaped my notice?

Again, what makes you think I didn’t read the description file? What did you glean from the description file?

That’s nice. I don’t know what you are talking about. And I’d really like to know what it is that I said that you think indicates that I didn’t read whatever it is that you think I didn’t read. If you think someone missed an important point, the usual procedure is to quote what you think they missed and offer an interpretation.

Your manner of speaking isn’t very helpful. It would be better if you just explained what’s behind your assertion that I didn’t read whatever it is that you think I didn’t read.

See, I’m asking questions that are as straightforward as I can manage, and you are giving entirely unhelpful answers consisting of repeating over and over and over again that I didn’t read something or other, to the point where I’ve totally lost track of whatever it is you think I didn’t read - something you posted? The image description file?

Moving right along to my next question, as I don’t think I’m going to get a helpful response to why it is that @anon11264400 thinks I didn’t read whatever it is I didn’t read . . .

What if I open Rec709.exr with its >1.0 channel values, but this time open it with GIMP or darktable instead of with Blender or Nuke? What’s the appropriate thing to do next?