GIMP 2.10.6 out-of-range RGB values from .CR2 file

I know you don’t and won’t believe me, nor an individual who has written an entire colour management system from scratch and has been around colour for gosh knows how many years. I could bring a parade of people in here with colour science backgrounds to restate what Mr. Gill has tried to impress upon you, but you wouldn’t believe them either.

All we have are ratios of light gathered at a sensor. What gives those ratios meaning? The camera. That is, if you don’t have precisely the same coloured filters as the camera has, you aren’t looking at anything that resembles the intention of those ratios, which can be expressed as xy coordinates based on the light gathered.

Further, and I’d encourage you to actually check this, your “white balance” is not balancing white. It’s making R=G=B equal some approximation of an illuminant. This just happens to appear as the same colour “neutral white” as your display or what you are accustomed to seeing. If you were on a DCI-P3 projector and used the RGB encoding? Guess what? It’d look greeny! There I said it.

So you have assumed that forcing R=G=B stretches the base camera RGB such that it equates with whatever illuminant is in the image. But that’s not the case. Test the other swatches and you’ll see that they are also well off their mark. The native camera R=G=B is nothing close to it, and you can’t change that. Hence, R=G=B is not green.

Now if you ask how complex it would be for you to do a white point adjustment using any old tool, I’m willing to wager that you, @Elle, could see how to do it pretty easily post-XYZ transformed. Why? Because it’s not that challenging to someone like you with colour experience. Dare I say it’s almost trivial, and a simple math matrix node would do it, or an equivalent in any software.
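For concreteness, here is a minimal sketch of what such a matrix node could compute, assuming the data is already in XYZ, using the standard Bradford matrix, and taking D65 → D50 as an example pair of whites:

import numpy as np

# Standard Bradford matrix: XYZ -> sharpened cone-like responses
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def white_point_adjust_matrix(src_white_xyz, dst_white_xyz):
    """Single 3x3 XYZ->XYZ matrix: von Kries scaling performed in Bradford space."""
    gains = (BRADFORD @ dst_white_xyz) / (BRADFORD @ src_white_xyz)
    return np.linalg.inv(BRADFORD) @ np.diag(gains) @ BRADFORD

D65 = np.array([0.95047, 1.00000, 1.08883])   # example source white (XYZ, Y = 1)
D50 = np.array([0.96422, 1.00000, 0.82521])   # example destination white

M = white_point_adjust_matrix(D65, D50)
print(M @ np.array([0.30, 0.40, 0.50]))       # any XYZ pixel, white-point adjusted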

So no, “it”, being the ratios as a single set, is not green. You don’t have to believe me, but I’d encourage you to believe Mr. Gill.

I have asked @gwgill to explain how the following steps somehow involve sRGB, and he has so far declined to answer:

  1. Make matrix camera input profile using a target chart white balanced to D50. Or else just pick the dcraw default input profile or, when using darktable, the enhanced profile if it’s available for your camera.
  2. Interpolate a raw file but use uniwb.
  3. Assign the camera input profile from step one and review the results using ICC profile color management and a calibrated and profiled monitor.

Because it’s silly?

Why don’t you do that?

  1. Balance the achromatic axis in the camera RGB encoding.
  2. Sample the chromaticities of known swatches.

Are they correct?

If they aren’t, what exactly have you “white balanced”? Or is it perhaps a bit of math trickery to make the achromatic axis align with R=G=B?

White balancing changes all of the colours in the image such that they would appear under a given illuminant within the standard observer model. Yet when we sample the colour swatches after we align / scale the achromatic axis such that R=G=B in RGB, lo and behold, the colours aren’t correct. So again, have you white balanced? In proper white balancing approaches, as most folks realize, we are actually rotating and adjusting the primaries themselves. In fact, you have only performed a portion of the overall transform. Examining that partial math aspect isn’t valid unto itself.

Now if you made that matrix profile and converted the RGB values to your display’s RGB values, I can assure you that the RGB ratios didn’t change; they still represent the camera light ratios, and you have transformed them to your display appropriately. Sure, the ratios might look wacky to you, but they still represent legitimate xy coordinates. Further, equal-energy camera sensor values are natively what they are.

When you are suggesting that the ratios “are green”, you’re leaning on your learned experience of familiar light ratios. But again, the ratio set I posted above is not greeny cyan; you are interpreting the data wrong.

Right, so an expected result. The profile doesn’t match the device setup (because you white balanced the raw for profiling, but not for application), and as a result you don’t get correct colors, and for your particular device (camera), it looks green.

It may seem like a technicality, but it’s pretty fundamental to the understanding of the difference between device color spaces and device independent color spaces. The camera raw colorspace doesn’t have any color meaning until it is interpreted (i.e. converted) to a device independent representation. At that point it has a color meaning. Done using a reasonably accurate device color profile, white is white. If white isn’t white, then the color profile isn’t accurate for that device space. Changing the gain of the channels in the raw file changes the device space, so it needs to be profiled in that condition for the profile to be valid.

It may all seem a tautology, because it is. Colorimetrically, white in camera raw space is whatever raw values correspond to scene white. So even if you change those values by modifying the raw file, you haven’t actually changed the color, just the encoding of it.

But of course you can assemble any sort of workflow you like, including not quite color accurate ones. So given a fixed camera profile you are applying to the raw images, you can certainly change the end white balance by changing the raw channel values while not changing the profile to compensate, but this isn’t a color managed way of doing it.
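As a toy illustration of that (the matrix and values below are invented, purely to show the mechanics): if the raw channel gains change but the profile matrix stays the same, the same scene white no longer maps to the same XYZ, which is exactly the kind of cast being discussed.

import numpy as np

# Invented camera->XYZ matrix, standing in for a profile built for one raw encoding state
cam_to_xyz = np.array([[0.60, 0.25, 0.10],
                       [0.30, 0.65, 0.05],
                       [0.05, 0.10, 0.90]])

scene_white_raw = np.array([0.45, 1.00, 0.70])   # made-up raw ratios for scene white

# Profile applied to the encoding it was built for:
print(cam_to_xyz @ scene_white_raw)

# Same profile applied after the raw channels were rescaled (white balanced or UniWB)
# without rebuilding the profile -- the device space changed, so a different XYZ
# comes out for the very same scene white, i.e. a colour cast:
gains = np.array([1.0 / 0.45, 1.0, 1.0 / 0.70])  # multipliers that force R=G=B
print(cam_to_xyz @ (gains * scene_white_raw))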
[ And yes, this is where we started - a lot of camera workflows seem to be non-color accurate in applying white balance to the raw encoding values, rather than applying it in a cone sharpened device independent colorspace. ]


Yep. There is a practicality in terms of interchangeability, in “white balancing” the raw data.

You could still get a more color-accurate result there, if the white balancing were done using a 3x3 profile approximation, i.e.:

raw → XYZ → sharpened cone → white balance → inverse sharpened cone → inverse XYZ → raw

[ This all assumes that black = 0, which is not always the case for truly raw data. You’d need to have a 3x3 + offset for the raw ↔ XYZ to compensate. And raw is assumed to be reasonably linear light :-) ]
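A rough sketch of that round trip, assuming a 3x3 raw↔XYZ approximation with black already at zero, and using the standard Bradford matrix to stand in for the sharpened cone transform (the raw→XYZ matrix and the whites below are placeholders):

import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

# Placeholder 3x3 approximation of the camera's raw -> XYZ behaviour
raw_to_xyz = np.array([[0.60, 0.25, 0.10],
                       [0.30, 0.65, 0.05],
                       [0.05, 0.10, 0.90]])

def white_balance_raw(raw_rgb, src_white_xyz, dst_white_xyz):
    """raw -> XYZ -> sharpened cone -> per-channel scaling -> back to XYZ -> back to raw."""
    lms = BRADFORD @ (raw_to_xyz @ raw_rgb)
    gains = (BRADFORD @ dst_white_xyz) / (BRADFORD @ src_white_xyz)
    return np.linalg.inv(raw_to_xyz) @ np.linalg.inv(BRADFORD) @ (gains * lms)

# Example: adapt a raw triplet from an illuminant-A-ish white to D50 (whites are just examples)
A   = np.array([1.09850, 1.00000, 0.35585])
D50 = np.array([0.96422, 1.00000, 0.82521])
print(white_balance_raw(np.array([0.4, 0.5, 0.2]), A, D50))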

So, having nothing better to do after raking leaves, I set out to produce a whitebalancing camera profile. Starting with my target shot raw file from last year’s profiling, I opened it in rawproc using the rawdata property, so I started with the camera data straight out of the NEF, converted to floating point, no assigned camera profile. I demosaiced it with half, then cropped to the target corners and saved it. Absolutely no processing except convert to float, demosaic. Here’s a screenshot of it:

Next, I ran it through the bash script I wrote for last year’s profiling:

# add the ArgyllCMS binaries to the path
PATH=$PATH:/d/Documents/Argyll_V1.9.2/bin
# locate the ColorChecker patches in the target shot and read out their camera RGB values
scanin -dipn -v -G1.0 -p $1.tif /d/Documents/Argyll_V1.9.2/ref/ColorChecker.cht /d/Documents/Argyll_V1.9.2/ref/ColorChecker.cie
# fit a matrix camera input profile from the measured patch values
colprof -v -am -u -C"No copyright, use freely." -O"Nikon_D7000_Sunlight_UniWB.icc" -D"Nikon_D7000_Sunlight_UniWB.icc" $1

That produced an ICC profile that I then assigned to my reference train image upon opening. Same treatment: rawdata, demosaic; to that I added a colorspace conversion to Rec2020 g1.8, and a scaling to put black and white at the data container limits (a poor man’s display transform). Here’s a screenshot:

I used to have to tweak white balance to get this. It’s still a tad blue, but based on what we’ve been discussing, I think that’s because the target was shot in good, bright sunlight while the train was shot on a cloudy day, n’est-ce pas? On second thought…

I’d shot multiple exposures of my target, and using the one I’d used for my original camera profile produced a bit of garishness (NOT GREEN!!!). On inspection, the white patch was a bit blown out, so I moved to the next-lower exposure, regenerated the profile, and 'ere y’go.

You can read and read and read all the prose out there on color management, but there’s nothing like a good example to drive things home. @pixelator, thanks for letting us hijack your thread. @Elle, @gwgill, @anon11264400, thank you for the discourse…

I am a pure mathematician (pure is used as opposed to being an applied mathematician) by profession who is very much interested in photography. It is of natural interest to me to understand things abstractly. So I would like to explain what I think is going on in Linear Algebraic terms. Please let me know if I am making sense and forgive me if I am stating things that are obvious.

Any element in a 3-dimensional vector space can be represented by three coordinates. But to do that one needs to fix a basis (an ordered set of three linearly independent vectors like (1,0,0), (0,1,0) and (0,0,1), usually called the X, Y, Z axes). If one chooses a different basis, the coordinates will change (for example (1,0,0) = (1,-1,-1) in the basis (1,1,1), (0,1,0), (0,0,1)).

Now, there is an actual thing in this world whose image we would like to capture. The light reflected from every cell of that object can be expressed as a combination of Red, Green and Blue (these primaries form the basis of the light space), hence has three coordinates. The sensor (which is divided into millions of triplets of pixels of individual Red, Green and Blue sensitivity) cuts the reflected light into millions of small points (each of those points is big enough to cover a Red, Green and Blue pixel each) and measures the three coordinates of only those cells of the image. Now, the resultant observations are not recorded with respect to R, G, B coordinates but X, Y, Z coordinates.

On top of it, there is an interpolation (i.e. guesswork) that has to be done because we are measuring one point by three distinct pixels, each only measuring one coordinate of the triple. Plus, there is a physical limitation of how much sensitivity a pixel has. Thus all possible records will not fill up the whole three-dimensional space but only a region, say R (for example, no negative value can be recorded, and if the maximum is normalized as 1, no value bigger than one can be recorded, etc.). Of course, we would like to see the final capture. During the guesswork, it is ASSUMED that different vectors will add coordinate-wise (linearity of ratios).

If we choose another basis (say Red, Green, Blue) and try to express the observed vectors with respect to the new basis, the mathematical formula is V becomes P^{-1}V, where P is a 3x3 matrix whose columns are the coordinates of Red, Green and Blue in the XYZ basis.

Thus the entire region R is now changed into a new region R’ by this transformation. The display pixels have their own limitations and hence can only reproduce elements from a region, say S, and hence we can only see points that lie in the intersection of R’ and S. The points in R’ that lie outside S now become non-data (@anon11264400’s terminology). Also, even amongst the allowed vectors, some vectors that were representing a certain color before are now representing another color (i.e. there is no green in the RAW data). The camera profiling attempt, which multiplies each of the coordinates (R, G, B) by a constant (that is independent of the scene and depends only on the camera), is assumed to rectify the newly obtained colors to reality. The contention of @anon11264400 and @gwgill is that it is too simplistic an operation and one needs to multiply by yet another 3x3 matrix to truly get the original colors in relation to each other.
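To make the out-of-range part concrete, here is a toy numpy illustration (the camera matrix is invented; the XYZ→sRGB matrix is the standard one): a perfectly valid camera measurement can land outside the display’s region S once the basis is changed.

import numpy as np

# Standard linear XYZ -> sRGB matrix (region S is roughly "all three outputs in [0, 1]")
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

# Invented camera -> XYZ matrix, standing in for P (region R is what the camera can encode)
cam_to_xyz = np.array([[0.50, 0.30, 0.15],
                       [0.25, 0.70, 0.05],
                       [0.02, 0.05, 0.95]])

cam_rgb = np.array([0.05, 0.90, 0.10])   # a saturated but perfectly valid camera measurement

xyz = cam_to_xyz @ cam_rgb
print(xyz_to_srgb @ xyz)   # the red component comes out negative: in R' but not in S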

Am I right in this abstraction?

Well, insofar as the change of basis causes meaningless/out-of-range values, yes. Sadly there’s an additional complication of trying to minimise errors in the integral (in practice a sum) of the spectral responsivity functions, caused by the transforms (matrix or otherwise) - often by regression. Perhaps the discussion here was related to the choice of basis for WB & workflow, but I’m really not certain…

I think it’s “instead of” instead of “yet another”.

Now, by way of disclosure, my total exposure to mathematics in the pursuit of three college degrees is four. Four math courses. Early on, when I was young and stupid, I didn’t care much for math and tried to avoid it. Now that I’m only stupid, I at least see the error of those ways… @shreedhar, the following is my take on all this, kinda in response to your take.

Anyhow, when light is captured, it’s done through the R, G, and B filters of the bayer array, so what’s measured is the intensity of a jumble of wavelengths necked down by a bandpass filter. That’s what the camera knows: a set of light intensity measurements. The ‘color’ is about the bandpass filter. So these measurements aren’t XYZ or even RGB; they’re energy intensities.

Demosaic is (I think) a statistical assertion of what a thing called color might be at a pixel location, based on the intensity measurements made there and in the surrounding locations. At the end of this dance you now have an array of statistical assertions, or as we usually refer to them, RGB triplets called pixels.

The camera’s ability to resolve light intensities determines its “spectral response”: how reliably it measures light intensity across the spectrum we’re interested in, visible light. This is communicated in the matrix we’ve been discussing, along with a triplet that expresses the camera’s white point reference. The 3x3 matrix describes the reddest red, greenest green, and bluest blue colors we can expect the camera to resolve through the measurement and statistical gonkulator (look it up) described above. You’ll see them referred to as the “RGB primaries”. Those are usually expressed in that XYZ system; I’ll leave that to a subsequent discussion. This matrix and triplet are calculated from an image made by the camera of a specifically designed target; in my previous post this work was done for me by scanin and colprof, tools that are part of the excellent Argyll CMS package written by @gwgill.
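As an illustration of how a 3x3 matrix, three primaries, and a white point hang together (this is not what colprof actually does internally - it fits the matrix from the patch measurements - and the chromaticities below are placeholders, not my camera’s real numbers):

import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    """Build a 3x3 RGB->XYZ matrix from primary and white chromaticities (x, y)."""
    def xyz(xy):                      # xy chromaticity -> XYZ with Y = 1
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    # scale each primary column so that R = G = B = 1 maps exactly to the white point
    s = np.linalg.solve(P, xyz(xy_white))
    return P * s

# placeholder primaries and a D50-ish white, purely for illustration
M = rgb_to_xyz_matrix((0.70, 0.28), (0.20, 0.75), (0.13, 0.04), (0.3457, 0.3585))
print(M)
print(M @ np.array([1.0, 1.0, 1.0]))  # reproduces the white point's XYZ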

Once the RGB image is produced, you can attempt to look at these numbers on your screen, but they won’t look like what you regarded in the scene because the gamut of the display is so much smaller than the camera’s spectral response. It’s like listening to Rachmaninoff on a transistor radio; you know there’s a lot of aural richness in the room, but all you’re getting is “plink-plink-bong” through the radio’s 1-inch speaker. The purpose of that matrix and triple is to provide the information to the gonkulator that will map the richness captured by the camera eventually to the oh-so-limited display medium.

If you examine the chromaticity chart posted by @anon11264400, you’ll see dots plotted along the routes between the red, green, and blue primaries (which are for a particular, non-specified camera) and that camera’s white reference. This is a good illustration to describe how the mapping of color is done; it’s essentially (and maybe too simplistically) a lookup of the appropriate number along the line radiating from the reference white through the R, G, or B primary coordinate. The white coordinate anchors all these lookups; if white isn’t properly characterized, these lookups will produce different numbers.

(Now, the rest is a bit of speculation on my part, based on the discourse of this thread…)

So, the average photographer doesn’t want to shoot a target at each location, which would be the most reliable way to capture the camera’s response to the light at that location. Accordingly, the prevailing convention for characterizing camera responses is to shoot the target in daylight and assume the photographer will do a separate correction if they don’t like the colors they see. That’s the white balance, usually expressed as multipliers applied to the R and B channels (G is usually used as the anchor to which the other channels are moved). @gwgill’s point is (I believe) that it’s better to use the white point for what the camera actually sees, rather than munging the data after the fact.

So if I understand you, the image looks green because of the disparity between the white balance multipliers given to the target chart and the uniwb white balance multipliers given to the green-looking interpolated raw file. And the image would look green if printed, and also when displayed on any properly calibrated and profiled monitor, assuming the use of ICC profile color management. If I’m wrong here, please let me know! If I’m wrong, I need to throw out a whole lot of what I understand about ICC profile color management!

My understanding is that the act of assigning a camera input profile does give meaning to the camera-produced colors simply because the camera input profile specifies the matrix for correlating the camera RGB values to specific XYZ values. Is this correct? - I’m assuming a matrix input profile throughout this discussion as that seems to be the most commonly used type of input profile, with good reason given the small number of color patches on commercially available target charts.

That’s an interesting point - putting a physical filter in front of the camera also changes the device, and even just the lens has its own usually very slight color cast, though I don’t know if this applies to all lenses. But I hadn’t thought about white balancing the raw file as effectively changing the device.

I’m assuming that “whatever raw values correspond to scene white” will depend on the scene’s actual lighting, yes? Which, for the general purpose camera input profiles that many, perhaps most, photographers use, isn’t all that likely to be the same lighting as was used to shoot the target chart. Hence the need to somehow white balance the image during raw processing.

I think you are saying that raw processors that white balance by using RGB multipliers aren’t using a color managed way to white balance during raw processing?

And my special case of using uniwb when the actual target chart shot was white balanced to D50 prior to making the input profile, is just an extreme example of the way it seems most raw processors actually do white balancing, assuming the user’s goal is artistic rather than a total misunderstanding of how to use the camera input profile?

[ And yes, this is where we started - a lot of camera workflows seem to be non-color accurate in applying white balance to the raw encoding values, rather than applying it in a cone sharpened device independent colorspace. ]

If you don’t mind, I have some questions regarding actually implementing a better white balance, my apologies if the questions are just plain dumb!

  1. Is there general agreement that white balancing by using RGB multipliers in “camera space” gives better results than white balancing in other color spaces such as XYZ, sRGB, Rec.2020, etc? I’ve read this and it seems true in my own experience, but maybe it’s not really true?

  2. When making a target chart shot, is the initial white balancing of the target chart using RGB multipliers already a problem?

  3. Or is this “multiply the RGB values to change the white balance of the image” only a problem when trying to change the white balance from that used to make the camera input profile? But using the same multipliers isn’t a problem?

  4. If using RGB multipliers in camera space is already a problem when white balancing the target chart shot that’s used to make the camera input profile, is the/a solution to use uniwb for the target chart when sending it to scanin, and then allow colprof to make an input profile that not only correlates the target chart color patch RGB values with XYZ values, but also and simultaneously white balances the target chart shot?

  5. Is applying the RGB multipliers in sharpened cone space in any way “the same” as doing a chromatic adaptation from one color temperature to another?

Well, I think that’s all my questions about a better white balance, at least for now!

Read that paper. In most instances, it is agreed that scaling RGB is not ideal, including every CAM designed etc.

Scaling RGB isn’t awful, and bests CAM02 under the paper’s metric. However, it is worse than sharpened spectral in the XYZ domain. It can obviously depend on the metrics utilized.

There was another paper out there contending that Bradford was more ideal for manipulations, but for the life of me I can’t find it again in my bookmarks.

You’ve changed the fundamentals, so the original profile is completely the wrong colour space transform for the image.

Quite close within the limits of my understanding.

A couple of additional relevant points:

  • In addition to several resolutions the CIE tabled, the basis vectors for XYZ were calculated around luminous flux, and as such, they are non-orthogonal. Y is purely isolated as “perceptual energy” via the luminous flux function. This means multiplication (indirect lighting) is colour space dependent; see the small sketch after this list.
  • As per @garagecoder, the spectral responses of the rods and cones are not narrow band, producing many to one metameric combinations. Deriving XYZ requires a fit.
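A small numpy demonstration of the colour-space dependence of per-channel multiplication mentioned in the first point above, using the standard linear sRGB matrix and two made-up colours - multiplying per channel and then changing basis does not give the same answer as changing basis and then multiplying:

import numpy as np

# standard linear sRGB -> XYZ matrix
srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

a = np.array([0.8, 0.3, 0.1])   # two arbitrary colours in linear sRGB
b = np.array([0.2, 0.6, 0.9])

print(srgb_to_xyz @ (a * b))                      # multiply in sRGB, then convert
print((srgb_to_xyz @ a) * (srgb_to_xyz @ b))      # convert, then multiply in XYZ: different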

It might be closer to say “playing back” via a piano with an entirely different tuning. That is where this “green” madness comes in; the chords are composed of entirely different, almost randomized positions, and using the original key positions from the camera results in wonky output.

You can’t look at a key position and shout out “THAT’S A G AND THAT’S ALWAYS A C!!!1!1”; rather, you have to sample the note the key produces and translate it to the song.

@shreedhar these links will be of interest to you, and will flesh things out mathematically better than I ever could:

Roughly correct. It is agreed upon by colour scientists that chromatic adaptation must be achieved within the LMS domain with sharpened spectral responses. There has been some further research into pure spectral approaches that may yield better results[2].

Some contend that “white balance” and “chromatic adaptation” are two different goals[1]. With that said, chromatic adaptation is traditionally accomplished in LMS sharpened spectral, which is indeed another matrix transform away from XYZ.

@gwgill I believe the DNG spec, and in turn Adobe, suggests a minimum of two XYZ transforms for differing illuminants, correct? Hence why your reference above that DNG offered better handling?

From the DNG specification:

To find the interpolation weighting factor between the two tag sets, find the correlated color temperature for the user-selected white balance and the two calibration illuminants. If the white balance temperature is between two calibration illuminant temperatures, then invert all the temperatures and use linear interpolation. Otherwise, use the closest calibration tag set.
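In code form, that weighting rule looks something like this (a sketch of the rule as quoted, not Adobe’s actual implementation):

def dng_interpolation_weight(user_cct, cal1_cct, cal2_cct):
    """Weight for calibration set 1, per the rule quoted above: linear
    interpolation in inverse correlated colour temperature, clamped outside."""
    lo, hi = sorted((cal1_cct, cal2_cct))
    if user_cct <= lo:                        # outside the range: use the closest set
        return 1.0 if cal1_cct == lo else 0.0
    if user_cct >= hi:
        return 1.0 if cal1_cct == hi else 0.0
    return (1.0 / user_cct - 1.0 / cal2_cct) / (1.0 / cal1_cct - 1.0 / cal2_cct)

# e.g. calibration sets at roughly Illuminant A (~2850 K) and D65 (~6500 K)
w1 = dng_interpolation_weight(5000.0, 2850.0, 6500.0)
# interpolated_matrix = w1 * matrix_1 + (1.0 - w1) * matrix_2
print(w1)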

[1] An entire tome is dedicated to colour constancy: https://www.wiley.com/en-us/Color+Constancy-p-9780470058299 (as a side note, Ebner worked with Fairchild on the development of IPT).
[2] http://rit-mcsl.org/fairchild/PDFs/PRO28.pdf

Doesn’t that depend on whether I actually want the green color cast? And maybe it depends on what I intend to do next with the image?

For example, if the image is still in the camera input color space when it’s saved to disk from the raw processor, then it makes no difference whether you apply the appropriate white balance multipliers during raw processing or after opening the interpolated image file in GIMP; the results are the same. Which allows for convenient multiple layers with masks, with different white balances for different parts of the image that are illuminated by different light sources.

Now go ahead, have a field day, tell me all the ways I’m wrong and doing everything wrong, and I’ll still continue editing my images as suits my purposes. At least until the day we have demonstrably better white balancing in our raw processors from using something other than scaling in camera space. Which day will come faster when we have a clear plan of implementation - white balance target chart shot this way, convert to sharpened cone using this matrix, etc.

Edit: In case anyone forgot, the whole reason I brought up the uniwb possibility when using an input profile that was white-balanced to D50 was as a way to show that the green color has nothing to do with sRGB or with my particular monitor.

Also, can someone explain why, if there is no green bias in the sensor itself, the green multiplier - whether in the profile itself or in the white balancing of the target chart - is usually about half the value of the red and blue multipliers for a whole lot of cameras? Is this not precisely a green bias?

Some already do it in the XYZ domain. Adobe does I believe in accordance with the DNG specification.

This all started regarding discussions of out-of-range values. If you do proper chromatic adaptation, the resultant primaries move RGB values out of gamut in the destination reference space. They must be clipped or they will cause problems during compositing and manipulation.

No one forgot. You are using the wrong transform.

There is no green.

It’s been explained dozens of times. Perhaps unbounded mode solves it.

Here is a question about these out of gamut values. I’m excluding negative channel values at this point. So here’s the question: If I open up a floating point exr file with Blender or Nuke, and the floating point exr file has some channel values that are greater than 1.0f, should these channel values be clipped summarily upon opening the file so that future compositing and manipulation problems can be avoided?

Is this a serious question? Maybe it’s meant as sarcasm? Or a put-down?

Depends on the context:

  • Are the values display linear or scene linear?
  • Are the values a byproduct of sampling via something like a sinc filter?
  • Are the values a byproduct of a colourspace transform to a smaller gamut?

In short, are the values data or non data?

If the values are negative, yes you’ll need to clip them for manipulation in a linear light model.

Entirely sarcasm. I can’t fathom someone explaining it yet again.

Go here and download the tar.gz file.
http://download.savannah.nongnu.org/releases/openexr/openexr-images-1.7.0.tar.gz

Unpack it and navigate to the Chromaticities folder and open the file named Rec709.exr. That file. Should it be clipped above 1.0f in Blender or Nuke?

I would encourage you to think about what the intention of the light values are in the encoded file.

If they represent colour, then as @gwgill has pointed out several times, those colours are only given meaning by attaching it to the standard observer model.

If you do that, then adjust the colour values, and some end up out of gamut, should they be clipped for manipulation?

I.e.: the crux of this entire question is whether RGB scaling is generating appropriate colour values in the first place. The point I’ve tried to make is that it is questionable, given the nature of chromatic adaptation, whether RGB scaling is delivering actual adapted values.

TL;DR I personally don’t believe that RGB axis scaling is delivering the values that the device would capture the illuminant as at the sensor photon level, hence a portion of that transform is invalid data.

Addendum: Sorry, I didn’t realize you were referencing the EXR samples. I haven’t looked at that file, so it depends on the encoding. If that is the Digital LAD image, it is scene-referred, with highlights in Marcie’s hair extending up to an arbitrary value that I can’t remember.

Well, could you take a look and let me know whether that exr file should be clipped upon opening it with Blender or Nuke?

That’s the flower. It’s scene referred, so no clip. You’ll need a proper camera view transform to view the values appropriately as they extend up to 2.0+ in some instances.
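If anyone wants to check for themselves, here’s one way to inspect the range (this sketch assumes the classic Python OpenEXR bindings and the usual R/G/B channel names):

import numpy as np
import OpenEXR, Imath

exr = OpenEXR.InputFile("Chromaticities/Rec709.exr")
pt = Imath.PixelType(Imath.PixelType.FLOAT)
for name in ("R", "G", "B"):
    ch = np.frombuffer(exr.channel(name, pt), dtype=np.float32)
    print(name, "min:", ch.min(), "max:", ch.max())   # per the above, expect values above 1.0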