GIMP 2.10.6 out-of-range RGB values from .CR2 file

With respect to your three screenshots, the first one shows what I think @gwgill meant by sRGB being somehow associated with an interpolated raw file. That is very different from not doing a proper white balance on a raw file during interpolation but still assigning an input profile.

For Troy’s “uniwb” input profile, the proper white balance is “uniwb”. But with our current raw processors this creates a situation where attempting to use normal white balance tools, such as clicking on a neutral patch, produces unexpected results, such as gray turning red.

Troy is a very smart person who knows a lot about color - more than I do, no doubt - even if he does have an irrational dislike of ICC profile color management. My impression is that there’s a lot to be said for using chromatic adaptation to do white balancing - I’m not unaware of the issues, despite Troy’s efforts to make it seem that way! - and I’d love to explore the topic, perhaps incorporating things we’ve already talked about in other threads with respect to using RawTherapee’s CIECAM02 module chromatic adaptation to modify white balances.

It would be interesting to do some real-world checks on whether incorporating the white balance into the camera input profile is better than not doing this, or if it perhaps makes no difference, when the goal is to use chromatic adaptation to white balance raw files.

Thanks @Elle! Very nice explanation.

I think compositing, evaluating, and color grading in the production of motion pictures and video will go on as structured by ACES and the like, where a common reference is needed to coordinate the efforts of the post-production multitude. We’re not going to break that here. Personally, I find ACES to be an extremely interesting study in the application of engineering to imaging, where ‘engineering’ is more about getting a gaggle of humans to build a complex thing and less about the particular technology.

Accordingly, post processing of digital images by individuals will continue as structured in the ways promulgated by ICC & Adobe, because most won’t have the inclination to pick apart the mechanism to really understand it at the level we’ve discussed here. But, if the toolbox continues to produce pleasing, marketable images, that’s good too. Personally, I prefer the Ansel Adams perspective, where I want to pick it apart to understand how to control it.

The essential thing I think I’ve learned in all the scene- vs. display-referred discourse is that manipulations of tone and color values are best done with respect to their original light-based relationships, and the transformation for human regard should be left for last. Indeed, I’ve re-arranged how I stack tools in rawproc with this in mind - well, for the most part; I’m still partial to doing my camera->working space transition to Rec2020-elle-V4-g18.icc (‘g18’ is 1.8 gamma), which is probably a nod to the display-referred legacy of my curve tool. But that too will eventually be addressed, because I’m still having fun figuring this all out.

I do, and at least one comment has been removed since it adds nothing to the conversation.

I don’t know the backstory behind all this, but I’d urge everyone to be polite and resolve the issues between you all. We can edit out, silence, and ban all we want as moderators, but that doesn’t really solve the problem.


If I may, I tend to be quite antisocial in real life. It isn’t on purpose even though I am very careful. I guess the key is to take a break and get back to it later (or never). I often mess up royally when I act compulsively or already have so much going on in my life that some of the negativity inside seeps out with a vengeance.


In trying to dismantle the “ZOMFG GREEN” thing, I hammered a bit of Colour to generate the following images. The camera native primaries, as deduced from the matrix, are plotted on the chart. The grey dots show how a pure primary, or any RGB value with no complementary channel, marches towards the camera photosites’ equal-energy colour as equal complements are added.

This should help to explain why, when one dumps the values out through an sRGB display, the ratios look green: the original camera filter / light ratios at equal energy produce quite a radically different colour than anything close to D50 or D65. In order to pull it towards those values, you need to add more of the “green” and “blue” channel primary, which increases the tension towards those primaries.

As you can see, the pure primaries of both the “blue” and “green” channels aren’t even representative of real colours. How can this happen with a physical device? Because of the spectral componentry of the CMOS array. The three-light idealized matrix assumes pure primaries based in XYZ, whereas in reality there are nonlinear crosstalk elements happening in the filter arrays.

Needless to say, hopefully this diagram helps to show why the colour ratios in the camera raw file aren’t green at all; the basis filters represent an entirely different spectral composition to what you are accustomed to seeing.
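
For anyone who wants to poke at this themselves, here’s a minimal sketch of the kind of computation behind those plots, using the colour-science Python library and a stand-in 3x3 matrix (illustrative values only, not any real camera’s calibration):

import numpy as np
import colour

# Stand-in camera RGB -> XYZ matrix; illustrative only, not a real
# camera's calibration. Every camera's matrix differs.
M = np.array([[0.80, 0.30, 0.39],
              [0.35, 0.70, 0.19],
              [0.00, 0.10, 1.52]])

# Where the pure camera "primaries" and the equal-energy point land in xy.
for name, rgb in [("R", [1, 0, 0]), ("G", [0, 1, 0]),
                  ("B", [0, 0, 1]), ("R=G=B", [1, 1, 1])]:
    XYZ = M @ np.array(rgb, dtype=float)
    print(name, colour.XYZ_to_xy(XYZ))

# Plotting these points against the spectral locus (e.g. with
# colour.plotting.plot_chromaticity_diagram_CIE1931) gives charts like
# the ones above; with a real camera matrix, some "primaries" land
# outside the locus of real colours.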

I didn’t proof check the math, and apologies if there are glaring inaccuracies.


And now I feel complete; you’ve graphically illustrated what my ‘little brain’ had started to intuit. ‘Green’ was probably an unfortunate observation; under inspection, it turned out to be more of a cyan in my particular case. And even then, I get it: it’s about what happens when you push such an expansive image into an sRGB-class display with a different reference to white.

I’m going to redo my camera profile with an uncorrected TIFF and see how that melds into my workflow. But @Elle has some points to consider; I can see dragons with respect to established still-image workflows. Also, my end worry is about making images look respectable in places that don’t manage color, which in some cases looks to be sub-sRGB. Seems the church we attend has got no religion about color management in their AV… :smile:

Yes, it’s cyan, not dead-central green, though surely the exact color is camera-dependent. This doesn’t change anything at all. And as I keep pointing out, sRGB is not involved in the example that I gave. It’s just not involved:

  1. Make a matrix camera input profile using a target chart white balanced to D50. Or else just pick the dcraw default input profile or, when using darktable, the enhanced profile if it’s available for your camera.

  2. Interpolate a raw file but use uniwb.

  3. Assign the camera input profile from step one and review the results using ICC profile color management and a calibrated and profiled monitor.

sRGB isn’t involved in the above 3 steps.

Has anyone actually bothered to check the simple steps outlined above? All you have to do is open a raw file using darktable and disable the white balance module - does the raw file have a green (cyan) color cast? Yes or no?

If you don’t want to use darktable, I gave the dcraw commands.

If you don’t want to use dcraw, try PhotoFlow and choose “Area” or “Spot” to reset the multipliers to uniwb.
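
And if you’d rather script it, a rough Python equivalent of the dcraw route - using the rawpy (LibRaw) bindings, with a hypothetical filename - would be:

import rawpy
import imageio

# Demosaic with uniwb multipliers, keep the camera colour space (no
# matrix applied), and stay linear - roughly the dcraw route above.
with rawpy.imread('photo.cr2') as raw:
    rgb = raw.postprocess(user_wb=[1.0, 1.0, 1.0, 1.0],       # uniwb
                          output_color=rawpy.ColorSpace.raw,  # camera space
                          gamma=(1, 1),                       # linear
                          no_auto_bright=True,
                          output_bps=16)
imageio.imwrite('photo-uniwb.tiff', rgb)

Then assign (don’t convert to) the camera input profile from step one and look at the result.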

Anyone? Please? Open a raw file and follow the three steps? Does it look (cyan) green or not?

If it looks green, and it eyedroppers as a green color, what does it mean to say “it’s not green”?

The reason it looks green - and actually is green - is that it wasn’t white balanced in line with the way the target chart was white balanced when the input profile was made.

The shape of the camera matrix input profile is irrelevant. Most of the space is just wasted because of how the primaries are calculated to fit the supplied data from the target chart shot. Most of the image channel values will fall inside the “horseshoe shape” on the xy plane that encloses all real colors.

Only bright saturated yellow/yellow-green and dark saturated violet-blue typically cause any issues with falling outside the realm of real colors. Though laser lights and neon lights (and stars?) and such will also cause issues - these sorts of colors are outside the usual target chart color patches.

If it looks green because it was improperly white balanced, it’s still green. The solution isn’t to say “it’s not green”. The solution is to go back and apply the right white balance.

Is the red target chart shot in @shreedhar’s third example (post #82 above) “not really red”? It sure looks red to me. And it’s red because it was given a white balance that is not in agreement with how the “uniwb” target chart shot was white balanced before making the input profile.

It’s not green.

Camera sensor values [0.241847781051476 0.679785574382394 0.579977543946796] = xy [0.3127, 0.3290]

It’s not green.

Let me check again…

No, it’s still not green.

I’ll check again later.

What exactly do you mean when you say an image that clearly is green, isn’t green? What is the point of saying it’s not green, when it really is green? No matter how it got that way - deliberate white balance, accidental white balance, creative white balance - it’s still green. There is a photographer who makes images, I believe using a pin-hole lens, that are green because she deliberately doesn’t white balance the raw files. Are her images not really green? They are green!

What exactly do you mean by “it’s not green”? Define “it”? What “it” isn’t green? And how did you arrive at those particular sensor values? For which camera? Which raw file?

Look at exactly what I posted above.

You are misinterpreting it because you can’t see that you are interpreting everything as though it were projected through your current display.

Step through this:

  1. The camera sensor captures photons in wells.
  2. The camera sensor has filters on the photosites.
  3. When you read a raw file, you are getting photon accumulation ratios. Aka ratios of light, according to the colours of the filters.
  4. Given the above, the following ratio [0.241847781051476 0.679785574382394 0.579977543946796] is not, in any way, under any circumstances, green[1].

No, your fulcrumed white balance isn’t changing things (heck, the image is still in the camera primaries!) and only makes it look not-green on your display. Did you look at the other colours? I assure you those are not red, not blue, and not green either.

Again, the ratios you are staring at are not green.

Clear?

[1] This ratio happens to be perfect D65. So again, those ratios are not greeny-cyan, but rather perfect idealized D65, aka xy coordinates [0.3127, 0.3290]. The ratios don’t represent green. They aren’t green. They aren’t cyany.
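
If it helps, here’s a small numpy sketch of what the footnote is saying, using the same stand-in 3x3 camera matrix as the earlier sketch (illustrative values only - the real ratios depend on the camera, which is why these won’t match the ones quoted):

import numpy as np

# Stand-in camera RGB -> XYZ matrix from the earlier sketch.
M = np.array([[0.80, 0.30, 0.39],
              [0.35, 0.70, 0.19],
              [0.00, 0.10, 1.52]])

XYZ_D65 = np.array([0.95047, 1.00000, 1.08883])

# The camera ratios that *mean* a perfect D65 white for this matrix:
raw_white = np.linalg.solve(M, XYZ_D65)
print(raw_white / raw_white.max())
# -> roughly [0.50, 1.00, 0.65]: green-heavy numbers, yet they decode to
#    xy [0.3127, 0.3290]. Displaying the numbers directly "looks green";
#    the colour they encode is white.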

I don’t think anyone is disputing that it will appear green when viewed a certain way. It’s more about what the data actually represents - at least that’s how I read it!


I think you’re wrong. I completely agree with you that the ratios represent something utterly different, but I’m pretty certain that that party doesn’t have a lot of people at it currently.

If we had an idealized camera projector, we could project those values and see exactly what those colour ratios gathered in camera mean, but we don’t have one. But again, I assure you, those ratios do not make greeny cyan.

Quiet party.

If you change your sentence to read “it will be green when processed a certain way”, I’d totally agree. I’d also agree that either the photographer knew they were using a creative white balance, or else they were using a flawed workflow.

What I don’t agree with is that the green color cast that results from not using the right white balance multipliers - that is, white balance multipliers that are in line with the white balance multipliers used to process the target chart - is somehow “not green”. It is green whether viewed on my calibrated and profiled monitor using ICC profile color management, or yours, or anyone else’s.

I know you don’t and you won’t believe me, or an individual who has written an entire colour management system from scratch and has been around colour for gosh knows how many years. I could bring a parade of people in here with colour science backgrounds to restate what Mr. Gill has tried to impress upon you, but you wouldn’t believe them either.

All we have are ratios of light gathered at a sensor. What gives those ratios meaning? The camera. That is, if you don’t have precisely the same coloured filters as the camera has, you aren’t looking at anything that resembles the intention of those ratios, which can be expressed as xy coordinates based on the light gathered.

Further, and I’d encourage you to actually check this, your “white balance” is not balancing white. It’s making R=G=B equal some approximation of an illuminant. This just happens to appear as the same colour “neutral white” as your display or what you are accustomed to seeing. If you were on a DCI-P3 projector and used the RGB encoding? Guess what? It’d look greeny! There I said it.

So you have conflated things, assuming that R=G=B stretches the base camera RGB such that it equates with whatever illuminant is in the image. But that’s not the case. Test the other swatches and you’ll see that they are also well off their mark. The native camera R=G=B is nothing close to it, and you can’t change that. Hence, R=G=B is not green.

Now if you ask how complex it would be for you to do a white point adjustment using any old tool, I’m willing to wager that you, @Elle, could see how to do it pretty easily post-XYZ transformed. Why? Because it’s not that challenging to someone like you with colour experience. Dare I say it’s almost trivial, and a simple math matrix node would do it, or an equivalent in any software.

So no, “it”, being the ratios as a single set, is not green. You don’t have to believe me, but I’d encourage you to believe Mr. Gill.

I have asked @gwgill to explain how the following steps somehow involve sRGB, and he has so far declined to answer:

  1. Make a matrix camera input profile using a target chart white balanced to D50. Or else just pick the dcraw default input profile or, when using darktable, the enhanced profile if it’s available for your camera.
  2. Interpolate a raw file but use uniwb.
  3. Assign the camera input profile from step one and review the results using ICC profile color management and a calibrated and profiled monitor.

Because it’s silly?

Why don’t you do that?

  1. Balance the achromatic axis in the camera RGB encoding.
  2. Sample the chromaticities of known swatches.

Are they correct?

If they aren’t, what exactly have you “white balanced”? Or is it perhaps a bit of math trickery to make the achromatic axis align with R=G=B?

White balancing changes all of the colours in the image such that they would appear under a given illuminant within the standard observer model. Yet when we sample the colour swatches after we align / scale the achromatic axis such that R=G=B in RGB, lo and behold, the colours aren’t correct. So again, have you white balanced? In proper white balancing approaches, as most folks realize, we are actually rotating and adjusting the primaries themselves. In fact, you have only performed a portion of the overall transform. Examining that partial math aspect isn’t valid unto itself.
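
To make that concrete, here’s a quick numerical check using the colour-science Python library - the camera matrix, swatch values, and illuminants are all stand-ins for illustration:

import numpy as np
import colour

# Stand-in camera RGB -> XYZ matrix from the earlier sketches.
M = np.array([[0.80, 0.30, 0.39],
              [0.35, 0.70, 0.19],
              [0.00, 0.10, 1.52]])

XYZ_A   = np.array([1.09850, 1.00000, 0.35585])  # tungsten-ish shot white
XYZ_D65 = np.array([0.95047, 1.00000, 1.08883])  # target white

swatch = np.array([0.30, 0.45, 0.20])            # arbitrary camera RGB patch

# Per-channel gains that drag the scene white onto the achromatic axis -
# "white balance" as most raw converters do it:
gains = np.linalg.solve(M, XYZ_D65) / np.linalg.solve(M, XYZ_A)
xy_scaled = colour.XYZ_to_xy(M @ (gains * swatch))

# Proper Bradford chromatic adaptation of the same swatch, done in XYZ:
xy_bradford = colour.XYZ_to_xy(colour.adaptation.chromatic_adaptation_VonKries(
    M @ swatch, XYZ_A, XYZ_D65, transform="Bradford"))

print(xy_scaled, xy_bradford)
# The two methods agree on the white patch by construction, but disagree
# on the coloured swatch - the per-channel scale has not "balanced" it.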

Now if you make that matrix profile and convert the RGB values to your display’s RGB values, I can assure you that the RGB ratios didn’t change; they still represent the camera light ratios, and you have transformed them to your display appropriately. Sure, the ratios might look whacky to you, but they still represent legitimate xy coordinates. Further, equal-energy camera sensor values are natively what they are.

When you are suggesting that the ratios “are green”, you’re leaning on your learned experience of familiar light ratios. But again, the ratio set I posted above is not greeny cyan; you are interpreting the data wrong.

Right, so an expected result. The profile doesn’t match the device setup (because you white balanced the raw for profiling, but not for application), and as a result you don’t get correct colors, and for your particular device (camera), it looks green.

It may seem like a technicality, but it’s pretty fundamental to the understanding of the difference between device color spaces and device-independent color spaces. The camera raw colorspace doesn’t have any color meaning until it is interpreted (i.e. converted) to a device-independent representation. At that point it has a color meaning. Done using a reasonably accurate device color profile, white is white. If white isn’t white, then the color profile isn’t accurate for that device space. Changing the gain of the channels in the raw file changes the device space, so it needs to be profiled in that condition for the profile to be valid.

It may all seem a tautology, because it is. Colorimetrically, white in camera raw space is whatever raw values correspond to scene white. So even if you change those values by modifying the raw file, you haven’t actually changed the color, just the encoding of it.

But of course you can assemble any sort of workflow you like, including not quite color accurate ones. So given a fixed camera profile you are applying to the raw images, you can certainly change the end white balance by changing the raw channel values while not changing the profile to compensate, but this isn’t a color managed way of doing it.
[ And yes, this is where we started - a lot of camera workflows seem to be non-color accurate in applying white balance to the raw encoding values, rather than applying it in a cone sharpened device independent colorspace. ]


Yep. There is a practicality in terms of interchangeability, in “white balancing” the raw data.

You could still get a more color-accurate result there, if the white balancing was done using a 3x3 profile approximation, i.e.:

raw → XYZ → sharpened cone → white balance → inverse sharpened cone → XYZ → raw

[ This all assumes that black = 0, which is not always the case for raw raw. You’d need to have a 3x3 + offset for the raw ↔ XYZ to compensate. And raw is assumed to be reasonably linear light :slight_smile: ]
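
A minimal numpy sketch of that chain, assuming a stand-in 3x3 camera matrix, black = 0, and linear raw values:

import numpy as np

# Standard Bradford cone-sharpening matrix.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

# Stand-in 3x3 camera RGB -> XYZ approximation (a real one would come
# from profiling the camera).
M = np.array([[0.80, 0.30, 0.39],
              [0.35, 0.70, 0.19],
              [0.00, 0.10, 1.52]])

def white_balance_raw(raw, XYZ_src_white, XYZ_dst_white):
    """raw -> XYZ -> sharpened cone -> von Kries scaling -> XYZ -> raw."""
    gains = np.diag((BRADFORD @ XYZ_dst_white) / (BRADFORD @ XYZ_src_white))
    A = np.linalg.inv(M) @ np.linalg.inv(BRADFORD) @ gains @ BRADFORD @ M
    return A @ raw

# Example: adapt raw values from a tungsten-ish white to D65.
XYZ_A   = np.array([1.09850, 1.00000, 0.35585])
XYZ_D65 = np.array([0.95047, 1.00000, 1.08883])
print(white_balance_raw(np.array([0.30, 0.45, 0.20]), XYZ_A, XYZ_D65))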

So, having nothing better to do after raking leaves, I set out to produce a white-balancing camera profile. Starting with my target shot raw file from last year’s profiling, I opened it in rawproc using the rawdata property, so I started with the camera data straight out of the NEF, converted to floating point, with no camera profile assigned. I demosaiced it with ‘half’, then cropped to the target corners and saved it. Absolutely no processing except convert to float and demosaic. Here’s a screenshot of it:

Next, I ran it through the bash script I wrote for last year’s profiling:

PATH=$PATH:/d/Documents/Argyll_V1.9.2/bin
# Read the patch values from the target shot against the ColorChecker reference data.
scanin -dipn -v -G1.0 -p $1.tif /d/Documents/Argyll_V1.9.2/ref/ColorChecker.cht /d/Documents/Argyll_V1.9.2/ref/ColorChecker.cie
# Build a matrix (-am) input profile from the measured patches.
colprof -v -am -u -C"No copyright, use freely." -O"Nikon_D7000_Sunlight_UniWB.icc" -D"Nikon_D7000_Sunlight_UniWB.icc" $1

That produced an ICC profile that I then assigned to my reference train image upon opening. Same treatment: rawdata, demosaic; to that I added a colorspace conversion to Rec2020 g1.8, and a scaling to put black and white at the data container limits (a poor man’s display transform). Here’s a screenshot:

I used to have to tweak white balance to get this. It’s still a tad blue, but based on what we’ve been discussing, I think it’s because the target was shot in good, bright sunlight while the train was shot on a cloudy day, n’est-ce pas? On second thought…

I’d shot multiple exposures of my target, and using the one I’d used for my original camera profile produced a bit of garishness (NOT GREEN!!!). On inspection, the white patch was a bit blown out, so I moved to the next lower exposure, re-produced the profile, and 'ere y’go.

You can read and read and read all the prose out there on color management, but there’s nothing like a good example to drive things home. @pixelator, thanks for letting us hijack your thread. @Elle, @gwgill, @anon11264400, thank you for the discourse…