GIMP 2.10.6 out-of-range RGB values from .CR2 file

You’ve got to be joking. There is nothing wrong with the profile. The same thing happens with my custom profiles and with the standard dcraw input profiles and with darktable enhanced input profile. The “way it is applied” is just select it from the dropdown box in the raw processor.

There is also nothing wrong with the raw processor - the same thing happens whether using darktable or RawTherapee or PhotoFlow or dcraw at the command line.

Try for yourself. Here’s a raw file:
081015-1536-109-0050.cr2 (8.5 MB)

And here’s the darktable xmp file:
081015-1536-109-0050.cr2.xmp (5.4 KB)

Yes the fellow with a hundred years of experience writing an entire colour management system is joking.

Am I the only person on this entire forum who’s ever applied “uniwb” to a raw file and noticed that the resulting image is green? And then when selecting an actual suitable white balance the green goes away? Am I really the only person who’s ever seen this happen?

darktable does allow you to easily disable the white balance module - give it a try and see what happens - the file will look green.

Yes, you are right. I was rude. @gwgill, my apologies.

@patdavid, could you pretty please unsubscribe me from the pixls.us forum, this time I think permanently? I love the forum, but I need to focus more on family.

I’m really sorry Elle, but I’m not in a position to replicate what you’re doing that way, and even if I had those tools installed and knew my way around them, it’s hard to know what’s going on under the covers, without considerable work.

Now if you sent me a raw file as a 16 bit TIFF, along with the it8 reference values, then I could have a go with tools (i.e. ArgyllCMS) that I am familiar with, and know where to look for what is actually going on. But if white doesn’t turn out white, it’s because there is a bug or a shortcoming in the workflow handling, since white turning out white is the definition of correct profiling.


@gwgill - You probably have dcraw installed. Run these commands on the target chart shot raw file - though with any raw file from either of my two cameras the result is the same: uniwb produces green where there should be gray, unless I use a special input profile (not the default dcraw profile), or else put a magenta filter on the front of the lens before taking the picture.

First using “uniwb”:

/usr/bin/dcraw -v -r 1 1 1 1 -4 -T -o 0 081015-1536-109-0050.cr2
mv 081015-1536-109-0050.tiff 081015-1536-109-0050-uniwb.tiff

Then using white balance multipliers that are taken from gs06, as reported using darktable - ufraw will also report similar multipliers:

/usr/bin/dcraw -v -r 2.339 1 1.421 1 -4 -T -o 0 081015-1536-109-0050.cr2
mv 081015-1536-109-0050.tiff 081015-1536-109-0050-wb-on-gs06.tiff
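For anyone following along without dcraw installed, what the `-r` multipliers do can be sketched in a few lines of Python. This is a toy illustration of per-channel scaling, not dcraw’s actual code, and the `raw_neutral` values are made up for the example; the multipliers are the ones from the commands above:

```python
# Toy sketch of dcraw's -r option: each raw channel is scaled by its
# white balance multiplier before further processing. With uniwb
# (-r 1 1 1 1) nothing changes; with the gs06 multipliers the red and
# blue channels are boosted so a neutral patch comes out with R = G = B.

def apply_wb(rgb, multipliers):
    """Scale an (R, G, B) triple by per-channel white balance multipliers."""
    return tuple(v * m for v, m in zip(rgb, multipliers))

# A hypothetical neutral gray patch as the sensor records it: the raw
# green channel is roughly twice the raw red channel under daylight.
raw_neutral = (0.214, 0.500, 0.352)

uniwb = (1.0, 1.0, 1.0)        # -r 1 1 1 1: leaves the green cast in place
gs06 = (2.339, 1.0, 1.421)     # from -r 2.339 1 1.421 1: read off gs06

print(apply_wb(raw_neutral, uniwb))  # unchanged - green still dominates
print(apply_wb(raw_neutral, gs06))   # roughly equal channels - neutral gray
```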

The resulting tiffs don’t have an embedded ICC profile. Here’s the default dcraw matrix profile for the Canon 400D, that should be assigned to each tiff produced by the above dcraw commands:
400d-matrix-profile-from-dcraw.icc (944 Bytes)

The “uniwb” file is green, has an overall green color cast. After assigning the input profile and converting to sRGB for uploading to the web, it looks like this:
081015-1536-109-0050-uniwb

The white-balanced on gs6 file has neutral grays where there should be neutral grays. After assigning the input profile and converting to sRGB for uploading to the web, it looks like this:
081015-1536-109-0050-color-balanced-on-gs6

Ignore the fact that this is a target chart shot. Pretend it’s just an ordinary image file. How to make a profile isn’t an issue here. The issue is what happens when using uniwb vs an actual set of white balance multipliers when interpolating a raw file.

@gwgill I’ve done it manually.

Here’s the process for those interested, with no scaling in the RGB domain. This isn’t aiming at optimal settings, but just a simple once-over to generate the singular matrix with no RGB white balancing multipliers.

# demosaic at half size with uniwb multipliers (-r 1 1 1 1), no embedded
# colour matrix (-M), raw colour output (-o 0), linear 16-bit TIFF (-4 -T -h)
dcraw -r 1 1 1 1 -M -H 0 -o 0 -4 -T -h ./081015-1536-109-0050.cr2
# locate the it8 chart patches in the tiff and extract their RGB values
scanin -v3 -p -a -G 1.0 -dipna ./081015-1536-109-0050.tiff /usr/share/color/argyll/ref/it8.cht ./R170830.txt
# build a high quality (-qh) matrix-only (-am) input profile
colprof -v -qh -am -nc 081015-1536-109-0050
# convert the uniwb tiff through the new profile
convert ./081015-1536-109-0050.tiff -profile ./081015-1536-109-0050.icc no_scaled.tiff

The following is a PNG (TIFF doesn’t display) with the embedded matrix only XYZ profile.

It’s not green.

It’s not green because your camera input profile was made using a target chart shot that was processed using uniwb. All you did was move the white balancing - the multiplying by white balance multipliers to remove the green color cast (usually green - green for all the camera raw files I’ve processed, but surely one can find an exception) - to the camera input profile itself.

I’ve made this type of profile in the past. If you actually spend some time using your new input profile to process raw files using the various free/libre raw processors, you will find that you can no longer use the camera white balance because the colors will be very wrong even if you set a custom “in-camera” white balance using a white balance shot.

You’ll also find that spot white balancing by selecting a neutral area in the raw file doesn’t work - again the colors turn out very wrong because the various raw processors expect that when a color is “neutral” it means that R=G=B. But with your “not green” profile, “neutral” means that R is roughly twice or a bit more than twice green for daylight shots.

And of course all the camera presets that raw processors and cameras offer - “daylight”, “incandescent”, “cloudy”, “auto”, etc - no longer work.


That’s exactly what I’ve been trying to explain. I make my own custom camera input profiles using a target chart shot that’s been white balanced to D50. dcraw default input profiles that are included with free/libre raw processors are made the same way.

So using these profiles - made using a target chart that’s been white balanced - if you open up a raw file and don’t white balance away the green bias, instead just use “uniwb” when processing the raw file, the resulting image file will look green.

Edit:

Here are two ICC profiles that I made for my Sony A7:

  1. The target chart was white balanced to D50 using the procedure given in this article: Make a better custom camera profile
    sony98-am.icc (23.7 KB)

  2. A7-uniwb-dcraw-rcd-715.icc (25.2 KB) - the target chart wasn’t white balanced. Instead the channel values were left unmultiplied, “uniwb”

I made the first profile back when I first got the camera several years ago. I made the second profile when the various devs on the pixls.us forum were working on the new “RCD” demosaicing algorithm (hence the name of the profile) - which, by the way, is totally awesome and has replaced AMaZE as my “go to” interpolation algorithm. I made the second profile as a step in trying to figure out why a particular raw file that I have shows rather extreme chromatic aberration.

If you use iccToXml to output the XYZ values for each profile, you get these results:

        X             Y             Z
D50
R   0.785278320   0.307815550   -0.007293700
G   0.173873900   0.948043820   -0.198257450
B   0.005050660   -0.255859380    1.030456540

uniwb
R   1.404388430   0.557617190    0.003982540
G   0.130264280   0.698913570   -0.148925780
B   0.019058230   -0.254791260    1.062713620

If you put the XYZ values into a spreadsheet and turn them into xyY values, you get these results - notice the xy values are very close but the “Y” values are not close at all:

        x              y               Y
D50
R   0.7232254532   0.2834918970    0.30781555
G   0.1882444289   1.0263988295    0.94804382
B   0.0064781301   -0.3281730205   -0.25585938

uniwb
R   0.7143422624   0.2836320184    0.55761719
G   0.1914941325   1.0274332131    0.69891357
B   0.0230455590   -0.3080982348   -0.25479126
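The spreadsheet step is just the standard XYZ-to-xyY conversion; as a sketch in Python, using the red colorant of the D50-balanced profile from the table above:

```python
# Standard XYZ -> xyY conversion: x = X/(X+Y+Z), y = Y/(X+Y+Z),
# and Y is carried through unchanged.

def xyz_to_xyY(X, Y, Z):
    s = X + Y + Z
    return (X / s, Y / s, Y)

# Red colorant of the "white balanced to D50" profile, from iccToXml:
x, y, Y = xyz_to_xyY(0.785278320, 0.307815550, -0.007293700)
print(x, y, Y)  # ~0.7232254532, ~0.2834918970, 0.30781555
```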

If you divide the “uniwb” Y values by the “white balanced to D50” Y values, you get these numbers:

R 1.8115302817
G 0.7372165244
B 0.9958253631

If you normalize the ratios by dividing by the Green Y value, you get these numbers, which are extremely close to daylight multipliers for the Sony A7 camera:

R 2.4572567511
G 1
B 1.3507908872
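The arithmetic behind those two lists is just element-wise division; as a quick check in Python, with the Y values copied from the tables above:

```python
# Divide the uniwb colorant Y values by the D50-balanced ones, then
# normalize by the green ratio to get daylight-style WB multipliers.

Y_d50 = {'R': 0.30781555, 'G': 0.94804382, 'B': -0.25585938}
Y_uniwb = {'R': 0.55761719, 'G': 0.69891357, 'B': -0.25479126}

ratios = {ch: Y_uniwb[ch] / Y_d50[ch] for ch in 'RGB'}
multipliers = {ch: ratios[ch] / ratios['G'] for ch in 'RGB'}

print(ratios)       # R ~1.8115, G ~0.7372, B ~0.9958
print(multipliers)  # R ~2.4573, G 1.0, B ~1.3508
```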

In other words, all @anon11264400 's “it’s not green” camera input profile does is move the white balance multipliers into the camera input profile itself, in the process producing a camera input profile that’s very difficult to use in the way people use general purpose camera input profiles - that is, by selecting camera presets, using in-camera auto white balance, and white balancing on neutral areas in a raw file.
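That “moving the multipliers into the profile” is easy to see algebraically: white balancing first and then applying a matrix profile M gives the same result as applying a single matrix whose columns have been pre-scaled by the multipliers, i.e. M·diag(w). A toy numpy sketch - the matrix and pixel values below are made up for illustration, not taken from either actual profile:

```python
import numpy as np

# A made-up camera-to-XYZ matrix (columns are the R, G, B colorants).
M = np.array([[0.79, 0.17, 0.01],
              [0.31, 0.95, -0.26],
              [-0.01, -0.20, 1.03]])

w = np.array([2.457, 1.0, 1.351])       # daylight-ish WB multipliers
rgb_raw = np.array([0.20, 0.50, 0.37])  # some un-balanced camera values

# White balance first, then the profile...
xyz_a = M @ (w * rgb_raw)
# ...is identical to a single profile that absorbed the multipliers:
M_absorbed = M @ np.diag(w)
xyz_b = M_absorbed @ rgb_raw

print(np.allclose(xyz_a, xyz_b))  # True
```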


And now it starts to make sense, at least to bear-of-little-brain here. If one takes the raw image and demosaics it, then uses it to make a camera profile, its computed white point would contribute to comprehending the camera’s spectral sensitivity in all its glory.

What we regular folk do in making camera profiles is to start with an image that’s already modified to make neutrals come out R=G=B, so I’d say the colorspace conversion to output is only about chromaticity. I’ll guess that this convention started with dcraw, due to the difficulty of getting a (I hate to overload the term, but here goes) “raw” demosaic from contributors’ software to develop new camera profiles.

Exactly. @anon11264400 's profile is about white balancing and chromaticity, which is fine if you always shoot under exactly the same lighting conditions or if you make a new target chart shot every time you change lighting conditions - in this case go ahead and use uniwb for the target chart shot and also for all your raw files.

But if you want a general purpose input profile that allows the user to use various white balances according to the actual light source, then the useful approach is to white balance the target chart instead of asking the input profile to do the white balancing.

Yes, there are limitations to how accurate a general purpose input profile can be, but for most people’s editing purpose, general purpose input profiles work sufficiently well.


It’s still not green.

See how the other colours are equally broken if you scale RGB only?

Do yourself a favour and read the article. You can easily adapt once in the XYZ / spectral domain using Bradford or its ilk, or any of the others researched in that paper.

It’s still not green.

It’s still not green.

Read the paper. A 3x3 matrix doesn’t just scale the one component, but also takes contribution from the other values. That is, it is taking it into the spectral XYZ domain, not a simple scale.
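For the curious, the kind of 3x3 adaptation being referred to can be sketched with the Bradford method. This is a minimal numpy sketch using the published Bradford cone matrix and standard D65/D50 white points, not anyone’s production code:

```python
import numpy as np

# Bradford cone-response matrix (standard published values).
M_BFD = np.array([[0.8951, 0.2664, -0.1614],
                  [-0.7502, 1.7135, 0.0367],
                  [0.0389, -0.0685, 1.0296]])

def bradford_adapt(white_src, white_dst):
    """Build the 3x3 XYZ->XYZ matrix adapting white_src to white_dst."""
    rho_s = M_BFD @ white_src   # source white in cone space
    rho_d = M_BFD @ white_dst   # destination white in cone space
    return np.linalg.inv(M_BFD) @ np.diag(rho_d / rho_s) @ M_BFD

D65 = np.array([0.95047, 1.00000, 1.08883])
D50 = np.array([0.96422, 1.00000, 0.82521])

A = bradford_adapt(D65, D50)
# Unlike a per-channel RGB scale, A has off-diagonal terms: each output
# component takes contributions from all three input components.
print(np.round(A @ D65, 5))  # the D65 white lands on D50
```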

Read the paper.

See the first formula? See how the others reference the XYZ matrix (A subscript c)? Why would a colour management system’s designer, and a research paper, both suggest that there is indeed a difference between scaling RGB and using a 3x3 matrix to take values into the spectral domain outlined in the CIE 1931 research, and then performing various approaches to white balancing in that domain?

Read that line, from someone who has written an entire colour management system, supported by vast amounts of other research including the paper linked. Read it until it makes sense.

It’s not green.

But carry on Elle, you’re right.

@Pixelator is probably wondering, “what happened to my thread??” I’m reminded of an old bulletin board thread I participated in somewhere, which started with the title “(redacted) Is A Big $#@*-Head!”. Three months later, the thread is all about something different, but the title was still the same. Talk about infamy…

Gerrit, what we’ve been discussing does still have some connection, in that these transforms all can contribute to pushing values out of the displayable bounds, either color or tone. That you can see them out there, so to speak, > 1.0, means the information is still available for eventually corralling back to what can be displayed, and not arbitrarily truncated, lost forever…


This is the crux of the issue however.

If we agree that it is a chromatic adaptation, one has to wonder what an idealized variant is. In the idealized variation, the values that take the excursion beyond the envelope are non data because the primaries rotate and shift, and the resulting mixtures that are out of gamut become non-data. There’s nothing to corral because we are talking about a device referred encoding to which the values have no representation.

This extends, even if you use the non-spectral approach[1], to values that escape the device referred envelope in other fashions, including resampling and otherwise. It’s non data. It’s not data.

[1] It doesn’t personally matter to me how someone deals with their data. Go nuts. Fill your boots. There is a real problem idea in here though, that echoes outwards. Compositing, evaluating, and colour grading work will take a serious knock if the conceptual framework behind the data is busted, wrecking the effort in the first place.

@ggbutcher - :grinning: I’m really enjoying the show here (the forum needs an emoji of a goggle-eyed smiley face munching popcorn). About 80% of it is going over my head, but showing me how much I have to learn about color representation and how complicated and subject to interpretation it is. Your second paragraph is the crux of the matter for me.


A specific camera input profile was assigned to the interpolated raw file. That camera input profile was made by first white balancing the target chart shot to D50 in the usual fashion of making camera input profiles. The raw file was not white balanced but rather left at “uniwb”. This - combined with the nature of sensor response to light and the little color caps that provide a way to get color from a sensor - is why this interpolated from raw image file looks green after applying the camera input profile.

There is no sRGB anywhere in the processing or display chain, not even my monitor profile.

Please read the above sentence again. My monitor is not an sRGB monitor. There is no sRGB anywhere in the processing or display chain and the image still looks green until/unless it’s appropriately white balanced during raw processing. It would look green on an AdobeRGB wide gamut monitor. It would look green on a Rec.2020 monitor. It would look green on an old ColorMatch CRT. It would look green if you had a print made and looked at the print. It’s green.

sRGB has nothing, absolutely nothing at all to do with the interpolated image file looking green when using the default dcraw input profile and then using uniwb instead of properly white balancing the image during raw processing.

In the past year it’s gone from 80% to probably about 50% over my head. The discourse here has been instrumental to my learning.

I’m also interested in astro; I’m within an hour’s drive of some of the darkest skies in the USA. Astrophotography proves to be very interesting: 1) you have to understand the sensors to some degree to coax faint data from them, 2) “color” can mean something very specific scientifically, or can be arbitrarily transposed to an artificial use for visualization - and sometimes you just want it to be pretty.

To your end, I think your question is quite important as astro PP has plenty of opportunity to drive the data within the containers (integer, float, and the like), and you really want to know how it eventually gets rounded back up for display or worse, pushed out of the container. In that regard, float is your friend…

So, to attempt to put it into context, is the “green” cast of the same ilk as the blue cast achieved by shooting tungsten film in daylight? Or, specifically, is it the difference between the camera white temperature in Kelvin and the display white temperature in Kelvin?

I’m eventually going to write prose to explain such to my “regular” cohorts, and I want to not just ‘hand-wave’ it…

Stealing a link from another thread: LUDD - Homepage closed

That suggests it will “appear” green due to a difference in sensor sensitivity. So it’s just camera-space sensor data, which should then be transformed to XYZ before white balance. I can’t see how white balance should ever be used to “balance” the sensor response… it’s perceptual, to account for the lighting, isn’t it?

Edit: I hope everyone can stay cool, some interesting things in here again!

No. Not at all similar.

The values are light ratios. The colours of the sensor filters (capture) / lights (projected) are completely different, and the mixtures that appear green when projected (or merely mentally thought about) as sRGB lights into your eyes are nothing at all like the light mixture that those ratios generate from the camera lights, where the second channel isn’t the same colour.

It is exactly the same as how Wright and Guild balanced their experiments to white with unequal contributions. Those lights are completely different from any other colours of light, so the ratios would mix an entirely different set of colours.

It is not much different from mixing precise quantities of paint, and expecting the same measurements to generate roughly the same colours when using entirely different paints.

It isn’t green at all. Full stop. It appears green in your thinking and under sRGB thinking because your experience and mental model equate a mixture dominated by the second channel with a particular colour. That isn’t the case with colour spaces where the colours of the lights / filtration are entirely arbitrary.