GIMP 2.10.6 out-of-range RGB values from .CR2 file

Hello everyone,

I’m an astronomy weenie, using GIMP to peer into my astrophotos. I’m not very experienced at either, I might add! I use a Canon 100D DSLR and .CR2 raw image files.

When I import a .CR2 file into GIMP, using the darktable plugin, I see my image just fine. But when I zoom way in and use Color Picker to examine individual pixels I see some instances of over 100% R, G, or B values in the core of bright stars, and some instances of negative percentage R, G, or B values in the dark background.

How can this be? .CR2 is just a 16-bit integer format for each of R, G, and B, so how can there be >100% or <0% R/G/B values?

Thanks for any help,
Gerrit


Two reasons:

  • A colour transform via matrix to a smaller gamut will change the light ratios in the file.
  • Sampling. Resampling can cause overshoot and undershoot.

It’s likely mostly the first point.

Thanks, Troy. Why would darktable or GIMP transform the nice simple .CR2 file data? I see overranges up to 108% or so, and underranges down to a bit less than -1%. This seems like a pretty sloppy transform, too (not knowing anything about it).

This leads down the dark path known as pixel management.

The values in your camera data parcel are essentially light ratios. That is, they are linear values that have been harvested off of the sensor after serious massaging by software and hardware engineering. But sensors are monochromatic, and to record colour, manufacturers put little plastic / glass coloured filters on top of the microlenses.

Those filters gather up spectral components and bake them down into a three light array. If we wanted to replicate what they captured, we’d need a projector / emitting device with precisely the same coloured plastic / glass caps. That is, we’d need identical coloured filters to properly re-project the light ratios.

This isn’t the case of course, as our displays conform to many different emitting light colours. Many conform to the REC.709 light colours via the sRGB specification, while others, such as Apple products, use DCI-P3 lights.

The transform from one set of lights into another set results in different ratios between the three channels. A highly saturated colour may require more equal ratios to emit and mix a similar colour in a smaller gamut set of lights, while some smaller gamut lights are incapable of producing the mixtures from some wider gamuts, and you end up with negative emissions that push the other lights mathematically outwards.

In the case of your Canon, the sensor captures a considerably wider gamut than sRGB, and as such, it must go through a transform in an attempt to display the colours as the original ratios intend. This is a Good Thing™, as otherwise you are viewing light ratios intended for emission from a different set of coloured lights.

The TL;DR is that the transformation from one set of lights to another, in this case the camera’s native filters to sRGB, results in different ratios of measurements to mix various colours. This is no different than mixing a colour from three paints and trying to match the same colour using another set of three paints; your ratios will be different.

When transforming down gamut and starting with normalized light ratios, the proper approach is to clip at 0.0 and 1.0 as the values are non-data in the destination gamut.
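
If you want to see the ratio change in concrete numbers, here is a minimal numpy sketch. BT.2020 stands in for a wide camera-native gamut (an actual camera matrix would differ), and both matrices are the standard rounded ones:

```python
import numpy as np

# Standard (rounded) BT.2020 RGB -> XYZ matrix, standing in for a wide
# camera-native space; an actual camera matrix would differ.
WIDE_TO_XYZ = np.array([
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
])

# Standard (rounded) XYZ -> linear sRGB matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A fully saturated green in the wide gamut...
wide_rgb = np.array([0.0, 1.0, 0.0])

# ...needs channel values outside [0, 1] in the smaller set of lights.
print(XYZ_TO_SRGB @ WIDE_TO_XYZ @ wide_rgb)  # roughly [-0.59, 1.13, -0.10]
```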


@Pixelator - the only way to know for sure (as opposed to speculating on this or that possible cause of whatever) is if you upload one of your raw files and also upload the corresponding darktable xmp file. I’d be happy to take a look to see what’s really going on. But speculating without actually looking at the raw file and also at how you processed it is a bit silly.

@anon11264400 - Thank you for taking the time on this! Your explanation is quite clear, and I understand another morsel of the complicated world of color imaging now.

@Elle - I haven’t done any processing, in the sense of manipulating the image. I’m just reading the .CR2 (via darktable) and examining the pixels. Would it still be helpful to see the .xmp and .CR2 files? It sounds to me like @anon11264400 has identified a likely mechanism.

It’s entirely up to you whether you would like me to look at your files or not. I probably wouldn’t actually look at your files until tomorrow morning east coast US time :slight_smile: .

There is no “just reading and examining” the raw file unless you are literally looking at the uninterpolated channel data before applying a white balance. I might be totally mistaken, but I’m assuming you’ve chosen an interpolation method, you’ve provided a white balance, you’ve chosen a camera input profile, and you chose an output RGB color space. Or else you accepted all the darktable defaults for these actions.

If you didn’t change what darktable does by default when you open a raw file, then you (or rather darktable) also applied sharpening and a “make it pretty” tone curve.

All this default processing that goes into “just reading and examining” a raw file is why I’d like to see the actual raw file plus the darktable xmp file. You might have to go to the “lighttable” section of darktable and export the interpolated raw file to disk in order to generate the corresponding darktable xmp file.


Thanks, @Elle. Image analysis is turning out to be complicated by processes going on behind the scenes. It’s hard to simply look at the camera sensor data. That’s what I’m ultimately after right now, both examining individual pixels and getting the statistical mean of all pixels in an image. It turns out that the software I’m using to “stack” multiple images alters the gain of the result sometimes too.

So let me get closer to the bottom of all this before I start uploading anything. For now I have enough of an understanding to get me to the next level.

@Pixelator Welcome to the forum!

In my experience, sharing a sample would help focus the discussion. It would save us from a lot of guesswork and digression.


In your .cr2 file is a collection of numbers that the camera manufacturer calls “raw”, and even that is probably not the actual sensor measurement, as cameras often do minor manipulations even before that point. It is not easy to extricate this data; most software wants to be kind to you and make it somewhat presentable. Even dcraw, the canonical command-line raw processor, applies scaling unless you figure out how to tell it not to.

If what you’re regarding isn’t a dark monochrome image with a “quilt-like” texture, it’s not a Bayer-pattern array of sensor measurements, or “raw”.

That your image data has values that extend past the display bounds is actually a good thing: it is information still available for eventual corralling back to the display domain, and it wasn’t arbitrarily clipped. It does indicate your data has been scaled somewhere, as your camera’s saturation point is at least two bits lower than the usual 16-bit integer data containers used in raw image files.


@ggbutcher - You’re right, the .CR2 image in GIMP has been debayerized, so that’s already some processing that’s been done. It’s a 14-bit camera, so it has evidently been scaled up two bits also.

I’m poking around with ImageMagick and IRIS, and will try dcraw too.

@Pixelator see RawDigger.


I just spent some quality time with my hack software to see what it would take to open a raw file and save it to a TIFF that preserves the unadorned raw data to the greatest extent possible. I can get that data from the raw file by snarfing the array from libraw before it does its dcraw processing, which I can turn on in my software with the property input.raw.libraw.rawdata=1.

Short answer: even that wasn’t enough. I had to turn off color management (input.cms=0) and make sure the TIFF output was 16-bit (output.tiff.parameters=channelformat=16bit). Even then, there was a small difference due to rounding, because I open all image files to a floating point array, and have to re-convert back to 16-bit integer for the TIFF output. Victim of my own design decisions… :smiley:

If you download and compile libraw, in the examples you’ll find the program ‘unprocessed_raw’. It compiles to a command-line program that’ll get the raw data and make a TIFF of it, but even it has scaling and black-frame subtraction switches if you desire. I haven’t played with it, but it would appear to be the most straightforward FOSS way to get really raw data for messing with. RawDigger IMHO is a really good alternative, well worth the price. I may take unprocessed_raw.cpp, pare it down to just a TIFF saver with no options, and compile it to an AppImage (or just a static executable, the ultimate AppImage :slight_smile: ) and put it somewhere for folk to try, but I need to get my WhiteBalance logic straight for rawproc 0.8 first.
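
For anyone who would rather script this than compile the libraw examples, here’s a hedged sketch using the third-party rawpy Python bindings to libraw; the filename is a placeholder, but the attributes are the ones rawpy exposes for exactly this pre-dcraw array:

```python
import rawpy     # third-party Python bindings to libraw: pip install rawpy
import imageio   # pip install imageio

# Pull the undemosaiced Bayer mosaic straight out of libraw, before any
# white balance, demosaic, scaling, or colour transform is applied.
with rawpy.imread('some_photo.CR2') as raw:            # placeholder filename
    mosaic = raw.raw_image_visible.copy()              # uint16, one value per photosite
    print('black level per channel:', raw.black_level_per_channel)
    print('white (saturation) level:', raw.white_level)

# Saved as a 16-bit single-channel TIFF, this is the dark, quilt-textured
# image described above.
imageio.imwrite('really_raw.tif', mosaic)
```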

I think this is important to this thread because looking at the histogram of really raw data speaks volumes about the processing that’s done to even get the dull, dark images most think of as ‘raw’.


@ggbutcher - that RawDigger program that @Morgan_Hardwood linked to - I wish free/libre software had something similar, or at least something like the old “Rawnalyze”:

http://dave-anderson-photo.com/blog/2010/08/23/gabor-rawnalyze-author-rip/

I put some screenshots of the Rawnalyze user interface in this article:

https://ninedegreesbelow.com/bug-reports/ufraw-highlights.html

It sounds like your raw processing software - which by the way is leagues away from being “hack” software at this point! - could perhaps be massaged to work as well or better than the old Rawnalyze. Which would be awesome.

I’ve picked at such in rawproc, release to release, because the histogram has become so important to my working of each image. rawproc’s histogram is now fast, if not terribly ‘precise’, but it clearly shows the R-G-B relationships well enough for me to mess with stuff like white balance and color negative conversion. The big thing is that, in spite of the 0-255 display range, it is actually calculated from the 0.0-1.0 range of the internal working image: what-you-see-is-what-it-is (WYSIWII?).

So, something like the integer 16384 saturation limit of my camera actually shows up at 64 in the histogram ((16384/65536)*256), but that mental math is part of the discovery for me regarding the relationships of measurements and the subsequent transformations. How do you count cows? Easy, just count the legs and divide by four… :smiley:

In the image information popup available for each processing step, I do present max/min stats which are helpful, but not as telling as the topology of the data presented in the histogram.

So, I will commit that rawproc 0.9 will have a better histogram, with some or all of the following: log/linear modes; full-range/display clipping; some kind of intelligent height scaling so a radical change in one channel doesn’t squash the other two into the dirt; etc. I may put out what we call at work an “engineering release”, a preliminary implementation for comment, and your feedback in that regard will be carefully considered. I just downloaded the last Rawnalyze .zip version for inspection, and that too will influence my coding.

Thanks for the endorsement, compared to the carefully considered and deliberate way we do this at work, my home coding is definitely “hack” - they’d throw me out of peer review with this stuff!

Yes! If there were freer alternatives to RawDigger and / or FastRawViewer, I would be all over them. Guess that is for another thread. :wink:

@anon11264400 - what does “Resampling can cause overshoot and undershoot” mean? What does “non-data” mean?

I would have assumed that in the deep shadows when photographing stars, the signal-to-noise ratio is very, very low such that the data is essentially “all noise” and random - this would account for negative channel values in the shadows in the interpolated (“debayered”) image file. In this case one partial solution is to shoot a black frame (or averaged set of black frames) and subtract out at least this one type of noise that’s generated by the sensor itself.
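
For concreteness, here’s a sketch of that black-frame averaging and subtraction with numpy and the rawpy bindings; the filenames are hypothetical:

```python
import numpy as np
import rawpy  # third-party bindings to libraw: pip install rawpy

def bayer_counts(path):
    # Undemosaiced sensor counts, as float for headroom in the arithmetic.
    with rawpy.imread(path) as raw:
        return raw.raw_image_visible.astype(np.float32)

# Average several dark frames (lens cap on, same ISO / exposure / temperature)
# to estimate the sensor's own fixed contribution...
darks = [bayer_counts(p) for p in ('dark_01.CR2', 'dark_02.CR2', 'dark_03.CR2')]
master_dark = np.mean(darks, axis=0)

# ...and subtract it from the light frame. Values can legitimately go
# negative here: a photosite that reads below the average dark level is noise.
calibrated = bayer_counts('light_01.CR2') - master_dark
print(calibrated.min(), calibrated.mean(), calibrated.max())
```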

I also would have said - exactly as @ggbutcher suggested - that for the actual brighter stars that have channel values above 1.0, perhaps the white balance multipliers that @Pixelator used drove some of the channel values in the interpolated image file above 1.0 floating point. In this case the solution isn’t to “clip at 1.0” but rather to apply negative exposure compensation either in darktable or in GIMP to bring the channel values below 1.0 (assuming of course no pixels in the raw file had received enough light to reach “full well/saturated” status which would complicate things and require highlight recovery).

I didn’t actually think that photographs of stars would have colors that are saturated enough to drive the RGB channel values outside the sRGB color gamut, but it seems I was mistaken on this point, at least based on one “star image” raw file that I downloaded and checked.

As @Pixelator doesn’t want to upload one of his own raw files, and being curious as to where the “outside the sRGB color gamut” channel values came from, I found and downloaded a raw file of a star image taken with a Canon 5D Mark II. Other camera brands handle the raw file black point differently from Canon, so checking for example a Nikon camera raw file wouldn’t have been all that relevant for deep shadow data from a Canon raw file.

To my surprise, after converting from the camera input profile to sRGB the bright stars in the 5D Mark II sample image do have channel data with one or more negative RGB channel values. “How negative” depends on the white balance. I tried with the in-camera white balance and also with “daylight” white balance as Roger Clark recommends using “daylight” white balance for night shots.

“How negative” also depends on the input profile, being more negative with the enhanced color matrix input profile that darktable uses by default.

I seem to recall from using an old (no longer available) astrophotography software that for star photography their recommendation was to just use sRGB as the input profile - I didn’t use that software for processing star images, but I did read the software’s online manual cover to cover - there was a lot of interesting image processing information. I might be totally misrecalling what the manual actually said! But “just use sRGB as the input profile” would make sense if actual star colors are so saturated as to be far outside the color gamut covered by colors on target charts that are used when making the usual matrix camera input profile.

GIMP-2.99 now has (and the next release of GIMP-2.10 will have) xyY color picker readouts. So in darktable I assigned one of the default-supplied camera matrix input profiles (not sRGB as recommended by that old astrophotography software), converted the image to sRGB, and then sampled the xyY colors using GIMP:

  • Indeed the deep shadow channel values in the sample star photograph that I downloaded look just like noise (no big surprise there), with xyY values (sometimes x, sometimes y, sometimes Y, sometimes two or all three at once) that are negative, which means not just “out of gamut with respect to sRGB” but “totally imaginary” colors - note that this isn’t from having converted from the relatively large input color space to the small sRGB color space.

  • The violet-blue bright stars have negative Y values - only the RGB blue channel is positive, hence the blue color. To me this suggests very strongly that these stars indeed have colors outside the gamut of colors that the camera input profile is capable of even remotely accurately handling - the sampled color as interpreted by the camera input profile isn’t even a real color. Though in fact saturated violet blue is a color that linear gamma camera matrix input profiles in general have trouble handling, so the negative Y value isn’t all that surprising - photographs of blue neon signs and backlit deep blue glass have the same problem.

  • The bright red star that I sampled has xy values of x=0.647, y=0.316 - which puts the color right near the edge of the “horseshoe shaped footprint” of all real colors on the xy plane, though I’m not sure if the color is inside or outside the horseshoe. At any rate it’s a very saturated color, so saturated that again very possibly the camera matrix input profile isn’t suited for dealing with colors this saturated.

Here’s a nice xyY Wikipedia diagram if anyone wants some context for these xy numbers - anything outside the horseshoe shape is imaginary - no real colors have x or y (or Y) values that are less than 0.0.

Stars in the sky do have real colors! So all these “imaginary” colors are from somewhere in the capture and processing pipeline.
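
If anyone wants to reproduce this sort of xyY check outside of GIMP, here is a small numpy sketch of the linear sRGB to xyY math, using the standard rounded sRGB matrix and made-up sample values:

```python
import numpy as np

# Standard (rounded) linear sRGB -> XYZ (D65) matrix.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def linear_srgb_to_xyY(rgb):
    X, Y, Z = SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s, Y   # chromaticity x, y and luminance Y

# An ordinary in-gamut colour lands inside the horseshoe...
print(linear_srgb_to_xyY([0.2, 0.4, 0.8]))     # x~0.24, y~0.25, Y~0.39

# ...while a sample where only blue is meaningfully positive, like the
# violet-blue star cores above, yields negative y and Y: an imaginary colour.
print(linear_srgb_to_xyY([0.02, -0.04, 0.30]))
```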

If anyone is curious to check for themselves, the “star image” raw file I downloaded is from here:

and specifically I downloaded this file: IMG_1187.CR2

I found the links to these files here:

The download link also includes terms and conditions for using the files.

I’m guessing the xyY values in @Pixelator’s star photograph are similar to the colors in the sample star photograph that I downloaded, but of course this is just a guess :slight_smile:.


I am overwhelmed at the technical knowledge and curiosity of the experts at this site! I had no idea people would be so helpful and dig so deep.

@Elle - you have found just what I see in my astrophotos. I am attaching a typical single exposure here, and certainly would have done this earlier if I’d known how curious you were! L_0007_ISO400_100s__18C.CR2 (20.6 MB)

The star cores do indeed saturate, and even bleed over into multiple sensor sites. This is typical: the exposure is extended in order to get the faint signal above the camera noise floor. The night sky has way too much dynamic range for a camera. There are supposedly ways to combine short and long exposures using high dynamic range software to avoid the blown-out star cores, but I’m not there yet.

And, also as you found, the dark regions are dominated by noise. Stacking (averaging) dozens of positionally dithered exposures together (aligning the stars so the background “moves”) reduces this noise, and it can be further reduced by subtracting the fixed bias signal which arises from camera read noise. There are also other noise reduction tricks, before you even get to the good old image processing noise reduction.

I’ll re-read your post a few times and try to soak up a little knowledge about color representation. Thank you!

Resampling, such as when resizing an image using sinc, Lanczos, cubic, and similar filters, can result in overshoots and undershoots of the target values. This becomes non-data when in the negative domain.
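
Here’s a quick way to see the overshoot, using scipy’s cubic-spline zoom as a stand-in (Lanczos and sinc kernels ring the same way at hard edges):

```python
import numpy as np
from scipy import ndimage

# A hard edge, like the rim of a blown-out star core against dark sky.
edge = np.zeros(32)
edge[16:] = 1.0

# Upscale 4x with cubic-spline interpolation; the kernel's negative lobes
# ring at the discontinuity, just as Lanczos and sinc kernels do.
resampled = ndimage.zoom(edge, 4, order=3)
print(resampled.min(), resampled.max())  # slightly below 0.0 and above 1.0
```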

It means it becomes an invalid piece of data, and no longer represents a colour ratio.

Doing a colour transform from source to destination via Bradford etc. means the values may end up out of gamut. If they are out of gamut, the value cannot be represented in the destination gamut.

I believe this is false.

Via Lindbloom, when doing proper chromatic adaptation via Bradford and its ilk, D65 sRGB 0,0,1 represents xyY colour 0.15, 0.06, 0.07. When adapted to D50, the resulting colour in RGB is -0.14, -0.07, 1.13.

If you believe an exposure compensation brings that value into gamut, you are mistaken.
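
For the curious, the Bradford adaptation is just a matrix sandwich. A minimal sketch with the standard Bradford matrix and white points follows; note that the exact RGB figures cited above depend on the full pipeline in Lindbloom’s calculator, so this only shows the XYZ ratios changing:

```python
import numpy as np

# Standard Bradford cone-response matrix and the D65 / D50 white points.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])
WHITE_D50 = np.array([0.96422, 1.00000, 0.82521])

def bradford_d65_to_d50():
    # Scale cone responses by the ratio of destination to source white.
    scale = (BRADFORD @ WHITE_D50) / (BRADFORD @ WHITE_D65)
    return np.linalg.inv(BRADFORD) @ np.diag(scale) @ BRADFORD

# XYZ (D65) of linear sRGB blue (0, 0, 1), from the standard sRGB matrix.
xyz_blue_d65 = np.array([0.1805, 0.0722, 0.9505])
print(bradford_d65_to_d50() @ xyz_blue_d65)
# The three tristimulus ratios change; mapping the adapted values back to
# an RGB gamut is what can produce channels outside [0, 1].
```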

Quoting the Cinematic Colour reference below:

Display-referred imagery has dynamic ranges which are inherently tied to displays. Thus, even though the real world can have enormously luminous intensities, when working in a display-referred space values above the display-white are essentially meaningless. This mismatch between the dynamic range of the real world and the dynamic range of display-technology makes working in display-referred color spaces (even linear ones) ill suited for physically-based rendering, shading, and compositing.

Some camera sensor vendors do not encode 0 as “true black” and instead use an offset for noise encoding. As such, naive math will create incorrect colour ratios. See Roger Clark’s deep dive into sensor noise over at his site.

The Cinematic Colour PDF also has a section discussing this:

While a full discussion of input characterization is outside the scope of this document, there are differing philosophies on what camera linearizations should aim to achieve at the very darkest portions of the camera capture range. One axis of variation is to decide if the lowest camera code values represent “true black,” in which the average black level is mathematically at 0.000 in scene-linear, or if instead the lowest camera code values correspond to a small but positive quantity of scene-linear light. This issue becomes more complex in the context of preserving sensor noise/film grain. If you consider capturing black in a camera system with noise, having an average value of 0.000 implies that some of the linearized noise will be small, yet positive linear light, and other parts of the noise will be small, and negative linear light. Preserving these negative linear values in such color pipelines is critical to maintaining an accurate average black level. Such challenges with negative light can be gracefully avoided by taking the alternative approach, mapping all sensor blacks to small positive values of linear light. It is important to note that the color community continues to be religiously split on this issue. Roughly speaking, those raised on motion-picture film workflows often prefer mapping blacks to positive linear light, and those raised on video technology are most comfortable with “true black” linearizations.

Voltage spill is common. All of this, including sampling / engineering residue, becomes non-colour values, and care must be taken when handling them, as different contexts yield different data that may not represent a light emission.
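
To make the offset point concrete, a toy example with a hypothetical Canon-like 14-bit black level of 2048:

```python
import numpy as np

# Hypothetical 14-bit sensor with a black-level offset of 2048.
BLACK_LEVEL = 2048.0
raw_counts = np.array([2148.0, 2248.0, 2448.0])  # R, G, B photosite counts

# Naive math on the encoded values gives the wrong light ratios...
print(raw_counts / raw_counts[1])   # ~[0.96, 1.00, 1.09]

# ...while subtracting the offset first recovers the true ratios.
signal = raw_counts - BLACK_LEVEL
print(signal / signal[1])           # [0.5, 1.0, 2.0]
```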


@anon11264400 - There seems to be some conceptual confusion in your arguments above:

  • darktable is an ICC-profile color-managed editing application.

  • When darktable assigns a camera input profile to the interpolated raw file, and then converts the file to linear sRGB, the file isn’t converted to the linear sRGB color space with its D65 white point. Instead the file is converted to a linear sRGB ICC profile color space, which has already been Bradford-adapted from D65 to D50 to make the linear sRGB ICC profile in the first place. The two color spaces are not the same, they are merely linked by the process of chromatic adaptation to make the ICC profile from the original sRGB color space. You know all this I’m sure.

  • There is no additional Bradford adaptation done during the conversion of the interpolated image RGB values from the camera input profile to darktable’s linear sRGB ICC profile.

  • There is also no additional Bradford chromatic adaptation done during the transfer from darktable to GIMP. GIMP is also an ICC profile color-managed editing application, and GIMP’s built-in linear sRGB ICC profile has also already been chromatically adapted from D65 to D50 during the process of actually making the ICC profile in the first place. And in fact GIMP’s linear sRGB ICC profile is a functional match to darktable’s linear sRGB ICC profile.

Of course negative exposure compensation won’t turn a negative RGB channel value into a positive channel value. But the specific case I was referring to is when the color in question doesn’t have any negative RGB channel values and only has one or more RGB channel value that’s greater than 1.0. This can happen when the raw file isn’t clipped but white balance multipliers push channel values over 1.0. It can also happen when the user gets too eager when applying positive exposure compensation. This isn’t “non data” that should be summarily clipped!

Checking @Pixelator’s sample raw file, using the dcraw input matrix rather than the enhanced color matrix, and disabling the “make it pretty by sharpening and applying the base curve” edits, most of the pixels that have RGB channel values greater than 1.0 don’t in fact also have negative channel values. So negative exposure compensation does allow bringing the RGB channel values down to within the display color gamut. In which case simply clipping these channel values instead of applying negative exposure compensation would be to throw away the data.

As you can see, there is at least one pixel - Sample points 7 and 8 are for the same pixel - that has a Red channel value greater than 1 and a Green channel value less than 0. Negative exposure compensation won’t make that Green channel value greater than 0: all exposure compensation does is multiply by a constant, and that constant is always greater than 0, so the signs of the multiplied numbers don’t change. But negative exposure compensation (multiplying by a number greater than zero but less than 1) will certainly bring a lot of the image to within the display gamut. Just clipping these values would be throwing away perfectly good data.
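
A tiny numeric illustration of that last point, with made-up sample values:

```python
import numpy as np

# Made-up linear samples: one over-range but all-positive pixel, and one
# pixel with a negative channel (like sample points 7 and 8 above).
pixels = np.array([
    [1.08, 0.92, 0.15],    # recoverable by scaling
    [1.05, -0.02, 0.10],   # scaling cannot fix the negative channel
])

# Exposure compensation is multiplication by a positive constant, so it
# never changes a channel's sign.
print(pixels * 0.9)  # first pixel now fits in [0, 1]; -0.02 stays negative
```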

Whether one should always clip actual negative channel values is another story. In this case, the ultimate solution isn’t clipping. The ultimate solution is as @Pixelator already mentioned - multiple shots, dark frames, averaging, etc. In a regular photographic image file, the solution depends on why there is a negative channel value in the first place, and also on how much further processing of which sorts the user wants to do.