GIMP 2.10.6 out-of-range RGB values from .CR2 file

Well… No. It’s not “garbage” or “non-data”. If the transformation from a source is continuous, then values that land outside the destination encoding range are merely out of range, and remain valid representations of image information. You have to decide how you are going to represent those values within the destination range, but there is nothing very unusual about that when doing such transformations, and the choices are well known: various types of clipping, compression, scaling, etc. (a few are sketched below).
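A minimal sketch of those choices, assuming float RGB in a numpy array (the function names and the knee value are mine, purely illustrative, not any particular program’s code):

```python
import numpy as np

def clip(rgb):
    """Hard clipping: discard everything outside 0..1."""
    return np.clip(rgb, 0.0, 1.0)

def rescale(rgb):
    """Linear scaling: map the data's full range into 0..1,
    preserving the ratios of differences (not absolute values)."""
    lo, hi = float(rgb.min()), float(rgb.max())
    return (rgb - lo) / (hi - lo)

def compress_highlights(rgb, knee=0.8):
    """Soft compression: leave values below the knee alone and
    roll values above it asymptotically toward 1.0."""
    out = np.asarray(rgb, dtype=np.float64).copy()
    over = out > knee
    out[over] = knee + (1.0 - knee) * (1.0 - np.exp(-(out[over] - knee) / (1.0 - knee)))
    return out
```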


The xyY plot at GIMP 2.10.6 out-of-range RGB values from .CR2 file - #90 by anon11264400 used the camera-native derived values. I updated it to include the RGB-scaled approach, and plotted the resultant primaries in each case.


Yes, sorry, a bit confusing. In the camera image, there was a lot more detail in the background to start with, but it’s non-data now, crushed to black. My choice. In rawproc, however, you could still see the data in the image stats; the mins for all three channels were well below 0.0. The display clamps to 0-1. So of course, you can’t see it, but it’s still in the internal working image.

In all of the scene-referred discourse, when I go to process I keep the following in mind: my 14-bit camera values come to me from Libraw in a 16-bit integer container, so they have room to grow. They gain more room when I convert the 16-bit integer values to equivalent floats in the range 0.0-1.0. Indeed, a lot of room, for floats can be really, really big, and we’re just using the “percentage” range. But now the data is all bunched up in the lower part of the histogram, and is hard to tease out with “regular”, display-referred curves and such. I could then find out where middle gray sits and adjust exposure to put it at 0.18 (see the sketch below), but I’m a post-production house of one and all my personalities know what the others are doing, so it’s not a big deal if I don’t. That’s where I think scene-referred maps into my ways; from here I would then start monkeying around as per my flower picture.
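A minimal sketch of that normalization, with a hypothetical metered middle-gray value standing in for a real measurement:

```python
import numpy as np

# Hypothetical stand-in for data delivered by Libraw: 14-bit values
# in a 16-bit integer container.
raw = np.random.randint(0, 2**14, size=(4, 6), dtype=np.uint16)

# 16-bit integer -> equivalent float in the 0.0-1.0 "percentage" range.
img = raw.astype(np.float32) / 65535.0

# Optional exposure scale so a measured middle gray lands at 0.18.
measured_gray = 0.045                 # hypothetical metered value
img_exposed = img * (0.18 / measured_gray)
```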

A log transform would lift the bunched data out of the dark well without pushing too much of the upper end past 1.0, but I need to write that operator and play with it (a rough sketch follows). What I don’t get is that the log transform appears to abandon the energy relationship of the original data, and I thought that relationship was to be retained up until the final transforms for display. I’ll add that to the learning bucket…
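A minimal sketch of such an operator, assuming linear float input (the log2(1 + x) shape is one plausible choice, not a prescription):

```python
import numpy as np

def log_lift(img):
    """log2(1 + x): maps 0..1 onto 0..1, lifting the bunched shadow
    data while compressing the top end so little lands past 1.0."""
    return np.log2(1.0 + np.clip(img, 0.0, None))
```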

There are other questions I’ll save for later (BT.709 as the “reference colorspace”? Why are linear ICC profiles not well regarded? etc.)

Ruminations of a bear-of-little-brain…

An encoding of scene intensities is a scene-referred image. If the intensities are captured and encoded in a way that encompasses a full range of intensities, it is an HDR image :slight_smile:

From “High Dynamic Range Imaging” by Greg Ward et al., p. 7:
“Images that store a depiction of the scene in a range of intensities commensurate with the scene are called HDR …”

[ Of course HDR is a relative term, used for contrast against previous systems with more limited capture and/or display capabilities. ]

Reading @elle’s post and regarding @anon11264400’s xyY plot brought me to this observation: white balance manipulation with the multipliers looks like it’s only considering the relative luminance, and is missing the chromaticity in the manipulation. @gwgill refers to the “chromatic transform” as a better way to bake in calibrating the colors to white, and that now makes sense to me if my observation is correct. This may be obvious to some, but for b-o-l-b here, it wasn’t until now.

Which is not quite the same thing as chromatic adaptation :slight_smile:

You can re-light a scene so as to move the white point, but that’s a change in the spectral domain, and so will have effects on color that can’t be emulated by the manipulation of a tri-stimulus representation of the scene.

[ As I understand it ] chromatic adaptation is an emulation of what we have in our visual system that causes us to normalize our perception of color to “white”.

Not really. As a matter of practicality, many display systems use a fixed-point encoding of pixel data, since this makes delivering data at speed a tractable problem. The encoding that seems to have been most broadly adopted is actually an absolute brightness encoding scheme taken from an HDR mastering standard, so typically “1.0” = maximum code value = 10000 cd/m^2. There are two reasons the output may be variable. One is that the display is not capable of reproducing light levels above its maximum; the other is that the display may not be able to display its maximum over the whole screen area, or for more than a brief period of time, due to power or display technology limitations. So the behavior of HDR displays is hard to pin down, typically being make/model/mode dependent.
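The scheme described (“1.0” = maximum code value = 10000 cd/m^2, from an HDR mastering standard) matches the SMPTE ST 2084 “PQ” curve; assuming that is the one meant, a sketch of the encoding side:

```python
# SMPTE ST 2084 (PQ) constants.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(cd_m2: float) -> float:
    """Absolute luminance in cd/m^2 -> PQ code value in 0..1."""
    y = max(cd_m2, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

print(pq_encode(10000.0))  # 1.0: maximum code value
print(pq_encode(100.0))    # ~0.51: a typical SDR peak white
```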

[ Tangentially, I think it was a mistake for the display industry to adopt an absolute encoding scheme, since the effective observed brightness depends on the viewing conditions. ]

I understand that’s the case in a color management pipe, but not necessarily true in a processing pipe.
See my example above with the screen compositing mode: the data outside the display range produces undesired effects that can safely be considered garbage :smiley: and discarded.
And in doing so, your image becomes a display-referred image that has lost the dynamic range and ratios it had before.
Maybe it’s not garbage in the sense of data; there are values there, sure. But the result, for a user with certain expectations, is certainly incorrect and unusable.
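For concreteness, the standard screen formula is s = 1 − (1 − a)(1 − b), which is only well behaved for inputs in 0..1 (hypothetical values below):

```python
def screen(a: float, b: float) -> float:
    # Standard "screen" compositing: s = 1 - (1 - a)(1 - b)
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.5, 0.5))    # 0.75 -- normal behaviour
print(screen(-0.2, 0.5))   # 0.4  -- a "lightening" mode that darkens
print(screen(1.5, 0.5))    # 1.25 -- pushed past the display range
```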

Perhaps you can elaborate, but linear light ICC profiles will be rubbish if the linear light values are used as indexes to tables, since the tables will be badly quantized perceptually.
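A small numeric illustration of that quantization problem (my own, assuming a 33-entry table and a rough gamma-2.2 perceptual proxy):

```python
import numpy as np

entries = np.linspace(0.0, 1.0, 33)    # linear-light table grid
perceptual = entries ** (1 / 2.2)      # rough perceptual proxy

# Width of the first and last grid cells in perceptual terms:
print(perceptual[1] - perceptual[0])    # ~0.21: one cell spans ~21% of the range
print(perceptual[-1] - perceptual[-2])  # ~0.014: fine resolution near white
```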

There are tricks that can help, and a lot of the newer ICC profile standards and alternatives to ICC profiles are motivated by addressing the issues of profile accuracy over larger dynamic ranges.

I was thinking of profiles that have “identity” TRCs, e.g., gamma = 1.0. AFAIK, a transform to such a profile will only affect chromaticity. I have @Elle’s profile collection, which includes a g10 version in each set, and they seem to operate that way in Little CMS.

Okay, I know this thread is getting entirely too long, but I’ve been picking through this year’s images and re-processing selected ones with the new camera profile I made back in post #102, then comparing them with what I did with the old profile that required separate white balance. The difference is marked; I’m starting with richer colors. Here’s an example, first the old rendering:

Then, the new rendering with the uwb profile:

Now, the first profile might just be crappy; in fact I think I made it with the blown-white exposure, and the new one was made with the next lower, which was a good ETTR. But, well, the whole chromatic treatment of white balance just makes intuitive sense, and so far I don’t see a downside. Now, that’s with my “manual transmission” software; this may be harder or impossible in tools that do more under the hood before presenting the image for work.

Still no downside…
Edit: well, the red flower really showed its colors; might need to work on that.

I’ve no idea why you think that, given the typical encoding resolution of JPEG and the norms of how it gets displayed. (And why would Greg Ward develop an extension to JPEG to be able to encode HDR if JPEG already handled it?)

Such scaling in raw RGB can certainly be regarded as a Von Kries type white point adaptation, even if it has less strong foundations than other chromatic adaptation approaches.
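A minimal sketch of that equivalence, with hypothetical multiplier values: the per-channel scaling is just a diagonal matrix applied in the camera’s native RGB, which is the Von Kries form (stronger adaptation transforms such as Bradford do the same diagonal scaling, but in a sharpened cone-like space):

```python
import numpy as np

wb_mult = np.array([2.1, 1.0, 1.6])     # hypothetical camera WB multipliers
raw_rgb = np.array([0.30, 0.42, 0.25])  # one camera-native RGB sample

balanced = raw_rgb * wb_mult            # equivalent to diag(wb_mult) @ raw_rgb
```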

Hmm… why would one want to do that? Seems to me, once you’ve applied a log transform for example, wouldn’t you want to do the rest of your work there?

representation → encoding

And I never said it does.

I don’t know what planet you live on where “a full range of intensities” means little or nothing in relation to HDR.

@gwgill - GIMP currently has RGB/HSV/CMYK/LAB/LCH and the recently added xyY color picker readouts, and it provides RGB/HSV/CMYK/LCH color sliders for dialing in new colors (as opposed to merely reading colors in the image).

I want to add “XYZ”. Maybe I could also add “u’v’” - would this be useful given the already existing color readouts?

To verify, is this “u’v’” the same as CIELUV (https://en.wikipedia.org/wiki/L*u*v* , Welcome to Bruce Lindbloom's Web Site)? Or is it something else?

I’d add u’v’ before XYZ :slight_smile:

XYZ may be interesting to examine, but not very useful to adjust, as it’s very perceptually non-linear.

It’s not L*u*v*, which is an analogous space to L*a*b*, based on non-linear transforms of Yu’v’.

Perceptual CIE 1976 UCS diagram Yu’v’:

u’ = (4.0 * X) / (X + 15.0 * Y + 3.0 * Z)
v’ = (9.0 * Y) / (X + 15.0 * Y + 3.0 * Z)
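
A direct transcription of those equations, as one might add to a picker readout (plain Python, not GIMP’s actual code):

```python
def xyz_to_uv_prime(X: float, Y: float, Z: float) -> tuple[float, float]:
    # CIE 1976 UCS: u'v' chromaticity from tristimulus XYZ
    denom = X + 15.0 * Y + 3.0 * Z
    return (4.0 * X) / denom, (9.0 * Y) / denom

# D65 white (hypothetical test values): expect roughly (0.1978, 0.4683).
print(xyz_to_uv_prime(0.95047, 1.0, 1.08883))
```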

See this page or this page for a little more background.

[ xy Chromaticity diagrams are very popular, but they should all be replaced by u’v’ Chromaticity diagrams, according to Prof. Robert Hunt. ]


Looks like Wikipedia has a link too: http://www.efg2.com/Lab/Graphics/Colors/Chromaticity.htm. (There is a discussion and a Windows app. I haven’t looked into it.)


@gwgill - thanks so much! for links and equations.

From experimenting with editing in XYZ a long time ago in Cinepaint, I totally agree, it’s not a good space for editing. I just want to put XYZ in the color picker readout options.

Regarding xyY and this Yu’v’, which from your equations seems easy enough to code up (again, thank you for the links!): these also would just be color picker readout options. Though maybe one or the other might also be useful as color sliders for choosing colors, especially if given a polar transform.

Returning to the topic of vocabulary and the word “HDR”, Mark Fairchild uses the word “HDR” when referring to images and scenes:

  • In a talk given by Mark Fairchild (https://www.youtube.com/watch?v=fjt7VHarVu4&feature=youtu.be), in the fourth and fifth minutes Fairchild has a slide that has the words “HDR stimuli” and “LDR Stimuli”.

    In that same time frame he uses the phrase “high dynamic range images”.

    Earlier he talked about scenes with diffuse white, and then “extended range” scenes, so apparently Fairchild sees a reason to distinguish between images and scenes that are low dynamic range, ones that are only a bit past having a well-defined diffuse white, and ones that are high dynamic range.

  • See this page on Fairchild’s web pages at RIT:

    http://rit-mcsl.org/fairchild/HDR.html
    Welcome to the HDR Photographic Survey

    Quoting the intro sentence, “This page is the home of the High-Dynamic-Range (HDR) Photographic Survey, a unique database of HDR photographs”

  • Also this page:

    https://scholarworks.rit.edu/cgi/viewcontent.cgi?article=1154&context=other
    Rendering HDR images

@anon11264400 - thanks! for posting that video link for Mark Fairchild’s talk over in this post:

I had watched that Fairchild video earlier this year and wanted to rewatch it because there was a part in the middle that I didn’t understand. I was going to ask you about that part of the video. But in the meantime it seems you or someone deleted several of your posts to that “color balance” thread; in any case, I can’t find your post with the link to the video. But thanks much for posting it anyway.