GIMP 2.10.6 out-of-range RGB values from .CR2 file

HDR monitors change their brightness based on the video codec metadata, so they have backward compatibility with SDR footage and different HDR standards.
The display bit depth should be 10-bit for HDR10 and 12-bit for Dolby Vision HDR; the processing bit depth should be at least 10, and high-quality monitors generally have 14-bit processing.

They don’t at the EOTF level; the brightness is fixed to the maximum display brightness in HDR mode.

They use the PQ encoded values to project out at fixed levels. If you have a decent cell phone with HDR support and play back from, say, YouTube, you’ll see that HDR mode will jack brightness up to maximum. Same goes for decent HDR display sets, which lock out consumer control and hit the specifications when an HDR encoded signal is detected.

With that said, it’s the nuance between HDR10 and Dolby Vision / HDR10+: with the former there is a general setting for maximum content brightness that never changes from shot to shot, whereas with the latter there is metadata that allows the content to be custom transfer mapped based on content levels and display capabilities. However, the display output brightness remains constant.

See above. It depends on the standard.

You’d need a card capable of delivering 30 bit content for HDR10 / 10+ and 36 bit for Dolby Vision. And of course an operating system that doesn’t absolutely stink. Windows is pretty awful, but has gotten better. MacOS is top drawer. Linux is probably six decades away?

10 bit for HDR10 / HDR10+ standard.
12 bit for Dolby Vision.

The confusion comes if your software doesn’t have a proper view transform. Otherwise what’s the problem?
I mean specifically:
Are you editing scene-referred images with an HDR output in mind? Use the proper view transform. Everything works.*
Are you editing scene-referred images with an HD or sRGB display in mind? Use the proper view transform. Everything works.

*) given that the software hands the proper data to the display driver, of course. A proper transfer for HDR dumped directly to the screen won’t cut it.

Your concerns about HDR displays confusing users come from software that is not prepared for scene-referred editing and therefore can’t produce the needed output.

Anyway, we end up in the same place we started: you shouldn’t send device-dependent images to the screen unmanaged, and you shouldn’t send scene-referred data directly to the display. And this is what you do when you edit scene-referred data in GIMP.

Yes, that’s exactly what I was trying to point out. A portion of the scene values is mapped to the display. There’s a mandatory transform needed to go from one model (the scene) to the other (the display).
As Elle pointed out, it was a sort of a trick question. It’s never above 1 for the display because 1 represents the maximum intensity the display can give.

@ggbutcher I’ve updated the plotting using @Elle’s Sony ICC with the RGB scaled approach versus chromatically adapted values. Might be of interest to you to see how the values would march linearly towards white. Was going to do an L’u’v’ variant but it’s a tad uh… well, the primaries for the camera go off to 3000+ in u’v’.

Camera native to adapted values is via Bradford, targeting the same white.

@ggbutcher - I don’t understand your post #168 above. It seems to refer to a picture? But the quote is about a viewer for scene-referred data that exceeds the range 0 to 1.0f? @gez - this by the way is why I need a term for “HDR” scene-referred, that doesn’t offend anyone!

@anon11264400 - what plotting was updated using my Sony camera profile? I don’t see a previous similar plot, what post is it in?

Well… No. It’s not “garbage” or “non-data”. If the transformation from a source is continuous, then values that land outside the destination encoding range are merely out of range and remain valid representations of image information. You have to deal with how you are going to represent those values within the destination range, but there is nothing very unusual about that when doing such transformations, and choices are well known (various types of clipping, compression, scaling etc. etc.)
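
To make those choices concrete, here is a rough sketch in Python/NumPy (the helper names are made up, not from any particular tool) of the three usual ways of bringing out-of-range values into the destination encoding:

```python
import numpy as np

def clip_to_range(rgb):
    """Hard clip: anything outside [0, 1] is discarded at the boundary."""
    return np.clip(rgb, 0.0, 1.0)

def scale_to_range(rgb):
    """Linear scale: divide by the peak so ratios are preserved but the overall level drops."""
    peak = float(rgb.max())
    return rgb / peak if peak > 1.0 else rgb

def compress_highlights(rgb, knee=0.8):
    """Soft compression: leave values below the knee alone and roll the rest
    off asymptotically toward 1.0, so out-of-range data is remapped rather than lost."""
    out = np.asarray(rgb, dtype=float).copy()
    over = out > knee
    out[over] = knee + (1.0 - knee) * (1.0 - np.exp(-(out[over] - knee) / (1.0 - knee)))
    return out
```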


The xyY plot located at GIMP 2.10.6 out-of-range RGB values from .CR2 file - #90 by anon11264400 that used the camera native derived values. I updated it to include the RGB scaled approach, and plotted the resultant primaries in each case.


Yes, sorry, a bit confusing. In the camera image, there was a lot more detail in the background to start with, but it’s non-data now, crushed to black. My choice. In rawproc, however, you could still see the data in the image stats; the mins for all three channels were well below 0.0. The display clamps to 0-1. So of course, you can’t see it, but it’s still in the internal working image.

In all of the scene-referred discourse, when I go to process I regard the following: My 14-bit camera values come to me from Libraw in a 16-bit integer container, so they have room to grow. They gain more room when I convert the 16-bit integer values to equivalent floats in the range 0.0-1.0. Indeed, a lot of room, for floats can be really, really big, and we’re just using the “percentage” range. But now the data is all bunched up in the lower part of the histogram, and is hard to tease out with “regular”, display-referred curves and such. I could then find out where middle gray is and adjust exposure to put it at 0.18, but I’m a post-production house of one and all my personalities know what the others are doing, so it’s not a big deal if I don’t. That’s where I think scene-referred maps into my ways; from here I would then start monkeying around as per my flower picture.
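
In code terms, those first couple of steps look roughly like this (a sketch only, with illustrative names and numbers, not rawproc’s actual routines):

```python
import numpy as np

def normalize_raw(img_u16):
    """Map the 16-bit integer container values to floats in 0.0-1.0.
    The 14-bit camera data ends up bunched in the low end of that range."""
    return img_u16.astype(np.float32) / 65535.0

def anchor_middle_gray(img, measured_gray, target=0.18):
    """Scale exposure so a measured middle-gray patch lands at 0.18.
    A plain multiply keeps the data linear, so the scene ratios stay intact."""
    return img * (target / measured_gray)
```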

A log transform would lift the bunched data out of the dark well without pushing too much of the upper end past 1.0, but I need to write that operator and play with it. What I don’t get about that is that it appears to me the energy relationship of the original data is abandoned, and I thought that relationship was to be retained up until the final transforms for display. I’ll add that to the learning bucket…
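
For what it’s worth, one common shape for such an operator (this is just a generic log2 shaper around middle gray, not any vendor’s curve) would be something like:

```python
import numpy as np

def log2_shaper(img, middle_gray=0.18, min_stops=-6.5, max_stops=6.5):
    """Map linear scene values to a 0-1 log encoding, expressed as stops
    above/below middle gray and then normalized. This is a working/viewing
    encoding only; keep (or invert back to) the linear data for anything
    that depends on the original energy relationships."""
    floor = middle_gray * 2.0 ** min_stops          # avoid log2(0)
    stops = np.log2(np.maximum(img, floor) / middle_gray)
    return np.clip((stops - min_stops) / (max_stops - min_stops), 0.0, 1.0)
```

Middle gray (0.18) maps to 0.5, and anything more than 6.5 stops above it clips at 1.0.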

There are other questions I’ll save for later (BT709 as the “reference colorspace”?, Why are linear ICC profiles not well regarded? etc…)

Ruminations of a bear-of-little-brain…

An encoding of scene intensities is a scene-referred image. If the intensities are captured and encoded in a way that encompasses a full range of intensities, it is an HDR image :slight_smile:

From “High Dynamic Range Imaging” by Greg Ward et al., pg. 7:
“Images that store a depiction of the scene in a range of intensities commensurate with the scene are called HDR …”

[ Of course HDR is a relative term, used for contrast against previous systems with more limited capture and/or display capabilities. ]

Reading @elle’s post and regarding @anon11264400’s xyY plot brought me to this observation: white balance manipulation with the multipliers looks like it’s only considering the relative luminance, and is missing the chromaticity in the manipulation. @gwgill refers to the “chromatic transform” as a better way to bake in calibrating the colors to white, and that now makes sense to me if my observation is correct. This may be obvious to some, but for b-o-l-b here, it wasn’t until now.

Which is not quite the same thing as chromatic adaptation :slight_smile:

You can re-light a scene so as to move the white point, but that’s a change in the spectral domain, and so will have effects on color that can’t be emulated by the manipulation of a tri-stimulus representation of the scene.

[ As I understand it ] chromatic adaptation is an emulation of what we have in our visual system that causes us to normalize our perception of color to “white”.

Not really. As a matter of practicality, many display systems use a fixed point encoding of pixel data, since this makes delivering data at speed a tractable problem. The encoding that seems to have been most broadly adopted is actually an absolute brightness encoding scheme taken from an HDR mastering standard. So typically “1.0” = maximum code value = 10000 cd/m^2. There are two reasons that the output may be variable. One is that the display is not capable of reproducing light levels above its maximum, and the other is that the display may not be able to display its maximum over the whole screen area, or for more than a brief period of time, due to power or display technology limitations. So the behavior of HDR displays is hard to pin down, typically being make/model/mode dependent.

[ Tangentially, I think it was a mistake for the display industry to adopt an absolute encoding scheme, since the effective observed brightness depends on the viewing conditions. ]
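
For concreteness, the absolute encoding being referred to is SMPTE ST 2084 (PQ); its EOTF from normalized code value to luminance looks roughly like this (the constants are from the standard, the function itself is just a sketch):

```python
def pq_eotf(code):
    """SMPTE ST 2084 (PQ) EOTF: non-linear code value in [0, 1] -> luminance in cd/m^2.
    code = 1.0 corresponds to the 10000 cd/m^2 peak mentioned above."""
    m1 = 2610.0 / 16384.0          # 0.1593017578125
    m2 = 2523.0 / 4096.0 * 128.0   # 78.84375
    c1 = 3424.0 / 4096.0           # 0.8359375
    c2 = 2413.0 / 4096.0 * 32.0    # 18.8515625
    c3 = 2392.0 / 4096.0 * 32.0    # 18.6875
    e = code ** (1.0 / m2)
    return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1.0 / m1)
```

Whether a given display actually produces those luminances is exactly the make/model/mode question above.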

I understand that’s the case in a color management pipe, but not necessarily true in a processing pipe.
See my example above with the screen compositing mode: The data outside the display range produces undesired effects that can be safely considered garbage :smiley: and discarded.
And by doing so, your image becomes a display-referred image that has lost the dynamic range and ratios it had before.
Maybe it’s not garbage in a sense of data. There are values there, sure. But the result for the user with certain expectations is certainly incorrect and unusable.
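
A tiny example of what I mean (plain Python, with values picked just to make the point):

```python
def screen_blend(a, b):
    """Classic display-referred 'screen' mode: 1 - (1 - a)(1 - b).
    The math assumes both inputs live in [0, 1]."""
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen_blend(0.5, 0.5))   # 0.75 -> brighter than either input, as expected
print(screen_blend(2.0, 3.0))   # -1.0 -> two bright scene values "screen" to below black
```

With display-referred inputs the mode behaves as designed; with scene-referred values above 1.0 the terms go negative and the result flips, which is the kind of garbage-for-the-user outcome I mean.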

Perhaps you can elaborate, but linear light ICC profiles will be rubbish if the linear light values are used as indexes to tables, since the tables will be badly quantized perceptually.

There are tricks that can help, and a lot of the newer ICC profile standards and alternatives to ICC profiles are motivated by addressing the issues of profile accuracy over larger dynamic ranges.
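
A quick back-of-the-envelope illustration of the quantization problem (a hypothetical 256-entry table, with gamma 2.2 standing in for a perceptual-ish index):

```python
entries = 256
shadow = 0.18 / 4          # a tone two stops below middle gray

# Table entries available below that tone for each way of indexing:
linear_index = shadow * (entries - 1)                 # ~11 entries cover everything darker
gamma_index = (shadow ** (1 / 2.2)) * (entries - 1)   # ~62 entries cover the same region

print(round(linear_index), round(gamma_index))
```

With a linear-light index, roughly a dozen table entries have to represent every tone darker than a two-stops-under shadow; a gamma-encoded index gives that region roughly five times the resolution.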

I was thinking of profiles that have “identity” TRCs, e.g., gamma = 1.0. AFAIK, a transform to such a profile will only affect chromaticity. I have @Elle’s profile collection, which includes a g10 version in each set, and they seem to operate that way in Little CMS.

Okay, I know this thread is getting entirely too long, but I’ve been picking through this year’s images and re-processing selected ones with the new camera profile I made back in post #102, then comparing them with what I did with the old profile that required separate white balance. The difference is marked; I’m starting with richer colors. Here’s an example, first the old rendering:

Then, the new rendering with the uwb profile:

Now, the first profile might just be crappy; in fact I think I made it with the blown-white exposure, and the new one was made with the next lower, which was a good ETTR. But, well, the whole chromatic treatment of white balance just makes intuitive sense, and so far I don’t see a downside. Now, that’s with my “manual transmission” software; this may be harder or impossible in tools that do more under the hood before presenting the image for work.

Still no downside…
Edit: well, the red flower really showed its colors; might need to work on that.

I’ve no idea why you think that, given the typical encoding resolution of JPEG and the norms of how it gets displayed. (And why would Greg Ward develop an extension to JPEG to be able to encode HDR if JPEG already handled HDR?)

Such scaling in raw RGB can certainly be regarded as a Von Kries type white point adaptation, even if it has less strong foundations than other chromatic adaptation approaches.
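
For anyone following along, the practical difference between the two is where the scaling happens: in the working/device RGB directly, versus in a cone-like space via a matrix. A rough sketch using the commonly used linearized Bradford form (sketch only):

```python
import numpy as np

# Bradford cone-response matrix (XYZ -> sharpened cone-like RGB)
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def diag_white_scale(rgb, src_white_rgb, dst_white_rgb):
    """Per-channel multipliers applied straight to the raw/working RGB:
    the 'white balance multipliers' style of scaling discussed above."""
    return np.asarray(rgb) * (np.asarray(dst_white_rgb) / np.asarray(src_white_rgb))

def bradford_adaptation_matrix(src_white_xyz, dst_white_xyz):
    """Chromatic adaptation matrix for XYZ colors: do the diagonal scaling in
    the Bradford cone space instead of the device RGB, then come back."""
    src = BRADFORD @ np.asarray(src_white_xyz, dtype=float)
    dst = BRADFORD @ np.asarray(dst_white_xyz, dtype=float)
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD
```

Both are diagonal (Von Kries-style) scalings; they differ only in which basis the diagonal is applied.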

Hmm… why would one want to do that? Seems to me, once you’ve applied a log transform for example, wouldn’t you want to do the rest of your work there?