GIMP 2.10.6 out-of-range RGB values from .CR2 file

or CWSATMMBAGI image, for short. Fair.

And speaking of HDR: what maximum value do you think an actual image for HDR delivery will have if its brightest pixel is 5 stops above middle gray?
The answer shows why calling an image with pixel values above 1.0 "HDR" is misleading.

You are posting faster than I can type :slight_smile: . I was trying to add to my last post, but here goes the rest of my response to your next to last post:

I don’t follow most of the rest of what you said in your next-to-last post, except that of course embracing a scene-referred workflow allows one to produce a display-referred image. I’ve been using this approach to editing for a long time now. But I suspect even you sometimes throw in a few edits that are appropriate only for images with channel values in the display range? The trick is to save such edits until past the point where one is working with channel values > 1.0f.

The whole point of editing is to produce an image to be displayed. You and @anon11264400 are saying that GIMP-2.10 can’t be used to process scene-referred images into images for display. I disagree. I think GIMP is amazingly well suited for this task. But then again I’ve been using high bit depth GIMP since 2013, so I know all its remaining foibles. That helps.

GIMP doesn’t have all the tools required for processing extremely high dynamic range scene-referred images. But for less extreme cases it works quite well - if, and this is a big if, the user understands which editing algorithms are suitable for which types of images.

Otherwise I’d be tempted to put a great big “There be dragons here” warning on top of the option to convert to or keep an image at floating point precision in GIMP, and advise people who just aren’t interested in learning about scene-referred editing to use 16-bit integer precision instead. But I’m sure many Blender and even a few Nuke users learn while processing, and so will people using GIMP-2.10.

I didn’t think anyone ever thought that. But maybe they do.

But the image sent to the screen of an HDR display still has channel values <= 1.0f. That image is still display-referred. This is something that I suspect will lead to confusion once people start editing photographs and such to display on their spiffy new HDR monitors.

As a complete aside, trying to edit a photograph for printing while using an HDR monitor, well, I can’t imagine trying to do that, unless there’s a way to dial the brightness down to my usual roughly 70 cd/m^2 in line with the ambient light levels where I edit. Monitor brightness and surround have such a huge influence on what an image looks like.

That’s a trick question. The HDR display still has a highest value sent to the screen of 1.0f. Images prepared for HDR displays are still display-referred images. Which is why I think the marketing term “HDR displays” is highly misleading and designed to confuse.

I guess it depends on the app. This might not be relevant to image processing, but it is possible to view HDR values. However, you would not be able to see the entire dynamic range or gamut on the display, just a particular range at a time.

Back on the subject of white balance, here’s a short article about the vagueness of the “white balance” term, which I think partially explains the different goals of camera vs. perceptual balance. The author includes quotes from correspondence with Mark Fairchild:


Actually, one HDR standard is HDR10, and it uses the ST 2084 EOTF; basically it compresses values > 1 into the 0-1 range (0-1023 in 10 bit) without introducing banding.
If you don’t take into account the values > 1, then the max value will be roughly 500 (in the 0-1023 range), because 1.0 is mapped to a value of roughly 500.

It is the same principle as all the log encodings.
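To make the numbers concrete, here’s a minimal Python sketch of the ST 2084 (PQ) encoding; the constants are the published spec values, and the function name is my own:

```python
# SMPTE ST 2084 (PQ) constants, as given in the spec
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0..10000 cd/m^2) to a PQ code value in 0..1."""
    y = max(nits, 0.0) / 10000.0
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

# SDR reference white at 100 nits lands around mid-scale:
code_100 = round(pq_encode(100) * 1023)   # ~520 out of 1023
```

That “~520 out of 1023” for a 100-nit reference white is where the “roughly 500” figure above comes from.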

Indeed.

At the core is whether or not RGB colour constancy scaling achieves the same goal as chromatic adaptation. It is my belief that while it is acceptable, it also detaches the spectral observer from the ratios. That is, the scaling:

  • Produces values that the sensor would never capture, i.e. values beyond the device-referred capture range of the sensor.
  • Produces values that the sensor would not capture under equivalent lighting.
  • Produces values that do not correspond to the sensor’s basic colorimetry in terms of colour primaries as solved via the 1931 CIE CMFs.

Are the values good enough? Sure. Can one validly question the “on sensor” values it generates? Absolutely.
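To illustrate the distinction being drawn here, a rough sketch contrasting per-channel RGB scaling with a Bradford chromatic adaptation; the Bradford matrix is the standard published one, but the function names and example white points are mine:

```python
import numpy as np

# Bradford cone-response matrix (standard published values)
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_adapt(xyz, src_white, dst_white):
    """Chromatically adapt an XYZ colour from src_white to dst_white:
    transform to cone-like responses, scale there, transform back."""
    s = M_BFD @ src_white
    d = M_BFD @ dst_white
    A = np.linalg.inv(M_BFD) @ np.diag(d / s) @ M_BFD
    return A @ xyz

def rgb_scale(rgb, neutral_rgb):
    """'Colour constancy' white balance: divide each channel by the
    camera's response to the neutral, so the neutral lands at (1,1,1)."""
    return rgb / neutral_rgb

# Example whites (XYZ, Y normalized to 1.0)
D50 = np.array([0.96422, 1.0, 0.82521])
D65 = np.array([0.95047, 1.0, 1.08883])
```

Both map the chosen neutral to the destination white, but they act in different spaces (camera RGB vs. cone-like responses), so non-neutral colours land in different places - which is the disagreement being discussed.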

Nor would you, in a large number of cases (depending on dynamic range), be able to see representations of the intended colours correctly as specified by the ratios.

Seeing a portion of the image by raising or lowering the intensities of the pixels until shadows or highlights or whatever are displayed on the screen, with everything else crushed to 0.0 or pushed to 1.0, that’s one thing. The actual channel values sent to the screen are still in the range 0.0f to 1.0f.

Those little viewers that allow one to examine this or that portion of the image from extreme shadows to extreme highlights are really cool, but I don’t know of any “stand-alone” apps, only stuff that’s part of larger packages - do you by chance know of such a stand-alone application?
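The mechanics of such a viewer are simple enough to sketch: assuming a linear scene-referred float image, an exposure offset plus a clamp is the whole trick (function name is mine):

```python
import numpy as np

def preview(scene_rgb, stops):
    """Expose a scene-referred float image up/down by `stops`, then clamp
    to the display's 0..1 range -- everything outside the chosen window
    gets crushed to 0.0 or clipped to 1.0, as described above."""
    return np.clip(scene_rgb * (2.0 ** stops), 0.0, 1.0)

# A highlight ~10 stops above middle gray (0.18 * 2**10 ~= 184) clips to
# pure white at normal exposure, but becomes readable pulled down 10 stops:
highlight = np.array([184.0])
```

The actual values sent to the screen stay in 0..1 either way; only the window onto the scene data moves.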

As an artistic tool, I’ve found that you can make interesting images of forest flora in rather flat light by composing a subject interestingly, then taking the surrounding background into near dark oblivion with a nastily-shaped curve. Mood-ful images made by sending non-data to oblivion at output…

However, the recent days’ discovery compels me to start any such with a fully chromatically characterized camera image, using a profile whose white point is based on the camera’s interpretation of white. The profile produced for the above post is my new favorite camera profile - well, until it bolloxes up something else I haven’t yet tried…

I think it works this way:

  1. Image in the 0-1 range (midtones at 0.18) → display with 100 nits of brightness.
  2. Image in the 0-10 range (midtones at 0.18) → reduce exposure to bring the image into the 0-1 range (midtones at 0.018) → display with 1000 nits of brightness.
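A quick arithmetic check of the two cases, assuming an idealized linear display (function name is mine); the point is that the midtones land at the same absolute luminance either way:

```python
def absolute_nits(code_value, display_peak_nits):
    """Absolute luminance of a linear 0..1 code value on an idealized
    linear display with the given peak brightness."""
    return code_value * display_peak_nits

# Case 1: 0-1 image, midtones at 0.18, 100-nit display
sdr_mid = absolute_nits(0.18, 100)        # 18 nits

# Case 2: 0-10 image, divided by 10 to fit 0-1, midtones at 0.018,
# shown on a 1000-nit display
hdr_mid = absolute_nits(0.18 / 10, 1000)  # also 18 nits
```

The brighter display "buys back" the exposure reduction for the midtones, leaving the extra headroom for the highlights.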

That’s interesting - thanks! I didn’t know these HDR monitors could change max brightness depending on the image content, though that makes sense. But the actual channel values sent to the screen still max out at 1.0; it’s just a brighter display.

Do all such monitors have these scene-relevant break points?

What sort of graphics card is required, do you know? Or do these monitors - which I’m assuming are for watching TV and movies - have their own built-in cards?

It seems like the monitor bit depth must be fairly high to avoid the appearance of posterization - do you know the bit depth? Is it just 10 bits?

HDR monitors change their brightness based on the video codec metadata, so they have backward compatibility with SDR footage and different HDR standards.
The display bit depth should be 10 bits for HDR10 and 12 bits for Dolby Vision HDR; the processing bit depth should be at least 10 bits, and high-quality monitors generally have 14-bit processing.

They don’t at the EOTF level; the brightness is fixed to the maximum display brightness in HDR mode.

They use the PQ-encoded values to project out at fixed levels. If you have a decent cell phone with HDR support and play back from, say, YouTube, you’ll see that HDR mode will jack brightness up to maximum. The same goes for decent HDR display sets, which lock out consumer control and hit the specifications when an HDR-encoded signal is detected.

With that said, it’s the nuance between HDR10 and Dolby Vision / HDR10+; with the former there is a general setting for maximum content brightness, that never changes from shot to shot whereas with the latter there is metadata that allows the content to be custom transfer mapped based on content levels and display capabilities. However, the display output brightness remains constant.

See above. It depends on the standard.

You’d need a card capable of delivering 30 bit content for HDR10 / 10+ and 36 bit for Dolby Vision. And of course an operating system that doesn’t absolutely stink. Windows is pretty awful, but has gotten better. MacOS is top drawer. Linux is probably six decades away?

10 bit for HDR10 / HDR10+ standard.
12 bit for Dolby Vision.

The confusion comes if your software doesn’t have a proper view transform. Otherwise, what’s the problem?
I mean specifically:
Are you editing scene-referred images with an HDR output in mind? Use the proper view transform. Everything works.*
Are you editing scene-referred images with an HD or sRGB display in mind? Use the proper view transform. Everything works.

*) given that the software hands the proper data to the display driver, of course. A proper transfer for HDR dumped directly to the screen won’t cut it.

Your concerns about HDR displays confusing users come from software that is not prepared for scene-referred editing and therefore can’t produce the needed output.

Anyway, we end up in the same place we started: you shouldn’t send device-dependent images to the screen unmanaged, and you shouldn’t send scene-referred data directly to the display. And that is what you do when you edit scene-referred data in GIMP.

Yes, that’s exactly what I was trying to point out. A portion of the scene values is mapped to the display. There’s a mandatory transform needed to go from one model (the scene) to the other (the display).
As Elle pointed out, it was a sort of a trick question. It’s never above 1 for the display because 1 represents the maximum intensity the display can give.

@ggbutcher I’ve updated the plotting using @Elle’s Sony ICC with the RGB scaled approach versus chromatically adapted values. Might be of interest to you to see how the values would march linearly towards white. Was going to do a L’u’v’ variant but it’s a tad uh… well the primaries for the camera go off to 3000+ in u’v’.

Camera native to adapted values is via Bradford, targeting the same white.

@ggbutcher - I don’t understand your post #168 above. It seems to refer to a picture? But the quote is about a viewer for scene-referred data that exceeds the range 0 to 1.0f? @gez - this by the way is why I need a term for “HDR” scene-referred, that doesn’t offend anyone!

@anon11264400 - what plotting was updated using my Sony camera profile? I don’t see a previous similar plot, what post is it in?

Well… No. It’s not “garbage” or “non-data”. If the transformation from a source is continuous, then values that land outside the destination encoding range are merely out of range and remain valid representations of image information. You have to deal with how you are going to represent those values within the destination range, but there is nothing very unusual about that when doing such transformations, and choices are well known (various types of clipping, compression, scaling etc. etc.)
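A sketch of the well-known choices mentioned above for bringing out-of-range values into the destination encoding; function names and the knee value are my own, not any particular package’s API:

```python
import numpy as np

def clip(x):
    """Hard clip: discard everything outside 0..1."""
    return np.clip(x, 0.0, 1.0)

def scale_to_range(x):
    """Linear scale so the image maximum lands at 1.0 (preserves ratios)."""
    m = np.max(x)
    return x / m if m > 1.0 else x

def soft_compress(x, knee=0.8):
    """Leave values below `knee` alone; roll everything above it off
    asymptotically toward 1.0 (a simple shoulder; the knee is arbitrary)."""
    shoulder = knee + (1 - knee) * (1 - np.exp(-(x - knee) / (1 - knee)))
    return np.where(x <= knee, x, shoulder)
```

Each choice trades something: clipping discards the information, scaling dims everything, and compression distorts the ratios near the top - which is the "you have to deal with it" part.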


The xyY plot located at GIMP 2.10.6 out-of-range RGB values from .CR2 file - #90 by anon11264400 that used the camera native derived values. I updated it to include the RGB scaled approach, and plotted the resultant primaries in each case.


Yes, sorry, a bit confusing. In the camera image, there was a lot more detail in the background to start with, but it’s non-data now, crushed to black. My choice. In rawproc, however, you could still see the data in the image stats; the mins for all three channels were well below 0.0. The display clamps to 0-1. So of course, you can’t see it, but it’s still in the internal working image.

In all of the scene-referred discourse, when I go to process I regard the following: my 14-bit camera values come to me from LibRaw in a 16-bit integer container, so they have room to grow. They gain more room when I convert the 16-bit integer values to equivalent floats in the range 0.0-1.0. Indeed, a lot of room, for floats can be really, really big, and we’re just using the “percentage” range. But now the data is all bunched up in the lower part of the histogram, and is hard to tease out with “regular”, display-referred curves and such. I could then find out where middle gray is and adjust exposure to put it at 0.18, but I’m a post-production house of one and all my personalities know what the others are doing, so it’s not a big deal if I don’t. That’s where I think scene-referred maps into my ways; from here I would then start monkeying around as per my flower picture.
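That pipeline - integer container to float, then an exposure that anchors middle gray - can be sketched in a few lines (names are hypothetical, assuming a full-scale 16-bit container and a nonzero metered gray):

```python
import numpy as np

def normalize_and_expose(raw_u16, metered_gray_u16):
    """Convert 16-bit integer camera data to linear float in 0..1, then
    apply an exposure multiplier so the metered gray value lands at 0.18.
    Values above the metered range simply float past 1.0 -- that's the
    scene-referred headroom."""
    linear = raw_u16.astype(np.float64) / 65535.0
    gray = metered_gray_u16 / 65535.0
    return linear * (0.18 / gray)
```

After this, the data is linear and anchored, but still bunched in the low end of the histogram until a view transform or log encoding spreads it out.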

A log transform would lift the bunched data out of the dark well without pushing too much of the upper end past 1.0, but I need to write that operator and play with it. What I don’t get is that it appears to me the energy relationship of the original data is abandoned, and I thought that relationship was to be retained up until the final transforms for display. I’ll add that to the learning bucket…
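For what it’s worth, a generic log operator of the kind described might look like this; the parameters are mine, not any particular camera log curve. It does deliberately abandon the linear energy relationship, which is why such encodings are usually treated as viewing/working encodings rather than scene data:

```python
import numpy as np

def lin_to_log(x, stops_below_gray=6.0, stops_above_gray=6.0):
    """Map linear values to a 0..1 log encoding centred on 0.18 middle
    gray, covering the given range of stops (a generic sketch)."""
    total = stops_below_gray + stops_above_gray
    # distance from middle gray, in stops (floor avoids log2(0))
    stops = np.log2(np.maximum(x, 1e-10) / 0.18)
    return np.clip((stops + stops_below_gray) / total, 0.0, 1.0)
```

With these defaults, middle gray (0.18) lands at 0.5, and the 12-stop window above and below it fills the 0..1 range - hence "lifting the data out of the dark well".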

There are other questions I’ll save for later (BT709 as the “reference colorspace”?, Why are linear ICC profiles not well regarded? etc…)

Ruminations of a bear-of-little-brain…