GIMP 2.10.6 out-of-range RGB values from .CR2 file

Um, RGB scaling for white balancing is not the same as using exposure compensation to reduce intensities.

In the case of white balancing during raw processing, except for the trivial case of UniWB, the scaling values for R, G, and B are not all equal to each other; typically the G multiplier is roughly half the R and B multipliers. The exception, of course, is your example, where you put the white balance into the camera input profile.

In the case of using exposure compensation, the RGB channel values are multiplied or divided by a gray value (R=G=B).

Very different situations. Multiplying and dividing by gray produces the same result in any well-behaved RGB working space, unless of course you insist on clipping the channel values before applying negative exposure compensation to bring the highlight values down below 1.0f. But this isn’t true for multiplying by a non-gray color.

Let’s assume the channel values > 1.0f for the image in question actually have meaning, are real data, for the specific topic at hand. Which is that white balancing by multiplying by a non-gray color is not the same as reducing intensity by multiplying by a gray color that’s less than 1.0f. The latter operation is color-space independent as long as the color space is a well-behaved RGB working space (no RGB LUT profiles here, please!).
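To make the distinction concrete, here’s a minimal numpy sketch; the multiplier values are invented for illustration, not taken from any particular camera or raw file:

```python
import numpy as np

# A scene-linear RGB pixel (arbitrary values, for illustration only).
pixel = np.array([0.8, 0.5, 0.3])

# Exposure compensation: multiply by a gray (R=G=B) factor.
# -1 stop is a uniform scale of 0.5; the channel ratios are preserved,
# so the result is the same in any well-behaved RGB working space.
exposure_minus_1ev = pixel * 0.5

# White balance: multiply by a non-gray triplet (daylight-ish multipliers,
# invented here; real ones come from the raw file or camera profile).
wb_multipliers = np.array([2.1, 1.0, 1.6])
white_balanced = pixel * wb_multipliers

print(exposure_minus_1ev)  # [0.4  0.25 0.15] - ratios unchanged
print(white_balanced)      # [1.68 0.5  0.48] - ratios changed, R pushed past 1.0
```

The second result depends on which RGB basis you do the multiplication in, which is exactly why multiplying by a non-gray color is not color-space independent.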

Maybe reread what I posted, as I believe you misread the intention.

Let’s not, as that is the entire reason for this thread existing. Is RGB scaling appropriate colour constancy? If you scale the RGB, is that what the sensor would / could have recorded?

PS: You are re-explaining things that every single reader of this thread is exhausted by, given how rudimentary they are. Everyone knows the difference between an exposure change and a white balance!

Say what?

Do you never scale intensities using Nuke or Blender?

In the example case, the raw file has already been interpolated and white balanced.

Every raw processor out there allows you to apply positive and negative exposure compensation.

Did you shift tracks from discussing colour constancy / chromatic adaptation? I mean RGB scaling in the colour constancy / chromatic adaptation sense.

Well, I thought you shifted tracks when you introduced the topic of how to do a better white balance, when the OP’s original question was how to deal with channel values < 0.0 and >1.0. But the question of a better white balance is something I’m very interested in, so I went with the flow.

But these posts were my attempts to switch the discussion back to the original question:

and the following posts after #144 are mostly about dealing with channel values > 1.0f, with sidetracks criticizing my use of “HDR”.


Changing topics: getting someone to agree to use your vocabulary to describe a situation is a major victory. I’ve been using @anon11264400’s vocabulary whenever possible, simply to avoid initiating a tirade about all the ways phrases like “out of gamut channel values” (use “non data” instead) and “unbounded” (again, use “non data” instead) are wrong terminology, in @anon11264400’s opinion. But I accidentally slipped up and used “HDR”, which I know he objects to; I just forgot.

But I’m pretty sure everyone on the forum does casually refer to high dynamic range scene-referred images, that is, images of scenes that have intensities that are > 1.0, as “HDR”, or at least knows what I mean when I use the term. But maybe not.

Here is a question: instead of “HDR image”, what is the correct term - the term that won’t bring the wrath of @anon11264400 down upon my head - for an image whose encoded intensities are proportional to scene intensities, where the scene contains intensities > 1.0 (depending, of course, on where one puts middle gray)?

See these threads for context of above disagreements about vocabulary, especially the first thread:

Scene-referred images.
Implying that they are “HDR” suggests an extension beyond a standard dynamic range, and that leads to the usual confusion that scene-referred is just display-referred with the values above 1 left unclipped.

Scene referred is not display referred with extra dynamic range.
An HDR display is a display with extra dynamic range.

With a scene-referred image you can produce images for both LDR and HDR outputs.

Oh, interesting. But then how do you distinguish between scene-referred images that in fact have all intensities below 1.0f and those that have intensities >1.0f?

What do you need that distinction for? With a proper scene-referred workflow it doesn’t matter. It only matters when your processing pipe is display-referred.
As I said above. If you embrace a scene-referred workflow you can still produce display-referred images.

Is an image that fits in the 0-1 range a “low dynamic range” image? Is an image with pixels reading 1.1 automatically a “high dynamic range” image? Then what do you call an image framing the sun and a piece of charcoal?

A case where some automated tone-mapping might be a good idea.

or CWSATMMBAGI image, for short. Fair.

And speaking of HDR: what maximum value do you think an actual image for HDR delivery will have if its brightest pixel is 5 stops above middle gray?
The answer shows why calling an image with pixel values above 1.0 “HDR” is misleading.

You are posting faster than I can type 🙂. I was trying to add to my last post, but here goes the rest of my response to your next-to-last post:

I don’t follow most of the rest of what you said in your next-to-last post, except that of course embracing a scene-referred workflow allows one to produce a display-referred image. I’ve been using this approach to editing for a long time now. But I suspect even you sometimes throw in a few edits that are appropriate only for images with channel values in the display range? The trick is to keep such edits at a point past where one is working with channel values > 1.0f.

The whole point of editing is to produce an image to be displayed. You and @anon11264400 are saying that GIMP-2.10 can’t be used to process scene-referred images into images for display. I disagree. I think GIMP is amazingly well suited for this task. But then again, I’ve been using high bit depth GIMP since 2013, so I know all its remaining foibles. That helps.

GIMP doesn’t have all the tools required for processing extremely high dynamic range scene-referred images. But for less extreme cases it works quite well - if, and this is a big if, the user understands which editing algorithms are suitable for which types of images.

Otherwise I’d be tempted to put a great big “There be dragons here” warning on top of the option to convert to or keep an image at floating point precision in GIMP, and advise people who just aren’t interested in learning about scene-referred editing to use 16-bit integer precision instead. But I’m sure many Blender and even a few Nuke users learn while processing, and so will people using GIMP-2.10.

I didn’t think anyone ever thought that. But maybe they do.

But the image sent to the screen of an HDR display still has channel values <1.0f. That image is still display-referred. This is something that I suspect will lead to confusion once people start editing photographs and such to display on their spiffy new HDR monitors.

As a complete aside, trying to edit a photograph for printing while using an HDR monitor, well, I can’t imagine trying to do that, unless there’s a way to dial the brightness down to my usual roughly 70 cd/m^2 in line with the ambient light levels where I edit. Monitor brightness and surround have such a huge influence on what an image looks like.

That’s a trick question. The HDR display still has a highest value sent to the screen of 1.0f. Images prepared for HDR displays are still display-referred images. Which is why I think the marketing term “HDR displays” is highly misleading and designed to confuse.

I guess it depends on the app. This might not be relevant to image processing, but it is possible to view HDR values. However, you would not be able to see the entire dynamic range or gamut on the display, just a particular range at a time.

Back on the subject of white balance, here’s a short article about the vagueness of the “white balance” term which I think explains partially different goals of camera vs perceptual balance. The author includes quotes from correspondence with Mark Fairchild:


Actually, one HDR standard is HDR10, and it uses the ST 2084 EOTF; basically it compresses values > 1 into the 0-1 range (the 0-1023 range in 10-bit) without introducing banding.
If you don’t take into account the values > 1, then the maximum value will be roughly 500 (in the 0-1023 range), because 1.0 is mapped to a value of roughly 500.

It is the same principle as all the log encodings.
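For anyone who wants to check the numbers, here’s a small Python sketch of the ST 2084 (PQ) curve, using the published constants and assuming the common convention that scene-linear 1.0 is displayed at 100 nits:

```python
import numpy as np

# ST 2084 (PQ) constants, as published in the standard.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map absolute luminance (0-10000 cd/m^2) to a 0-1 PQ signal value."""
    y = np.asarray(nits, dtype=float) / 10000.0
    return ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2

# Scene-linear 1.0 shown at 100 nits lands near code 520 of 1023:
print(pq_encode(100) * 1023)  # ~520

# The earlier question: a brightest pixel 5 stops above middle gray is
# 0.18 * 2**5 = 5.76 scene-linear, i.e. 576 nits under this convention.
print(pq_encode(576) * 1023)  # ~707, still well below the top code of 1023
```

So even a file delivered for an HDR display never carries signal values above 1.0, which is the point being made about the terminology.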

Indeed.

At the core is whether or not RGB colour constancy scaling achieves the same goal as chromatic adaptation. It is my belief that while it is acceptable, it is also detaching the spectral observer from the ratios. That is, the scaling produces values that:

  • The sensor would never capture. IE: they are beyond the device-referred capture range of the sensor.
  • The sensor would not capture under equivalent lighting.
  • Do not correspond to the sensor’s basic colorimetry in terms of colour primaries as solved via the 1931 CIE CMFs.

Are the values good enough? Sure. Can one validly question the “on sensor” values it generates? Absolutely.
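For what it’s worth, here’s a rough numpy sketch of that difference: naive per-channel RGB scaling versus a Bradford chromatic adaptation done in XYZ. The sRGB and Bradford matrices are the standard published ones; the pixel value and the illuminant pair (A → D65) are arbitrary choices for illustration:

```python
import numpy as np

# Linear sRGB <-> XYZ (D65) matrices, standard IEC 61966-2-1 values.
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
XYZ2RGB = np.linalg.inv(RGB2XYZ)

# Bradford cone-response matrix.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

# White points as XYZ with Y = 1: CIE illuminant A and D65.
WHITE_A   = np.array([1.09850, 1.00000, 0.35585])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def bradford_adapt(xyz, src_white, dst_white):
    """Von Kries scaling in Bradford cone space - the textbook CAT."""
    d = (BRADFORD @ dst_white) / (BRADFORD @ src_white)
    return np.linalg.inv(BRADFORD) @ np.diag(d) @ BRADFORD @ xyz

pixel = np.array([0.8, 0.5, 0.3])  # arbitrary scene-linear sRGB value

# Route 1: per-channel scaling in RGB - divide by the source white so
# that the illuminant-A white comes out R=G=B=1.
scaled = pixel / (XYZ2RGB @ WHITE_A)

# Route 2: Bradford adaptation A -> D65, done in XYZ.
adapted = XYZ2RGB @ bradford_adapt(RGB2XYZ @ pixel, WHITE_A, WHITE_D65)

print(scaled)   # ~[0.43 0.61 1.29]
print(adapted)  # ~[0.35 0.59 1.10]
```

Both routes send the source white to R=G=B, but for non-neutral colours they disagree, which is the sense in which pure RGB scaling detaches the result from the observer.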

Nor would you, in a large number of cases (dynamic range depending), be able to see representations of the intended colours correctly as specified by the ratios.

Seeing a portion of the image by raising or lowering the intensities of the pixels until shadows or highlights or whatever are displayed on the screen, with everything else crushed to 0.0 or pushed to 1.0, that’s one thing. The actual channel values sent to the screen are still in the range 0.0f to 1.0f.

Those little viewers that allow you to examine this or that portion of the image, from extreme shadows to extreme highlights, are really cool, but I don’t know of any “stand-alone” apps, only stuff that’s part of larger packages - do you by chance know of such a stand-alone application?

As an artistic tool, I’ve found that you can make interesting images of forest flora in rather flat light by composing a subject interestingly, then taking the surrounding background into near dark oblivion with a nastily-shaped curve. Mood-ful images made by sending non-data to oblivion at output…

However, the recent days’ discovery compels me to start any such image with a fully chromatically characterized camera image, using a profile whose white point is based on the camera’s interpretation of white. The profile produced for the above post is my new favorite camera profile - well, until it bolloxes up something else I haven’t yet tried…

I think it works this way:

1. Image in the 0-1 range (midtones at 0.18) → display with 100 nits of brightness

2. Image in the 0-10 range (midtones at 0.18) → reduce exposure to bring the image into the 0-1 range (midtones at 0.018) → display with 1000 nits of brightness
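To put numbers on it: in case 1, middle gray reaches the viewer at 0.18 × 100 = 18 nits; in case 2, after the exposure reduction, it reaches the viewer at 0.018 × 1000 = 18 nits. Middle gray lands at the same absolute luminance either way; the brighter display only buys headroom for the values that were above 1.0.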

That’s interesting - thanks! I didn’t know these HDR monitors could change max brightness depending on the image content, though that makes sense. But the actual channel values sent to the screen still max out at 1.0; it’s still just a display.

Do all such monitors have these scene-relevant break points?

What sort of graphics card is required, do you know? Or do these monitors - which I’m assuming are for watching TV and movies - have their own built-in cards?

It seems like the monitor bit depth must be fairly high to avoid the appearance of posterization - do you know the bit depth? Is it just 10 bits?