GIMP 2.10.6 out-of-range RGB values from .CR2 file

Notice the words “dynamic range higher than what is considered to be standard dynamic range”.

In image editing, somewhere around 8 stops is considered “standard”, based on the number of discernible stops (doublings of linear RGB values) from “just above black” (perhaps “L=0.5” or “L=1.0” on a scale from 0 to 100 is a good value for this) up to the integer-encoded maximum of 255 (or 65535, etc., depending on the bit depth of the image).
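As a quick back-of-the-envelope check (a minimal sketch, taking “just above black” to be 1 on the integer scale), the stop count is just a base-2 logarithm:

```python
import math

# Stops from "just above black" (taken as 1) to the integer-encoded maximum,
# treating the values as linear: each stop is a doubling.
stops_8bit = math.log2(255 / 1.0)     # ~8.0 stops for 8-bit (max 255)
stops_16bit = math.log2(65535 / 1.0)  # ~16.0 stops for 16-bit (max 65535)

print(f"8-bit:  ~{stops_8bit:.1f} stops")
print(f"16-bit: ~{stops_16bit:.1f} stops")
```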

OK, a lot of people will say, well, that’s just Wikipedia and those people make huge mistakes (something I haven’t generally found to be the case, though of course I haven’t double-checked the facts in every single Wikipedia article). So, putting anything Wikipedia says to one side, how about this article by Greg Ward:

http://www.anyhere.com/gward/hdrenc/pages/originals.html

Quoting the first sentence of Ward’s article: "The following examples were used to compare the following High Dynamic Range Image formats: OpenEXR, Radiance RGBE and XYZE, 24-bit and 32-bit LogLuv TIFF, 48-bit scRGB, and 36-bit scRGB-nl and scYCC-nl. "

OK, so what does Ward mean by HDR image format? He means an image format that can hold more than the dynamic range that fits into 8/16-bit integer file formats such as png and jpeg.

Two commonly used HDR image formats are OpenEXR and high-bit-depth floating-point TIFF. GIMP (and darktable, PhotoFlow, etc.) can open and export both, fwiw, and can also operate on channel values that are >1.0f and produce correct results, assuming of course that the data is actually meaningful in the first place!
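As a minimal sketch (assuming Python with numpy and the tifffile package; the file name is made up for illustration), a 32-bit floating-point TIFF round-trips channel values above 1.0 without any clipping:

```python
import numpy as np
import tifffile  # assumes the tifffile package is installed

# A tiny synthetic scene-referred image with channel values above 1.0
# (a middle-gray pixel and a bright highlight several stops above it).
img = np.array([[[0.18, 0.18, 0.18],
                 [4.50, 4.20, 3.90]]], dtype=np.float32)

tifffile.imwrite("scene_referred.tif", img)   # 32-bit float TIFF
back = tifffile.imread("scene_referred.tif")

print(back.max())                 # 4.5 -- values > 1.0 survive the round trip
assert np.allclose(img, back)
```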

The video display industry is using “HDR” to market their new monitors with a greatly increased dynamic range compared to “standard” monitors. And someday those new monitors will perhaps become commonplace. But for image editing, right now apparently they have limitations.

Just because the video display industry is using “HDR” to market their new display technology, doesn’t mean suddenly anyone and everyone who uses “HDR” to mean anything else is suddenly using the wrong terminology.

Which brings the topic back to the question:

@anon11264400 - why is data (actual data) with channel values >1.0 “data” in Blender and Nuke, and “non data” in GIMP?

When you have tools and operations designed to work in the display range, the only valid data produced by the tool is the data that ends up in the display range. The rest is non-data, garbage.
Take the “screen” blending mode for instance in GIMP.
https://docs.gimp.org/en/gimp-concepts-layer-modes.html
A quick look at the formula tells you what’s going on there. What does that 255 value in the formula mean?
Now, that may be outdated information that is only valid for 8-bit integer precision, and the formula may now use 1.0 instead of 255, but the problem remains: what happens to the pixels with values above 1.0?
You may clamp/clip the operations and get the expected result, or leave them unclipped and get garbage.
The fact that some unclipped display-referred operations don’t return garbage doesn’t mean that display-referred operations are fine for scene-referred images. It’s just a lucky accident.
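A quick numeric sketch (using the normalized form of the screen formula, result = 1 - (1 - a)(1 - b), which is the 255-based formula with everything divided by 255) shows how values above 1.0 turn into garbage:

```python
import numpy as np

def screen(a, b):
    """Normalized "screen" blend: 1 - (1 - a) * (1 - b).
    Designed for display-referred values in the [0, 1] range."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# In-range, display-referred values behave as expected:
print(screen(0.5, 0.5))   # 0.75 -- brighter than either input, never > 1

# Scene-referred values above 1.0 produce garbage:
print(screen(2.0, 2.0))   # 0.0  -- two very bright pixels "screen" to black
print(screen(4.0, 0.5))   # 2.5  -- shoots past both inputs

# Clipping first gives the expected display-referred result instead:
print(screen(np.clip(2.0, 0, 1), np.clip(2.0, 0, 1)))   # 1.0
```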

The same goes for Photoshop. It’s a display-referred tool. Having higher precision modes and removing clips doesn’t make it automagically a scene-referred editor.
For starters you can’t even SEE what’s going on when data is beyond the display range. That should tell you something.

Yes, I agree that you have to look at the actual data to decide what operations are appropriate. You also have to look at the specific operation.

When applying negative exposure compensation to linearly encoded scene-referred output from a raw processor, in what way is this an appropriate operation in Nuke or Blender, but a totally inappropriate operation in GIMP, requiring clipping of channel values >1.0?

So, it’s non-data in the context of the display. Before that, in your post-processing, if you know it’s out there, so to speak, you have the opportunity to either corral it back with, say, a choice of rendering intent in the case of color, or avoid it with intelligent choices regarding workflow in the case of dynamic range… ??

Well, in this case you’re picking a specific operation where the same arithmetic is valid for both scene-referred and display-referred images.
It’s not that the multiplication is inappropriate in GIMP and valid in Nuke, Blender or any other compositor that handles scene-referred data properly.
However, the issue remains: you may have an image with pixels that are 6 stops above middle gray, and you may apply a negative exposure compensation of 1 stop in GIMP, leaving those bright pixels 5 stops above middle gray. The result is something you still can’t see on screen, and it will break a lot of operations, producing non-data in the context of display-referred imagery.
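To put numbers on that 1-stop example (a minimal sketch; the 0.18 middle-gray anchor is just the usual convention):

```python
import numpy as np

middle_gray = 0.18
scene = np.array([middle_gray,            # middle gray
                  middle_gray * 2**6])    # a pixel 6 stops above it: 11.52

# Negative exposure compensation of 1 stop = multiply everything by 0.5.
adjusted = scene * 0.5

print(adjusted)                         # [0.09, 5.76] -- still far above 1.0
print(np.log2(adjusted / middle_gray))  # [-1.  5.] -- ratios preserved, 5 stops above gray

# The multiplication itself is fine for scene-referred data; the trouble starts
# when a later display-referred operation assumes everything fits in [0, 1].
```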

The operations that don’t break with scene ratios are obviously not the problem here.

In the context of the display, and in the context of any application designed to work with images that are ready for the display (display-referred).
It’s a matter of scope. You design a road with cars and trucks in mind, and all the design decisions you make only consider wheeled vehicles that may travel on that road. Sure, you may use that road to land a plane, but that doesn’t mean you designed something that is valid or useful for air travel.
The fact that the road doesn’t have a roof doesn’t make it a proper aerial route.

Dealing with scene-referred imagery requires a completely different design approach. Every tool has to be designed for that type of data. It’s not a matter of corralling the data into the display constraints to make it valid in the context of the display.
If a tool needs you to do that, it’s not an appropriate tool for scene-referred imagery. It has to be discarded and replaced by a tool that is.
And in that tool, the range 0,1 means nothing special.

I am working in the context of an application that starts with the camera-provided raw array, and I stack operators on top of that, in any sequence I desire. It’s not one of the node-based compositors; it’s more like g’mic or ImageMagick, with a GUI.

Further, I can select any operator in the stack for display. I usually select the last operator, but I can separately select any of the prior operators and work on it, watching what it does in the final rendering. So in my little world, all of the discourse regarding scene-referred vs display referred has greatly influenced how I stack my operators, thanks.

I think, in the context of the software we discuss here, that’s a little drastic, throwing the baby out with the bathwater. I don’t think there is a single scene-referred concept discussed so far that can’t be considered in some way in RawTherapee or darktable, pardon the double negative…

Oh, I probably wasn’t clear. I meant operations, not tools as in the whole software.
I didn’t mean that you have to discard your software, or DT or RT, because there is a 1 here or there in the code, but the operations that rely on it need to go if you’re after a true scene-referred editor.
At any rate, a design-centered analysis of the whole piece of software is needed, that’s for sure. If you find yourself constantly needing to constrain operations to the display range, then the whole application was designed around a concept that is incompatible with the idea of scene-referred editing.
If it’s just a matter of removing a couple of operations and everything still makes sense, then sure, you don’t have to throw the baby out with the bathwater.
Your software is a stack of individual operations, from what I could see. In your case you just need to make sure the whole processing stack is composed of operations that work with scene ratios.
Putting the display operator last serves as some sort of view, which is fine. The only problem I can foresee is what happens when the user decides to put the display operator first. Designing a view that is independent of the processing stack would ensure proper scene-referred processing.
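A minimal sketch of that separation (the names here are made up for illustration): the stack holds only scene-referred operations, and the display transform lives outside it as a view, so it can never end up in the middle of the processing.

```python
import numpy as np

def exposure(img, stops):
    """Scene-referred operation: a simple multiplication, no clipping."""
    return img * (2.0 ** stops)

def view_srgb(img):
    """Display/view transform, applied only when rendering to the screen:
    clip to the display range, then apply a gamma-like encoding."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / 2.2)

# The processing stack contains only scene-referred operations...
stack = [lambda img: exposure(img, -1.0)]

scene = np.array([0.18, 11.52])          # middle gray and a bright highlight
for op in stack:
    scene = op(scene)

# ...and the view is computed separately, never feeding back into the stack.
print(scene)              # [0.09  5.76] -- working data stays scene-referred
print(view_srgb(scene))   # [~0.33 1.0 ] -- what the display actually shows
```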

Considering the above, I think it’s quite evident what the problem is with other applications that take control of the order of operations and stuff legacy display-referred ops at arbitrary points of the pipe.

Good point, but I’m not going to protect them. :smile: They need to come here, read the threads, walk away armed with the knowledge…

My perspective developing rawproc was to provide a toolbox full of tools, one aim being to provide a sandbox within which to play with the ordering of operations and see what happens. I have in mind to write a raw processing tutorial with a rawproc AppImage specifically configured to support it, to which end this thread has had a significant influence. Indeed, a previous post of mine here described what I did to see how dealing with white balance as an integral part of the camera characterization worked, and rawproc let me do that with maybe ten minutes of fiddling. Now, to figure out what to do with that knowledge…

Right now, I believe I’m the only rawproc user. I really haven’t relished taking on the “why is my picture dark?” questions that’d come with a user base. My hat’s off to @Morgan_Hardwood, @houz, and the RT/dt devs who do it with aplomb…

As I’ve tried to outline above, the reason this thread death-spiralled is that it comes down to what one believes colour constancy to be.

  • If someone believes that scaling the RGB values is legitimate, all is well and good, and the values coming out of the sensor would simply have matched the scaled values.
  • If someone believes it is inherently tied to the standard observer model, then adaptation is done in the XYZ domain and some of the resultant values are non-data.

I don’t believe the first situation is an ideal solution. As evidence I would cite:

What I will say beyond a shadow of a doubt is that there is no way a sensor could capture the value ratios one ends up with post RGB scaling. In fact, changing the scene’s illuminant manually would end up with different sensor values captured.
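As a rough sketch of the two positions above (purely illustrative: sRGB primaries stand in for camera RGB, a D65 to D50 change stands in for the illuminant change, and the Bradford matrix is the usual published one), per-channel RGB scaling and adaptation tied to the standard observer do not give the same answer for the same colour:

```python
import numpy as np

# Standard sRGB (D65) to XYZ matrix and the Bradford cone-space matrix.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)
BRADFORD   = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

D65 = np.array([0.95047, 1.0, 1.08883])   # source white (XYZ)
D50 = np.array([0.96422, 1.0, 0.82521])   # destination white (XYZ)

rgb = np.array([0.6, 0.3, 0.1])           # some linear RGB test colour

# (1) Per-channel RGB scaling: scale R, G and B so that the source white
#     maps onto the destination white, directly in the working RGB space.
scale_rgb  = (XYZ_TO_RGB @ D50) / (XYZ_TO_RGB @ D65)
rgb_scaled = rgb * scale_rgb

# (2) Chromatic adaptation in the standard-observer domain: go to XYZ,
#     adapt in the Bradford cone space, and come back to RGB.
cone_scale   = (BRADFORD @ D50) / (BRADFORD @ D65)
adapt        = np.linalg.inv(BRADFORD) @ np.diag(cone_scale) @ BRADFORD
rgb_bradford = XYZ_TO_RGB @ (adapt @ (RGB_TO_XYZ @ rgb))

print(rgb_scaled)    # the two results (and their ratios) differ: scaling the
print(rgb_bradford)  # RGB channels is not the same operation as adapting in
                     # the cone space derived from the standard observer
```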

You make up your mind for yourself.

Note “HDRI” is an actual class of things, not to be confused with a random idiot saying HDR. So yes, I should have included HDRIs, although the “I” in this case distinguishes it from HDR. As I also said, HDR display is firmly acceptable, and refers to an entire class of displays.

Um, RGB scaling for white balancing is not the same as using exposure compensation to reduce intensities.

In the case of white balancing during raw processing, except for the trivial case of uniwb, the scaling values for R, G, and B are not all equal to each other, with the G value being roughly half the R and B values, except of course for your example where you put the white balance in the camera input profile.

In the case of using exposure compensation, the RGB channel values are multiplied or divided by gray, R=G=B.

Very different situations. Multiplying and dividing by gray produces the same result in any well-behaved RGB working space, unless of course you insist on clipping the channel values before applying negative exposure compensation to bring the highlight values down below 1.0f. But this isn’t true for multiplying by a non-gray color.

Let’s assume the channel values > 1.0f for the image in question actually have meaning and are real data. The specific topic at hand is that white balancing by multiplying by a non-gray color is not the same as reducing intensity by multiplying by a gray color that’s less than 1.0f. The latter operation is color-space independent as long as the color space is a well-behaved RGB working space (no RGB LUT profiles here, please!).
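A small numeric check of that claim (M here is just an arbitrary invertible matrix standing in for the conversion between two well-behaved, matrix-related linear RGB working spaces):

```python
import numpy as np

rng = np.random.default_rng(0)

# M stands in for the 3x3 conversion between two well-behaved (linear,
# matrix-related) RGB working spaces; any invertible matrix shows the point.
M     = rng.uniform(0.0, 1.0, size=(3, 3)) + np.eye(3)
M_inv = np.linalg.inv(M)

rgb = np.array([0.4, 1.8, 0.2])          # a colour with one channel > 1.0

# Exposure compensation: multiply by "gray" (one scalar for all channels).
# It commutes with the colour-space conversion, so you get the same result
# whether you apply it in space A or in space B.
in_A  = rgb * 0.5
via_B = M_inv @ ((M @ rgb) * 0.5)
print(np.allclose(in_A, via_B))          # True

# White balance: multiply by a non-gray colour (per-channel factors).
# This does NOT commute with the conversion, so the working space matters.
wb    = np.array([2.0, 1.0, 1.5])
in_A  = rgb * wb
via_B = M_inv @ ((M @ rgb) * wb)
print(np.allclose(in_A, via_B))          # False
```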

Maybe reread what I posted, as I believe you misread the intention.

Let’s not, as that is the entire reason for this thread existing. Is RGB scaling appropriate colour constancy? If you scale the RGB, is that what the sensor would / could have recorded?

PS: You are re-explaining things that every single reader of this thread is exhausted by, given it is so rudimentary. Everyone knows the difference between an exposure change and white balance scaling!

Say what?

Do you never scale intensities using Nuke or Blender?

In the example case, the raw file has already been interpolated and white balanced.

Every raw processor out there allows you to apply positive and negative exposure compensation.

Did you shift tracks from discussing colour constancy / chromatic adaptation? I mean RGB scaling in the colour constancy / chromatic adaptation sense.

Well, I thought you shifted tracks when you introduced the topic of how to do a better white balance, when the OP’s original question was how to deal with channel values < 0.0 and >1.0. But the question of a better white balance is something I’m very interested in, so I went with the flow.

But these posts were my attempts to switch the discussion back to the original question:

and the following posts after #144 are mostly about dealing with channel values > 1.0f, with sidetracks criticizing my use of “HDR”.

Changing topics: getting someone to agree to use your vocabulary to describe a situation is a major victory. I’ve been using @anon11264400’s vocabulary whenever possible, simply to avoid initiating a tirade about all the ways phrases like “out of gamut channel values” (use “non data” instead) and “unbounded” (use “non data” instead) are wrong terminology, in @anon11264400’s opinion. But I accidentally slipped up and used “HDR”, which I know he objects to, but I just forgot.

But I’m pretty sure everyone on the forum does casually refer to high dynamic range scene-referred images, that is, images of scenes that have intensities that are > 1.0, as “HDR”, or at least knows what I mean when I use the term. But maybe not.

Here is a question: instead of “HDR image”, what is the correct term - the term that won’t bring the wrath of @anon11264400 down upon my head - to refer to images with intensities that are proportional to scene intensities, where the scene has intensities that are > 1.0 (depending of course on where one puts middle gray) and those intensities are encoded in the image file?

See these threads for context on the above disagreements about vocabulary, especially the first thread:

Scene-referred images.
Implying that they are “HDR” means that there is an extension beyond a standard dynamic range, and that leads to the usual confusion that scene-referred is only display-referred where values above 1 weren’t clipped.

Scene referred is not display referred with extra dynamic range.
An HDR display is a display with extra dynamic range.

With a scene-referred image you can produce both images for LDR and HDR outputs.

Oh, interesting. But then how do you distinguish between scene-referred images that in fact have all intensities below 1.0f and those that have intensities >1.0f?

What do you need that distinction for? With a proper scene-referred workflow it doesn’t matter. It only matters when your processing pipe is display-referred.
As I said above, if you embrace a scene-referred workflow you can still produce display-referred images.

Is an image that fits in the 0,1 range a “low dynamic range” image? Is an image with pixels reading 1.1 automatically a “high dynamic range” image? Then what do you call an image framing the sun and a piece of charcoal?

A case where some automated tone-mapping might be a good idea.
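For what it’s worth, the simplest automated approach along those lines is a global Reinhard-style operator (a minimal sketch; the 0.18 key is the usual convention and would normally be user-adjustable):

```python
import numpy as np

def reinhard_tonemap(scene, key=0.18):
    """Global Reinhard-style tone mapping: scale the image so its log-average
    sits at `key`, then compress with x / (1 + x) into the [0, 1) range."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(scene + eps)))
    scaled = scene * (key / log_avg)
    return scaled / (1.0 + scaled)

# A "sun and charcoal" scene: roughly 20 stops between the darkest and the
# brightest pixel, far beyond anything a display can show directly.
scene = np.array([1e-3, 0.18, 1.0, 50.0, 1000.0])
print(reinhard_tonemap(scene))   # everything lands in [0, 1), ready for display
```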