GIMP 2.10.6 out-of-range RGB values from .CR2 file

@Elle linked to my dcraw and WB page. I followed the link, and noticed that some scripts and result boxes were missing. Sorry about that, it was a build that went wrong and I didn’t notice. I’ve rebuilt the page and re-uploaded.

My apologies for any inconvenience.

It’s scene-referred data. You overcomplicated it, as well as completely mangling what I have said and restated dozens of times.

If you read up, you’ll see where I said that whether or not data is non-data is contextual. For example, resample an image via a sinc filter for scaling, and suddenly a small value dips negative? Non-data. It didn’t suddenly go out of gamut; it is residue from the sampling. Likewise, some might suggest that RGB scaling isn’t proper chromatic adaptation, and therefore the validity of some of the data is questionable at particular output values.
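As a quick illustration (a minimal numpy sketch, assuming a Lanczos-3 kernel as the windowed-sinc filter and a hard edge in otherwise legitimate data), the negative lobes of the kernel push a resampled value below zero even though nothing in the source was out of range:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) inside |x| < a, zero outside."""
    x = np.asarray(x, dtype=float)
    w = np.sinc(x) * np.sinc(x / a)
    w[np.abs(x) >= a] = 0.0
    return w

# Legitimate scene-referred data containing a hard edge.
src = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Resample at x = 1.5, one sample into the dark side of the edge.
x_new = 1.5
weights = lanczos(x_new - np.arange(len(src)))
weights /= weights.sum()          # normalise so flat regions are preserved
print(weights @ src)              # ~ -0.11: negative, yet never "out of gamut"
```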

Scene-referred files are fine, but loading them in a display-referred application isn’t prudent. Nor does 1.0 mean anything magical, just like 0.0 doesn’t; it depends on the encoding and its proper interpretation. There are a good many encodings where 1.0 represents a massive scene-referred value and 0.0 represents another arbitrary value. Assuming that there is some magic about the value 1.0 or 0.0 is foolish, and one has to interpret what is going on with values that are beyond the particular encoding’s range.

There just isn’t any such thing as “HDR”[1], for example, which is a large part of why so many folks get confused about particular encodings.

We’ve looped over this dozens, if not hundreds of times. Keep with whatever you do.

[1] Save for the legitimate application of “HDR display”, which is a whole other kettle of fish.

Yes, I believe so. By doing this you are deliberately breaking the color management, and it is the use of a non-matching/inappropriate profile to interpret the raw values that results in the green appearance.

A higher level view: The idea of color management is to use objective measurements of color, and to remove device-specific characteristics from a workflow. Our snapshot of a workflow here is capturing real-world colors with a camera and then observing them on a display. If the color management is arranged as intended, the details of how the camera or display represent color are removed completely - you can make fairly arbitrary changes to the device encoding of the camera, the display, or both (gain, offset, 3D transforms) and, when both are profiled and the profiles are used to interpret/transform values between these two devices, the results will be unchanged. Color management doing its job.

If you make changes to the device value encodings (like modifying raw channel gains) without re-profiling, you are breaking the end-to-end color management. All bets are off then, and the interpretation of the device values will be faulty. Same thing if you profile a display and then fiddle with the display’s contrast or white point or R/G/B gain values.
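To put some toy numbers on it, here is a minimal numpy sketch (the camera and display matrices are invented, and real devices are of course not simple 3x3 matrices): as long as both devices are profiled, the scene colour survives the round trip, but scaling the raw channels without re-profiling breaks it.

```python
import numpy as np

# Invented matrices, purely for illustration: a "camera" mapping scene XYZ
# to raw RGB, and a "display" mapping display RGB to XYZ.
camera = np.array([[0.70, 0.20, 0.10],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.10, 0.85]])
display = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

# "Profiling" each device amounts to learning the inverse mappings.
camera_profile = np.linalg.inv(camera)      # raw RGB -> XYZ
display_profile = np.linalg.inv(display)    # XYZ -> display RGB

scene_xyz = np.array([0.40, 0.35, 0.30])

# Full chain: scene -> camera raw -> (profile) -> XYZ -> (profile) -> display
# RGB -> light emitted by the display. The device details cancel out.
raw = camera @ scene_xyz
emitted = display @ (display_profile @ (camera_profile @ raw))
print(emitted)            # equals scene_xyz: color management doing its job

# Change the raw channel gains (e.g. white-balance multipliers) without
# re-profiling the camera: the chain no longer reproduces the scene colour.
gains = np.diag([2.0, 1.0, 1.5])
emitted_broken = display @ (display_profile @ (camera_profile @ (gains @ raw)))
print(emitted_broken)     # no longer scene_xyz: color management broken
```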

Now a realistic photography workflow has to allow for creative modifications by the photographer (or by the camera, on the photographer’s behalf), and there are many workflows that can be proposed to allow this. One that works within a color-managed workflow might be to convert the camera image into a working space using the camera + working space profile, make creative modifications in that working space, and then proceed with any further color-managed transformations into display space, sRGB, printing space, etc.

Other workflows are possible, such as using a fixed camera profile and modifying the raw values creatively. You are then using the camera raw space as a working space, something that will vary camera to camera, make to make etc., and making it difficult to relate the changes to objective criteria (such as a best practice chromatic adaptation/white balancing).

Yes, it will continue to look green when the raw values have been interpreted as green by a profile that wasn’t made from that exact device space.

Yes.

Sure - the precise model used by the profile isn’t relevant to the discussion.

Yes - this is a good example of the difference between being within the color-managed workflow or outside it, and the intent of the photographer informs the practice:

  1. The photographer intends the filter to change the appearance of the world that the camera is capturing. The profiling is from the camera lens down, and so the resulting images appear to have the filtered color.
  2. The photographer intends the filter to be part of the camera, so the profiling is from the filter down, and so the profile captures the effect of the filter on the camera raw response, and compensates for it. The resulting images appear normal, and not affected by the filter.

Yes, but no :slight_smile:

You’re now down into the details of “exactly which white amongst all the whites is the one I want to transform to the exact media color of my destination space.” That’s important, but not what I mean by “white is white”.

There are many details about profiling cameras, and several assumptions in creating something like an ICC profile, which (by default) assumes that some white will transform to PCS white, and even more assumptions when the source is a photo of a reflective test chart.

I’m speaking in a slightly more abstract sense, in that if you profile a camera based on certain XYZ stimuli, then (by definition, and within the limitations of how closely the camera spectral sensitivities match the standard observer) the profile will translate the camera raw values back to the corresponding XYZ stimuli. So if you call one of the XYZ stimuli white (because it looks white to you in the original scene), then the profile’s interpretation of the corresponding raw values will be that XYZ value, which you’ve already agreed is “white”.
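As a toy illustration (invented raw/XYZ numbers and a matrix-only “profile” fitted by least squares, so only a sketch of the idea): whichever stimulus you designate as white, the profile hands its raw values back as that XYZ, within the fit error.

```python
import numpy as np

# Invented training patches: camera raw RGB and the measured XYZ of the
# same patches. The first patch is the stimulus we choose to call "white".
raw = np.array([[0.80, 0.45, 0.30],
                [0.40, 0.20, 0.10],
                [0.10, 0.30, 0.15],
                [0.05, 0.10, 0.40]])
xyz = np.array([[0.9642, 1.0000, 0.8249],   # the "white" XYZ (D50-ish)
                [0.45,   0.40,   0.20],
                [0.20,   0.35,   0.25],
                [0.15,   0.10,   0.55]])

# Fit a 3x3 matrix M so that raw @ M ~= xyz (a matrix-only camera "profile").
M, *_ = np.linalg.lstsq(raw, xyz, rcond=None)

# Within the fit error, the raw values of the scene white come back as the
# white XYZ stimulus they were profiled against.
print(raw[0] @ M)
```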

Now of course such an absolute interpretation of the camera-captured colorimetry needs to be chromatically transformed if it is to be mapped to a different device (such as a display) where the observer is white-point adapted to different XYZ values. Relative colorimetric ICC profiles include this step as well.

I can’t tell you that as a matter of fact, because I haven’t researched it myself. My guess is that it would be better than XYZ (“Wrong Von Kries”), possibly better than (say) sRGB if the camera primaries are “sharper” than sRGB primaries, but not as good as doing the white point balance/chromatic transform in an accepted sharpened cone space, such as Bradford.
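For reference, a minimal sketch of what that looks like (standard Bradford matrix and D65/D50 white points; the test colour is made up): scale in the sharpened cone space rather than directly in XYZ, the latter being the “wrong von Kries” case.

```python
import numpy as np

# Bradford "sharpened cone" matrix: XYZ -> cone-like RGB response.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt(xyz, src_white, dst_white, cone=BRADFORD):
    """Von Kries style chromatic adaptation: scale in the chosen cone space."""
    s = cone @ src_white
    d = cone @ dst_white
    m = np.linalg.inv(cone) @ np.diag(d / s) @ cone
    return m @ xyz

D65 = np.array([0.95047, 1.00000, 1.08883])
D50 = np.array([0.96422, 1.00000, 0.82521])
colour = np.array([0.30, 0.40, 0.50])             # some XYZ colour seen under D65

print(adapt(colour, D65, D50))                    # Bradford adaptation
print(adapt(colour, D65, D50, cone=np.eye(3)))    # "wrong von Kries": plain XYZ scaling
```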

It depends a bit on what the intent of the profile is. ICC profiles and cameras don’t go together very naturally, due to the default usage of ICC profiles being relative colorimetric, and the dependence the relative colorimetric source white point has on lighting conditions. This is why camera ICC profiles work more successfully in repro type situations.

On the other hand, I see no reason why absolute colorimetric profiles can’t be used as a basis for general photography, whether via ICC profiles or other formats such as DNG or spectral sensitivity profiles. (Camera manufacturers’ own processing tends to be based on characterizing spectral sensitivities and then computing profile transforms from that.)

How to deal properly with white balance when shooting a test chart depends on the details of how the workflow does white balance, and on what the intended use of the profile is.

It’s only a problem if it’s a problem to you :slight_smile:
From a color science point of view, I think doing chromatic transformations in camera device space is not optimal. “Not optimal” may well be perfectly usable and good looking though!

Probably not, because (from what I can gather googling “uniwb”), this makes image dependent changes to the raw encoding. Such a process sits uneasily within a color managed workflow, because profiles are static, and can’t dynamically adapt to such a change in encoding.

Ideally you would be able to profile a camera, and then insert the profile in the workflow before the white balancing, the white balancing being done in a device independent colorspace.

[ It’s not clear to me what the idea of uniwb is though, if it is just scaling the raw RGB values after capture, since typically the signal/noise limit is imposed by the sensor, not the quantisation of encoding range. Modifying the lighting to even the channels may improve S/N, but so does increasing lighting and/or exposure generally. ]

I believe it is exactly the same.

Note that there is a difference between a chromatic adaptation transform in tri-stimulus space, and the effect on colors of a change of illuminant spectrum, which involves a change in spectral interaction as well as a possible shift in white and a corresponding shift in the observers adapted white point.


Yes, contextual. Totally agree. And in the examples that I gave the context was perfectly fine legitimate scene-referred data that had channel values >1.0. And yes, I totally agree with you that in the sense of recording scene-referred information blah blah 1.0 is just a number.

But when I bring that same scene-referred information into GIMP, from a raw processor or by opening that Rec709.exr file, suddenly according to you “1.0” is the magic number that means “clip the data”, even though the same data brought into Blender or Nuke would not be summarily clipped, because in fact it’s perfectly legitimate data.

Why?

Hi Graham,

It will take awhile to absorb the rest of what you’ve said. But “uniwb” has two meanings in this discussion:

  1. When shooting a raw file, set the white balance to “uniwb” where R=G=B is presumed “white”, and of course the resulting image looks green. The point is to “fool” the little in-camera histograms into revealing a bit more accurately how close to clipping in the raw file the resulting capture is. This used to be fairly commonly done - for example search dpreview forums for posts by Iliah Borg and Luminous Landscape forums for posts by Guillermo Luijk (well, here’s an article on his website: GUILLERMO LUIJK >> TUTORIALS >> UNIWB. MAKE CAMERA DISPLAY RELIABLE).

  2. The other meaning is use “1,1,1” as the white balance multipliers when processing the raw file.

    This is something that personally I don’t do, except in the case of putting together my contrived example for this post, trying to show that “it’s really green” if it’s not properly white balanced, and that the “green” isn’t from sRGB or from my monitor or from my being “used to looking at my monitor”, as has been claimed several times in this post.

    But I did once read about a photographer who uses the resulting green images as a “creative” white balance. I can sort of see why - there’s something a bit disturbing and dreamlike when the green color cast is combined with the right sort of image.

I suppose a third meaning is to use “uniwb” as the white balance when making the ICC profile for the camera, as was suggested earlier in this long thread as a way to avoid “green”. I tried this a long time ago in an entirely different context. But the resulting profile just can’t be used with regular raw processors if the user actually wants to modify the white balance, say from D50 to one of the camera presets, or by clicking on a neutral area in the image.

On the other hand, several years ago I helped a person make a dng DCP profile, and there is software that can extract an ICC profile from a dng DCP; sometimes that extracted profile actually does use “uniwb” multipliers, and sometimes not. Well, that was a long time ago, so hopefully I’m remembering correctly.

Notice the words “dynamic range higher than what is considered to be standard dynamic range”.

In image editing, somewhere around 8 stops is considered “standard”, based on the number of discernible stops (doublings of linear RGB values) from “just above black” (perhaps L=0.5 or L=1.0 on a scale from 0 to 100 is a good value for this) to the integer-encoded max of 255 (or 65535, etc., depending on the bit depth of the image).

OK, a lot of people will say, well, that’s just Wikipedia and those people make huge mistakes (something I haven’t found to generally be the case, but of course I haven’t double-checked the facts in every single Wikipedia article). So putting anything Wikipedia says to one side, how about this article by Greg Ward:

http://www.anyhere.com/gward/hdrenc/pages/originals.html

Quoting the first sentence of Ward’s article: “The following examples were used to compare the following High Dynamic Range Image formats: OpenEXR, Radiance RGBE and XYZE, 24-bit and 32-bit LogLuv TIFF, 48-bit scRGB, and 36-bit scRGB-nl and scYCC-nl.”

OK, so what does Ward mean by HDR image format? He means an image format that can hold more than the dynamic range that fits into 8/16-bit integer file formats such as png and jpeg.

Two commonly used HDR image formats are OpenEXR and high-bit-depth floating-point TIFF. GIMP (and darktable, PhotoFlow, etc.) can open and export both file formats, fwiw, and can also operate on channel values that are >1.0f and produce correct results, assuming of course that the data is actually meaningful data in the first place!

The video display industry is using “HDR” to market their new monitors with a greatly increased dynamic range compared to “standard” monitors. And someday those new monitors will perhaps become commonplace. But for image editing, right now apparently they have limitations.

Just because the video display industry is using “HDR” to market their new display technology doesn’t mean that anyone and everyone who uses “HDR” to mean anything else is suddenly using the wrong terminology.

Which brings the topic back to the question:

@anon11264400 - why is data (actual data) with channel values >1.0 “data” in Blender and Nuke, and “non data” in GIMP?

When you have tools and operations designed to work in the display range, the only valid data produced by the tool is the data that ends up in the display range. The rest is non-data, garbage.
Take the “screen” blending mode in GIMP, for instance.
https://docs.gimp.org/en/gimp-concepts-layer-modes.html
A quick look at the formula tells you what’s going on there. What does that 255 value in the formula mean?
Now, that may be outdated information only valid for 8-bit integer, and the formula may now use 1.0 instead of 255, but the problem remains: what happens to the pixels with values above 1.0?
You may clamp/clip the operations and get the expected result, or leave them unclipped and get garbage.
The fact that some unclipped display-referred operations don’t return garbage doesn’t mean that display-referred operations are fine for scene-referred images. It’s just a lucky accident.
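To make that concrete, here is the normalized form of the screen formula from the docs (1 - (1 - a)(1 - b), i.e. the 255-based formula scaled to 0..1), fed first with display-range values and then with scene-referred values above 1.0:

```python
def screen(a, b):
    """Normalized "screen" blend; the formula assumes inputs in 0..1."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# Display-referred inputs: brightens, stays within 0..1, as designed.
print(screen(0.5, 0.5))   # 0.75

# Scene-referred inputs above 1.0: (1 - a) goes negative and the result
# is garbage in the sense discussed above.
print(screen(4.0, 0.5))   # 2.5 -- the "brighten" blend darkened the bright pixel
print(screen(2.0, 2.0))   # 0.0 -- two bright values screen to black
```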

The same goes for Photoshop. It’s a display-referred tool. Having higher-precision modes and removing clips doesn’t automagically make it a scene-referred editor.
For starters you can’t even SEE what’s going on when data is beyond the display range. That should tell you something.

Yes, I agree that you have to look at the actual data to decide what operations are appropriate. You also have to look at the specific operation.

When applying negative exposure compensation to linearly encoded scene-referred output from a raw processor, in what way is this an appropriate operation in Nuke or Blender, but a totally inappropriate operation in GIMP, requiring clipping of channel values >1.0?

So, it’s non-data in the context of the display. Before that, in your post processing, if you know it’s out there, so to speak, you have the opportunity to either corral it back with, say, choice of rendering intent in the case of color, or avoid it with intelligent choices regarding workflow in the case of dynamic range… ??

Well, in this case you’re picking a specific operation where the same arithmetic is valid for both scene-referred and display-referred images.
It’s not that the multiplication is inappropriate in GIMP and valid in Nuke, Blender or any other compositor that does scene-referred properly.
However, the issue remains: you may have an image with pixels that are 6 stops above middle gray; apply a negative exposure of 1 stop in GIMP and those bright pixels are left 5 stops above middle gray. The result is something you still can’t see on screen, and something that will break a lot of operations, producing non-data in the context of display-referred imagery.
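To pin down the arithmetic, a trivial sketch (assuming the usual 0.18 for middle grey): exposure compensation is just a uniform multiply by 2^EV, so -1 EV moves the pixel from 6 stops over to 5 stops over, still far outside the display range.

```python
import math

MIDDLE_GREY = 0.18                 # common scene-referred middle grey (assumed)

pixel = MIDDLE_GREY * 2 ** 6       # 6 stops above middle grey -> 11.52

def exposure(value, ev):
    """Exposure compensation on linear data: uniform scale by 2**ev."""
    return value * 2 ** ev

after = exposure(pixel, -1)        # -1 EV halves it -> 5.76
print(after, math.log2(after / MIDDLE_GREY))   # 5.76, 5.0 stops: still >> 1.0
```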

The operations that don’t break with scene ratios are obviously not the problem here.

In the context of the display and in the context of any application designed to work with images that are ready for the display (display-referred).
It’s a matter of scope. You design a road with cars and trucks in mind, and all the design decisions you make only consider wheeled vehicles that may travel on that road. Sure, you may use that road to land a plane, but that doesn’t mean you designed something valid or useful for air travel.
The fact that the road doesn’t have a roof doesn’t make it a proper aerial route.

The approach for dealing with scene-referred imagery is a completely different design approach. Every tool has to be designed for that type of data. It’s not a matter of corralling the data into the display constraints to make it valid in the context of the display.
If a tool needs you to do that, it’s not an appropriate tool for scene-referred imagery. It has to be discarded and replaced by a tool that is.
And in that tool, the range 0,1 means nothing special.

I am working in the context of an application that starts with the camera-provided raw array, and I stack operators on top of that, in any sequence I desire. It’s not one of the node-based compositors; it’s more like g’mic or ImageMagick, with a GUI.

Further, I can select any operator in the stack for display. I usually select the last operator, but I can separately select any of the prior operators and work on it, watching what it does in the final rendering. So in my little world, all of the discourse regarding scene-referred vs display referred has greatly influenced how I stack my operators, thanks.

I think, in the context of the software we discuss here, that’s a little drastic, throwing the baby out with the bathwater. I don’t think a single scene-referred concept discussed so far can’t be considered in some way in Raw Therapee or darktable, pardon the double-negative…

Oh, I probably wasn’t clear. I meant operations, not tools as in the whole software.
I didn’t mean that you have to discard your software or DT or RT because there is a 1 here or there in the code, but the operations that rely on that have to go if you’re after a true scene-referred editor.
At any rate, a design-centered analysis of the whole piece of software is needed, that’s for sure. If you find yourself constantly needing to constrain operations to the display range, then the whole application was designed around a concept that is incompatible with the idea of scene-referred editing.
If it’s just a matter of removing a couple of operations and everything still makes sense, then sure, you don’t have to throw the baby out with the bathwater.
Your software is a stack of individual operations, from what I could see. In your case you just need to make sure the whole processing stack is composed of operations that work with scene ratios.
Putting the display operator last serves as some sort of view, which is fine. The only problem I can foresee is what happens when the user decides to put the display operator first. Designing a view independent from the processing stack might ensure proper scene-referred processing.

Considering the above, I think it’s quite evident what the problem is with other applications that take the wheel on the order of operations and stuff some legacy display-referred ops at any point in the pipe.


Good point, but I’m not going to protect them. :smile: They need to come here, read the threads, walk away armed with the knowledge…

My perspective developing rawproc was to provide a toolbox full of tools, one aim being to provide a sandbox within which to play with the ordering of operations and see what happens. I have in mind to write a raw processing tutorial with a rawproc AppImage specifically configured to support it, to which end this thread has had a significant influence. Indeed, a previous post of mine here described what I did to see how dealing with white balance as an integral part of the camera characterization worked, and rawproc let me do that with maybe ten minutes of fiddling. Now, to figure out what to do with that knowledge…

Right now, I believe I’m the only rawproc user. I really haven’t relished taking on the “why is my picture dark?” questions that’d come with a user base. My hat’s off to @Morgan_Hardwood, @houz, and the RT/dt devs who do it with aplomb…

As I’ve tried to outline above, the reason this thread death-spiralled is that it comes down to what one believes colour constancy to be.

  • If someone believes that scaling the RGB values is legitimate, all is well and good, and the sensor would have recorded those out-of-sensor values just as they were scaled.
  • If someone believes it is inherently tied to the standard observer model, then adaptation is done in the XYZ domain and some of the resultant values are non-data.

I don’t believe the first situation is an ideal solution. As evidence I would cite:

What I will say beyond a shadow of a doubt is that there is no way a sensor could capture the value ratios one ends up with post RGB scaling. In fact, changing the scene’s illuminant manually would end up with different sensor values captured.

You make up your mind for yourself.

Note “HDRI” is an actual class of things, not to be confused with a random idiot saying HDR. So yes, I should have included HDRIs, although the “I” in this case distinguishes it from HDR. As I also said, HDR display is firmly acceptable, and refers to an entire class of displays.

Um, RGB scaling for white balancing is not the same as using exposure compensation to reduce intensities.

In the case of white balancing during raw processing, except for the trivial case of uniwb, the scaling values for R, G, and B are not all equal to each other (the G value typically being roughly half the R and B values), except of course for your example where you put the white balance in the camera input profile.

In the case of using exposure compensation, the RGB channel values are multiplied or divided by gray, R=G=B.

Very different situations. Multiplying and dividing by gray produces the same result in any well-behaved RGB working space, unless of course you insist on clipping the channel values before applying negative exposure compensation to bring the highlight values down below 1.0f. But this isn’t true for multiplying by a non-gray color.

Let’s assume the channel values > 1.0f for the image in question actually have meaning, are real data, for the specific topic at hand, which is that white balancing by multiplying by a non-gray color is not the same as reducing intensity by multiplying by a gray color that’s less than 1.0f. The latter operation is color-space independent as long as the color space is a well-behaved RGB working space (no RGB LUT profiles here, please!).
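A small sketch of that distinction (the multipliers and the camera-to-working-space matrix are invented): a uniform grey multiply commutes with any 3x3 colour-space conversion, so it gives the same result in every well-behaved RGB working space, while per-channel white-balance multipliers do not commute, so the result depends on the space they are applied in.

```python
import numpy as np

rgb = np.array([0.6, 0.3, 0.2])              # a camera-space pixel (invented)
to_work = np.array([[0.90, 0.10, 0.00],      # invented camera -> working-space
                    [0.05, 0.90, 0.05],      # matrix, for illustration only
                    [0.00, 0.10, 0.90]])

grey = 0.5                                   # -1 EV: R = G = B multiplier
wb = np.array([2.1, 1.0, 1.6])               # typical-looking per-channel WB

# Uniform (grey) scaling commutes with the matrix: same result either way.
print(to_work @ (grey * rgb), grey * (to_work @ rgb))

# Per-channel scaling does not commute: the two orders disagree, so the
# result depends on which space the multipliers were applied in.
print(to_work @ (wb * rgb), wb * (to_work @ rgb))
```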

Maybe reread what I posted, as I believe you misread the intention.

Let’s not, as that is the entire reason for this thread existing. Is RGB scaling appropriate colour constancy? If you scale the RGB, is that what the sensor would / could have recorded?

PS: You are re-explaining things that every single reader of this thread is exhausted by, given how rudimentary they are. Everyone knows the difference between an exposure change and a white balance scaling!

Say what?

Do you never scale intensities using Nuke or Blender?

In the example case, the raw file has already been interpolated and white balanced.

Every raw processor out there allows applying positive and negative exposure compensation.

Did you shift tracks from discussing colour constancy / chromatic adaptation? RGB scaling in the colour constancy / chromatic adaptation sense.

Well, I thought you shifted tracks when you introduced the topic of how to do a better white balance, when the OP’s original question was how to deal with channel values < 0.0 and >1.0. But the question of a better white balance is something I’m very interested in, so I went with the flow.

But these posts were my attempts to switch the discussion back to the original question:

and the posts after #144 are mostly about dealing with channel values > 1.0f, with sidetracks criticizing my use of “HDR”.
