GIMP 2.10.6 out-of-range RGB values from .CR2 file

Moving right along to my next question, as I don’t think I’m going to get a helpful response to why it is that @anon11264400 thinks I didn’t read whatever it is I didn’t read . . .

What if I open Rec709.exr with its >1.0 channel values, but this time open it with GIMP or darktable instead of with Blender or Nuke? What’s the appropriate thing to do next?

I read all that. I offered my assessment for what it’s worth that the image probably was intended to fit inside one or another version of the old WideGamutRGB color space. Is this what you are responding to when you say that I didn’t read your post? Or are you responding to something else that makes you think I didn’t read your post?

WideGamutRGB is relevant because my inclination is to convert to WideGamutRGB; I don’t have any reason to choose some other reference space.

I apologize. Let’s try again. I misread you and was rude.

Try with:

| Component   | x      | y      | Wavelength |
| ----------- | ------ | ------ | ---------- |
| Red         | 0.7347 | 0.2653 | 700 nm     |
| Green       | 0.1152 | 0.8264 | 525 nm     |
| Blue        | 0.1566 | 0.0177 | 450 nm     |
| White point | 0.3457 | 0.3585 |            |

This could be a reasonable decision. Bear in mind that if your target is smaller, that would mean some crafted gamut mapping. It might also yield wonky results on saturation in your manipulations.

Yes, creating images that won’t fit into the color gamut of the final display color space, be it for the web or for print - edit: and then making them fit - sorry, left this part out :slight_smile: - is just not fun and not easy. It’s actually quite demanding of creative effort to preserve “pretty” while squeezing those colors down, as no doubt just about everyone on this forum knows very well from experience.

I don’t normally add a lot of saturation to photographs, so this “oh my goodness the colors just don’t fit” problem didn’t really hit home for me until I started working on some digital paintings, working in a color gamut that “just fits” my monitor color space but doesn’t fit well at all into sRGB. Fitting into a reasonably large gamut printer profile, that’s easy, sRGB not so much.


As a helper, I’ve used the mind-blowingly useful Colab in conjunction with the absolutely incredible Colour Science library. Technically this is one line in Colour, but I included the others to show what default matrices are used. It follows the specification unless you tell Colour otherwise.
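
For anyone who wants to follow along, here is a minimal sketch of that kind of one-liner - assuming colour-science 0.4.x, where the matrix attribute names are as below; older releases differ slightly:

```python
# Minimal sketch, assuming colour-science 0.4.x (pip install colour-science).
import numpy as np
import colour

rec709 = colour.RGB_COLOURSPACES["ITU-R BT.709"]
wide = colour.RGB_COLOURSPACES["Adobe Wide Gamut RGB"]

# The default matrices Colour will use, straight from the specifications.
print(rec709.matrix_RGB_to_XYZ)
print(wide.matrix_XYZ_to_RGB)

# The conversion itself is one line; the input is linear, so no CCTFs.
rgb = np.array([1.2, 0.4, -0.05])  # an out-of-range Rec.709 triplet
print(colour.RGB_to_RGB(rgb, rec709, wide,
                        chromatic_adaptation_transform="Bradford"))
```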

That’s one of the concerns that require addressing when choosing a reference space.

If you were targeting a ten-ink print with a wider gamut, it would be a consideration. Gamut mapping via absolute or relative colorimetric, versus CAM-based or homebrew perceptual / saturation intents, might work or might not, depending. There’s plenty of room out there for clever folks like @anon41087856, and the way he’s been completely cutting to the core of image processing, to include tunable interfaces for pixel pushers to engage with their work with more control over the output. Imagine a tunable gamut mapper. With the knowledge and coding abilities of the folks around here, darktable could have one almost overnight.
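
To make “tunable” concrete, here is a rough sketch of the idea - my own illustrative code, not anything from darktable; `threshold` is the hypothetical dial a pixel pusher would get to turn:

```python
# Illustrative soft-knee gamut compressor. d is a normalized distance from
# neutral (1.0 = the gamut boundary); `threshold` is the user-facing dial.
import numpy as np

def compress_distance(d, threshold=0.8):
    """Pass values below `threshold` through unchanged; roll off values
    above it with tanh so the result always stays below 1.0."""
    d = np.asarray(d, dtype=float)
    out = d.copy()
    over = d > threshold
    out[over] = threshold + (1.0 - threshold) * np.tanh(
        (d[over] - threshold) / (1.0 - threshold))
    return out

print(compress_distance([0.5, 1.4]))  # [0.5, ~0.999]: back inside the gamut
```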

This isn’t a place where typical people have no control over the tools; instead, sometimes magic happens, as @anon41087856 has shown.


If that page you linked to allows people to compress colors, I bet a lot of people would find it very helpful. That’s taking this particular discussion waaay off-topic, I think. But if that page does show an alternative “automatic” way to get nicely compressed colors, if you started a new thread, I bet a lot of people would like to learn how to use the information. Color conversion without losing colors seems to be a big concern, and the usual “use perceptual intent to convert to the sRGB matrix profile” of course just clips because there aren’t any perceptual intent tables in the matrix version of sRGB (why didn’t the ICC give some other name to their LUT versions of sRGB???).


Did you get those primaries from here? http://www.babelcolor.com/index_htm_files/A%20review%20of%20RGB%20color%20spaces.pdf

My WideGamutRGB ICC profile is made using Pascale’s primaries rather than Lindbloom’s.

It’s sort of cramming a camera rendering transform into the core, operating on scene referred linear data that comes before it.

With the exception of @gwgill’s arguably best-of-breed work, Perceptual and Saturation renderings are largely secret sauce. That poses problems, as well as being extremely brittle.

With that said, Darktable or its ilk could implement a gamut mapping node rather easily that would allow for direct dialing of the gamut curves for a particular work, and letting the rendering intent remain simply absolute or relative colorimetric.

Actually, “out-of-range values” is precisely on that subject matter. :wink:

All of Colour comes with full citations.


I followed @anon41087856’s darktable thread with a lot of interest. And you are right - “out of range” values is where this current thread started. Which brings us back around to this question:

I still think it would be nice to start a new thread on using the Colour tools to corral colors for color space conversions, and also the relevant ArgyllCMS tools - I’ve had people ask me questions about using ArgyllCMS for making image-specific gamut mappings, but I don’t have enough experience doing this sort of thing to say anything helpful.


I think we left the rails and began sailing quite some time ago :smiley:
Based on previous comments, I don’t think @Pixelator will mind… thank you @Elle and @anon11264400 for a very interesting conversation!

I’m only halfway through my bag of popcorn. I hope you aren’t done yet. :grinning:


OK, here’s what I did do with the Rec709.exr file, with a nominal goal of preparing the image for display on the web:

  1. I opened the Rec709.exr file. GIMP assigned the built-in linear sRGB color space, which is the same as linear Rec.709.

  2. I color-picked around and located some of the out of gamut colors, which are mostly located in the yellow and red flower parts.

  3. I used “Colors/Auto/Stretch Contrast”, making sure “Keep colors” was checked. This was an OK thing to do because there aren’t any negative channel values (in images with negative channel values “Stretch Contrast” can turn the entire image into a pile of neutral gray). A safer edit would be to use “Colors/Exposure” to bring the >1.0f channel values back below 1.0f (see the sketch after this list).

  4. As expected, “Stretch Contrast” made the image considerably darker by lowering the intensity of all the pixels. So I applied this curve to raise the shadows and midtones and compress the highlights, and make the image more or less as colorful as the original image: rec709.curves.zip (1.9 KB)
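
A minimal sketch of what that “safer edit” in step 3 does to the numbers, outside GIMP - assuming imageio with an EXR-capable backend is installed; the file name is the one from this thread:

```python
# Scale the linear data down by its max so every channel lands in [0, 1] -
# the equivalent of negative exposure compensation. Assumes an EXR-capable
# imageio backend (e.g. the FreeImage or OpenEXR plugin) is available.
import imageio.v3 as iio

img = iio.imread("Rec709.exr")  # linear data, sRGB/Rec.709 primaries
peak = img.max()
if peak > 1.0:
    img = img / peak            # uniform scaling preserves chromaticities
print(img.min(), img.max())     # everything now within display range
```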

I’d show the resulting image but I’m not sure what the copyright status of these images is. But the link to download the file has already been given above if anyone wants to experiment. I actually like the slight increase in petal detail in my tone-mapped version of the image better than the original image.

But of course I couldn’t actually see the original image, because there isn’t an HDR viewer in GIMP that allows viewing different portions of the dynamic range. Levels or Exposure can be pressed into service - just hit the Cancel button instead of applying any actual change - but this is a bit awkward. Nor is there a viewer that tone-maps to show the image in its entirety, though personally I don’t usually like such viewers, as imho the tone-mapping of the viewer gets in the way of tone-mapping, so to speak.

Arguing whether it makes sense to speak of the >1.0f channel values as “non data” is basically just arguing over terminology. Regardless of what you call these channel values, the original image isn’t suited for display on the web or for printing until the >1.0 channel values are either clipped or corralled, to use @ggbutcher’s very apt term. And many other types of edits that one might want to make, don’t work very well with “out of display range” channel values.

But please note that this dictum to first corral the “non data” is not true of all types of edits one might want to make, starting of course with the edits one might make to corral the data in the first place.

Whether any given operation can be safely, sanely done on “non data” depends on the specific algorithm, the RGB encoding (if there is channel data outside the display range, the RGB values should be encoded linearly), the nature/source of the “non data” channel values, and the goal of the person making the edits.

Clipping good data - or “non data” that can be turned into “good data” - just makes no sense to me. So I used Stretch Contrast and Curves to corral the data and do something nice with it (well, I thought the final image looked OK, it’s an odd image to begin with).

The same considerations apply to interpolated raw files in which there happen to be channel values >1.0 either from the user applying too much positive exposure compensation or else from applying the white balance multipliers in a raw processor that doesn’t provide the equivalent of dcraw’s very useful “-H 1” switch. Here the “corralling” is simple: use negative exposure compensation. Again, clipping “non data” that can be turned into “good data” by the simplest possible edit, this seems like an odd thing to do.

Photivo, if it’s still around, has an equivalent to dcraw’s “-H 1” switch. But I don’t know of any other free/libre raw processor besides Photivo and dcraw that has such an option. Though I’m guessing rawproc does have such an option :slight_smile: .

IMHO @anon11264400 is absolutely right to suggest that raw processors should make it easy to get an interpolated raw file that doesn’t have channel values >1.0. Though I don’t think this should be mandatory because developers just can’t predict what users actually want or need to do with any given image.

My floating point version of dcraw (dcraw-float: floating point dcraw - completely out of date version of dcraw used in this code!) has this “normalization” built in, except that in addition, similar I think to @snibgo’s dcraw mods (dcraw and WB), there’s also an automatic stretching to the maximum dynamic range without clipping any of the highlights. Maybe it would be good to make such “option to normalize the interpolated raw file” enhancement requests for our various free/libre raw processors.

But given that it really is easy with our current free/libre raw processors to get channel values from an interpolated raw file that are >1.0, let’s say the image is sent to GIMP in this condition. Clipping these channels in GIMP as “non data” just doesn’t make sense.

Instead use “Colors/Exposure” or equivalent function to bring the channel values to below 1.0f, thereby turning “non data” into “good data” - if indeed that seems like the thing to do given whatever other goals the user might have in mind. Sometimes the most aesthetically pleasing thing to do with an image that’s otherwise ready to be output for display, is just to clip any channel values outside the range 0.0f-1.0f.

@Elle linked to my dcraw and WB page. I followed the link, and noticed that some scripts and result boxes were missing. Sorry about that, it was a build that went wrong and I didn’t notice. I’ve rebuilt the page and re-uploaded.

My apologies for any inconvenience.

It’s scene referred data. You overcomplicated it, as well as mangling up completely what I have said and restated dozens of times.

If you read up, you’ll see where I said that whether or not data is non data is contextual. For example, resample an image via sinc for scaling, and suddenly a small value juts negative? Non-data. It didn’t suddenly go out of gamut; it’s residue from sampling. Likewise, some might suggest that RGB scaling isn’t proper chromatic adaptation, and therefore the validity of some of the data is questionable at particular output values.
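
That sampling residue is easy to demonstrate with a toy example (mine, not from the thread): interpolate a hard edge with a ringing, sinc-family kernel and watch a value go negative:

```python
# Toy demonstration: cubic-spline interpolation (a ringing kernel, like the
# sinc family) of a hard step edge undershoots below zero.
import numpy as np
from scipy.ndimage import zoom

row = np.zeros(16)
row[8:] = 1.0               # a step edge; all input values are in [0, 1]
up = zoom(row, 4, order=3)  # upscale 4x with cubic interpolation
print(up.min())             # slightly negative: sampling residue, not gamut
```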

Scene-referred files are fine, but loading them in a display-referred application isn’t prudent. Nor does 1.0 mean anything magical, just like 0.0 doesn’t; it depends on the encoding and proper interpretation thereof. There are a good many encodings where 1.0 represents a massive scene referred value and 0.0 represents another arbitrary value. Assuming that there is some magic about the value 1.0 or 0.0 is foolish, and one has to interpret how and what is going on with values that are beyond the particular encoding’s range.

There just isn’t any such thing as “HDR”[1], for example, which is a huge chunk of why so many folks get confused about particular encodings.

We’ve looped over this dozens, if not hundreds of times. Keep with whatever you do.

[1] Save for the legitimate application of “HDR display”, which is a whole other kettle of fish.

Yes, I believe so. By doing this you are deliberately breaking the color management, and it is the use of a non-matching/inappropriate profile to interpret the raw values that results in the green appearance.

A higher level view: The idea of color management is to use objective measurements of color, and remove device specific characteristics from a workflow. Our snapshot of a workflow here is capturing real world colors with a camera and then observing them on a display. If the color management is arranged as intended, the details of how the camera or display represent their color is removed completely - you can make fairly arbitrary changes to the device encoding of each or both the camera and display (gain, offset, 3d transforms) and when both are profiled and the profiles used to interpret/transform values between these two devices, the results will be unchanged. Color management doing its job.

If you make changes to the device value encodings (like modify raw value channel gains) without re-profiling, you are breaking the end to end color management. All bets are off then, and the interpretation of the device values will be faulty. Same thing if you profile a display and then fiddle the displays contrast or white point or R/G/B gain values.

Now a realistic photography workflow has to allow for creative modifications by the photographer (or the camera, on the photographer’s behalf), and there are many workflows that can be proposed to allow this. One that works within a color managed workflow might be to convert the camera image into a working space using the camera + working space profiles, make creative modifications in that working space, and then proceed with any further color managed transformations into display space, sRGB, printing space etc.
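
For what it’s worth, a bare-bones sketch of that color-managed hand-off, using littleCMS via Pillow’s ImageCms - the profile file names here are placeholders, not anything from this thread:

```python
# Sketch of camera -> working space -> sRGB, all under color management.
# "camera.icc" and "WideGamutRGB.icc" are placeholder file names.
from PIL import Image, ImageCms

img = Image.open("shot.tif")
cam = ImageCms.getOpenProfile("camera.icc")         # camera input profile
work = ImageCms.getOpenProfile("WideGamutRGB.icc")  # working space profile

to_work = ImageCms.buildTransform(
    cam, work, "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
working = ImageCms.applyTransform(img, to_work)

# ...creative modifications happen here, in the working space...

srgb = ImageCms.createProfile("sRGB")               # then on to display/sRGB
out = ImageCms.profileToProfile(working, work, srgb)
```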

Other workflows are possible, such as using a fixed camera profile and modifying the raw values creatively. You are then using the camera raw space as a working space, something that will vary camera to camera, make to make etc., and making it difficult to relate the changes to objective criteria (such as a best practice chromatic adaptation/white balancing).

Yes, it will continue to look green when the raw values have been interpreted as green by a profile that wasn’t made from that exact device space.

Yes.

Sure - the precise model used by the profile isn’t relevant to the discussion.

Yes - this is a good example of the difference between working within the color managed workflow or outside it, and the intent of the photographer informs the practice:

  1. The photographer intends the filter to change the appearance of the world that the camera is capturing. The profiling is from the camera lens down, and so the resulting images appear to have the filtered color.
  2. The photographer intends the filter to be part of the camera, so the profiling is from the filter down, and so the profile captures the effect of the filter on the camera raw response, and compensates for it. The resulting images appear normal, and not affected by the filter.

Yes, but no :slight_smile:

You’re now down into the details of “exactly which white amongst all the whites is the one I want to transform to the exact media color of my destination space.” That’s important, but not what I mean by “white is white”.

There are many details about profiling cameras, and several assumptions in creating something like an ICC profile, which (by default) assumes that some white will transform to PCS white, and even more assumptions when the source is a photo of a reflective test chart.

I’m speaking in a slightly more abstract sense, in that if you profile a camera based on certain XYZ stimuli, then (by definition, and within the limitations of how closely the camera spectral sensitivities match the standard observer) the profile will translate the camera raw values back to the corresponding XYZ stimuli. So if you call one of the XYZ stimuli white (because it looks white to you in the original scene), then the profile’s interpretation of the corresponding raw values will be that XYZ value, which you’ve already agreed is “white”.

Now of course such an absolute interpretation of the camera-captured colorimetry needs to be chromatically transformed if it is to be mapped to a different device (such as a display) where the observer is white point adapted to different XYZ values. Relative colorimetric ICC profiles include this step as well.

I can’t tell you that as a matter of fact, because I haven’t researched it myself. My guess is that it would be better than XYZ (“Wrong Von Kries”), possibly better than (say) sRGB if the camera primaries are “sharper” than sRGB primaries, but not as good as doing the white point balance/chromatic transform in an accepted sharpened cone space, such as Bradford.
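
That comparison is easy to poke at numerically with the Colour library mentioned earlier (a sketch, assuming colour-science 0.4.x): adapt one XYZ value from a D65 white to a D50 white with plain XYZ scaling (“Wrong Von Kries”) versus Bradford, and compare:

```python
# Chromatic adaptation of one XYZ value from D65 to D50, comparing plain
# XYZ scaling ("Wrong Von Kries") against the Bradford sharpened cone space.
import numpy as np
import colour

XYZ = np.array([0.20, 0.12, 0.05])
W_src = colour.xy_to_XYZ(np.array([0.3127, 0.3290]))  # D65 white
W_dst = colour.xy_to_XYZ(np.array([0.3457, 0.3585]))  # D50 white

for transform in ("XYZ Scaling", "Bradford"):
    adapted = colour.adaptation.chromatic_adaptation_VonKries(
        XYZ, W_src, W_dst, transform=transform)
    print(transform, adapted)
```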

It depends a bit on what the intent of the profile is. ICC profiles and cameras don’t go together very naturally, due to the default usage of ICC profiles being relative colorimetric, and the dependence the relative colorimetric source white point has on lighting conditions. This is why camera ICC profiles work more successfully in repro type situations.

On the other hand, I see no reason why absolute colorimetric profiles can’t be used as a basis for general photography, either using ICC profiles as a basis, or other formats such as DNG or spectral sensitivity profiles. (Camera manufacturers’ own processing tends to be based on characterizing spectral sensitivities and then computing profile transforms from that.)

How to deal properly with white balance when shooting a test chart, depends on the details of how the workflow does white balance, and what the intended use of the profile is.

It’s only a problem if it’s a problem to you :slight_smile:
From a color science point of view, I think doing chromatic transformations in camera device space is not optimal. “Not optimal” may well be perfectly usable and good looking though!

Probably not, because (from what I can gather googling “uniwb”), this makes image dependent changes to the raw encoding. Such a process sits uneasily within a color managed workflow, because profiles are static, and can’t dynamically adapt to such a change in encoding.

Ideally you would be able to profile a camera, and then insert the profile in the workflow before the white balancing, the white balancing being done in a device independent colorspace.

[ It’s not clear to me what the idea of uniwb is though, if it is just scaling the raw RGB values after capture, since typically the signal/noise limit is imposed by the sensor, not the quantisation of encoding range. Modifying the lighting to even the channels may improve S/N, but so does increasing lighting and/or exposure generally. ]

I believe it is exactly the same.

Note that there is a difference between a chromatic adaptation transform in tri-stimulus space, and the effect on colors of a change of illuminant spectrum, which involves a change in spectral interaction as well as a possible shift in white and a corresponding shift in the observers adapted white point.


Yes, contextual. Totally agree. And in the examples that I gave, the context was perfectly legitimate scene-referred data that had channel values >1.0. And yes, I totally agree with you that in the sense of recording scene-referred information, blah blah, 1.0 is just a number.

But when I bring that same scene-referred information into GIMP, from a raw processor or by opening that Rec709.exr file, suddenly according to you “1.0” is the magic number that means “clip the data”, even though the same data brought into Blender or Nuke would not be summarily clipped, because in fact it’s perfectly legitimate data.

Why?

Hi Graham,

It will take a while to absorb the rest of what you’ve said. But “uniwb” has two meanings in this discussion:

  1. When shooting a raw file, set the white balance to “uniwb” where R=G=B is presumed “white”, and of course the resulting image looks green. The point is to “fool” the little in-camera histograms into revealing a bit more accurately how close to clipping in the raw file the resulting capture is. This used to be fairly commonly done - for example search dpreview forums for posts by Iliah Borg and Luminous Landscape forums for posts by Guillermo Luijk (well, here’s an article on his website: GUILLERMO LUIJK >> TUTORIALS >> UNIWB. MAKE CAMERA DISPLAY RELIABLE).

  2. The other meaning is use “1,1,1” as the white balance multipliers when processing the raw file.

    This is something that personally I don’t do, except in the case of putting together my contrived example for this post, trying to show that “it’s really green” if it’s not properly white balanced, and that the “green” isn’t from sRGB or from my monitor or from my being “used to looking at my monitor”, as has been claimed several times in this thread.

    But I did once read about a photographer who uses the resulting green images as a “creative” white balance. I can sort of see why - there’s something a bit disturbing and dreamlike when the green color cast is combined with the right sort of image.

I suppose a third meaning is use “uniwb” as the white balance when making the ICC profile for the camera, as was suggested earlier in this long thread as a way to avoid “green”. I tried this a long time ago in an entirely different context. But the resulting profile just can’t be used with regular raw processors if the user actually wants to modify the white balance say from D50 to one of the camera presets or by clicking on a neutral area in the image.

On the other hand, several years ago I helped a person make a DNG camera profile (DCP), and there is software that allows extracting an ICC profile from a DCP; sometimes that extracted profile actually does use “uniwb” multipliers, and sometimes not. Well, that was a long time ago, so hopefully I’m remembering correctly.

Notice the words “dynamic range higher than what is considered to be standard dynamic range”.

In image editing, somewhere around 8 stops is considered “standard”, based on the number of discernible stops (doublings of linear RGB values) from “just above black” (perhaps “L=0.5” or “L=1.0” on a scale from 0 to 100 is a good number for this) to the integer-encoded max of 255 (or 65535, etc., depending on the bit depth of the image).
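
The back-of-the-envelope arithmetic behind that “around 8 stops” figure, for what it’s worth (my own illustration, counting doublings of linear value from just above black at 1 up to the 8-bit max):

```python
# Doublings (stops) from a linear value of 1 ("just above black")
# to the 8-bit integer maximum of 255.
import math
print(math.log2(255 / 1))  # ~7.99, i.e. about 8 stops
```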

OK, a lot of people will say, well, that’s just Wikipedia and those people make huge mistakes (something I haven’t found to generally be the case, but of course I haven’t double-checked the facts in every single Wikipedia article). So putting anything Wikipedia says to one side, how about this article by Greg Ward:

http://www.anyhere.com/gward/hdrenc/pages/originals.html

Quoting the first sentence of Ward’s article: “The following examples were used to compare the following High Dynamic Range Image formats: OpenEXR, Radiance RGBE and XYZE, 24-bit and 32-bit LogLuv TIFF, 48-bit scRGB, and 36-bit scRGB-nl and scYCC-nl.”

OK, so what does Ward mean by HDR image format? He means an image format that can hold more than the dynamic range that fits into 8/16-bit integer file formats such as png and jpeg.

Two commonly used HDR image formats are OpenEXR and high bit depth floating point tiffs. GIMP (and darktable, PhotoFlow, etc) can open and export both file formats, fwiw, and also can operate on channel values that are >1.0f and produce correct results, assuming of course that the data is actually meaningful data in the first place!

The video display industry is using “HDR” to market their new monitors with a greatly increased dynamic range compared to “standard” monitors. And someday those new monitors will perhaps become commonplace. But for image editing, right now apparently they have limitations.

Just because the video display industry is using “HDR” to market their new display technology, doesn’t mean suddenly anyone and everyone who uses “HDR” to mean anything else is suddenly using the wrong terminology.

Which brings the topic back to the question:

@anon11264400 - why is data (actual data) with channel values >1.0 “data” in Blender and Nuke, and “non data” in GIMP?

When you have tools and operations designed to work in the display range, the only valid data produced by the tool is the data that ends up in the display range. The rest is non-data, garbage.
Take the “screen” blending mode for instance in GIMP.
https://docs.gimp.org/en/gimp-concepts-layer-modes.html
A quick look at the formula tells you what’s going on there. What does that 255 value in the formula mean?
Now, that may be outdated information that was only valid for 8-bit integer, and the formula may now use 1.0 instead of 255, but the problem remains: what happens to the pixels with values above 1.0?
You may clamp/clip the operations and get the expected result, or leave them unclipped and get garbage.
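
In code, using the normalized form of that screen formula (my own illustration):

```python
# Normalized "screen" blend: result = 1 - (1 - a) * (1 - b).
# The formula silently assumes both inputs live in [0, 1].
def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.5, 0.5))  # 0.75 - the expected brightening
print(screen(3.0, 3.0))  # -3.0 - two bright scene values come out *negative*
```
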
The fact that some unclipped display-referred operations don’t return garbage doesn’t mean that display-referred operations are fine for scene-referred images. It’s just a lucky accident.

The same goes for Photoshop. It’s a display-referred tool. Having higher precision modes and removing clips doesn’t automagically make it a scene-referred editor.
For starters you can’t even SEE what’s going on when data is beyond the display range. That should tell you something.