Feature request: save as floating-point

Well, now I’m confused. What do you mean by “all of them” and “obvious differences”?

Your AdobeRGB profile has RGB XYZ values that exactly match the Adobe specs, the original AdobeRGB proprietary profile RGB XYZ values (from an example profile that I have), and the ArgyllCMS and my own RGB XYZ values. So there is no obvious difference here, but instead an exact match.

All the profiles for which I give the white point and RGB XYZ values in my last post have a tag that says they are V2 profiles. But the darktable profile doesn’t use the same white point tag values as ArgyllCMS and my own profiles. Here are some relevant considerations:

The original V2 specs didn’t say anything at all about how to encode the source color space white point. So many/most profile vendors put that information in the white point tag. At that time there wasn’t any “chad” tag. That’s a V4 thing.

For some reason the ICC decided to rewrite history and publish a revised V2 standard that did things differently, producing what I refer to as a “V2 according to V4 specs” profile.

But older V2 CMM software doesn’t understand the “chad” tag. These CMMs look at the white point tag, which is where the old V2 profiles did put the source white point information. Which is exactly why I switched my profile-making code to produce what I’ve called “true V2” ICC profiles (along with V4 profiles) - some of the people who use my profiles really do need “true V2” ICC profiles to use with software that uses an older V2 CMM.

In a V4 CMM, functionally it doesn’t matter what’s in the white point tag, that tag is just ignored, though the technically correct values are D50 values, not D65 values, even for profiles made from D65 color spaces. V4 CMMs use the chad tag to store and retrieve the source white point information.

In a “V2 according to V4” CMM, if there is such a thing, the white point tag would probably be ignored, and the chad tag would be used. Or maybe such a CMM would look in both places if it really intended to be used with older V2 profiles. But I don’t really know - if you have such software you’d have to do some experimenting to see what it does.

I admit this is confusing. Don’t blame me. Blame the ICC. Some of the changes from V2 to V4 were bitterly contested. How to handle absolute colorimetric intent is one such bitterly contested item. The old V2 CMMs use the information in the white point tag for absolute colorimetric conversion intent, and they allow absolute colorimetric intent even for destination “display” profiles, which a V4 workflow doesn’t allow unless special programming is added, and then for a V4 profile the “chad” tag is used, not the white point tag. It makes no sense to me at all that a V4 CMM isn’t backwards-compatible with the original V2 specs.

Well, the latest revised “V2” specs provided by color.org do include the “chad” tag. I didn’t read closely enough to see if it’s a required tag, though it is for V4 profiles. I’d advise either using D65 in the white point tag, or else adding a chad tag, depending on the needs of various darktable users.
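
If it helps anyone check their own profiles, here is a minimal sketch using Little CMS (lcms2) to dump the white point tag and test for a chad tag. The tag signatures and functions are standard lcms2 API; the cast of the wtpt tag to cmsCIEXYZ follows the lcms2 documentation, but do verify against your lcms2 version.

```cpp
// Minimal sketch (lcms2): print the 'wtpt' tag and check for a 'chad'
// tag, to see which flavor of V2/V4 profile you're looking at.
// Build (typical): g++ inspect.cpp -llcms2 -o inspect
#include <lcms2.h>
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s profile.icc\n", argv[0]);
        return 1;
    }
    cmsHPROFILE p = cmsOpenProfileFromFile(argv[1], "r");
    if (!p) {
        std::fprintf(stderr, "cannot open %s\n", argv[1]);
        return 1;
    }

    // Media white point tag. For reference: D50 is approximately
    // X=0.9642 Y=1.0000 Z=0.8249; D65 approximately X=0.9505 Y=1.0000 Z=1.0890.
    if (const cmsCIEXYZ *wtpt = static_cast<const cmsCIEXYZ *>(
            cmsReadTag(p, cmsSigMediaWhitePointTag)))
        std::printf("wtpt: X=%.4f Y=%.4f Z=%.4f\n", wtpt->X, wtpt->Y, wtpt->Z);
    else
        std::printf("no wtpt tag\n");

    // "True V2" profiles have no chad tag; V4 and "V2 according to V4"
    // profiles should have one.
    std::printf("chad tag: %s\n",
                cmsIsTag(p, cmsSigChromaticAdaptationTag) ? "present" : "absent");

    cmsCloseProfile(p);
    return 0;
}
```

Run against a "true V2" profile made from a D65 color space, you'd expect a D65-ish wtpt and no chad tag; against a V4 profile, a D50 wtpt and a chad tag.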

Here’s the page to download the current V4 and latest V2 specs: ICC Specifications

Notice older V2 specs have to be requested via email. They are no longer posted to the color.org website. Also notice that older V2 CMMs do use the information in the white point tag to determine the source white point. So there is a serious conflict here between the latest color.org V2 profile specs and profile-making recommendations and what older V2 CMMs actually do.

So you have to make your own choice, to provide “V2 according to V4” profiles with the source white point information in the chad tag, or “V4 profiles” again with the source white point information in the chad tag, or “true V2” profiles for which the source white point information is available in the white point tag.

Oh, I owe you an apology! When I looked at your profiles did I even mention the white point at all? I think I didn’t. And I didn’t look to see if there was a “chad” tag. My focus was on getting the correct chromaticities from the EXR file for the profile that GIMP was making when the darktable plug-in was used to open raw files, so all I paid attention to was the RGB XYZ values. What the white point tag “should” read and whether there “should” be a chad tag depends on who uses your profiles for what purposes with what software, and also on how closely you want to follow the latest “revised” V2 ICC specs.

Actually, why does darktable supply V2 profiles? For that matter, why does RawTherapee provide V2 profiles? From a practical point of view, what “should” be done with the white point and chad tags for V2 profiles depends on who’s using these profiles with what software for what purposes.

If you are using V2 software that expects to find the source white point in the white point tag, then it’s very wrong to not put the source white point in the white point tag.

The older V2 ICC specs made no provision for “where to put the source white point”. So by consensus, people put that information in the white point tag. There wasn’t a chad tag, so the information couldn’t go there.

The V4 specs say the white point tag should always be D50, which is simply a repeat of the information in the illuminant tag, so functionally speaking doesn’t accomplish anything at all. The V4 specs put the source white point information in a chad tag.
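
To make that concrete: the chad tag stores a 3×3 chromatic adaptation matrix (usually Bradford) that maps the source white to D50, so a V4 CMM recovers the source white point as inverse(chad) × D50. A self-contained sketch, using approximate D65-to-D50 Bradford values standing in for the numbers a real CMM would read out of the profile:

```cpp
// Sketch: recovering the source white point from a 'chad' matrix.
// The chad matrix adapts the source white to D50, so:
//   source_white = inverse(chad) * D50
#include <array>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

static Mat3 inverse3x3(const Mat3 &m)
{
    // Cofactor (adjugate) inverse of a 3x3 matrix.
    const double det =
        m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
        m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
        m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    Mat3 r;
    r[0] = {(m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det,
            (m[0][2] * m[2][1] - m[0][1] * m[2][2]) / det,
            (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det};
    r[1] = {(m[1][2] * m[2][0] - m[1][0] * m[2][2]) / det,
            (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det,
            (m[0][2] * m[1][0] - m[0][0] * m[1][2]) / det};
    r[2] = {(m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det,
            (m[0][1] * m[2][0] - m[0][0] * m[2][1]) / det,
            (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det};
    return r;
}

int main()
{
    // Approximate Bradford D65->D50 matrix, standing in for the values
    // a CMM would read from the profile's chad tag.
    const Mat3 chad = {{{ 1.047811, 0.022887, -0.050127},
                        { 0.029542, 0.990484, -0.017049},
                        {-0.009235, 0.015044,  0.752132}}};
    const Vec3 d50 = {0.96420, 1.00000, 0.82491};

    const Mat3 inv = inverse3x3(chad);
    Vec3 src = {0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            src[i] += inv[i][j] * d50[j];

    // Prints approximately D65: X=0.9505 Y=1.0000 Z=1.0890
    std::printf("source white: X=%.4f Y=%.4f Z=%.4f\n", src[0], src[1], src[2]);
    return 0;
}
```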

The latest ICC revision of the V2 specs - which seems to be an attempt to “shoehorn” V4 specs backwards into a V2 workflow, or maybe it’s an attempt to break functionality of V2 workflows because that’s exactly what it does - also says “use a chad tag”. This latest revision isn’t being used by software that still uses the older V2 specs, which is the case for ArgyllCMS and also, I’ve been told, for some fairly important print-oriented software.

ArgyllCMS supplies true V2 ICC profiles. The ArgyllCMS profiles don’t contain a chad tag. Are they broken?

What about all the other V2 ICC profiles that have been produced over the years before the V4 specs were released? Are they all broken? They weren’t broken before the ICC “revised” the V2 specs. And they aren’t broken now for people using V2-based software that expects to find the source white point information in the white point tag.

If someone wants to use the ArgyllCMS command line utilities to access absolute colorimetric intent information, they need true V2 ICC profiles. Putting the source white point information in the “chad” tag doesn’t work. It needs to go in the white point tag. Unless maybe ArgyllCMS has added code to handle the chad tag in V2 profiles. At some point such code was added, but it didn’t seem to work very well, and I don’t know if it’s still there because I stopped making and using “V2 according to V4 spec” profiles. If I get really motivated one of these days I’ll make some “V2 according to V4” profiles and see if the latest ArgyllCMS can read the chad tags.

The same is true for at least some important print-oriented software, or so I’ve been told - for full functionality including absolute colorimetric intent conversions, the white point tag needs to hold the source white point information. ArgyllCMS and older V2 “print-oriented” software are the reason I now supply “true V2” and “V4” versions of my profiles, and no longer supply “V2 according to V4” versions of my profiles.

The functional problem with the dt profiles (that led me to report a problem with the ICC profiles constructed from the EXR files exported to GIMP) was with the RGB XYZ tags, and that’s what I looked at. I only glanced at the other profile tags. I didn’t, for example, verify that every header tag was exactly right or that the TRC tags had exactly the right point values. I did notice and should have mentioned the missing “source white point” information, for which I apologize.

If the reason darktable and RawTherapee supply V2 profiles is for embedding in images posted to the web, for browsers that don’t read V4 profiles, then either type of V2 profile will work just fine. And unless the browser supports absolute colorimetric intent, and the user is actually using color management, and has also chosen to use absolute colorimetric intent to view the images (which would be weird), it doesn’t make any difference at all whether the source white point information is present or missing.

But you are right, the darktable D65 profiles should have either a D65 white point tag or a “chad” tag, depending on whether you want the profiles to meet the latest V2 specs, or whether you want the profiles to be useable in software that expects the source white point information to be in the white point tag.

I just pushed a new branch unbounded-processing with the goal of not clipping in the intermediate stages of the pipeline (and of not clipping at all if you save as float TIFF). It seems to work, except for the following:

  • it always clips at 0. This makes the code simpler, and I can’t think of any drawback. But this doesn’t mean much… so if someone gives a good use-case for allowing negative values, I’m definitely interested;

  • CIECAM02 must be turned off. If you turn it on, the output will be clipped. (That part of the code is simply too much for me to handle);

  • As @Elle wrote above, the default output profiles will clip (specifically, they will cause lcms to work in “bounded” mode); a simple workaround is to set a custom output gamma (e.g. “linear_g1.0”) in the “color management” tool - see the sketch after this list;

  • I had to mess around a little bit with some SSE2-only code paths; there might be some slight performance degradation, but I’m confident that someone more competent can fix this;

  • there are probably many other bugs that I just haven’t discovered yet.
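
For anyone who wants to see the lcms “bounded vs unbounded” behaviour from the third point directly, here is a minimal probe. It builds a float-format transform and pushes out-of-range values through; whether they survive depends on the profiles involved (matrix-shaper profiles can run unbounded in lcms float mode, LUT-based pipelines clamp). Treat this as something to experiment with, not a statement of what any particular lcms build will do:

```cpp
// Sketch: probing whether an lcms2 transform runs "unbounded".
// With float pixel formats and matrix-shaper (non-LUT) profiles,
// lcms2 can carry values outside [0,1] through the transform.
#include <lcms2.h>
#include <cstdio>

int main()
{
    // Built-in sRGB used twice just to keep the example self-contained;
    // in practice, open the working and output profiles from disk.
    cmsHPROFILE in  = cmsCreate_sRGBProfile();
    cmsHPROFILE out = cmsCreate_sRGBProfile();

    cmsHTRANSFORM xf = cmsCreateTransform(in, TYPE_RGB_FLT,
                                          out, TYPE_RGB_FLT,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);

    float src[3] = {1.3f, 0.5f, -0.2f};   // out-of-range input
    float dst[3];
    cmsDoTransform(xf, src, dst, 1);

    // If the pipeline clamped, dst lands inside [0,1]; if it ran
    // unbounded, the out-of-range values survive (possibly modified).
    std::printf("out: %f %f %f\n", dst[0], dst[1], dst[2]);

    cmsDeleteTransform(xf);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
    return 0;
}
```

Substituting the actual working and output profiles from the pipeline shows exactly where the clipping creeps in.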

A little demo (forgive the slow speed, my machine is not exactly cutting-edge):

https://filebin.net/jp9p7x0wzroo54t3/Peek_2018-02-11_21-01.mp4


I just put in out-of-bound display coloration (cyan for < 0.0, magenta for > 1.0) in rawproc, and I was surprised to see how many operations drove tones < 0.0. It’s probably not as egregious to clip to 0.0 as it is to 1.0, but it does give you pause. Now I tend not to set a black point in scaling, and instead wait until the other editing is done to see where it puts the data.

Like Like Like… thanks! (I mean I like the way this enhancement is going)

Sounds good.

As I have mentioned (Unbounded Floating Point Pipelines - #25 by snibgo), negative values can arise from “innocent” operations such as resizing and sharpening. Sometimes clipping these is acceptable, but sometimes it isn’t. We might prefer a more “filmic” treatment that doesn’t lose data.

Hence, please consider this a vote for preventing clipping negative values, where feasible.

EDIT to add: if some stage in the RT pipeline really can’t deal with negative values, developers might consider alternatives to simple clipping. For example, apply a curve that pushes negative values to positive, and (to maintain relative values) make some positive values more positive.
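
To make the curve idea concrete, here is one possible shape (my own sketch, not anything any of the developers have proposed): a smooth exponential toe that maps every negative value to a small positive one while keeping the mapping monotonic, so the relative ordering of tones is preserved.

```cpp
// Sketch: a smooth alternative to hard-clipping negatives. Below the
// threshold t the curve bends exponentially toward 0 but never reaches
// it; above t it is the identity. Monotonic, and C1-continuous at t
// since f(t) = t and f'(t) = 1.
#include <cmath>
#include <cstdio>

// Hypothetical helper; 't' is where the toe starts (t > 0).
inline float softToe(float x, float t = 0.05f)
{
    if (x >= t)
        return x;
    return t * std::exp((x - t) / t);
}

int main()
{
    for (float x : {-0.5f, -0.1f, 0.0f, 0.02f, 0.05f, 0.5f})
        std::printf("%+.3f -> %.5f\n", x, softToe(x));
    return 0;
}
```

The threshold t controls how much of the near-black positive range is sacrificed to make room for the remapped negatives.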


@ggbutcher and @snibgo both mention the fact that various operations can push originally “in display range” channel values into negative territory. This is one reason to consider not clipping to 0. Unfortunately, as noted in various recent pixls.us discussions, sometimes trying to make further edits using <0.0 (and sometimes even >1.0) RGB channel values can lead to unexpected and visually awful results. @snibgo suggests one way to deal with these issues, and it seems like an approach with a lot of potential, but as always the devil is in the details.

Not clipping negative (out of gamut) channel values to 0.0 can be very important, even before any actual editing operations have a chance to produce negative channel values. Sometimes channel detail in the raw file will be summarily and irrevocably lost by a “negative channel values clamped to zero” ICC profile conversion. This does happen in RawTherapee, and I’m fairly sure it also occurs in a conversion to LAB somewhere in the RawTherapee processing pipeline. Here is an illustration of the problem:

“1” is what the color image looks like.

“2” is what the blue channel looks like in the interpolated raw file, after assigning a custom camera input profile and before converting to any other color space. This is the detail I sometimes want to preserve either as a blending layer or for channel-based conversions to black and white.

“3” is what the blue channel looks like in RawTherapee after converting to one of RawTherapee’s hard-coded RGB working spaces.

“4” is what the blue channel looks like when asking RawTherapee to convert to the custom camera input profile upon saving the interpolated file to disk. This conversion back to the input profile doesn’t retrieve the original blue channel information. The detail captured by the camera is gone.

Sometimes I use the channel detail from the camera raw file as blending layers, and sometimes for black and white output, which is one reason why I like the option to output completely scene-referred files that are still in the camera input color space. But currently this isn’t possible with RawTherapee, except when using the following work-around:

  • Assign a linear gamma sRGB profile as both the input and the output profile, and choose one of RT’s larger RGB working spaces such as Rec2020 or ProPhotoRGB, to avoid clipping along the way. This works because the color gamut of an sRGB profile from disk is smaller than the color gamut of any of RT’s built-in RGB working spaces, except possibly the RT sRGB color space.

The problem with the “yellow flower” raw file is that my custom camera matrix input profile interprets the yellow as outside the color gamut defined by real colors, colors seen by the standard observer. The dcraw standard matrix and the RT automatched DCP also interpret the yellow as outside the color gamut defined by real colors. But the channel detail captured by the camera is still valid detail.

This same problem happens with deep saturated violet blues such as blue neon signs or backlit blue glass, except it’s the red and green channels that get clipped to black. And the problem with violet blue is a bit more extreme as not only are one or two channel values less than zero, but also the LAB Lightness can be less than zero.
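
A quick numeric illustration of how a negative LAB Lightness arises: L* is computed from the Y (luminance) component, and once an input profile maps a deep violet-blue to a negative Y, the linear segment of the standard two-piece formula produces L* < 0. A sketch using the standard CIE formulas (the negative Y value is hypothetical, for illustration):

```cpp
// Sketch: CIE L* from relative luminance y = Y/Yn. The standard
// two-piece formula extends linearly below the cube-root segment, so a
// negative Y yields a negative L*.
#include <cmath>
#include <cstdio>

inline double lstar(double y)
{
    const double eps   = 216.0 / 24389.0;  // (6/29)^3
    const double kappa = 24389.0 / 27.0;   // (29/3)^3
    return (y > eps) ? 116.0 * std::cbrt(y) - 16.0 : kappa * y;
}

int main()
{
    std::printf("L*(0.18)  = %+.2f\n", lstar(0.18));   // mid gray, ~ +49.5
    std::printf("L*(-0.01) = %+.2f\n", lstar(-0.01));  // ~ -9.0: negative Lightness
    return 0;
}
```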

If they might be useful, I have sample raw files exhibiting the “bright saturated yellows and yellow greens” problem and also exhibiting the “dark saturated violet-blues” problem from my Canon 400D/Rebel xti camera, and also from my Sony A7 camera.

One solution to the “outside the realm of visible/real colors” problem is to make and use a custom camera input profile that’s a LAB LUT profile. But this creates a new set of problems: Colors resulting from using a LAB LUT input profile tend to be anemic (at least the ones I’ve made using ArgyllCMS produce anemic colors); LAB LUT profiles clip colors with channel values greater than 1.0; and commercially available target charts don’t have enough sample points to make a decent LUT profile.

My own solution for bringing “out of visible gamut” yellows, yellow greens, and blue-violets back into gamut for color output is to output the raw file to disk in my matrix custom camera input profile, make a copy, assign my custom LUT profile to the copy and then convert the copy to my custom matrix camera input profile, pull both images into the same layer stack, and blend using a layer mask. This can be done using output from current RawTherapee, but it does require using the “linear gamma sRGB as input and output profile” work-around.

As far as I know, which isn’t very far, the range of colors on which CIECAM02 was “built” isn’t that great. So for using CIECAM it’s better to have already brought all colors into the gamut of a color space that doesn’t include imaginary colors (which rules out camera input profiles). Personally I don’t have any qualms about the CIECAM02 module not handling out of gamut (channel values less than 0.0) colors or HDR colors. At least in my own workflow, when I use the RT CIECAM02 module, the image already has only “display range” RGB channel values, in a standard RGB working space.

As an aside, in theory can CIECAM02 be extended to handle “HDR but real” colors?

Edit: I did compile the unbounded code this morning, and confirmed that the issues mentioned above regarding dealing with “imaginary” colors produced by camera input profiles still obtain with the new code - which of course would be the case, as the issue is precisely the clipping of negative channel values.

Thanks @ggbutcher @snibgo and @Elle for the feedback. Ok, so now I understand not clipping to 0 is useful. I’ll see what I can do.

They would definitely be useful, thanks. Even better if you can license them in such a way that they could be included in the RT set of test files. If that’s not possible but you are still willing to send them to me, that’s also okay.

Hi @agriggio - here’s the yellow flower raw file from the Canon 400D:

https://ninedegreesbelow.com/bug-reports/ufraw-highlights/flower.cr2

There’s no embedded copyright information, but please assign whatever copyright you want. I have some other yellow flowers (direct sunlight; most are not back-lit, though back-lighting makes the problem worse as it intensifies the colors), a back-lit (sunlight) green leaf, and a “lettuce and tomatoes” image (direct halogen light, no back-lighting) that show similar issues. If you like I can provide these also. Plus I have some shots of a back-lit (sunlight) deep blue glass, and a back-lit (also sunlight) blue label on an empty bottle of Dasani water. All of these are test shots, not exactly pretty pictures!

The reason I mention the light source is that for many of these images the “camera” white balance in the raw file is UniWB, and so the problem “disappears” unless you change from UniWB to the white balance for the indicated light source. For most of the images “daylight” is the appropriate white balance. This includes the yellow flower image from the above link.

My Sony A7 “violet blue” images turned out to all be ev-bracketed, and my only yellow flower is completely blurred. So tomorrow (which will be sunny, unlike today) I’ll make some test shots that are in focus and not blurred.


@agriggio Another yellow flower


Regarding not clipping negative values, we also have to take care with the tools that calculate logarithms. There are some tools in RT which work in logarithmic space, especially for luminance. Negative input values will result in NaN or a clip, even when it’s not immediately clear what causes the clip to zero.

For example the damping function for RL-sharpening in RT

As you may (or may not) see, it clips to zero to avoid getting NaN from the logarithm of negative values…
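
For illustration, the generic guard pattern looks like this (a sketch of the pattern, not the actual RT damping code):

```cpp
// Sketch: tools that work in log space must not feed negative (or
// zero) values to std::log, or they produce NaN / -inf.
#include <algorithm>
#include <cmath>

inline float safeLog(float x)
{
    // Clamp to a small positive epsilon rather than 0 so the logarithm
    // stays finite; the price is that everything below eps collapses
    // to the same value.
    constexpr float eps = 1e-5f;
    return std::log(std::max(x, eps));
}
```

Any unbounded pipeline either has to accept such per-tool clamps (and document them) or remap negatives before the log-domain tools run.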

That’s a nice yellow flower, perfectly illustrates the problem. The center of the blue channel for the actual flower goes to black when using the Camera white balance, and even more so when using Daylight white balance. Using linear gamma sRGB from disk as the input profile shows there is actually plenty of detail in the blue channel.


@Elle, this is tricky stuff :exploding_head:, but is this correct -

  1. A camera sensor can record colours which humans can’t see, so for example the petals of a yellow flower might look a uniform yellow to us, but the camera can see a range of yellows.
  2. Bringing these colours into the processing creates problems because the colour science and associated maths is very much geared to visible (to humans) colours.
  3. You like to improve the range of tones and details in your finished images by taking the invisible-to-humans detail and making it visible, e.g. the petals now show some variation. So you produce a more interesting image, but it’s not quite how one would have observed it had one been there at the time.

This is a good point, thanks! IMHO it’s not a big deal if some tools clip to zero, as long as this is documented. Working with negative values seems like a niche/expert mode anyway, so I think it’s acceptable that you can only use a subset of the tools. Of course, everything has to work without crashing.

At this point though, for me working with negative values is low priority. I’d like to have everything work well (without crashes and artifacts) when not clipping to 1 (actually, in RT that’s 65535 most of the time, not 1) first.

This sounds like 16-bit integer arithmetic. I thought RT was floating point? Rawpedia says “RawTherapee 4 does all calculations in precise 32-bit floating point notation…” Please could you shed light? (I appreciate you can use floating-point variables to store, say, [0 to 65535]; is that what happens?)

hmmm… by that logic, clipping to 1 would mean 1-bit integer arithmetic? :slight_smile:

indeed :+1:


Hi @RawConvert - hmm, no, what you say touches on a lot of stuff that’s true, but doesn’t really describe the problems - or at least not all of the problems - when using a digital camera (some digital cameras; I don’t know about all digital cameras) to photograph saturated bright yellows (extending to yellow-green and orange) and saturated dark violet-blues.

Silicon sensors are “color blind”, unable to capture color information without using some kind of filter system to differentiate between the different wavelengths of visible light that hit the sensor. The most commonly used filter is the Bayer filter, which puts an alternating pattern of little red, green, and blue color filters over the individual pixels in the camera sensor. These filters aren’t “single-wavelength passing” filters. Instead the green filters let in a little bit of red and blue light, the blue filters let in a little bit of red and green light, and so forth. Color accuracy is affected by the particular chosen filters, both in terms of the central wavelength that’s allowed through and in terms of how wide a swath of wavelengths on either side of the central wavelength is allowed through. The more spectrally pure the filters are, the more accurately color can be recorded. The price for “better color” is longer exposure times, as less light gets through more spectrally pure filters to reach the sensor.

OK. So the pixels record varying amounts of red, green, and blue light. And the raw processors interpolate the recorded intensities to produce an RGB image with RGB channel values for each pixel. The next step is to take the RGB information and assign a camera input profile. The camera input profile says “For this interpolated image, using this input profile, these RGB channel values correspond to these XYZ/LAB colors”.
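
In code terms, a matrix input profile is just a 3×3 multiply from camera RGB to XYZ, and that multiply is exactly where “impossible” colors can appear: nothing constrains the output to the gamut of real colors. A sketch with an invented matrix (not any real camera’s values):

```cpp
// Sketch: applying a matrix camera input profile. The 3x3 matrix maps
// camera RGB to XYZ; the values below are invented for illustration,
// not any real camera's matrix.
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

Vec3 cameraToXYZ(const Vec3 &rgb)
{
    static const double m[3][3] = {
        {0.7,  0.2,  0.1},
        {0.3,  0.8, -0.1},
        {0.0, -0.1,  1.2},
    };
    Vec3 xyz = {0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            xyz[i] += m[i][j] * rgb[j];
    return xyz;
}

int main()
{
    // A highly saturated camera yellow: strong R and G, almost no B.
    Vec3 xyz = cameraToXYZ({0.9, 0.8, 0.02});
    // With this invented matrix the result is XYZ ~ (0.79, 0.91, -0.06):
    // a negative Z, i.e. a point outside the realm of real colors.
    std::printf("XYZ = %.4f %.4f %.4f\n", xyz[0], xyz[1], xyz[2]);
    return 0;
}
```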

The way camera input profiles are made is first you photograph a target chart that’s printed with various color patches with known XYZ/LAB values (“known” because they are carefully measured after they are printed). Ideally you photograph the target chart under full spectrum/high CRI light such as natural daylight or sunlight, or under tightly controlled studio lighting. And then you use profiling software that iterates over all the patches as captured by the camera, to produce a profile that basically says this particular RGB combination as recorded by this particular camera under these particular lighting conditions corresponds to this particular XYZ/LAB color.

The myriad ways in which a camera plus camera input profile can fail to capture the colors a human would have perceived when viewing the scene are, well, numerous, and for the most part fall outside the realm of “things I think I understand more or less”. But here are some basic considerations:

  1. The camera can’t accurately capture colors that fall too far outside the color gamut encompassed by the filter cap colors.

  2. The camera input profile can’t accurately point to colors that fall too far outside the gamut of colors actually on the target chart. And fewer target chart patches leave more room for error for colors that fall between the patches.

  3. There is this thing called “metameric mismatch”, which leads us, for example, to sometimes realize that the two socks that matched each other at home are obviously not the same color at the office. Metameric mismatch is based on the fact that many different combinations of wavelengths of light can result in our eyes detecting “the same” color, and also that what looks like “the same” color under lighting condition A might look very different under lighting condition B. Less expensive target charts use simpler, less representative pigments to produce the color patches, so they are especially prone to metameric failures. So “number of patches” isn’t the only consideration when selecting a target chart. The quality of the inks/dyes/pigments used to produce the patches also counts.

  4. The light source on the scene might be “too different” from the light source used to photograph the target chart, with “too different” depending entirely on the color accuracy requirements for the photographic task at hand. This is why fashion and food photographers shooting on location often will make a target chart shot for every image they capture, or at least every time the light changes.

  5. The type of camera input profile that was made (or that you make) for your camera puts limits on how accurately the input profile can reproduce the colors on the target chart that was used to make the profile:

    • Just as with monitor profiles, when using ArgyllCMS to make a camera input profile you can choose between single channel gamma matrix (including limiting the single channel to being linear gamma), three-channel gamma matrix, single channel shaper matrix, three-channel shaper matrix, XYZ LUT and LAB LUT camera input profiles.
    • Three-channel matrix and LUT profiles can produce lower deltas for the measured patches, compared to linear gamma matrix profiles. But for a camera, well, commercially available charts just don’t have enough patches for extremely accurate profiles, so your profiling process will amount to curve-fitting, producing “more right” colors for the patches on the chart but “less right” results for colors not on the target chart.
    • So if you really need the accuracy of a LUT or three-channel matrix camera input profile, it’s better to make your own target chart using pigments representative of the colors you’ll actually be photographing, and also use controlled lighting, and make a new profile if you plan to shoot under different lighting conditions.

For general photography, the most commonly used type of camera input profile is a single-channel linear gamma matrix input profile. At least for some cameras, including my Canon 400D, my Sony A7, and @heckflosse’s Nikon that was used to photograph the yellow flower in his NEF (well, I think that’s a fairly representative set of cameras), this type of camera input profile not only fails with bright yellow objects out there in the world, but also can fail with some of the yellow color patches on the target chart that was photographed to make the camera input profile.

By “fail” I mean the camera input profile interprets the saturated yellows on the target chart that was used to make the profile as “too saturated” compared to the known XYZ/LAB values of these color patches on the target chart. And not just “too saturated” but also so saturated as to be outside the realm of visible colors.

This isn’t a simple case of the camera capturing information a human can’t see (for that, consider microphotography, night photography, infrared or ultraviolet photography, etc), but rather a case of the “camera input profile” failing to accurately interpret the captured information (putting to one side questions of how accurately the information was captured in the first place given the particular camera sensor, lighting, lens, exposure, and etc).

The problem I’m pointing to is cases where the captured color information as interpreted by the camera input profile is wrong. This particular “wrong” has to do more with limitations of the camera input profile, than with myriad other factors that also influence “captured colors” vs “actual scene colors”.

Simply clipping the channel information to the color gamut of “visible colors” doesn’t make the captured color match what you might have seen when you took the photograph. For saturated bright yellow colors this just takes “the wrong color” and makes it “more wrong” by throwing away the blue channel’s contribution to the color.

For dark saturated violet-blues, these colors - as captured by a given camera and interpreted by a given “general purpose” matrix input profile - can end up being interpreted by the input profile as not just “outside the gamut of visible colors”, but also with a negative Luminance value, which is totally nonsensical from the point of view of “what we actually see”. Which makes dealing with these colors more difficult even than dealing with bright saturated yellows.

Sometimes indeed the quickest and simplest way to deal with bright saturated yellows and dark saturated violet-blues is simply to clip the channel information, and perhaps most of the time this is fine with most users for most purposes. But if a user wants to do something else, wants to take steps to bring the colors back into the gamut of real colors, or wants to use the channel information captured by the camera during processing, it would be nice if they had a chance to do this.


I just checked, and today my socks match. Didn’t, yesterday… :smiley:

Seriously, @Elle’s treatise above is one of the more informative discussions of camera profile pitfalls I’ve seen yet. Thanks…

@agriggio - I pulled all the files together into a folder. The Canon files are around 9MB each, and the one Sony file is 24MB. The Sony file is a test shot with blue glass and also yellow glass, backlit. The yellow glass goes to black in the blue channel and the blue glass goes to black in the red and green channels.

Some of the channel information from four of the Canon test files is used in this article about “noise in blue channel” actually sometimes being information that was clipped to black:

The blue channel of the lettuce in the “lettuce and tomatoes” image looks quite unexpectedly awful when using the RT DCP input profile, not so bad with the standard profile.

The empty Dasani bottle label shows that even a mid-toned blue can have clipped channel information. Other than that, it’s not a particularly interesting test image.

The thing that makes clipped channel information really difficult to deal with, is as follows: as long as you keep the colors extremely saturated, then you won’t see any problems. But as soon as you start desaturating the image, lost channel information can show up as posterized areas.

I can put these files into a zip file with “per image” white balancing information, but it would be a large zip file. If you are still interested in the additional files, where would you like them uploaded to? I could just drop them one by one into this thread, but I don’t know how @patdavid would feel about that!

@ggbutcher - That’s a very kind thing to say, thank you! This stuff is pretty important, but it’s also stuff that nobody really wants to think about because it’s complicated. But then sometimes a photograph turns up with a problem that just doesn’t seem to have any solution except to look into the details of the processing pipeline.

I’m looking at a xyY plot right now, specifically the one you cite in one of your profile articles from Wikipedia. I would gather that we’re talking about the yellow hues along the edge of the visible spectrum horseshoe, at about 575nm? If so, would that be because matrix profiles define their boundaries such that the straight line between greenest-green and reddest-red can can cut off the yellows that fall between it and the edge of the horseshoe?