Feature request: save as floating-point


(Elle Stone) #61

@ggbutcher and @snibgo both mention the fact that various operations can push originally “in display range” channel values into negative territory. This is one reason to consider not clipping to 0. Unfortunately, as noted in various recent pixls.us discussions, sometimes trying to make further edits using <0.0 (and sometimes even with >1.0) RGB channel values can lead to unexpected and visually awful results. @snibgo suggests one way to deal with these issues, and it seems like an approach with a lot of potential, but as always the devil is in the details.

Not clipping negative (out of gamut) channel values to 0.0 can be very important, even before any actual editing operations have a chance to produce negative channel values. Sometimes channel detail in the raw file will be summarily and irrevocably lost by a “negative channel values clamped to zero” ICC profile conversion. This does happen in RawTherapee, and I’m fairly sure it also occurs in a conversion to LAB somewhere in the RawTherapee processing pipeline. Here is an illustration of the problem:

“1” is what the color image looks like.

“2” is what the blue channel looks like in the interpolated raw file, after assigning a custom camera input profile and before converting to any other color space. This is the detail I sometimes want to preserve either as a blending layer or for channel-based conversions to black and white.

“3” is what the blue channel looks like in RawTherapee after converting to one of RawTherapee’s hard-coded RGB working spaces.

“4” is what the blue channel looks like when asking RawTherapee to convert to the custom camera input profile upon saving the interpolated file to disk. This conversion back to the input profile doesn’t retrieve the original blue channel information. The detail captured by the camera is gone.
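
To see why the round trip can’t recover the detail, here’s a minimal C++ sketch (the matrix values are made up for illustration; this is not RT’s pipeline code). Once the clamp to zero runs in the forward conversion, applying the exact inverse matrix no longer returns the original values:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical camera-space -> working-space matrix and its exact inverse.
// Illustrative numbers only, not a real profile.
static const float fwd[3][3] = {
    { 1.2f, -0.1f, -0.1f},
    {-0.1f,  1.2f, -0.1f},
    {-0.1f, -0.1f,  1.2f}
};
static const float inv[3][3] = {
    {1.1f / 1.3f, 0.1f / 1.3f, 0.1f / 1.3f},
    {0.1f / 1.3f, 1.1f / 1.3f, 0.1f / 1.3f},
    {0.1f / 1.3f, 0.1f / 1.3f, 1.1f / 1.3f}
};

static void apply(const float m[3][3], const float in[3], float out[3])
{
    for (int i = 0; i < 3; ++i) {
        out[i] = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
    }
}

int main()
{
    // A saturated yellow: strong red and green, small but real blue.
    const float camera[3] = {0.9f, 0.8f, 0.05f};
    float work[3], back[3];

    apply(fwd, camera, work);         // blue lands at about -0.11
    for (float &c : work) {
        c = std::max(c, 0.0f);        // the "clamp to zero" step
    }
    apply(inv, work, back);           // convert back to camera space

    // Prints roughly 0.05 -> 0.143: the clamp destroyed the blue detail,
    // and the inverse conversion has nothing left to reconstruct it from.
    std::printf("blue before %.3f, after round trip %.3f\n", camera[2], back[2]);
}
```

Without the clamp, the round trip is exact, which is why an unclipped conversion preserves the option to recover this detail later.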

Sometimes I use the channel detail from the camera raw file as blending layers, and sometimes for black and white output, which is one reason why I like the option to output completely scene-referred files that are still in the camera input color space. But currently this isn’t possible with RawTherapee, except when using the following work-around:

  • Assign a linear gamma sRGB profile as both the input and the output profile, and choose one of RT’s larger RGB working spaces such as Rec2020 or ProPhotoRGB, to avoid clipping during the conversion to the working space. This works because the color gamut of an sRGB profile from disk is smaller than the color gamut of any of RT’s built-in RGB working spaces, except possibly the RT sRGB color space.

The problem with the “yellow flower” raw file is that my custom camera matrix input profile interprets the yellow as outside the color gamut defined by real colors, colors seen by the standard observer. The dcraw standard matrix and the RT automatched DCP also interpret the yellow as outside the color gamut defined by real colors. But the channel detail captured by the camera is still valid detail.

This same problem happens with deep saturated violet blues such as blue neon signs or backlit blue glass, except it’s the red and green channels that get clipped to black. And the problem with violet blue is a bit more extreme as not only are one or two channel values less than zero, but also the LAB Lightness can be less than zero.

If they might be useful, I have sample raw files exhibiting the “bright saturated yellows and yellow greens” problem and also the “dark saturated violet-blues” problem, from my Canon 400D/Rebel XTi camera and also from my Sony A7 camera.

One solution to the “outside the realm of visible/real colors” problem is to make and use a custom camera input profile that’s a LAB LUT profile. But this creates a new set of problems: Colors resulting from using a LAB LUT input profile tend to be anemic (at least the ones I’ve made using ArgyllCMS produce anemic colors); LAB LUT profiles clip colors with channel values greater than 1.0; and commercially available target charts don’t have enough sample points to make a decent LUT profile.

My own solution for bringing “out of visible gamut” yellows, yellow greens, and blue-violets back into gamut for color output is to output the raw file to disk in my matrix custom camera input profile, make a copy, assign my custom LUT profile to the copy and then convert the copy to my custom matrix camera input profile, pull both images into the same layer stack, and blend using a layer mask. This can be done using output from current RawTherapee, but it does require using the “linear gamma sRGB as input and output profile” work-around.

As far as I know, which isn’t very far, the range of colors on which CIECAM02 was “built” isn’t that great. So for using CIECAM it’s better to have already brought all colors into the gamut of a color space that doesn’t include imaginary colors (which rules out camera input profiles). Personally I don’t have any qualms about the CIECAM02 module not handling out of gamut (channel values less than 0.0) colors or HDR colors. At least in my own workflow, when I use the RT CIECAM02 module, the image already has only “display range” RGB channel values, in a standard RGB working space.

As an aside, in theory can CIECAM02 be extended to handle “HDR but real” colors?

Edit: I did compile the unbounded code this morning, and confirmed that the issues mentioned above regarding “imaginary” colors produced by camera input profiles still obtain with the new code, which of course is to be expected, as the issue is precisely the clipping of negative channel values.


(Alberto) #62

Thanks @ggbutcher @snibgo and @Elle for the feedback. Ok, so now I understand not clipping to 0 is useful. I’ll see what I can do.

They would definitely be useful, thanks. Even better if you can license them in such a way that they could be included in the RT set of test files. If that’s not possible but you are still willing to send them to me, that’s also okay.


(Elle Stone) #63

Hi @agriggio - here’s the yellow flower raw file from the Canon 400D:

https://ninedegreesbelow.com/bug-reports/ufraw-highlights/flower.cr2

There’s no embedded copyright information, but please assign whatever copyright you want. I have some other yellow flowers (direct sunlight; most are not back-lit, though back-lighting makes the problem worse as it intensifies the colors), a back-lit (sunlight) green leaf, and a “lettuce and tomatoes” image (direct halogen light, no back-lighting), that show similar issues. If you like I can provide these also. Plus I have some shots of a back-lit (sunlight) deep blue glass, and a back-lit (also sunlight) blue label on an empty bottle of Dasani water. All of these are test shots, not exactly pretty pictures!

The reason I mention the light source is that for many of these images the “camera” white balance in the raw file is UniWB, and so the problem “disappears” unless you change from UniWB to the white balance for the indicated light source. For most of the images “daylight” is the appropriate white balance. This includes the yellow flower image from the above link.

My Sony A7 “violet blue” images turned out to all be EV-bracketed, and my only yellow flower is completely blurred. So tomorrow (which will be sunny, unlike today) I’ll make some test shots that are in focus and not blurred.


(Ingo Weyrich) #64

@agriggio Another yellow flower


(Ingo Weyrich) #65

Regarding not clipping negative values, we also have to take care of the tools which calculate a logarithm. There are some tools in RT which work in logarithmic space, especially for luminance. Negative input values will result in NaN, or in a clip, even when it’s not immediately clear what causes the clip to zero.

For example the damping function for RL-sharpening in RT

As you may (or may not) see, it clips to zero to avoid getting NaN from the logarithm of negative values…
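
A minimal sketch of the pattern (not RT’s actual damping code; the epsilon and the weight formula here are made up for illustration):

```cpp
#include <cmath>
#include <cstdio>

// Sketch of a log-domain damping weight. Without the fmax() clips,
// std::log of a value <= 0 returns NaN (or -inf for exactly 0),
// which then propagates through every pixel it touches.
float dampingWeight(float lum, float refLum)
{
    const float eps = 1e-5f;
    lum = std::fmax(lum, eps);      // clip negative luminance to avoid NaN
    refLum = std::fmax(refLum, eps);
    return std::exp(-std::fabs(std::log(lum / refLum)));
}

int main()
{
    // Finite only because of the clips; with lum = -0.2 passed straight
    // to std::log, the result would be NaN.
    std::printf("%f\n", dampingWeight(-0.2f, 0.5f));
}
```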


(Elle Stone) #66

That’s a nice yellow flower; it perfectly illustrates the problem. The center of the blue channel for the actual flower goes to black when using the Camera white balance, and even more so when using Daylight white balance. Using linear gamma sRGB from disk as the input profile shows there is actually plenty of detail in the blue channel.


(Andrew) #67

@Elle, this is tricky stuff :exploding_head:, but is this correct -

  1. A camera sensor can record colours which humans can’t see, so for example the petals of a yellow flower might look a uniform yellow to us, but the camera can see a range of yellows.
  2. Bringing these colours into the processing creates problems because the colour science and associated maths is very much geared to visible (to humans) colours.
  3. You like to improve the range of tones and details in your finished images by taking the invisible-to-humans detail and making it visible, e.g. the petals now show some variation. So you produce a more interesting image, but it’s not quite how one would have observed it had one been there at the time.

(Alberto) #68

This is a good point, thanks! IMHO it’s not a big deal if some tools clip to zero, as long as this is documented. Working with negative values seems like a niche/expert mode anyway, so I think it’s acceptable that you can only use a subset of the tools. Of course, everything has to work without crashing.

At this point though, for me working with negative values is low priority. I’d like to have everything work well (without crashes and artifacts) when not clipping to 1 (actually, in RT that’s 65535 most of the time, not 1) first.


(Andrew) #69

This sounds like 16-bit integer arithmetic. I thought RT was floating point? Rawpedia says “RawTherapee 4 does all calculations in precise 32-bit floating point notation…” Please could you shed light? (I appreciate you can use floating-point variables to store, say, [0 to 65535]; is that what happens?)


(Alberto) #70

hmmm… by that logic, clipping to 1 would mean 1-bit integer arithmetic? :slight_smile:

indeed :+1:
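
To spell out what “indeed” confirms: RT stores pixel values in 32-bit floats but keeps the nominal white point at 65535, so “clipping to 65535” is the float-pipeline equivalent of “clipping to 1.0”. A toy sketch:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    // 32-bit float storage, with white nominally at 65535 rather than 1.0.
    float pixel = 1.7f * 65535.0f;             // a highlight above nominal white

    // Clipping is a convention, not a limit of the float type,
    // which could represent this value (and negative ones) just fine.
    float clipped = std::min(pixel, 65535.0f);

    std::printf("unclipped %.1f (%.3f of white), clipped %.1f\n",
                pixel, pixel / 65535.0f, clipped);
}
```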


(Elle Stone) #71

Hi @RawConvert - hmm, no, what you say touches on a lot of stuff that’s true, but doesn’t really describe the problems - or at least not all of the problems - when using a digital camera (some digital cameras, at least; I don’t know about all digital cameras) to photograph saturated bright yellows (extending to yellow-green and orange) and saturated dark violet-blues.

Silicon sensors are “color blind”, unable to capture color information without using some kind of filter system to differentiate between the different wavelengths of visible light that hit the sensor. The most commonly used filter is the Bayer filter, which puts an alternating pattern of little red, green, and blue color filters over the individual pixels in the camera sensor. These filters aren’t “single-wavelength-passing” filters. Instead the green filters let in a little bit of red and blue light, the blue filters let in a little bit of red and green light, and so forth. Color accuracy is affected by the particular chosen filters, both in terms of the central wavelength that’s allowed through, and in terms of how wide a swath of wavelengths on either side of the central wavelength is allowed through. The more spectrally pure the filters are, the more accurately color can be recorded. The price for “better color” is longer exposure times, as less light gets through more spectrally pure filters to reach the sensor.

OK. So the pixels record varying amounts of red, green, and blue light. And the raw processors interpolate the recorded intensities to produce an RGB image with RGB channel values for each pixel. The next step is to take the RGB information and assign a camera input profile. The camera input profile says “For this interpolated image, using this input profile, these RGB channel values correspond to these XYZ/LAB colors”.
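
For a matrix input profile, that correspondence is essentially one 3×3 multiply per pixel. A sketch with placeholder coefficients (a real profile’s matrix comes out of the profiling process described next):

```cpp
#include <cstdio>

// A matrix camera input profile boils down to: camera RGB -> XYZ.
// These coefficients are placeholders, not any real camera's matrix.
static const float camToXYZ[3][3] = {
    {0.60f, 0.25f, 0.10f},
    {0.30f, 0.65f, 0.05f},
    {0.02f, 0.10f, 0.80f}
};

int main()
{
    const float rgb[3] = {0.9f, 0.8f, 0.05f}; // interpolated camera values
    float xyz[3] = {0.0f, 0.0f, 0.0f};

    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            xyz[i] += camToXYZ[i][j] * rgb[j];
        }
    }

    // Nothing in the multiply prevents XYZ combinations that correspond
    // to "imaginary" colors outside the horseshoe of real colors.
    std::printf("XYZ = %.3f %.3f %.3f\n", xyz[0], xyz[1], xyz[2]);
}
```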

The way camera input profiles are made is first you photograph a target chart that’s printed with various color patches with known XYZ/LAB values (“known” because they are carefully measured after they are printed). Ideally you photograph the target chart under full spectrum/high CRI light such as natural daylight or sunlight, or under tightly controlled studio lighting. And then you use profiling software that iterates over all the patches as captured by the camera, to produce a profile that basically says this particular RGB combination as recorded by this particular camera under these particular lighting conditions corresponds to this particular XYZ/LAB color.

The myriad ways in which a camera plus camera input profile can fail to capture the colors a human would have perceived when viewing the scene are, well, numerous, and for the most part fall outside the realm of “things I think I understand more or less”. But here are some basic considerations:

  1. The camera can’t accurately capture colors that fall too far outside the color gamut encompassed by the filter cap colors.

  2. The camera input profile can’t accurately point to colors that fall too far outside the gamut of colors actually on the target chart. And fewer target chart patches leave more room for errors for colors that fall between the patches.

  3. There is this thing called “metameric mismatch” which leads us, for example, to sometimes realize that the two socks that matched each other at home are obviously not the same color at the office. Metameric mismatch arises from the fact that many different combinations of wavelengths of light can result in our eyes detecting “the same” color, and also that what looks like “the same” color under lighting condition A might look very different under lighting condition B. Less expensive target charts use less complicated and less representative color pigments to produce the color patches, so are especially prone to metameric failures. So “number of patches” isn’t the only consideration when selecting a target chart. The quality of the inks/dyes/pigments used to produce the patches also counts.

  4. The light source on the scene might be “too different” from the light source used to photograph the target chart, with “too different” depending entirely on the color accuracy requirements for the photographic task at hand. This is why fashion and food photographers shooting on location will often make a target chart shot for every image they capture, or at least every time the light changes.

  5. The type of camera input profile that was made (or that you make) for your camera puts limits on how accurately the input profile can reproduce the colors on the target chart that was used to make the profile:

    • Just as with monitor profiles, when using ArgyllCMS to make a camera input profile you can choose between single channel gamma matrix (including limiting the single channel to being linear gamma), three-channel gamma matrix, single channel shaper matrix, three-channel shaper matrix, XYZ LUT and LAB LUT camera input profiles.
    • Three-channel matrix and LUT profiles can produce lower delta-E errors for the measured patches, compared to linear gamma matrix profiles. But for a camera, well, commercially available charts just don’t have enough patches for extremely accurate profiles, so your profiling process will amount to curve-fitting, producing “more right” colors for the patches on the chart but “less right” results for colors not on the target chart.
    • So if you really need the accuracy of a LUT or three-channel matrix camera input profile, it’s better to make your own target chart using pigments representative of the colors you’ll actually be photographing, and also use controlled lighting, and make a new profile if you plan to shoot under different lighting conditions.

For general photography, the most commonly used type of camera input profile is a single-channel linear gamma matrix input profile. At least for some cameras, including my Canon 400D, my Sony A7, and @heckflosse’s Nikon that was used to photograph the yellow flower in his NEF (well, I think that’s a fairly representative set of cameras), this type of camera input profile not only fails with bright yellow objects out there in the world, but also can fail with some of the yellow color patches on the target chart that was photographed to make the camera input profile.

By “fail” I mean the camera input profile interprets the saturated yellows on the target chart that was used to make the profile as “too saturated” compared to the known XYZ/LAB values of these color patches on the target chart. And not just “too saturated” but also so saturated as to be outside the realm of visible colors.

This isn’t a simple case of the camera capturing information a human can’t see (for that, consider microphotography, night photography, infrared or ultraviolet photography, etc), but rather a case of the “camera input profile” failing to accurately interpret the captured information (putting to one side questions of how accurately the information was captured in the first place given the particular camera sensor, lighting, lens, exposure, and etc).

The problem I’m pointing to is cases where the captured color information as interpreted by the camera input profile is wrong. This particular “wrong” has to do more with limitations of the camera input profile, than with myriad other factors that also influence “captured colors” vs “actual scene colors”.

Simply clipping the channel information to the color gamut of “visible colors” doesn’t make the captured color match what you might have seen when you took the photograph. For saturated bright yellow colors this just takes “the wrong color” and makes it “more wrong” by throwing away the blue channel contribution to the color.

For dark saturated violet-blues, these colors - as captured by a given camera and interpreted by a given “general purpose” matrix input profile - can end up being interpreted by the input profile as not just “outside the gamut of visible colors”, but also with a negative Luminance value, which is totally nonsensical from the point of view of “what we actually see”. Which makes dealing with these colors more difficult even than dealing with bright saturated yellows.

Sometimes indeed the quickest and simplest way to deal with bright saturated yellows and dark saturated violet-blues is simply to clip the channel information, and perhaps most of the time this is fine with most users for most purposes. But if a user wants to do something else, wants to take steps to bring the colors back into the gamut of real colors, or wants to use the channel information captured by the camera during processing, it would be nice if they had a chance to do this.


(Glenn Butcher) #72

I just checked, and today my socks match. Didn’t, yesterday… :smiley:

Seriously, @Elle’s treatise above is one of the more informative discussions of camera profile pitfalls I’ve seen yet. Thanks…


(Elle Stone) #73

@agriggio - I pulled all the files together into a folder. The Canon files are around 9MB each, and the one Sony file is 24MB. The Sony file is a test shot with blue glass and also yellow glass, backlit. The yellow glass goes to black in the blue channel and the blue glass goes to black in the red and green channels.

Some of the channel information from four of the Canon test files is used in this article about how “noise in the blue channel” is actually sometimes information that was clipped to black:

https://ninedegreesbelow.com/photography/negative-primaries.html

The blue channel of the lettuce in the “lettuce and tomatoes” image looks quite unexpectedly awful when using the RT DCP input profile, but not so bad with the standard profile.

The empty Dasani bottle label shows that even a mid-toned blue can have clipped channel information. Other than that, it’s not a particularly interesting test image.

The thing that makes clipped channel information really difficult to deal with is as follows: as long as you keep the colors extremely saturated, you won’t see any problems. But as soon as you start desaturating the image, lost channel information can show up as posterized areas.

I can put these files into a zip file with “per image” white balancing information, but it would be a large zip file. If you are still interested in the additional files, where would you like them uploaded to? I could just drop them one by one into this thread, but I don’t know how @patdavid would feel about that!

@ggbutcher - That’s a very kind thing to say, thank you! This stuff is pretty important, but it’s also stuff that nobody really wants to think about because it’s complicated. But then sometimes a photograph turns up with a problem that just doesn’t seem to have any solution except to look into the details of the processing pipeline.


(Glenn Butcher) #74

I’m looking at an xyY plot right now, specifically the one you cite in one of your profile articles from Wikipedia. I would gather that we’re talking about the yellow hues along the edge of the visible spectrum horseshoe, at about 575nm? If so, would that be because matrix profiles define their boundaries such that the straight line between greenest-green and reddest-red can cut off the yellows that fall between it and the edge of the horseshoe?


(Ed Mathis) #75

The tone of this discussion takes me back to my college days, sitting in a class run by a particularly knowledgeable and articulate full professor…

Elle, thanks for going over the stuff that goes on “behind the curtain” while making a digital photo! It’s fascinating.


(darix) #76

There is way too much math here! All I want is nice photos!


(Glenn Butcher) #77

Me too, actually. In that regard, photography is a rabbit-hole; you go out and capture images, look at your results and say, “Gee, that’s not quite right. Why…” - and there you go, down the hole.

Interestingly, a lot of consideration has gone into abstracting the technical details of digital imaging so creative folk can do their thing without a lot of logN and dX thinking. Sometimes, the interpretations of the abstractions get convoluted (if you’ve been following the ISO invariance discussions on DPReview, good example), but there are distinct levels of understanding that allow folks to get their particular jobs done without worrying the underlying details. OOC JPEG is really not bad these days; just don’t look too closely… :smile:


(Elle Stone) #78

Yes.

No, or not only. Some RGB matrix color spaces such as sRGB, Rec2020, and WideGamutRGB have RGB chromaticities that lie on or within the “horseshoe” of real colors on the xy plane. For these color spaces there will always be at least a “sliver” of real yellow colors that aren’t encompassed by the particular RGB color space.

However, some RGB matrix color spaces do include all real yellow colors, by virtue of using an “imaginary” (outside the locus of real colors, outside the “horseshoe”) color as one or more of the RGB chromaticities. So for example ProPhotoRGB has two imaginary chromaticities (blue and green) and one real chromaticity (red). This allows ProPhotoRGB to include all real yellow colors.

The following image shows sRGB, Rec2020, and ProPhotoRGB chromaticities. See how the ProPhoto “triangle” skims along the right-most edge of “all real colors”?
[Image “CIExy1931_ProPhoto-sRGB-Rec2020”: the sRGB, Rec2020, and ProPhotoRGB triangles plotted on the CIE xy chromaticity diagram]

The above image is a superposition of the following public domain Wikipedia files, with added labels and “dots” to show the chromaticities:
https://commons.wikimedia.org/wiki/File:CIExy1931_ProPhoto.svg, https://commons.wikimedia.org/wiki/File:CIExy1931_Rec_2020.svg, https://commons.wikimedia.org/wiki/File:CIExy1931_Rec_709.svg.

The ACES color space also includes all real yellows, and indeed all other real colors, by virtue of having a blue chromaticity below the x axis. The “base” of the ACES triangle skims along the “magenta” line connecting wavelengths 380nm and 700nm until it intersects the negative y axis, which is where the “blue” ACES chromaticity is located. And the “green” ACES chromaticity is where the line extending from the red chromaticity, skimming along all the yellow colors, intersects the positive y axis.

So all real colors can be accommodated in one or another RGB matrix profile. The problem with camera input profiles is that some RGB channel values might correspond to colors that are outside the “horseshoe” of real colors. The extent to which this is a problem depends on the colors that were photographed, the camera, and the input profile. Here’s an example of yellow colors that fall completely outside the “horseshoe”:


[Image: camera-interpreted yellows plotted outside the “horseshoe”; from https://ninedegreesbelow.com/photography/negative-primaries.html]
The actual yellow colors that were photographed were real colors, really seen out there in the real world. These are colors from bright yellow flowers photographed in full sun. So the problem isn’t that the camera recorded colors that humans can’t see. Instead the problem is that the camera input profile wrongly interpreted these colors as being imaginary.

For yellow and also yellow-green and orange colors that fall outside the “horseshoe”, when converted to any commonly-used RGB matrix profile the blue channel is less than zero, “out of gamut”, outside the RGB color space’s “triangle” on the xy plane. If these colors are clipped, the blue channel information is lost. Obviously more information is clipped when converting to RGB working spaces that have real colors for all three chromaticities (for example, WideGamut, Rec.2020, sRGB, AdobeRGB, etc).
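
To put numbers on this, here’s a small sketch using the standard D65 XYZ-to-linear-sRGB matrix. The chromaticity below is a highly saturated yellow near the 575nm edge of the horseshoe (coordinates chosen for illustration); it is still a real color, yet its blue channel already comes out negative, and yellows an input profile places outside the horseshoe land even further negative:

```cpp
#include <cstdio>

int main()
{
    // A very saturated yellow near the 575nm edge of the horseshoe,
    // as xyY with Y = 0.8 (illustrative coordinates):
    const float x = 0.48f, y = 0.52f, Y = 0.8f;
    const float X = x * Y / y;
    const float Z = (1.0f - x - y) * Y / y;

    // Standard D65 XYZ -> linear sRGB matrix:
    const float R =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
    const float G = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
    const float B =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;

    // Prints roughly R=1.16 G=0.79 B=-0.12. Clipping B at 0.0
    // throws the blue channel information away.
    std::printf("linear sRGB: R=%.3f G=%.3f B=%.3f\n", R, G, B);
}
```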

The problem with saturated violet-blue colors that get interpreted by an input profile as outside the “horseshoe” (e.g. near wavelengths 380nm to 470nm), hence “not real”, is two-fold: Sometimes these colors are clipped in two channels, not just one, so a whole lot of channel information is lost. And sometimes these colors are interpreted as having a negative Luminance. Clipping does produce a color with a positive luminance. But it also can create areas with posterization that can become very obvious during post-processing.

Something that is very easy to forget is how radically our monitors dictate the colors we see. But the colors on the monitor also are clipped, even before the image is converted from the camera input profile to an RGB working space (you have to use some other raw processor than RawTherapee to observe this, as RT currently doesn’t allow viewing the “before conversion to RGB working space” image). So blue colors with negative Luminance values won’t “look black” even though “by the numbers” the Luminance is less than zero. But all you see on the monitor is the color from the unclipped blue channel. If you keep the image in the camera input color space, or else make a completely unclipped conversion to an RGB color space, and then make a Luminance conversion to black and white, that does allow you to see that indeed the “blue” areas have a negative Luminance.
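
Seeing the negative Luminance “by the numbers” is simple, since the relative luminance of linear RGB is a weighted sum in which blue carries very little weight. A sketch with made-up channel values for an unclipped violet-blue:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical unclipped working-space values for a deep violet-blue:
    // red and green have gone negative, blue is strong.
    const float R = -0.20f, G = -0.08f, B = 0.95f;

    // Relative luminance of linear Rec.709/sRGB values:
    const float Y = 0.2126f * R + 0.7152f * G + 0.0722f * B;

    // Prints Y = -0.0311: blue contributes so little to luminance that
    // the negative red and green pull the total below zero, even though
    // the pixel displays as a bright blue on screen.
    std::printf("Y = %.4f\n", Y);
}
```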


(Pat David) #79

For you? Upload away! :slight_smile: We shouldn’t have a problem with .zip files, and there should be plenty of headroom (~100MB per upload I think).


(Glenn Butcher) #80

It’s funny, I started into the vagaries of color management by wrangling LittleCMS, in the beginning with very little understanding of what I was doing. I now see how, ideally, the profiles in a color workflow need to “nest”: start with a gamut representative of the capture device and preserve it as much as possible until the image is ready for output to media.

Of late, I’ve felt the need to understand my camera profile in this regard, and I’ve just finished an SVG plotter to see its xy coverage. Earlier today I achieved that, and was dismayed to see all the “imaginary” space it encompassed, but your negative-primaries.html graph above, with its camera profile, eases my mind somewhat. The plotter is a bit hard-coded at this point but I’m going to post it for general consumption tonight when I get home.

Regarding displays, I’m looking at a three-monitor lashup right now, and each is radically different in both color and tone. You really don’t notice such things until you can do this sort of side-by-side comparison, and I think most folks, staring at a single monitor, just settle for what they see. It really was driven home for me recently when my son and I had to build a slide show to take to church; we didn’t have access to their equipment until the event, and I was a bit dismayed to see some of my pretty pictures’ colors crushed on both ends by their projectors, even after I took pains to convert-on-output to sRGB. I think this problem is prevalent: how do you produce output for media whose color pedigree you can’t ascertain prior to the performance? Perplexing… :slight_smile: