Unbounded Floating Point Pipelines


#41

I have been “bothered” by this statement. My question is: how do I know when values greater than 1 are the result of HDR and not “oodr”? I may have asked this question before, but it is still unclear in my mind.

In particular, I am still stuck on PhotoFlow’s raw developer (color) module, where I have the opportunity to preserve the negative and positive outliers (which are probably “oodr”, come to think of it). I normally clip the negative values because they are hard to deal with, and I have the impression that they are different from the positive ones, in that they are discontinuous from 0+ values, unlike 1+ values. Maybe I am wrong about that…

Anyway, certain uses and apps require that I clip both ends. So I have to keep that in mind. Otherwise, I will go crazy wondering why something isn’t working.


(Carmelo Dr Raw) #42

@Elle might have a better definition of this, but a simple way of seeing the difference is through an RGB -> XYZ conversion.
If the resulting Y value is within the [0…1] range, the corresponding “color” is not HDR, and any RGB component below 0 or above 1 is that way because the color is outside the gamut of the colorspace in use. Converting to a wider colorspace will bring all RGB values inside [0…1].
On the other hand, if Y is above 1, then the color is “HDR”. Notice, however, that any color with Y > 1 is automatically out of the gamut of ANY well-behaved RGB colorspace, because at least one of the RGB components must in this case be bigger than 1.
Negative Y values are unphysical, because nothing can be darker than pure black.
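Carmelo’s Y-based test can be sketched numerically. A minimal illustration, assuming linear (gamma 1.0) RGB and the familiar D65 sRGB luminance weights; an ICC V2/V4 matrix profile would use D50-adapted weights, so treat the coefficients as placeholders:

```python
# Hypothetical sketch of the Y-based distinction described above.
# Assumes linear (gamma 1.0) RGB and the familiar D65 sRGB luminance
# weights; an ICC V2/V4 matrix profile would use D50-adapted weights.

def luminance_Y(r, g, b):
    """Relative luminance Y of a linear sRGB triplet (D65 weights)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def classify_by_Y(rgb):
    y = luminance_Y(*rgb)
    if y > 1.0:
        return "HDR (out of gamut for any well-behaved RGB space)"
    if min(rgb) < 0.0 or max(rgb) > 1.0:
        return "out of gamut for this colorspace, not HDR"
    return "in range"

print(classify_by_Y((0.5, 0.5, 0.5)))   # in range
print(classify_by_Y((1.2, -0.1, 0.3)))  # out of gamut, not HDR
print(classify_by_Y((2.0, 2.0, 2.0)))   # HDR
```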

Concerning the clipping in PhF’s RAW module, my recommended settings are:

  • HL reconstruction set to clip or blend
  • clip only negative values in the color tab
    This way, “HDR” values resulting for example from a positive exposure compensation are still available for further processing.

You are right that 0 represents a discontinuity, especially when gamma encoding is involved. As far as I know, there is no standard prescription for how to gamma-encode negative values. For any negative x value, one possibility is to compute G(x) = -1 * G(-x)… The problem does not exist in linear encoding, though.
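The mirrored encoding mentioned above, G(x) = -1 * G(-x), can be sketched as follows. This is a hypothetical illustration using a pure 2.2 power law for G; real TRCs such as the sRGB curve also have a linear toe near zero:

```python
# Sketch of mirrored gamma encoding for negative values:
# for x < 0, encode as -G(-x), where G is the ordinary positive-
# domain curve.  A pure 2.2 power law stands in for G here.

GAMMA = 2.2

def encode(x):
    if x >= 0.0:
        return x ** (1.0 / GAMMA)
    return -((-x) ** (1.0 / GAMMA))

print(encode(0.5))   # ~0.7297
print(encode(-0.5))  # ~-0.7297, the mirror image of the positive branch
```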


#43

There’s the rub. If I have an image A that has Y values > 1 and separate it into two components, where image B is 0-1 and image C is 1+, what if the conversion of B generates 1+ values? If I converted A directly, which 1+ values would be HDR and which would be a side effect of the conversion?


(Glenn Butcher) #44

Oooh, not to distract from a really interesting exchange, but you just uncovered the reason I was seeing a “correction” to some out-of-bounds nastiness I’d seen using the wavelet denoise tool in the dcraw pipeline. I’d use libraw/dcraw to open the image with a wavelet denoise, and assign the camera profile. The display image would show what looked like tone reversals in the deepest shadows. Then I’d apply the colorspace tool to convert to the Rec2020 working profile, and the reversals would go away. Would that be due to application of the rendering intent?

Okay, back to RGB -> XYZ…


(Elle Stone) #45

Hmm, well, my vocabulary perhaps has caused confusion. Let me try again. The following terminology might not be generally accepted but the distinctions need to be made in order to talk about “unbounded” RGB image editing. So if someone knows of “official terminology”, please let me know! But for purposes of answering @afre’s question, here are some definitions and proposed terminology:

Given a specific RGB working color space ICC profile such as the sRGB ICC profile:

  • “In display range”: all RGB channel values are between 0.0f and 1.0f inclusively. These colors are “in display range”: (0.5, 0.75, 0.01), (0.5, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0).

  • “Out of display range”: At least one RGB channel value is either greater than 1.0 or less than 0.0. So these colors are out of display range: (0.5, 0.75, -0.01), (0.5, 1.01, 0.2), (0.5, 1.01, -0.2), (1.2, 1.2, 1.2), (-1.0, -5.0, 15).

  • “In gamut”: All RGB channel values are >=0.0. So (0.0, 0.0, 0.0), (0.5, 0.75, 0.01), and (25.0, 50.0, 3.75) are all “in gamut” colors, but the last color is “out of display range”.

  • “Out of gamut”: At least one RGB channel value is less than 0.0f. One or two channel values might also be greater than 1.0, but the color is not “out of gamut” unless at least one channel value is less than 0.0. So these colors are “out of gamut”: (0.5, 0.75, -0.01), (0.5, 1.01, -0.2), (-1.0, -5.0, 15). But these colors are not “out of gamut” though they are “out of display range”: (1.2, 1.2, 1.2), (0.0, 0.0, 25.0).

  • HDR colors that are also “in gamut”: At least one RGB channel value is greater than 1.0 and none of the channel values are less than 0.0. Please note that the “Y” value of an HDR color can easily be less than 1.0. For example the sRGB color (0.0, 0.0, 5.0) is an HDR color, but the Y value is only 0.30304 on a scale from 0.0 to 1.0 (30.304 on a scale from 0.0 to 100.0 as ArgyllCMS xicclu uses):

    $ xicclu -ir -pX sRGB-elle-V2-g10.icc
    1 1 1
    1.000000 1.000000 1.000000 [RGB] -> MatrixFwd -> 96.420288 100.000000 82.490540 [XYZ]
    0 0 1
    0.000000 0.000000 1.000000 [RGB] -> MatrixFwd -> 14.305115 6.060791 71.392822 [XYZ]
    0 0 5
    0.000000 0.000000 5.000000 [RGB] -> MatrixFwd -> 71.525574 30.303955 356.964111 [XYZ]

  • Well, that leaves one last category of colors, which are HDR colors that are also “out of gamut” with respect to the specified RGB working space. Such colors require having at least one RGB channel value that’s <0.0 and also at least one RGB channel value that’s >1.0.
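The xicclu numbers above can be reproduced without ArgyllCMS: Y is just the middle row of the profile’s RGB -> XYZ matrix applied to the channel values. A rough sketch, with the D50-adapted sRGB luminance row rounded from the xicclu output above (note xicclu reports Y on a 0-100 scale):

```python
# Reproduce the xicclu Y values above.  The weights are the middle
# (Y) row of the D50-adapted sRGB RGB->XYZ matrix, rounded from the
# xicclu output; xicclu reports the same numbers on a 0-100 scale.

Y_ROW = (0.222491, 0.716888, 0.060608)

def Y(rgb):
    return sum(c * w for c, w in zip(rgb, Y_ROW))

print(Y((0.0, 0.0, 5.0)))  # ~0.30304: "HDR" by channel value, yet Y < 1
print(Y((1.0, 1.0, 1.0)))  # ~1.0
```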

As @Carmelo_DrRaw notes, when converted from the specified RGB color space to XYZ, if the “Y” value is greater than 1.0, the color is by definition HDR. But I’ve suggested that the only requirement for in-gamut HDR colors is that at least one channel value be greater than 1.0, which means an HDR color might easily have a Y value that’s less than 1.0, in which case sometimes (I’m not sure about always) the color won’t be HDR with respect to some other RGB color space. However, I think it makes sense to say that the sRGB color (0.0, 0.0, 5.0) is a high dynamic range color with respect to the sRGB color space, even though the Y value for this color is less than 1.0.

In the xyY color space it’s really easy to tell whether a color is out of gamut with respect to a given RGB matrix color space: on the xy plane, draw a triangle connecting the xy values of the color space’s primaries. All colors that fall outside the triangle are out of gamut with respect to the specified color space. Here are some pictures to illustrate:

Notice that in the above image, the white point is D50 even for the sRGB color space, because we are dealing with ICC profiles, not the color spaces specified by the color space specs.

All ProPhotoRGB colors can be encoded in the sRGB color space, but doing so for the colors outside the sRGB xy triangle requires using RGB channel values that are <0.0 (and hence out of gamut with respect to sRGB).

Notice that in the above image, ProPhotoRGB’s reddest red is of course out of gamut with respect to the sRGB color space (it’s outside the sRGB “triangle”) and also HDR with respect to the sRGB color space (it’s brighter than sRGB’s reddest red). I’m assuming it really does make sense to talk about HDR colors with Y values less than 1.0. I think it does: in HDR editing, an RGB channel value of 1.0 is just another channel value on the way from 0 to whatever, and luminance and/or RGB channel values greater than or less than 1.0 in parts of the image don’t make some parts of an HDR image “HDR” and other parts “not HDR”.

In any given RGB color space, “in gamut” just means the xy values are within or on the triangle defined by the xy values for the color space primaries. “In display range” means the colors are not only “in gamut” (no negative RGB channel values) but also all RGB channel values are <=1.0.
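The triangle test just described can be sketched as a plain point-in-triangle check on the xy plane. A minimal illustration, using the familiar sRGB primaries’ xy chromaticities; any other matrix space would substitute its own:

```python
# Hypothetical sketch of the "triangle test" on the xy plane:
# a chromaticity is in gamut for a matrix RGB space iff it falls
# inside (or on) the triangle of the space's primaries.
# sRGB primaries' xy values are used here; swap in others as needed.

SRGB_XY = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B

def cross(o, a, p):
    """Signed area test: which side of line o->a does p fall on?"""
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def in_gamut_xy(p, prims=SRGB_XY):
    r, g, b = prims
    signs = (cross(r, g, p), cross(g, b, p), cross(b, r, p))
    # Inside (or on an edge) when the three signed areas agree in sign.
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_gamut_xy((0.3127, 0.3290)))  # D65 white point -> True
print(in_gamut_xy((0.7347, 0.2653)))  # ProPhoto red primary -> False
```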

The above images are from this article: https://ninedegreesbelow.com/photography/display-referred-scene-referred.html.


#46

Thanks @Elle for your patience and excellent explanation! I am now a herding dog for pixels.


Source: Martin Pot, CC BY 3.0.


(Andrew) #47

And thanks also from me for your patience and the time you spend on posts like this.


(Elle Stone) #48

That seems like a good definition of image editing - herding pixels around until they finally take their place as and where you want them to be :slight_smile:


(Brien Dieterle) #49

Thank you @Elle, for this great explanation. I was trying to constrain my pixels to 0-1 but realized I only needed to worry about negative values.


(Glenn Butcher) #51

@troy_s, I thought the same thing for a while, took a bit to shake it off, but color gamut and the range of available tone values between the encoding for black and the encoding for white are not related as you describe. When I started this thread, it was in part to recognize that difference and accommodate tone range in the processing pipeline.

It was this realization that set me straight: an 8-bit image can contain data that conforms to a ProPhoto profile. What it doesn’t have that the equivalent 16-bit image has is 65280 more possible values between black and white. It’s like comparing two 1-meter rulers: one divided into centimeters, the other into millimeters. Within that same ruler is a value for the reddest-red, the greenest-green, and the bluest-blue that a particular camera, scanner, display, or printer can present, and that’s part of the definition of color gamut.

The references to “out-of-bound”, “oodr”, and “HDR” exist because we don’t have a similarly consistent nomenclature to define tone range, which is related to the encoding, be it 8- or 16-bit integer, or floating point. “Gamut” was appropriated long ago by the color scientists, and is theirs to use with abandon… :smile:


(Glenn Butcher) #53

I don’t see how ICC profile-based color management is limited to the “display reference”. I have a calibrated profile for my camera that is in ICC format, and has linear gamma TRCs. From there, I can convert to a linear gamma working profile and edit to my heart’s content, “scene-referred”. I’m a rather aged computer scientist who is only recently learning digital imaging, and I’m not seeing how using ICC profiles keeps me from editing the image data in its original linear range.

The thread is about floating point data formats, and the use of the ranges below 0.0 and above 1.0 to retain tones pushed out-of-bounds in editing, for passing from one tool to another. If I understand correctly what I’ve read about scene-referred image editing, talking about data bounds in its context is quite relevant.


(Glenn Butcher) #55

Well, integer representations of RGB values are bounded by the number of bits: 0-255 for unsigned 8-bit integers and 0-65535 for unsigned 16-bit integers, two of the most prevalent data formats in image processing software. By convention, floating point bounds are typically 0.0-1.0, I believe because that’s where the best precision is found in floating point representations. I’m not making this up; this is what I find in the imaging software I’ve encountered. And this is not about the color gamut, it’s about how computers are organized.
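Those bounds, and the usual normalization between integer and floating point encodings, can be sketched in a few lines. This is a simplified illustration with hypothetical helpers; real pipelines must also decide how to round and whether to clip:

```python
# Sketch of the bounds of common image encodings and the usual
# normalization onto the nominal 0.0..1.0 float range.
U8_MAX = 2**8 - 1     # 255
U16_MAX = 2**16 - 1   # 65535

def to_float(v, vmax):
    """Map an unsigned integer code value onto 0.0..1.0."""
    return v / vmax

def to_int(f, vmax):
    """Map a 0.0..1.0 float back to an integer code value (no clipping)."""
    return round(f * vmax)

print(to_float(65535, U16_MAX))       # 1.0
print(to_int(0.5, U8_MAX))            # 128
print(U16_MAX + 1 - (U8_MAX + 1))     # 65280 more values in 16-bit
```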

Sorry, yes, a transfer function. Tone Response Curves are specified in ICC profiles mainly to communicate gamma transfer.

Ah, now I’m beginning to understand your irritation. Yes, I get the point that data bounds are mainly associated with display reference. But in any digital imaging I know of, there are input bounds defined by the encoding range of the sensor’s analog-to-digital converters. If not, highlights wouldn’t clip at the upper bound of the ADC’s range.

The verb I used was “capture”, not “transform”

From the links you’ve presented, I assume you have a motion picture/video background. Well and good, but I’m trying to have a conversation in this thread about doing in still image software the very thing you cherish in scene-referred data: unbounded channels. I have actually learned a good bit reading from the links you’ve provided, but you’re not reciprocating by attempting to understand the context and legacy behind color management in still image software. Until you demonstrate the motivation to have a constructive conversation about it, I’m done.


#56

My goal was merely to state that fictional terminology is not going to help anyone. Gamuts are extremely well defined within any given encoding system. The problem with the discourse in this thread is that it essentially shoehorns a scene-referred model into display-referred systems. This doesn’t work, for obvious reasons. Try adjusting a middle grey reference value of 0.18 up seven stops, for example: 2^7 * 0.18 is, well… not terrific in a display-referred system.
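The arithmetic in that example is worth spelling out; a trivial sketch:

```python
# Middle grey pushed up seven stops: each stop doubles the value.
middle_grey = 0.18
stops = 7
value = middle_grey * 2 ** stops
print(value)  # 23.04 -- nowhere near a 0.0..1.0 display-referred range
```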

That said, if you would like to try a proper modern scene-referred application designed for photographic and physically plausible models of light, Nuke offers a non-commercial edition.

For a person with software experience, the idea is essentially Model / View: the data always exists in the reference space as scene-referred linear data at 32-bit float, and there can be various views of that data. The view handles the output transform for the “aesthetic” version. By default, Nuke displays the sRGB OETF, with 0.0 scene-referred mapping to 0.0 display-referred, 1.0 scene-referred mapping to 1.0 display-referred, and the OETF function handling the transfer between those values.

If you change the view to ACES or another view transform, the scene-referred range that is captured and mapped will vary. By default, for example, I believe the 48 nit ACES shaper maps approximately 6.5 stops above middle grey, which is approximately code value 16.291 scene-referred linear.

Again, “Unbounded” is not a thing. It never was. It is a figment of some parroting from a goofy idea GIMP modeled their architecture around.

The verb I used was “capture”, not “transform”

If you read that link, you will indeed see that some cameras will capture negatives as a result of hardware and software within the camera itself.


#57

@troy_s I appreciate your contribution to the topic, though it is a glass-half-empty take on the current discussion. What you describe is definitely a different paradigm, but it might mean that I would need to use other apps to apply it. If you don’t mind, I would be interested in seeing your own or recommended processing workflow. Maybe start a new thread.


(Elle Stone) #58

I wholeheartedly agree that “HDR” is a term that is used, overused, and misused in too many different ways. For example, the up-and-coming “HDR” displays can make an image look as bright as the great outdoors on a sunny day, and this particular usage of “HDR” is market-speak for “Higher Dynamic Range than whatever outdated display you are looking at right now”.

“High Dynamic Range” requires a context, as in one “image/scene/display device/capture device” has a higher dynamic range than another “image/scene/display device/capture device”.

So to say “high dynamic range” without a context is both misleading and ambiguous. “Display range” doesn’t mean “low dynamic range”. It just means that the RGB channel values sent to the screen (for the display profile) or modified in an image editor in an RGB working space only go from 0.0 to 1.0 floating point. This says absolutely nothing about the actual dynamic range of the display itself.

So I absolutely agree that my choice of the term “HDR” was ill-advised. Here is revised terminology: Given a specific RGB working color space ICC profile such as the sRGB ICC profile:

“In display range”: all RGB channel values are between 0.0f and 1.0f inclusively. These colors are “in display range”: (0.5, 0.75, 0.01), (0.5, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0).

“Out of display range”: At least one RGB channel value is either greater than 1.0 or less than 0.0. So these colors are out of display range: (0.5, 0.75, -0.01), (0.5, 1.01, 0.2), (0.5, 1.01, -0.2), (1.2, 1.2, 1.2), (-1.0, -5.0, 15).

“In gamut”: All RGB channel values are >=0.0. So (0.0, 0.0, 0.0), (0.5, 0.75, 0.01), and (25.0, 50.0, 3.75) are all “in gamut” colors, but the last color is “out of display range”.

“Out of gamut”: At least one RGB channel value is less than 0.0f. One or two channel values might also be greater than 1.0, but the color is not “out of gamut” unless at least one channel value is less than 0.0. So these colors are “out of gamut”: (0.5, 0.75, -0.01), (0.5, 1.01, -0.2), (-1.0, -5.0, 15). But these colors are not “out of gamut” though they are “out of display range”: (1.2, 1.2, 1.2), (0.0, 0.0, 25.0).

“In gamut but out of display range” colors (well, that’s a longish phrase, which is why I previously used “HDR” as shorthand for it - suggestions for improved terminology are very welcome): At least one RGB channel value is greater than 1.0 and none of the channel values are less than 0.0. Please note that the “Y” value of such a color can easily be less than 1.0.

That leaves one last category of colors, which are colors that are both “out of display range” and also “out of gamut” with respect to the specified RGB working space. Such colors require having at least one RGB channel value that’s <0.0 and also at least one RGB channel value that’s >1.0. “Out of display range and also out of gamut” again is a longish phrase, and again, suggestions for improved terminology are very welcome.
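These four categories can be decided from the channel values alone. A minimal sketch of the revised terminology as a classifier (a hypothetical helper, not taken from any of the applications discussed):

```python
# Hypothetical classifier for the four categories above: "gamut"
# depends only on negative channels, "display range" on both ends.

def classify(rgb):
    out_of_gamut = min(rgb) < 0.0
    above_one = max(rgb) > 1.0
    if out_of_gamut and above_one:
        return "out of display range and out of gamut"
    if out_of_gamut:
        return "out of gamut"
    if above_one:
        return "in gamut but out of display range"
    return "in display range"

print(classify((0.5, 0.75, 0.01)))   # in display range
print(classify((25.0, 50.0, 3.75)))  # in gamut but out of display range
print(classify((0.5, 0.75, -0.01)))  # out of gamut
print(classify((-1.0, -5.0, 15.0)))  # out of display range and out of gamut
```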

Out of curiosity, what specifically do you mean by “an HDR capable display”?

I think you just said that RawTherapee/darktable/PhotoFlow/GIMP/etc are utterly worthless tools? Worthless tools for what purpose? This thread is not about using any of these tools for processing an image file with eg 20+ stops of RGB data encoded using channel data that extends from (0.0f, 0.0f, 0.0f) up to whatever the limits are for the file type used to hold the data.

As @ggbutcher already noted, the original topic for this thread is what to do about floating point RGB channel values that are less than 0.0 or greater than 1.0, specifically in terms of how to handle such values in the course of “display-referred” editing, which as @ggbutcher also noted, doesn’t mean you can’t edit using linear gamma RGB, and also doesn’t mean your data isn’t scene-referred .

In the context of this thread, the starting image already is a display-referred image, meaning the channel values are between 0.0f and 1.0f, or else can easily be brought into the display range by means of an exposure adjustment. The starting image is often the result of interpolating a camera raw file. And the eventual goal is usually to output an image that can be printed or displayed on a screen.

When using floating point processing, “out of display range” channel values can occur during editing in any given RGB color space, and also when converting an image from a larger RGB color space to a smaller RGB color space. These channel values need to be dealt with, one way or another.

One way to deal with these “out of gamut/out of display range” channel values is to clip them immediately, which of course throws data away. The other way is to carry them along using “unbounded” floating point ICC profile channel values, and let the user deal with them as the user sees fit, according to the user’s editing goals.

There are always data bounds. The file format used to encode an image imposes data bounds. For floating point images, the way in which floating point numbers are encoded on a particular machine imposes data bounds. The RGB color space imposes a gamut boundary of >=0.0 for all channels. Preparing a file for display whether on a screen or as a print imposes another set of data bounds. Even the result of high bit depth scene-referred editing eventually has to be output as display-referred values.

Where did you get the idea that ICC profile color management was “largely designed for graphic design work”? Hmm, backing up a bit, what do you mean by “graphic design work”?

With founding members Adobe Systems, AGFA, Apple Computer, Eastman Kodak Company, Microsoft Corporation, Silicon Graphics, and Sun Microsystems, and current members including Canon, Nikon, and Sony, it would be rather incredible to assume that photography and color display on screens are not, and were not, main concerns of the ICC.

Perhaps you were a bit misled by the fact that the original discussions for forming the ICC were under the auspices of “Forschungsgesellschaft Druck e.V. (FOGRA), the German graphic arts research institute in 1993”, quoting from an earlier release of the ICC specs. People from many industries concerned with color reproduction attended these discussions. FOGRA itself is concerned centrally with printing images in general, not just “graphic design” images.

My understanding is that a while back members of the movie/VFX industry did ask the ICC for better support for things like encoding high dynamic range images and lossless round-tripping from input device spaces such as camera spaces, to working spaces, and back. The ICC wasn’t forthcoming in a timely fashion, and OCIO and ACES have filled (and more than filled) some of the gaps in ICC profile color management. Which does not mean that OCIO and ACES, separately or together, can fully meet the needs of people editing photographs or of print-oriented workflows, or even all the needs of the movie/VFX industry. Read through the OpenEXR mailing list - these people do also use ICC profile color-managed editing applications, right alongside Nuke etc. Also, with iccMAX the ICC has taken steps to recover from its unfortunate delay in responding to the actual needs of people who need to manage color in contexts other than print output.

Indeed one of the GEGL/GIMP devs did independently come up with the idea that out of gamut colors could be encoded using channel values that are outside the display range, which I think is an incredibly commendable and intelligent example of “thinking outside the box”.

However, even apart from GIMP, “unbounded” indeed is a “thing”. “Unbounded” is a term from Marti Maria’s paper “Unbounded Color Engines”:

http://www.littlecms.com/CIC18_UnboundedCMM.pdf

I do seem to be responsible for popularizing the term, or at least my article is the first article returned when searching the internet for the terms “unbounded icc profile”. So if you don’t like the word “unbounded”, I’m happy to take at least part of the blame:

https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html

As a historical aside, the ICC mentioned the need for such conversions that don’t clip out of gamut and out of display range channel values, back in 2006:

http://color.org/ICCSpecRevision_02_11_06_Float.pdf

One of my articles refers to the above pdf, and puts it into some historical and “color science” context:
Color Science History and the ICC Profile Specifications
https://ninedegreesbelow.com/photography/icc-profile-negative-tristimulus.html
Overview: A lot of people assume that the ICC prohibition against negative tristimulus values is based on some kind of physical reality grounded in human perception of color. Actually, exactly the opposite is true: negative primaries are used to describe human color perception. Furthermore, an adequate characterization of many device primaries also requires negative tristimulus values.


(Guillermo Espertino) #60

It’s funny that you are so picky about the term “graphic design” here when you admitted to using terms in a loose/free manner while explaining your own ideas.

But still: graphic design is the discipline that shaped the print industry as we know it. The necessity of a consistent color workflow across different devices is an issue rooted in graphic design.
You may argue about the role that photography played, of course, but ICC colour management was definitely shaped by DTP, which is what graphic design turned into when computers started to be used.

ICC colour management is definitely a workflow for display-referred work. It considers imagery in display-referred formats (mainly integers with limited precision) and dictates how to reproduce colour consistently between devices.
That workflow proved simple and efficient enough for graphic-design-oriented applications, but it’s not suited to the scene-referred model and HDR displays.

Despite everything said here and in mailing lists when this idea of “unbounded sRGB” editing came up, one fact remains: scene-referred and display-referred models are different models. You don’t make display-referred images scene-referred just by removing clips and clamps and linearizing the transfer curve.
Unbounded sRGB may work as an intermediate buffer for certain, very specific display-referred applications, but it violates a very basic aspect of the RGB model: RGB values aren’t light emission anymore, they are something else (and something that doesn’t seem to be usable in a scene-referred workflow).

So, before getting picky again about the terms used or what out-of-gamut means or not, maybe we should discuss why exactly this conflated model exists, what it comes to solve, and why it is preferred over the proven scene-referred model that is compatible with both scene- and display-referred imagery and is also future-proof for HDR, which is already here.
In other words, why reinvent the wheel? Why pervert the RGB model when it’s way more efficient to have a wide-gamut reference and stick to it? What’s the gain?


(Elle Stone) #61

@troy_s , @gez -

Is the premise of your arguments that an image editor that uses ICC profile color management must clip all RGB channel values that are outside the display range?

Currently GIMP/RawTherapee/PhotoFlow/darktable/etc work at floating point precision.

All of these programs have a plethora of editing algorithms and blend modes that are totally unsuitable for channel values outside the display range, many of which totally destroy the scene-referred nature of the RGB data (assuming it was even scene-referred to begin with, which isn’t always the case).

Nonetheless, even in high bit depth GIMP/RawTherapee/PhotoFlow/darktable/etc it is often useful to retain out-of-display-range colors instead of summarily clipping them, even when the goal is eventually to reduce the final image to fit within the display range of one or another chosen output device.

The trick is knowing what to do to bring colors back into the display range of the RGB working space (and eventually the output device space), when and as needed. Which is where this thread started, because “what to do” depends in part on in what way are the channel values outside the display range of the RGB working space. It’s hard to talk about this sort of thing without coming up with some definitions.

If the distinctions I pointed to and the proposed terminology I provided for talking about out-of-gamut and out-of-display range RGB channel values are useful to someone, in terms of understanding how one might want to deal with various out-of-display-range colors that can be created during image editing, great! If not, then my apologies for the noise!

But please don’t try to argue that it’s somehow wrong that GIMP/RawTherapee/PhotoFlow/darktable/etc don’t automatically clip all RGB values to fit within the display range of the RGB working space.

Or maybe that’s not your argument at all. Frankly I don’t know what either one of you are really talking about.

@gez - how is “unbounded sRGB” relevant to this discussion? For many purposes sRGB is a poor color space for image editing.


(Guillermo Espertino) #62

The problem with this reasoning is that you seem to be taking a display-referred, print-oriented color workflow and assuming that removing the clips and working with floats will bring some benefit.
You point at clipping as if it were something inherently bad, but it’s not for an ICC-based workflow.
When you print or when you display those images, you definitely need to deal with the limits of the output.

Here you want to stick with ICCs, but you want to ignore the fact that the workflow is output-referred (check all the rendering-intent documentation and tell me where it says otherwise).

So there is this proven color workflow, designed by the VFX industry to retain colour and light intensity, but you won’t use it. You want to take this legacy color workflow, designed mostly by the print and DTP industry, and modify it to allow the cases the other model already allows (retaining dynamic range and colour latitude by using a wide-gamut reference and scene-referred values).

The question remains. What’s the gain? What’s the purpose?

The whole idea of using RGB as a PCS (another term taken from the ICC) and assuming that the data is suitable for editing is the problem. That’s why I mentioned it. Otherwise, why are you discussing the effects of data outside the display bounds, and negatives?


(Mica) #64

This is a reminder to everyone to be civil; the tone of some of the posts in this thread is getting out of hand.


(Alberto) #65

@troy_s, @gez, for the benefit of the illiterate among us (i.e. me), could you give some pointers for further reading? that would be quite helpful in moving things forward, IMHO. thanks!