Unbounded Floating Point Pipelines

Please explain the difference between “out of gamut” and “negative”.

How did you derive it?

If you derived it by overshooting due to sampling lobes, that isn’t because the sampling extended the saturation of the value.

But I wasn’t trying to simulate lighting indirectly.

That’s terrific! You also don’t seem to mind when the colours on screen don’t represent the respective emission in the scene. I’m fine with that.

I think @anon11264400 was clear enough, so I won’t add much more.
I’ll only mention that you can always choose a wider-gamut reference if you need the extra gamut that, in your examples, justifies those negative, meaningless RGB values.
There’s no reason for those negatives (from out-of-gamut colours) in the processing pipeline. Clip them or switch to a wider gamut reference and you won’t have to worry about simple operations such as multiplication producing weird results.
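
Here’s a minimal numpy sketch of that, assuming linear float data (the matrices are the standard D65 RGB-to-XYZ ones; the sample colour is made up): a colour that needs a negative channel in Rec.709 becomes plain positive data once re-referenced to the wider Rec.2020 primaries.

    import numpy as np

    # Standard D65 RGB -> XYZ matrices for linear Rec.709 and Rec.2020.
    M709 = np.array([[0.4123908, 0.3575843, 0.1804808],
                     [0.2126390, 0.7151687, 0.0721923],
                     [0.0193308, 0.1191948, 0.9505322]])
    M2020 = np.array([[0.6369580, 0.1446169, 0.1688810],
                      [0.2627002, 0.6779981, 0.0593017],
                      [0.0281103, 0.0287122, 1.0609851]])

    # A made-up saturated green: out of gamut (negative R) in Rec.709...
    rgb709 = np.array([-0.05, 0.5, 0.1])

    # ...but all-positive once expressed against the Rec.2020 primaries,
    # so multiplications and the like behave sensibly again.
    rgb2020 = np.linalg.inv(M2020) @ (M709 @ rgb709)
    print(rgb2020)  # roughly [0.14 0.46 0.13], no negatives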

I don’t know what the “it” of “How did you derive it” means. But the negative channel values were generated by increasing Chroma. Is this a case of what you describe as “sampling extended the saturation of the value”?
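
For illustration, here’s a crude numpy stand-in for what a Chroma slider does (real editors work in LCh or similar; this just scales the colour away from its own luminance, so treat it as a sketch):

    import numpy as np

    # Rec.709 luma weights, standing in for "Lightness stays put".
    W = np.array([0.2126, 0.7152, 0.0722])

    rgb = np.array([0.08, 0.15, 0.60])  # an in-gamut linear blue
    Y = W @ rgb                         # its luminance

    # "Increase Chroma": push the colour away from neutral at constant Y.
    boosted = Y + 2.5 * (rgb - Y)
    print(boosted)  # roughly [-0.05 0.12 1.25]: R went negative
                    # (and B went above 1.0) with no sampling involved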

Also what does “sampling lobes” mean?

I don’t understand. When I start editing an image, I start with a scene-referred rendering of the raw file. But I don’t always stop right there. Sometimes, often, I continue editing, changing hue, raising or lowering Lightness or Chroma. So yes, at that point the image on the screen doesn’t match the scene that I photographed.

Or do you mean the “scene” is the image being edited?

Despite what it might seem like, I’m not hostile to OCIO. I did make an OCIO config file to use with Blender and Krita. That was a lot of work and required delving into the OCIO documentation. Everything worked, but Blender has an awkward interface for photographic editing and not nearly the range of editing algorithms as GIMP, PhotoFlow, darktable, RawTherapee. And in Krita, well, I didn’t see any advantage to using OCIO over ICC profile color management given that Krita provides both, and the “lossless” aspect of editing using Krita doesn’t depend on using OCIO.

I’m also not hostile to an ACES workflow. I’ve read an awful lot of documentation on ACES (the official documentation and various tutorials). I can see the advantage of the full-blown OCIO/ACES workflow for making movies. There are people who make their living creating “look luts” for specific movies and it seems like a great career for someone who has the skills and the artistic talent.

You mention using whatever tonemapping you want in your OCIO config, and being able to change them on the fly, in real time without touching your data. Well, my data is the original scene-referred interpolation of the raw file, and I don’t modify that file. And if I did, it would be easy enough to do the rendering again.

But moving on to the question I really want to ask, and have wanted to ask an OCIO person for a long time, while also trying to answer your question about “when do I apply my manual tonemapping”. Here’s a starting image, actually a luminance conversion to black and white of the original interpolated raw file. My apologies, it’s not a great photograph before or after I was done (well, I’m not a great photographer, so that’s expected):

[image: before-tone-mapping]

Here’s the “after tone mapping” image:

[image: after-tone-mapping]

Here’s a screenshot of the layer stack:

[image: layer-stack]

So working around to my actual questions:

When you say “You can use whatever tonemapping you want in your OCIO config, you can change them on the fly, all in real time without touching your data” - where does this tonemapping in the OCIO config come from?

How would I make a tone mapping for the OCIO config that produces the same final image as the layer stack that I used to tone-map this image?

This is what has always puzzled me about the possibility of using an OCIO workflow similar to what is done for movie-making:

Where do those LUTs come from?

Can a single LUT handle the same changes that can be made using masks and layers to confine tonality edits to specific portions of the image?

And if a given LUT will only be used for one image, how time-consuming is it to make that LUT?
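
For concreteness about what I mean by a LUT: my understanding is that it’s just a baked table of input-to-output values, stored for example in a .cube file that OCIO can reference from a FileTransform in the config. Here’s a Python sketch of how such a file could be generated from a global tone curve (the curve here is an arbitrary placeholder, not anything anyone has recommended):

    import numpy as np

    size = 1024

    def curve(x):
        # Placeholder tone curve: a gentle S-curve on [0, 1]. A real
        # "look" would bake whatever mapping the colourist designed.
        return x * x * (3.0 - 2.0 * x)

    x = np.linspace(0.0, 1.0, size)
    y = curve(x)

    # Write a 1D LUT in the common .cube text format.
    with open("tonemap.cube", "w") as f:
        f.write("LUT_1D_SIZE %d\n" % size)
        for v in y:
            f.write("%.6f %.6f %.6f\n" % (v, v, v))

Note that a table like this is a global, per-pixel mapping with no notion of position, which is exactly why I’m asking how it could reproduce masked, local edits.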

About whether I was tone-mapping “blind” or something, no, here’s the basic procedure that I used: the ground was much darker than the sky, so I made a copy of the original image, used Levels to add something like +2 stops of exposure to the copy, added a solid black mask, and painted on the mask to raise the ground tonality as desired. So there was never a time when I couldn’t see the entire image. I wasn’t editing blind.
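
In pixel arithmetic, that layer stack boils down to something like this sketch (assuming linear-light float data; mask stands for the hand-painted layer mask):

    import numpy as np

    def masked_exposure(img, mask, stops=2.0):
        """Blend a +N stop copy back over the original through a mask.

        img  : linear RGB floats, shape (..., 3)
        mask : floats in [0, 1], painted white where the ground gets
               lifted, left black where the sky stays untouched
        """
        pushed = img * (2.0 ** stops)        # the +2 stop copy
        m = mask[..., None]                  # broadcast over channels
        return img * (1.0 - m) + pushed * m  # ordinary masked blend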

If I had wanted to tone-map the colours.exr image, I would have started by bringing the data down enough stops to be able to see all the colors before I started editing, as @samj did. This isn’t editing blind. If I were working with an image with 20+ EV of dynamic range, well, I just don’t do that sort of editing, so I don’t have a clue what I’d do.

There is a great big difference between bashing ICC profile color management and offering to open the door to another type of image editing. I think a lot of people on this list would be very interested in learning about OCIO, LUTs, and so on. But this constant bashing of ICC profile color management doesn’t seem to be a good way to approach the goal of persuading people of the value of OCIO/ACES.

I just tried to steer the conversation back into an area that actually focuses on using OCIO, by asking real questions about actual editing of real images. You seem to be intent on moving the discussion back into the realm of bashing the very idea of editing using GIMP, or ICC profile color management, or both. Don’t you think it might be more useful to actually try to teach people OCIO instead of continuing on this very unproductive track of bashing, bashing, bashing?

I already understand that you don’t like my terminology. But the distinctions I was pointing to hold whether you are using OCIO, ICC, “color management by device calibration”, or just shoving numbers around in a spreadsheet:

[image: channel-data]
[image: chromaticity diagram with the Rec709 primaries]

You’ve hinted that your major objection to negative channel values lies in how they are generated. But regardless of how negative channel values might be produced, the distinctions I pointed to are all just mathematical. The dreaded “shouldn’t be multiplied” colors fall into areas marked “1” and “2” on my diagram. These are problem values that must be dealt with at some point. Your way is to either start with a larger color space, or else clip. My way is to start with whatever color space seems convenient for whatever given editing goal, and keep track of the colors I’m generating, via sampling and via soft proofing and via clip warnings provided by my image editor.

Instead of continuing to express your dislike of ICC profile color management and unbounded this and that, why not try helping the people on this list to discover what can be done with OCIO?


Going back a couple of pages, sorry, …

That’s a limitation of a tri-stimulus colour model. Some real-world colours can’t be represented by any combination of red, green and blue physical lamps. If our goal is to represent the colours of a real-world scene, a model that permits only positive values of physical lamps is insufficient.

If we want to represent all real-world colours in a tri-stimulus model, we must either use imaginary lamps (outside the CIE horseshoe), or permit negative values, or both.

None of these solutions (imaginary lamps, negative values, or ignoring some real-world colours) is satisfactory. But a model must do one of these.
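
A quick numerical illustration, using the standard XYZ-to-linear-Rec.709 matrix and the (rounded) CIE 1931 chromaticity of monochromatic 520 nm light:

    import numpy as np

    # Standard XYZ -> linear Rec.709 matrix (D65 white).
    XYZ_TO_709 = np.array([[ 3.2409699, -1.5373832, -0.4986108],
                           [-0.9692436,  1.8759675,  0.0415551],
                           [ 0.0556301, -0.2039770,  1.0569715]])

    # Monochromatic 520 nm light sits at roughly xy = (0.0743, 0.8338)
    # on the spectral locus. Build XYZ with Y scaled to 1.
    x, y = 0.0743, 0.8338
    XYZ = np.array([x / y, 1.0, (1.0 - x - y) / y])

    print(XYZ_TO_709 @ XYZ)
    # roughly [-1.30  1.79 -0.08]: no positive mix of the Rec.709
    # "lamps" can match this perfectly real colour.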


Negative values were acknowledged but how an OCIO workflow would deal with them in contrast to the “ICC model”, if you will, wasn’t elaborated on. I think the crux of the discussion is figuring out each other’s paradigm and finding common ground upon which we can have a productive discussion.

Yes, @afre, that’s what I’m trying to pin down. @gez and @anon11264400 seem to be advocating a model that doesn’t permit negative values. Therefore it must either use imaginary primaries, or be unable to represent all real-world colours.

And that’s a valid point of view, of course. But I’m trying to pin down if that is really what they are saying.

“…seem to be advocating a model that doesn’t permit negative values. Therefore it must either use imaginary primaries, or be unable to represent all real-world colours.”

Nowhere did I state this. There is a trade-off between the spectral locus extremities and the ability to manipulate imagery under a tri-light system.

  • Pick a reference space that works well for pixel manipulation and covers your gamut needs.
  • Clamp negative values at 0.0 (whether they arise from out-of-gamut colours or from the overshoot lobes of a sampling algorithm / convolution, for example) when manipulating / compositing, since in the vast majority of instances those values are undefined non-data in a pixel-manipulation design (see the sketch after this list).
  • Use a proper view transform that accurately represents the camera capture or desired photographic output, not a display-referred model kludged into trying to work with scene-referred workflows.
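
A minimal sketch of the clamping point, assuming linear float pixels in numpy:

    import numpy as np

    def sanitize(rgb):
        # Negative channel values are undefined non-data here: clamp
        # them to 0.0 before any manipulation / compositing math runs.
        return np.maximum(rgb, 0.0)

    # Why it matters: two negative lobes multiply into a positive --
    # i.e. plausible-looking but meaningless -- result if left in.
    a = np.array([-0.2, 0.5, 0.1])
    b = np.array([-0.3, 0.8, 0.2])
    print(a * b)                      # [0.06 0.4  0.02]  the junk looks valid
    print(sanitize(a) * sanitize(b))  # [0.   0.4  0.02]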

So that includes “change any negative values to zero”, yes? The model doesn’t permit negative values. If the result of an operation is negative, you immediately make it zero.

Okay. The model allows imaginary primaries, I suppose? So that enables it to represent any real-world colour? Is that correct?

[As an aside, an idea for image editors: when out-of-gamut colours are detected, a button is enabled that will “calculate a better colour space (reference space)”, and the computer converts the image to AdobeRGB or ACES or whatever is needed to accommodate the colours.]

Okay. So to summarise those aspects of your model:

  1. Negative values are not allowed. When an operation results in a negative, it is immediately clamped to zero.

  2. Imaginary primaries are permitted, so any real-world colour can be modelled.

  3. However, in practice, that reference space will probably be too large.

  4. Hence, in practice, only colours within the triangle of the primaries can be represented. (But the triangle can be made any size, to accommodate photos or painted images or whatever.)

Is that correct?

Absolutely I am aware. You are intelligent, extremely well-informed, and generous with your time in helping people get started with OCIO color management and using OCIO software. This generosity can be seen for example by your contributions to many different forums and your efforts to improve Blender’s color management. I think you also might have been involved with helping to implement Krita’s OCIO code, though I’m not sure.

Your generosity extends far past public contributions to forums to include time spent in private email exchanges, helping to sort out the confusions of dealing with color management. I never would have figured out the intricacies of setting up Blender without your patient tutoring.

I’ve benefited greatly from discussions with you about color management in general, and so have many other people. But I’ve never come away from our discussions with any conclusion that ICC profile color management can’t be extended - it already has been and is being extended - to work quite capably with high bit depth floating point images with channel data that exceeds 1.0. This of course doesn’t cast any aspersions on the value of OCIO/ACES workflows and color management.

On the contrary, I am saying that clipping is the worst solution, at least if applied before the very final step of saving to the output image format. Instead, an “unbounded” conversion to a wider colorspace seems to me a better solution in order to proceed further with the editing without risking trouble… do you agree with this?

Hmm, I must have been unclear in what I was writing (edit to correct typo, originally I wrote “must not have been very clear” - note to self, don’t just edit part of a sentence!). But actually I was asking Troy whether he thought my tutorial was a completely wrong way to edit an image, given that he advocates something like “clip as you go”.

I sort of thought you liked my tutorial :slight_smile: .

Do I understand correctly that the VFX industry has come up with Rec.2020 and ACEScg as optimal replacements for the widely used sRGB? Would you recommend either one as a default reference color space for photo manipulation?
Expert users can always change the default; for all others we need to provide some default that keeps potential trouble to a minimum…

Sure! But since I was quoted at the top of your reply, I thought it was addressed to me as well… :slight_smile:

Converting to a wider color space is always an option; whether it is the best option depends on what the user plans to do next. But yes, I agree that clipping is often the worst solution. Not always, though: it depends on the source of the out-of-gamut channel values, and again on what the user intends to do next.

In my tutorial the original out-of-gamut channel values were created by deliberately adding a rather large amount of saturation to the layer in the “chroma” group, and immediately reining the saturation in using a layer mask that targeted the general colors to which I wanted to add saturation, and then fine-tuning the mask to further limit the saturation as desired.

The thing is, colorfulness is a combination of lightness and chroma. So in the tutorial I worked back and forth between the chroma group (modifying the layer mask) and the lightness group. At the very end, the lower left corner of the sky had some channel values that were still out of gamut, and I used both hue and chroma changes to prepare those areas for output. The fall foliage still had some small areas where the brightest orange and yellow colors were out of gamut, and those areas I just clipped.

My impression is that this is the only “ingredient” in your list that is missing in current photography-oriented editors… am I right?

No, you didn’t bash my workflow. You bashed my terminology (fictionalized, questionable, worthless, junk, path of folly, nonsensical, goofy, figment of some parroting from a goofy idea, haste to mash a keyboard, rubbish and a disservice, hackery, quackery, and a whole jingo lingo set; display referred kludge, etc).

It would be nice if you would admit that the distinctions I was trying to make with my made-up/fictionalized/etc terminology - which I’ve also drawn on the xy chromaticity diagram for the Rec709 color space - are mathematically valid distinctions and quite independent of what type of color management one might or might not be using. Or else explain why these distinctions aren’t valid in any context whatsoever. But you’ve already admitted that the distinctions do matter:

It matters whether an RGB color’s channel values go below zero. There are two ways this happens:

  • either the color is outside the gamut (xy triangle) of the RGB working space (1 or 2 negative channel values). Please note this doesn’t mean the user can’t see the color on their screen, as the color gamut of the screen might be larger than the color gamut of the RGB working space.
  • or the color is inside the gamut of the RGB color space, but has a negative luminance (3 negative channel values), which is a distinction that I overlooked. In this case the color is physically impossible: “less than 0%” of the ambient light is reflected.

It also matters whether an RGB color’s channel values go above 1.0. One reason this matters is because such colors can’t be displayed on a screen that’s calibrated to match the chosen RGB working space. However, if the gamut of the screen exceeds the gamut of the RGB working space, such colors might, or might not, be displayable on the screen.
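
Stated as code, these distinctions are purely arithmetic. A sketch classifying a single linear float RGB pixel relative to its working space (the wording of the labels is mine):

    import numpy as np

    def classify(rgb):
        rgb = np.asarray(rgb, dtype=float)
        negatives = int((rgb < 0.0).sum())
        if negatives in (1, 2):
            return "out of gamut for this working space"    # areas 1 / 2
        if negatives == 3:
            return "negative luminance: physically impossible"
        if (rgb > 1.0).any():
            return "above 1.0: exceeds a display matched to this space"
        return "in gamut and displayable"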

Changing gears somewhat, @anon11264400, is the following description anywhere near correct and could you clarify where my description goes off the rails?

My understanding is that an OCIO/ACES workflow such as you describe - at least when used in connection with movie production and such - is very much oriented towards output on specific display devices. And the monitor on which the user views the results of applying a LUT to the RGB data is calibrated to match the selected output device, with the monitor itself allowing the user to select which output device to emulate.

In this OCIO/ACES workflow, is ACES the reference color space? Or is it just the storage space? Anyway, there is a LUT transform from the reference color space to the selected output device to which the monitor has been calibrated, and another LUT transform to give a particular “look” to the image. I don’t remember the order in which these LUTs are applied, but I think the “look” LUT is applied first, and then the resulting image is further modified to fit the chosen output device.

You can’t switch references on the fly in a non-destructive workflow, as many operations are chromaticity-dependent. For instance, any operation that does a multiply on RGB values will produce different results in different reference spaces (that’s another reason why the “unbounded” colour gamut encoded in RGB is a bad idea).
For that reason it is preferable to choose the reference carefully beforehand, considering your source material, the possible outputs, etc.
The ACES workflow has several different colourspaces available for archival and processing, designed to be both future-proof and of maximum quality, as the industry’s needs demand.
That doesn’t necessarily mean that you need to use ACES for everything (and I’m pretty sure Troy never said that).
Your own photographic work may have a more suitable reference space you’ll choose wisely.
If you’re only producing graphics for the web today, it would be pointless to go beyond rec.709 as your output will be sRGB.

Choosing the appropriate reference, not keeping undefined data in your processing pipeline, seems the only way to avoid trouble.
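
To put the multiply point in numbers (standard D65 matrices; the two colours are arbitrary): the same per-channel multiplication performed against Rec.709 and against Rec.2020 primaries lands on different colours.

    import numpy as np

    M709 = np.array([[0.4123908, 0.3575843, 0.1804808],
                     [0.2126390, 0.7151687, 0.0721923],
                     [0.0193308, 0.1191948, 0.9505322]])
    M2020 = np.array([[0.6369580, 0.1446169, 0.1688810],
                      [0.2627002, 0.6779981, 0.0593017],
                      [0.0281103, 0.0287122, 1.0609851]])
    TO_2020 = np.linalg.inv(M2020) @ M709   # linear Rec.709 -> Rec.2020

    a = np.array([0.8, 0.3, 0.1])
    b = np.array([0.2, 0.9, 0.4])

    in_709 = a * b                           # multiplied in Rec.709
    in_2020 = np.linalg.inv(TO_2020) @ ((TO_2020 @ a) * (TO_2020 @ b))

    # Both results below are expressed in Rec.709; they differ because
    # per-channel multiplication is chromaticity-dependent.
    print(in_709)
    print(in_2020)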