Unbounded Floating Point Pipelines

Hi @gez,
thanks again for your pointer to cinematiccolor (I have started reading the PDF, but I’m not done yet), and for setting up your test. Now I hope you can also help me understand what I’m seeing :slight_smile: I am using the unbounded-processing branch of RawTherapee, and here is what I get. (NOTE: RT doesn’t support EXR, so I converted your image to a 32-bit float TIFF using GIMP.)

https://filebin.net/58n4s4vj798yweh1/Peek_2018-03-16_10-56.mp4

Here’s the JPEG output I get with the above.

So, what’s going on? As you might have noticed from the various saved versions shown in the file dialog in the video, I played with a lot of stuff to try and understand how what I get relates to your comments, but being fairly ignorant about all this, I’d like to avoid guessing.

Thanks!

@agriggio, that’s very interesting. And I didn’t know there was an unbounded branch, I will try it.

@gez, if you want to explore this “new method”, what software would you suggest starting with, please? I’m not well up on colour science, and I’m used to RawTherapee and, to a lesser extent, GIMP. I would want to re-process some of my photos (from raw of course, Canon .CR2) and see what happens, comparing to my previous efforts. I’m not into shading / rendering / animation. Any tips to get started would be great.

@Elle

Hello,

If you want to keep a ‘maximum’ of colors from your exr image in a jpeg file you may be able to use G’MIC.
Afterwards we can use our artistic gifts to get something good from this beautiful image
 [It’s a joke]

gmic -i colours.exr -n 0,255 -to_rgb -o colours_gmic_norm0-255.jpg
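
(For reference, my reading of those G’MIC options, so double-check against the docs: -n 0,255 linearly normalizes all channel values into the 0–255 range, -to_rgb converts the buffer to plain RGB, and -o writes the result out as a JPEG.)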

:o)


And when do you apply your manual tonemapping, before or after editing?
Are you editing a tonemapped image, or editing without tonemapping and flying blind because you don’t see what’s going on with your colour?

Also, what you said about the canned algorithm tonemapping is not correct. You can use whatever tonemapping you want in your OCIO config, and you can change it on the fly, all in real time, without touching your data. Fully customizable.
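
To make the “change the view, not the data” idea concrete, here is a minimal Python/numpy sketch (the function names and curves are mine, invented for illustration; a real OCIO config declares these as colorspace/view entries). The scene-referred buffer is never modified; only the transform used for display changes:

import numpy as np

# Scene-referred pixels: linear, unclipped, free to exceed 1.0.
scene = np.array([[0.18, 0.50, 4.70],
                  [1.20, 0.02, 0.90]])

def view_standard(rgb):
    # Naive display transform: clip to [0, 1], then a 2.2 gamma.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.2)

def view_reinhard(rgb):
    # Simple global tonemap: compresses [0, inf) into [0, 1) before the gamma.
    return (rgb / (1.0 + rgb)) ** (1.0 / 2.2)

# “Changing the view on the fly”: pick a different display transform at will;
# the underlying scene data is untouched either way.
for name, view in {"Standard": view_standard, "Filmic-ish": view_reinhard}.items():
    print(name, view(scene))
print(scene)  # still the original unbounded values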

I think your words “stfu”, “bullying” and “buddies” are inappropriate and more than a little misleading:

No GIMP dev has ever told me to “stfu”, though sometimes they had to work pretty hard to get me to understand that they finally did understand what I had been trying to explain.

No GIMP dev has ever “bullied” me into writing patches. I’m very proud of the few patches that I’ve written that have been accepted into GIMP’s code base. I’m also amazed at the courtesy and helpfulness of the many GIMP devs who helped me write those patches, starting from the very first patch I wrote (with a lot of help from the other GIMP devs) to switch GIMP from using LCMS version 1 to using LCMS version 2.

I’ve always thought you seem like a very nice person, insofar as one can make such a judgement based solely on interactions over the internet. I greatly appreciate your input several years ago in helping to test some issues in GIMP color management/color mixing (which issues have long since been addressed in the GIMP code base), and in helping to proofread and improve many of the articles on my website. I learned a great deal from our discussions, and I hope you also did. But I don’t see how our interactions could be described as “being buddies”.

Regarding patches, maybe a year ago I signed up to get email notifications for GIMP bugs. I am astonished at the sheer volume of bug reports from people demanding that this, that, and the other change in GIMP code be “immediately” done, which has given me an appreciation for the constant request for patches. I’m also astonished at the rudeness of many such requests for changes in the code, and equally astonished at how politely these rudely phrased requests are dealt with by the GIMP devs.

There seems to be a general lack of understanding that “I want this in GIMP” doesn’t mean that whatever the person wants just magically happens. There is no magic. If people want changes in the code “someone” has to write the code.

Also, much of the “code writing” that the devs for any project have to do involves keeping up with changes in the libraries the program uses. Every single time a library - from GTK or Qt, GLib, and other “big” libraries, to LCMS and all the way down to the smallest file-format library - undergoes major changes that break backwards compatibility (this happens with major version changes), the devs of every project that depends on it have to modify their own code to keep up. This is time-consuming work that surely isn’t especially gratifying from a programmer’s point of view. And then there is the constant need to keep track of bug fixes and improvements between the major version changes.

Right now the big change in progress for GIMP, aside from getting 2.10 out the door, is the change from GTK2 to GTK3. And the bug reports from people on Windows and Mac keep pouring in, many times caused by changes in these proprietary operating systems, with scarcely a Windows or Mac developer stepping up to the plate to help.

So yes, patches are welcome. And if there is a feature or bug fix that someone really wants, that’s a good feature or bug fix for that person to work on.

That is a simple tonemapping :slight_smile:
Just out of curiosity, how well does Blender work for real-world HDR video or images?
This is the 1000-nit perceptual quantizer (ST 2084) with Rec.2020 primaries (a sketch of the PQ encoding follows below).

This is my tonemapped conversion to Rec.709

This is the same shot in linear range with Rec.709 primaries, compressed into the 0–1 range

Uncompressed 16-bit TIFF:
hdr.7z (71.8 MB)
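
For anyone wondering what the ST 2084 (PQ) encoding mentioned above actually does, here is a Python/numpy sketch of the standard PQ inverse EOTF (the constants are the published SMPTE ones; the loop values are just illustrative):

import numpy as np

# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> encoded signal.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    y = np.clip(nits, 0.0, 10000.0) / 10000.0  # PQ is defined up to 10000 nits
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1.0 + C3 * y_m1)) ** M2

# A 1000-nit mastering display only uses part of the full PQ signal range:
for nits in (0.1, 100.0, 1000.0, 10000.0):
    print(nits, pq_encode(np.float64(nits)))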

Tried the EXR file in darktable (note that darktable seems to be able to get the primaries from the EXR itself).

Opening the image of course shows ‘incorrect’ results (since darktable doesn’t, at least currently, do any tone-mapping on the view).

(of course, checking with the over-exposure indicator shows the problem)

Tried 3 different methods. First, the global tonemap module with the Drago operator (a sketch of the textbook operator follows below).

Second, using the exposure picker (getting about −2 EV).

Third, using the curve tool (the curve clearly gets extrapolated to higher L values).
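
For reference, here is a Python/numpy sketch of the textbook Drago et al. 2003 adaptive logarithmic operator (I haven’t checked darktable’s source, so treat this as the general idea rather than its exact implementation):

import numpy as np

def drago_tonemap(lum, lum_max, bias=0.85, ld_max=100.0):
    # Drago et al. 2003: adaptive logarithmic mapping of linear, unbounded
    # world luminance to normalized display luminance (~[0, 1] for ld_max=100).
    scale = ld_max * 0.01 / np.log10(lum_max + 1.0)
    bias_pow = np.log(bias) / np.log(0.5)
    denom = np.log(2.0 + 8.0 * (lum / lum_max) ** bias_pow)
    return scale * np.log(lum + 1.0) / denom

lum = np.array([0.01, 0.18, 1.0, 8.0, 64.0])  # made-up scene luminances
print(drago_tonemap(lum, lum_max=64.0))       # the brightest value maps to ~1.0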

Note that internally darktable uses CIELAB, so it is actually not linear, although since it uses floating point it can represent pretty high L values.
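
To illustrate the “pretty high L values” point: for in-range values CIELAB uses L* = 116·(Y/Yn)^(1/3) − 16, and with float data Y/Yn is free to exceed 1.0, so L* simply keeps growing past 100:

# CIELAB cube-root branch, evaluated above diffuse white (Y/Yn > 1):
for y in (1.0, 2.0, 4.0, 16.0):
    print(y, round(116.0 * y ** (1.0 / 3.0) - 16.0, 1))  # 100.0, 130.2, 168.1, 276.3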

To all, an apology. I never thought this topic would go ballistic like it has.

To those who are attempting to learn, I commend you for sticking with the dialogue in spite of the dismissiveness.

To those who are attempting to teach, I feel compelled to point out that you are responsible for your message and its intended effect; if you really want to effect change, you need to consider the student’s frame of reference. It most certainly is not yours; that’s not a put-down, it’s just acknowledgement that we all think differently.

So, I really am trying to wrap my head around scene-referencing, and the first order of business I’m stuck at is how to handle my raw camera input. The ACES and OpenEXR literature address the topic specifically, and if I read all this correctly they do acknowledge the need for some sort of input camera transform to get to a color-consistent basis for commencing scene-referred editing. Well and good, but what I’m not getting right now is: what’s the destination of this first color transform, color-wise? In what I do now, it’s a high-gamut working profile like Rec.2020 with no gamma-oriented transform, so the data stays linear with respect to its original capture. To answer an earlier question, this transform in my software is unbounded, owing to LittleCMS’s ability to handle floating-point transforms in this manner.
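
Here is a minimal Python/numpy sketch of my understanding of that first transform (the camera matrix below is a made-up placeholder; a real one comes from the input profile, while the XYZ-to-Rec.2020 matrix uses the rounded published values). The point is that nothing clips: out-of-gamut or brighter-than-white values stay in the data as negatives or values above 1.0:

import numpy as np

# Placeholder camera->XYZ(D65) matrix; a real one comes from the camera profile.
CAM_TO_XYZ = np.array([[0.65, 0.28, 0.02],
                       [0.27, 0.72, 0.01],
                       [0.00, 0.06, 0.94]])

# Standard XYZ(D65) -> linear Rec.2020 matrix (rounded published values).
XYZ_TO_REC2020 = np.array([[ 1.7167, -0.3557, -0.2534],
                           [-0.6667,  1.6165,  0.0158],
                           [ 0.0176, -0.0428,  0.9421]])

def camera_to_working(cam_rgb):
    # Unbounded float transform: a plain matrix multiply, no clamping anywhere.
    return cam_rgb @ (XYZ_TO_REC2020 @ CAM_TO_XYZ).T

pixels = np.array([[0.9, 0.1, 0.05],   # saturated red (may leave the gamut)
                   [2.3, 2.1, 1.80]])  # brighter than diffuse white
print(camera_to_working(pixels))       # out-of-range values are preserved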

If I can get past this little hurdle, then I’ll tackle the LUT basis for the rest of it, as well as adapting my curve tool and other such tools to linear space.


I have the feeling that these questions are very much GIMP-centric, and are due to the fact that GIMP does not yet allow for non-destructive editing.

However, GIMP is not the only layer-based editor in the FOSS world. Both Krita and my own PhotoFlow have non-destructive tools which can be placed anywhere in the processing pipeline.

In particular, in PhotoFlow you can place a tone-mapping tool on top of the layer stack, and see the tone-mapped output in real time while making adjustments to the non-tonemapped unbounded data. The user can choose among different tone-mapping operators, like simple gamma curves or “filmic”-like ones from here and here.
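
For readers who haven’t met a “filmic”-style curve before, here is a Python/numpy sketch of one well-known example, John Hable’s Uncharted 2 operator (not necessarily the exact curves PhotoFlow ships): a toe, a near-linear middle, and a long shoulder that rolls unbounded highlights off smoothly instead of clipping them.

import numpy as np

def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    # John Hable's "Uncharted 2" filmic curve: toe, linear section, shoulder.
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def filmic_view(scene_rgb, white_point=20.0):
    # Normalize so `white_point` (in scene units) maps to display white (1.0);
    # everything below it is rolled off smoothly rather than clipped.
    return hable(np.asarray(scene_rgb)) / hable(white_point)

scene = np.array([0.0, 0.18, 1.0, 4.0, 20.0])  # unbounded scene-referred values
print(filmic_view(scene))  # ~[0.0, 0.06, 0.27, 0.64, 1.0]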

As a developer, I would like to know: does this conceptually match what Blender is doing behind the scenes to provide the output you show in your screenshots? Is this the right approach to the problem of rendering out-of-bounds pixel values to a display, and thus improving the “visual feedback” during the process of editing the unbounded image data?

It’s been some time since I looked into it, but if I remember correctly the destination should be defined by the OCIO config. Do note that, as far as I understand, OCIO works a bit differently than ICC: in OCIO there is no fixed reference color space that all other color spaces are tied to [1]; instead, everything is defined with respect to the working color space [2]. So to share EXR/HDR data you should also share the OCIO config (just giving the ICC profile is not enough).

Note that this is my personal understanding and I might be wrong

[1] Which is what XYZ/LAB is used for in ICC
[2] This reference color space is often called Linear; all other color spaces defined in the config have a to_reference and/or a from_reference transform
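
To illustrate the to_reference/from_reference idea in [2] with a toy Python sketch (the colorspace names and curves here are invented; a real OCIO config expresses the same structure declaratively): every conversion is routed through whatever reference space the config happens to define.

import numpy as np

# Toy "config": each colorspace knows how to reach the reference space
# (arbitrarily chosen here as scene-linear) and how to come back from it.
COLORSPACES = {
    "linear":  {"to_ref": lambda v: v,         "from_ref": lambda v: v},
    "gamma22": {"to_ref": lambda v: v ** 2.2,  "from_ref": lambda v: v ** (1 / 2.2)},
    "log2":    {"to_ref": lambda v: 2.0 ** v,  "from_ref": lambda v: np.log2(v)},
}

def convert(values, src, dst):
    # The only fixed rule: src -> reference -> dst. The reference itself is
    # whatever this particular config decided it should be.
    return COLORSPACES[dst]["from_ref"](COLORSPACES[src]["to_ref"](values))

print(convert(np.array([0.25, 1.0, 4.0]), "log2", "gamma22"))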


Hi again.
First of all, it’s not my intention to pick a fight with anyone, but I do believe that in these cases it is crucial to point, as soon as possible, to the made-up stuff that clouds the understanding of the real issue, and move on.
“Unbounded” is a concept that muddies the waters, because it immediately suggests a bound that will be exceeded, whereas the idea of a scene-referred model is that light emissions go from 0 to infinity. When we shoot real-world scenes or render synthetic scenes that mimic reality, we’re capturing exactly that: from 0 to infinity (devices will have their bounds, but the theoretical scene won’t).
At any rate, although our cameras have a limited dynamic range, when you shoot a scene you’re capturing a portion of that infinite intensity range which maintains a relationship with the intensity ratios of the real scene (point A is two stops above point B in the real scene and in your capture, etc.).
The goal of a scene-referred model is to keep those ratios intact.

Unbounded, used in the context of ICC transforms, refers to the concept introduced by Marti María of using an RGB colorspace as a PCS.
The problem with that unbounded concept is that it produces garbage RGB that can’t be used in a proper emission model. The proof of that is the squirrelly colours from simple operations mentioned earlier in this thread.
So the whole idea of stuffing this RGB-as-PCS into the processing pipeline is absolutely problematic, and implying that operations that fail have to be revisited, when they are perfectly valid for valid RGB values, sounds unreasonable.
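
A concrete example of that failure mode, assuming nothing beyond ordinary float semantics: take a pixel where an unbounded conversion left one channel negative, then apply an everyday display-referred operation such as a gamma curve:

import numpy as np

pixel = np.array([-0.2, 0.8, 0.3])   # red pushed negative by an unbounded transform

with np.errstate(invalid="ignore"):
    gamma = pixel ** (1.0 / 2.2)     # fractional power of a negative number
print(gamma)                         # [nan 0.903... 0.578...] -- a garbage channel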

Now, if “unbounded” only means not clipping intensity, then we all agree. That’s the key part of a scene-referred model. But in that case, why are we calling this “unbounded”? It has no bound by definition. Scene-referred already implies that; no reason to invent a new term.

Then, if we’re all talking about the same thing (a scene-referred model), isn’t it a good idea to take a look at existing implementations that are open source and already used in libre apps?

A scene-referred workflow relies on a single reference, a view, and from-reference and to-reference transforms.
All the processing is done in the reference: source material that is not in the reference space is converted to the reference, and the output and view go through from-reference transforms to the needed space (it’s all in the OCIO docs).
Your view allows you to actually SEE what you’re doing on your display (colour pickers, UI elements, etc.).

You can have different configs with different references, different transforms and different views, created specifically for your needs.

I probably phrased it wrong. I meant that in the OCIO standard there is no reference defined, while there is in the ICC standard. The advantage on the ICC side is that you only need to share the ICC profile to get it to work everywhere; the advantage of the OCIO way is that it is extremely flexible.

I think we are talking past each other here, let’s see if I can put it differently.

In the ICC standard all color space transforms must be to and from XYZ, which makes XYZ the defined reference color space of the ICC standard. In contrast, the reference color space for OCIO is undefined in the standard and can be chosen at will; the config is then used to tell how to transform other color spaces to this reference color space. Does that make sense?


Absolutely. Which is why I said it was a protocol.

With OCIO you could operate in whatever primaries you wished, with the only convention being that the reference is scene-referred linear. Every transform would be defined in reference to that.

This also allows 1:1 ingestion without tripping through potential quantisation issues.

Hi,

I agree that there is a misunderstanding about the terminology. To me, “unbounded” simply means that there is no bound.

I just checked the first post of the thread, and there is no mention of ICC transforms. I think the context is more general: we have a pipeline of processing operations, and what is being discussed here is that it would be good if such pipelines did not assume any bound on the input and did not impose any bound on the output (except of course in the last step, when actually displaying the picture).

Now, if “unbounded” only means not clipping intensity, then we all agree. That’s the key part of a scene-referred model. But in that case, why are we calling this “unbounded”? It has no bound by definition. Scene-referred already implies that; no reason to invent a new term.

If I understand what you write correctly, scene-referred has a lower bound at 0. Unbounded means that you can also have negative values. So, if the concepts are different, they deserve different names



or maybe I’m just a victim of this whole confusion :slight_smile:
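
For what it’s worth, here is a small Python/numpy sketch of how such negative values arise even from perfectly real light (the matrix holds the rounded published XYZ-to-Rec.709 coefficients; the XYZ triple is just an illustrative saturated green). The colour exists; it simply sits outside the destination primaries:

import numpy as np

# Rounded published XYZ (D65) -> linear Rec.709 matrix.
XYZ_TO_709 = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

xyz = np.array([0.20, 0.40, 0.30])  # a real, saturated green (illustrative)
print(XYZ_TO_709 @ xyz)             # red comes out negative: out of gamut, not invalid light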

Yes, as the instigator of this thread, that is where I started. The bounds I was considering were more about dynamic range, related to the noise floor and saturation point of the sensor. Once the integer-based raw data is placed in a floating-point data format, there is now the possibility to push tones out of the original raw range, and my thought was that this data should be preserved for possible handling in other tools. White still has to be corralled at some point, just prior to humans looking at the image, no?

Scene-referencing seems to put a better perspective on that endeavor; I just need to reorganize some synapses to deal with it.


Unbounded means that you can also have negative values.

Historically this has been referred to as clipping and clamping, likely drawn from circuitry and whatnot.

In contrast, scene-referred linear float data is a whole working model, complete with algorithm differences from its display-referred counterpart.

@agriggio wrote
Unbounded means that you can also have negative values.

Doesn’t unbounded mean the opposite?
