Unbounded Floating Point Pipelines

When I follow the link, and then the link to the PDF, nothing happens for me. If I open a new tab, I just get a blank window. Firefox 58.0.2. Just checked: other PDF links render OK (well, one, anyway).
***** update - got it, via Windows/IE

Thank you, @Elle, that's what I'm not getting, also. In my software (which I wrote; not one of the mainstream products), I have a linear-gamma (that is, no gamma) calibrated camera profile that I assign to my raw image, which was opened raw/linear with libraw - that is, no colorspace transform, no gamma applied. Then I convert the image to a wide-gamut working profile (currently Rec.2020), and I edit from there. When I'm ready to output to a TIFF or JPEG, I convert using a profile with a gamut suitable for the medium, usually sRGB for the color-unfriendly web. Until that output transform, I'm dealing with a wide-gamut array of pixels. Just because I'm using ICC profiles to characterize my image in the pipeline doesn't mean I'm editing display-referred.
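In numpy terms, the matrix part of that pipeline looks roughly like this (a minimal sketch; the camera matrix below is a made-up placeholder, the real one comes from the calibrated camera profile):

```python
import numpy as np

# Hypothetical camera->XYZ (D65) matrix; in practice this comes from the
# calibrated linear-gamma camera profile. The values here are placeholders.
CAM_TO_XYZ = np.array([[0.70, 0.15, 0.10],
                       [0.28, 0.65, 0.07],
                       [0.00, 0.05, 1.04]])

# Published Rec.2020 RGB->XYZ (D65) matrix; inverted to go XYZ->Rec.2020.
REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
XYZ_TO_REC2020 = np.linalg.inv(REC2020_TO_XYZ)

def camera_to_rec2020(rgb_linear):
    """Linear camera RGB -> linear Rec.2020 RGB; no clipping, no TRC."""
    xyz = rgb_linear @ CAM_TO_XYZ.T
    return xyz @ XYZ_TO_REC2020.T

pixels = np.array([[0.18, 0.18, 0.18],    # mid grey
                   [2.50, 1.20, 0.10]])   # bright scene value; > 1.0 is kept
print(camera_to_rec2020(pixels))
```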

Then the motion picture industry comes in and tells me I'm doing this wrong. Okay, I take a look, and see that if I want to take that premise and solve it, I'll have to write more software - particularly corollaries to the ICC-based tools I use that were written by others - and (maybe) reorganize what I've already done in mine to accommodate it. I'm not there yet.

Well, actually it's @anon11264400 and @gez who are telling you that you are doing something wrong :slight_smile: . In reality, a lot of people who use OCIO and ACES in their workflows also use ICC profiles and ICC-profile-color-managed editors. Search the internet for the terms icc and ocio, and you'll pull up discussions of the difficulties and possible solutions for converting data for use with the two different types of color management.

Before modifying your code to exclusively use OCIO and ACES, it might be a good idea to study how people are using these tools to get from a scene-referred input image to a final output image, and also to try some of the software that uses them. @anon11264400 mentioned that Nuke has a freebie version. There's also Blender and Natron - here's the Wikipedia article on Natron: Natron (software) - Wikipedia - any other possibilities?

I would very much like to see - somewhat as @afre has already suggested - a concrete example with "before" and "after" images, with specific steps to show the process and the result of using OCIO and ACES to edit a still image, preferably using free/libre software, or at least software that's "free as in free beer".

It hardly seems fair for someone to post to a forum about using editing software - most of which uses ICC profile color management - where developers of said software also participate, and announce that everyone is doing it wrong and should switch to OCIO color management and an ACES workflow, without actually demonstrating the benefits to people who edit still images, or at least providing a concrete example to follow.


Please refrain from asking that kind of question if you didn't read what I actually wrote. I have no idea where you got that I said that.

The ICC workflow was designed to produce reliable colour between devices. It's all tied to devices/displays. It's definitely display-referred.
I asked you earlier to check the documentation of rendering intents and references and point out where you think it says otherwise.
It's print oriented: it assumes the 0-1 range across the board because it relies on the simplicity of matching the display range to the reflectivity of a printed output (the albedo of the output surface, in the case of reflective prints, via subtractive mixing).
It was also designed with integers and limited precision in mind, and it's quite effective there; that's why it was widely used for DTP and desktop color management.
It was created to meet the needs of limited hardware, with a specific goal in mind.
Why did the movie industry not stick with it and come up with something else instead? Because it was inadequate for scene-referred imagery.
So they came up with something else, designed from the ground up for wide dynamic range and wide colour gamut.
If any of you think the above is inaccurate, please step up and say why.
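To make the 0-1, integer-precision point concrete, here's a toy numpy illustration (not any real pipeline, and ignoring the TRC for simplicity):

```python
import numpy as np

# Linear scene values: middle grey, diffuse white, and values
# 2 and 4 stops above diffuse white.
scene = np.array([0.18, 1.0, 4.0, 16.0])

# Integer, display/print-referred encoding: everything must land in 0..255,
# so the distinctions between the last three values are destroyed.
encoded = np.round(np.clip(scene, 0.0, 1.0) * 255.0)
print(encoded)        # [ 46. 255. 255. 255.]

# A float pipeline can keep the scene ratios, but only if nothing clamps:
print(scene * 0.5)    # one stop down still separates all four values
```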

We have the computing power, memory, cheap storage, and increasing bandwidth in our internet communications to justify moving to a scene-referred workflow. It IS designed to keep as much of the original data captured from the scene as possible.
We have HDR displays now. It's not a thing of the future; the TV in my living room is already capable of displaying HDR.
HDR-capable computer monitors are also appearing on the market. Operating systems are starting to support HDR.
How long do you think it will take until HDR is as common as HD is now? How long did it take HD to become the lower end? Where's your DVD player now?
I'm also an ink-on-paper guy. I'm a graphic designer and I still do a lot of printed work, and of course I rely on ICCs for that. But as I said above, there's nothing that prevents me from going from scene-referred images to display-referred for SDR screens or printed outputs, and that doesn't prevent me from still using ICC where it belongs.
That doesn't change the fact that HDR is where screens are going, and you need a proper scene-referred workflow for that.

Is that conversion to Rec.2020 an "unbounded" conversion (i.e., encoding out-of-gamut colours as meaningless RGB), or is it destructive, keeping the light ratios of the scene but with the colour gamut constrained to the Rec.2020 primaries (all the out-of-gamut values lost)?
Does 1.0 mean something in the resulting image?
What are you displaying on your screen? Is it a display-referred conversion from Rec.2020 to sRGB, or to your display profile, with ICCs?
Is that conversion applying any tone mapping, or will everything above 1.0 be clipped unless you manually produce a tone-mapping curve?
What do your colour pickers, histograms, and other UI elements that involve colour show? Are they managed? Are they display-referred?

In some of those answers lies the difference between a display-referred workflow and a scene-referred workflow.
The latter has been used by the movie industry successfully. It works; tools exist.
The former implies that for every situation that exceeds the display range you have to invent ways to solve problems, including blending modes and even arithmetic operations falling apart.
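To illustrate the first question above: an "unbounded" conversion is just the matrix math with no gamut mapping and no clip, so an out-of-gamut colour comes out as negative or over-range channel values. A quick numpy sketch, using the standard published matrices (rounded):

```python
import numpy as np

REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# "Unbounded" conversion: matrices only, no gamut mapping, no clipping.
rec2020_green = np.array([0.0, 1.0, 0.0])
srgb = np.linalg.inv(SRGB_TO_XYZ) @ (REC2020_TO_XYZ @ rec2020_green)
print(srgb)   # roughly [-0.59, 1.13, -0.10]: out-of-range sRGB channels
```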

Is there something else besides Nuke? A lot of users here use Linux and can't verify whether the trial will run there (you have to make an account, and a lot of commercial software that has a Linux version only offers a trial version for Windows, in my experience). Maybe something like Natron, Blender, or even Krita (which, besides an ICC workflow, also has an OCIO workflow)?

Actually, looking at the Krita website (specifically here: https://docs.krita.org/Scene_Linear_Painting), I can see that the biggest difference is that scene-referred doesn't define black and white, so a couple of blend modes in particular are effectively undefined in scene-linear (also some filters that rely on the sRGB TRC, but those should be a problem in display-linear too).

Interpreting the above with what I know: transferring from display-linear to scene-linear should be relatively easy (just forget that 0.0 and 1.0 have any particular meaning); going from scene to display, on the other hand, is hard and might require some form of tone mapping to get right (or at least an appropriate view/look transform).
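To make the "scene to display is hard" part concrete, here's a minimal sketch assuming a simple Reinhard curve as the tone map (real view transforms like Filmic use far more elaborate curves):

```python
import numpy as np

def srgb_encode(x):
    """Linear -> sRGB transfer curve (display encoding)."""
    return np.where(x <= 0.0031308, 12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def scene_to_display(scene_linear):
    """Scene-linear -> display-referred via a simple Reinhard tone map.
    Any curve that compresses [0, inf) into [0, 1) would do here."""
    tone_mapped = scene_linear / (1.0 + scene_linear)
    return srgb_encode(np.clip(tone_mapped, 0.0, 1.0))

# Display->scene is comparatively trivial: the data is already in range,
# you just stop treating 0.0 and 1.0 as special.
print(scene_to_display(np.array([0.18, 1.0, 4.0, 16.0])))
```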

I'd like to note that although display-linear does define black and white, it can still be useful during processing to go a bit beyond those points and only clamp when producing the end product (or when otherwise needed), so as to preserve as much of the image as possible along the way. In that case, though, those out-of-range values don't have a real meaning, whereas in scene-linear they might.

Also, @gez, HDR displays aren't HDR in the sense that they are scene-linear. Using the definition above (having an undefined white and black point), these displays are definitely display-referred, since their black and white are defined; it's just that the white of an 'HDR' display is so much brighter (relative to its black) than the white of an LDR display. (If HDR displays required scene-linear editing, most modern digital cameras would have needed it for ages by now, since most have a much higher dynamic range than displays, and even Adobe products still use ICC.) Also note that most 'HDR' displays still use protocols that store and communicate the display data as 10- or 12-bit integers.
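To illustrate that "defined ceiling" point: the common PQ (SMPTE ST 2084) encoding used for HDR delivery maps absolute luminance up to a fixed 10,000 cd/m² peak into integer code values. A sketch of the full-range 10-bit case (real video signals usually use limited/legal range, omitted here for simplicity):

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode_10bit(nits):
    """Absolute luminance (cd/m^2) -> full-range 10-bit PQ code value.
    The ceiling is fixed at 10,000 nits: very much display-referred."""
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0)
    e = ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2
    return np.round(e * 1023).astype(int)

print(pq_encode_10bit([0.1, 100, 1000, 10000]))  # defined black..white range
```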

My only intention is to cut through the layers of invented terms and made-up stuff and point out that there is something available that exists and was designed specifically for what you need. It's not incompatible with display/device-referred work, it's future-proof, and it's being actively developed and expanded. So why insist on hacking a legacy colour workflow?

This is probably one of the reasons why libre applications are so far behind commercial ones in terms of colour. Everyone is discussing HDR, 4K, and Rec.2020, and most of our apps fall apart when you walk away from sRGB. This is an opportunity to begin early and be better equipped for the future (and maybe today :slight_smile: )

Yes, sure. They are displays with a maximum intensity ceiling, and the output has to be adjusted to that range.
But I wasn't discussing the internals of HDR displays here, nor how images are stored for HDR delivery. That's delivery; that's output.
The matter here is, of course, the master: keeping the scene ratios and colour latitude captured as faithful to the source as possible.
In essence we're talking about the same goal; how to get there is where we don't agree.

Here you hit the nail on the head: Adobe is completely unprepared, and it fails miserably with scene-referred data because of how tied it is to ICCs and display-referred editing.
They have been trying to push features in to mend that, but their products are still a mess when it comes to scene-referred editing. It will probably take some new-generation programs to completely solve that. Photoshop is behind Krita in that regard. Krita's developers saw the opportunity and did something about it.

My main point was more that it is (at least theoretically) possible to use a display-referred workflow to output to an 'HDR' screen. Of course, in the cinema world this is often not done, since the source files used in the mastering process are scene-referred, but those scene-referred sources would still need to be tone-mapped into the display space of the 'HDR' screen.

Now, of course, the dynamic range of the 'HDR' screen might be big enough that working in a suitable display-referred space is not the most practical thing to do, and I am pretty sure the current version of ICC profiles is definitely inadequate for this, so in practice it might be easier to work scene-referred and then map on output (which should also make it easy to work with LDR screens: just use a different mapping function).

So in essence you are right about the same goal and different ways to get there; I just personally would phrase it differently.

Yes, check what @anon11264400 said above about model/view.
You always need a view transform to accommodate a portion of the scene's dynamic range to the capabilities of the display. That requires tone mapping (the resulting output will be display-referred, of course), and that's already covered by the workflow designed by the movie industry. There's nothing to invent or hack.
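For the curious, this is roughly what asking OCIO for a view transform looks like in code - a sketch against the OCIO v2 Python bindings (the v1 API differs); the colour space, display, and view names are placeholders that depend entirely on the config you load:

```python
import PyOpenColorIO as OCIO

# Load a config - e.g. Blender's default one, or whatever $OCIO points at.
config = OCIO.Config.CreateFromFile("config.ocio")   # placeholder path

# Build a processor from the scene-linear working space to a display view.
# "Linear", "sRGB" and "Standard" are placeholders: use the names your
# config actually defines.
transform = OCIO.DisplayViewTransform(src="Linear",
                                      display="sRGB",
                                      view="Standard")
cpu = config.getProcessor(transform).getDefaultCPUProcessor()

# A scene value well above 1.0 goes in; a display-referred value comes out.
print(cpu.applyRGB([4.0, 2.0, 0.5]))
```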

Nothing to invent or hack if you use the movie industry's way of working, which is (currently) a bit more complex to set up and use (mostly due to unfamiliarity, though that is still an issue, plus some missing pieces regarding display profiling/calibration). So I do expect a lot of people will try to invent/hack a fully display-referred way of working similar to (or maybe built upon) ICC profiles - to be honest, it wouldn't surprise me if the ICC consortium is already working on something like that - though whether that is a good thing is a question I can't answer.

It's actually way simpler. Give it a chance and learn more about it.
As I said above, I'm a graphic designer and I come from ICC. It's not rocket science, and it doesn't take long to get the basics of this different model once you shake off some of the crap you carry from your ICC and display-referred experience.
The interesting part is that in the process you'll learn that many things you thought you knew from your experience with display-referred workflows have to be revisited, and many things that didn't make sense or produced odd results in display-referred applications start to act more naturally, closer to what you see in reality and in photography.
For me it was a game changer, and I hope that free software developers and users see it too. I'm not trying to attack you guys because you have different ideas; I'm just trying to help, because the way I see it, you're taking the rocky road to fail town :smile:

I mostly meant that most (if not all) consumer displays[1] are not colour-accurate, so to work with them you need to load a colour LUT into the GPU and load a display profile. Currently almost all the tooling for this assumes ICC profiles, and to get this working with OCIO you need to extract the info from the ICC profile and adjust the OCIO config by hand[2] (see the config sketch after the footnotes). That isn't that hard, but it is still an extra step that currently not many people want to take.[3]

Still probably going to take you up on it and try it out.

[1] To an extent this is even true for professional displays, but most of those have a built-in LUT so the OS/display system can treat them like a nice well-behaved output (even if they aren't).
[2] Which requires editing a text file, which a lot of people find scary for some reason.
[3] Many people, even artists, to this day still don't calibrate/profile their screens!
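As mentioned above, here is roughly what the hand-adjusted part of an OCIO config could look like - a sketch only, assuming you have already baked the monitor's ICC correction into a .cube LUT with some external tool ("mydisplay.cube" is a placeholder name):

```yaml
# Hand-added display colour space in an OCIO config (a sketch, not a
# drop-in recipe). The LUT is assumed to encode the correction derived
# from the monitor's ICC profile.
colorspaces:
  - !<ColorSpace>
    name: My calibrated display
    family: display
    from_reference: !<FileTransform> {src: mydisplay.cube, interpolation: linear}
```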

Well, I did read what you said: First, all the applications I mentioned use ICC profile color management. Second, you've said repeatedly that ICC profile color management is designed for use with display-referred editing, considering only the range 0-1. Third, the applications I mentioned are all capable of editing using floating-point processing, and most of them already allow the creation of channel values outside the range 0-1. The logical conclusion is that these editing applications should clip channel values outside the range 0-1 if and as they occur, or else the applications are violating your description of how ICC profile color management is supposed to work.

Another interpretation of what you've said about ICC profile color management - one which coheres with your subsequently stated desire to see ACES and OCIO more widely used in free/libre editing applications - is that instead of fixing what you see as violations of ICC profile color management, all of our free/libre editing applications that currently use ICC profile color management should stop using it and start using ACES and OCIO. But I wasn't thinking along these lines when I asked whether you were saying that applications that use ICC profile color management should stop allowing channel values outside the range 0 to 1.

Well, I don't need to check the ICC specs to verify what you just said, as I've read the specs through more than once, and more than one version of the specs. I can go one better than what you just said, which is to point out that the ICC specs (the V2 specs; I thought there was somewhat of a change in V4, but the terminology is slippery and I might be misremembering - edit - checking again, nope, no mention at all) contain no mention at all of ICC profile RGB working spaces.

Further, the V2 specs disallow negative channel values, which means most or all of our camera matrix input profiles are "illegal", non-conforming to the specs, because camera matrix input profiles require at least one negative XYZ channel value.

The thing is, people who use ICC profile color management have a long history of ignoring the stuff in the specs that isn't convenient, and of adding on missing functionality, such as putting source white points in the white point tag for V2 profiles, and expanding the way profile conversions work to allow not clipping otherwise out-of-gamut channel values.

So we do have and use not just camera input profiles that don't conform to the specs, but also RGB working spaces, some of which don't conform either, and so on.

You make the point that the ICC specs as originally designed were based on old technology. This is a point with which I wholeheartedly agree. But it's true of specs in general. It's really hard to predict the future, so specs are based on existing technology. And people push past the limits of the specs, and the specs get updated. Please compare the V2 specs, V4 specs, and iccMAX specs and you'll see what I mean.

Ok, let's do something about that.
Here are the rules:

  • Open GIMP (your own patched version with all the clips and clamps removed).
  • Open the attached EXR (a test scene-referred image I just crafted for this test)
  • Capture screen, attach the screenshot
  • Go to the exposure command, take the exposure down one stop.
  • Capture screen, attach the screenshot
  • Export the result as sRGB JPEG and attach the file here.

For your convenience, the image I'm attaching has a limited colour gamut (Rec.709 primaries) with a linear transfer function, so you don't have to perform any complicated colorspace conversions.

colours.exr (70.4 KB)

Please send those screenshots and then I'll send you my "after" using OCIO with libre tools (Blender or Krita). No need to go even close to ACES (which I never mentioned, by the way, though you quoted me as if I had).

Oh, btw. A hint: The values used in that EXR are scene values that fall inside the dynamic range that any consumer camera could capture in a single shot. Nothing too extreme.
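For reference, "one stop down" on scene-linear data is just a multiply, so the exposure step itself can't mangle anything - a trivial sketch:

```python
import numpy as np

def exposure(scene_linear, stops):
    """Exposure in scene-linear data is a plain multiply:
    +1 stop doubles every value, -1 stop halves it. The edit itself
    never needs a clamp; only the display path does."""
    return scene_linear * (2.0 ** stops)

pixel = np.array([4.0, 2.0, 0.5])  # a plausible bright scene value
print(exposure(pixel, -1))         # [2.     1.     0.25  ]
print(exposure(pixel, -3))         # [0.5    0.25   0.0625] - now inside 0..1
```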


Here's the screenshot before lowering the exposure by one stop:

Here's the screenshot after lowering the exposure by one stop:

I don't see the point of exporting a JPEG from a file with channel values greater than 1.0. But as this seems to be what you want (though we've already established that I'm having a difficult time understanding the things you are trying to say), here's the JPEG:

My apologies, these files and screenshots are from default GIMP compiled from code pulled this morning. I deleted my local installations of my patched "CCE" version of GIMP several days ago and haven't yet reinstalled it. Currently I'm using a "barely patched" version of default GIMP for actual image editing. I haven't updated my "CCE" version of GIMP since late last year.

Default GIMP is faster than my patched GIMP, especially when painting. I quite enjoy the ease with which I can switch between editing linearized RGB and perceptually uniform RGB (something I can't do with my "CCE" version of GIMP except by doing an ICC profile conversion just to change the Tone Response Curve). The new default GIMP composite options and blend mode options are quite useful. Etc.

So I don't anticipate reinstalling my patched "CCE" version of GIMP any time soon, if ever. Instead, I've gone back to using different prefixes for different color spaces, which is quick and easy given that currently there are only three files to modify to change the default GIMP internal working space primaries from sRGB to a user-chosen set of primaries.

For the purpose of this test, default GIMP will produce the same results as your patched version, so no problem.
That EXR image is synthetic, of course, but the scene colours it contains could easily come from an overexposed ColorChecker Passport or any similar colour chart shot with a camera.

So let's analyze the images you sent by looking at the screenshots.

Could you identify orange in ANY of the images you just sent? One of those squares is orange.
One is lime, one is lilac (I know, those colour names aren't very precise, but you get the idea).

What we have here in your screenshots is colour that is in-gamut (colour-wise) but beyond the display limits, and GIMP can't show it properly.
Why? It has the clamps removed, the processing tree is using floats, it hasn't mangled any data (yet), and yet we can't even see what the real colours are.

It's simply channel clipping. How, if you removed all the clamps and the data is "unbounded"?
You can see the pixel values unclamped, but you can't actually see the colours, even though they are in-gamut.

I think it's hard to argue that this is NOT the result of the inherited display-referred editing model GIMP took from Photoshop and other venerable imaging applications.
I think it's hard to argue that a colour management pipe that is not designed to deal with values beyond the display range has nothing to do with this.

A scene-referred workflow knows how to deal with this situation: by using views, as @anon11264400 hinted above.
A problem that is already solved there becomes an extra problem you have to deal with to accommodate ICC in this mess.
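A numeric sketch of what's happening in those screenshots (made-up values, not sampled from the actual EXR): per-channel clamping changes the ratios between channels, and with them the hue, while scaling the exposure first preserves them:

```python
import numpy as np

orange = np.array([4.0, 2.0, 0.5])   # scene-linear, channel ratios 8:4:1

# Naive display path: per-channel clamp to the display range.
print(np.clip(orange, 0.0, 1.0))     # [1.  1.  0.5] -> ratios 2:2:1, hue shifts toward yellow

# Exposure-style view: scale first, so the 8:4:1 ratios survive.
print(np.clip(orange / orange.max(), 0.0, 1.0))   # [1.  0.5  0.125]
```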

I'll get you some proper "after" screenshots in a moment.

GIMP needs ways to view out-of-gamut channel data. Nobody will deny this. CinePaint had such a viewer, but none has been coded up for GIMP. Perhaps you would like to submit a patch instead of complaining about things that haven't yet been coded?

Everyone associated with GIMP development knows that GIMP will sooner rather than later need ways to view out-of-gamut channel values. Your "demonstration" proves nothing.

I'm pointing at a SINGLE issue that is already solved in Blender and Krita via OCIO, an issue that you admit needs to be coded (with the usual "patches are welcome or stfu" bullying that's so classy and friendly).
But hey, my demonstration proves nothing. Come on, Elle. We used to be buddies when GIMP devs bullied you with the same line, don't you remember?

SINGLE ISSUE, Elle.
It will eventually be there, yet most of the libre apps can't deal with it.


The original image through an OCIO view transform (original data is unaltered)

One stop down (directly from the view, data unaltered)

Two stops down (directly from the view, data unaltered)

JPG export from the view with no exposure changes

Now tell me - you're a photographer - which result is more artist-friendly? Which is more "WYSIWYG" in terms of what a photographer might expect from shooting that scene? Which one is more intuitive to edit?
Yours, in an undefined future, or the results you can get easily right now with Blender without any modifications, or with Krita using Blender's stock default OCIO config?

Be honest.
And this is just a SINGLE ISSUE.

That JPEG was tone-mapped by an algorithm. I prefer - have always preferred - to tone map my images by hand, making my own artistic decisions. That's exactly why I started shooting raw instead of just using a point-n-shoot and saving camera JPEGs, many years ago.

When I have actual channel values in my actual image files that are 2+ and 4+ stops above 1.0, I already know why those values are there and what I want to do about them. I don't need some application that does automatic tone mapping for me.


The level of smugness has eclipsed what is acceptable. Let's take a moment to cool off and think about how communication, collaboration, and respect can help improve our craft and tooling. Thanks.