Unbounded Floating Point Pipelines

So it’s OCIO and proper scene-referred workflows for you. But it’s not new territory: people have already been there, so that territory comes with road signs.

Nobody asked. The reaction was “we can do the same with ICC”.

With the non-commercial version of Nuke you can get an idea of what it’s like to use that workflow in a production-proven environment. You don’t have to keep it or use it for your work; just research.
You can do that with Blender, Krita or Natron too, with the caveat that some parts of OCIO may have been implemented incorrectly or partially.
So you have open-source software, documentation and implementations to investigate. You also have freely available (as in beer) software you can try.
You can even watch YouTube videos if you don’t want that software installed on your computer.

It’s crucial to keep in mind that the vast majority of users are held hostage by display-referred applications like Photoshop or (name any image editing application you know). That’s why the first reaction to the adoption of scene-referred workflows is “that’s not practical”, “I always worked this way and my work is fine”, “everyone does it this way”, etc.
It’s a cultural shift that technology already allows, but legacy software and habits are holding back.

I have some experience showing Blender artists the benefits of scene-referred work, and once they get it, no one goes back.
The same applies to photography. Once you get it, you can’t stand the display-referred model; it’s very limiting and in certain contexts completely inadequate.

Also, I said that ICC had zero penetration for scene-referred work, something you missed in your reply. Do you have any example of scene-referred apps using ICC? Krita doesn’t count; you’d use OCIO for that.

Hmm, I was about to object, but in a sense you are exactly right - there has been a lot of “we can do that”.

Hmm, given the length of this thread, would you be willing to start a new thread for people who would like some tutoring and assistance in getting started using OCIO software? That would be a good way for people to get actual experience, which might also help clear up some confusions about “terminology a la OCIO vs ICC”.

Speaking of terminology, I’m not always clear what you mean by “scene-referred”, because sometimes it seems to conflate “linear processing” with “keeping channel values proportionate to the original sensor-captured data”, and sometimes with “don’t modify the original scene-referred data, but instead keep it at the top of the pipeline and apply transforms and creative edits”; those transforms and creative edits often result in an image that is no longer proportionate to the original sensor-captured data.

I know full well that you know exactly what you mean by “scene-referred” and “scene-referred work” in various contexts. I’m just saying that sometimes I’m confused about what you mean in any given context by “scene-referred” and “scene-referred editing”.

Actually, it’s exactly the opposite.

Basically you want scene-referred for computer graphics rendering and display-referred for grading (photo, video, HDR video, and even that same graphics rendering).

If you have scene-referred HDR video and an SDR monitor, you should use a tone-mapping operation for a fast display.

Scene-referred (linear gamma, with values above 1) is also good for photographic exposure compensation and for some, but not all, blending/layer modes.
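
To illustrate those last two points, a minimal numpy sketch (the Reinhard curve here is just a simple stand-in, not any particular application’s view transform):

```python
import numpy as np

# Scene-linear pixels: values above 1.0 are perfectly valid light ratios.
scene = np.array([0.05, 0.18, 1.0, 8.0])

# Exposure compensation in scene-linear is a plain multiply: +1 stop = x2.
plus_one_stop = scene * 2.0 ** 1.0

# A quick tone map squeezes the unbounded range into [0, 1) for SDR display.
def reinhard(x):
    return x / (1.0 + x)

preview = reinhard(plus_one_stop)  # display-referred preview only
print(preview)                     # the scene-linear buffer stays untouched
```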

@age,
That’s something you can do in a scene-referred workflow through the view. You don’t need the data physically converted to display-referred values.
Actually, chaining your look and the view transform lets you keep your creative manipulation independent. There are provisions for that.
The concepts you describe need to be put in the context of real work.
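
To make the chaining above concrete, here’s a rough sketch with the OCIO v2 Python bindings; the colour-space, display and view names are placeholders for whatever your config actually defines:

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()  # honours the $OCIO environment variable

# Scene-linear reference -> look/view transform -> display encoding.
# "scene_linear", "sRGB" and "Filmic" are hypothetical config names.
dvt = OCIO.DisplayViewTransform(src="scene_linear",
                                display="sRGB",
                                view="Filmic")
cpu = config.getProcessor(dvt).getDefaultCPUProcessor()

# Only the displayed copy is transformed; the scene-linear data is untouched.
display_rgb = cpu.applyRGB([4.2, 1.0, 0.25])  # values above 1.0 are fine
```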

You ALWAYS need a good view transform when you have scene-referred material.
Otherwise you might end up with an image of a mountain covered with yellow snow :yum:

That’s because most of the blending modes you use are display-referred, not because the scene-referred model is limited. Consider where alpha blending works better: is it with display-referred images?
Also take a moment to consider whether the blending modes you have in mind work OK on a display-referred image with a log transfer. :wink:
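
A minimal numpy sketch of the alpha-blending question (the colours are picked purely for illustration):

```python
import numpy as np

def srgb_encode(x):
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_decode(x):
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

fg = np.array([1.0, 0.0, 0.0])  # linear red
bg = np.array([0.0, 1.0, 0.0])  # linear green
alpha = 0.5

# Blending light linearly models what a 50/50 physical mix would do.
linear_blend = alpha * fg + (1 - alpha) * bg

# Blending the nonlinear code values instead gives a different, darker mix.
encoded_blend = srgb_decode(alpha * srgb_encode(fg) + (1 - alpha) * srgb_encode(bg))

print(srgb_encode(linear_blend))   # what the linear mix looks like on screen
print(srgb_encode(encoded_blend))  # the display-referred blend, visibly darker
```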


@paperdigits
Fair. But he asked for a tutorial on porting stuff. They are different models, so rather than porting, what is needed is a clear distinction between the scene-referred and display-referred models.
We have already described the basics of working with scene-referred imagery and why adapting a display-referred model to use unclamped data, or conflating gamut mapping on the RGB data, is problematic.
I don’t think that’s the kind of thing a tutorial with pictures can fix, although I have already produced images to illustrate some of the problems.

I would be willing to answer questions about what it’s like to work with scene-referred applications and what kinds of things are immediately available in that model but problematic with display-referred editing. And yes, a new thread would be the best place for that.

This was just one of my points; just to be complete, I was also interested in the practical side of things. What will change from the workflows we know?

You know, an “OCIO for photographers” series. If it also had a part about porting apps, then we might have OCIO support for RT or DT at some point, maybe.

The scene-referred model is limited:
https://docs.krita.org/Scene_Linear_Painting

"In particular, there’s many a tool in a digital painter’s toolbox that has hard-coded assumptions about black and white.

A very simple but massive problem is one with inversion"

And what about the curves tool, or sigmoidal contrast? They work in the 0–1 range.
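
A minimal numpy sketch of that hard-coded 0–1 assumption:

```python
import numpy as np

def invert(x):
    return 1.0 - x  # the classic display-referred inversion

display_pixel = np.array([0.25])
scene_pixel = np.array([4.0])  # a perfectly valid scene-linear value

print(invert(display_pixel))   # [0.75] -- meaningful
print(invert(scene_pixel))     # [-3.0] -- "negative light", meaningless

# A curve or sigmoid defined over [0, 1] likewise has no opinion about 4.0;
# the data has to be brought into that domain somehow before it applies.
```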

You convert the data to display-referred values in a non-destructive way, but you need to do it because scene-referred is limiting for grading.

In the end there are only disadvantages to using a scene-referred workflow for everything.

“Scene Referred is also often in Linear Light, which while suitable for computer graphic rendering, is not suitable for grading workflows”
https://www.lightillusion.com/aces_overview.html

I think that a sufficiently proficient artist would be able to use the tools well no matter which system they subscribe to. It is like the brand wars, where there are fans and anti-fans of various gaming consoles, OSs, fashions and lifestyles. In one of the papers that I linked to earlier, the author noted that there is little to no perceivable difference between ICC and OCIO results, provided that the algorithms used are robust. In another, the proponents of OCIO agree that there is room for improvement and that opening the specs up to public review is necessary. Though the discussion has been heated and has sometimes gone in circles, I would like to see more collaboration as opposed to comparisons.


It’s interesting that you linked two relevant articles with tons of information on how scene-referred and OCIO are relevant and flexible, but cherry-picked the few lines that suit your argument.
I know it will sound harsh if I put it this way, but please pay attention to the rest of each article.

ACES is ONE workflow for scene-referred imagery. It was designed by AMPAS and is intended for cinema and VFX. But there are other workflows based on OCIO (because it’s that flexible) you could use, and they are illustrated in the very article you linked. So why keep just the part that says something apparently negative about ACES and not focus on the rest?
Blender, for instance, uses OCIO but hasn’t implemented the ACES workflow. Same goes for Krita.

The same goes for the Krita article. You chose to use it as proof of the limitations of the scene-referred model, while it clearly shows the benefits of that model, with some minimal caveats that are mostly changes of habit.

I’d say that that claim is false: they work in the scene-linear range too, but having the UI mapped to the scene data without limits makes them impractical to use. So rolling the UI through the view is enough to work with the scene data without actually converting the pixels to display values.

You’re telling this to a person who has more than 20 years of experience working with imaging software professionally, and who was involved in free software as a user and collaborator for almost a decade.
I’m a graphic designer and I earn my keep with my graphic design work. For almost a decade I was one of the few designers in the world who could say they used free software exclusively for professional work (which included tons of print work, motion graphics for TV and web, and all the related software and procedures you use for those tasks).
Two years ago I switched back to commercial software after seeing how stubborn some free software developers were regarding pro users’ needs, and how they wanted to re-invent the wheel again and again without ever moving forward.
The change in my productivity was astounding. I was very proficient with free software, but now I’m doing my work in a fraction of the time, with better quality and consistency.
Am I here to bash free software and talk about how great commercial software is? Not at all. I said above that Photoshop and the other Adobe software I use is also lacking.
But I also got to experience software that is modern, has been proven in production, and provides convenience and flexibility to users who need it for real work.
So this “sufficiently proficient artist” argument is something I would argue against. I was proficient, but the software was hindering my work. If I’m still discussing this, it’s because I believe that free software has the potential to change; otherwise I wouldn’t care.

My apologies, I read what you wrote as meaning nobody in the “industry and artistic communities” uses ICC profile color management, which seemed like an odd claim. But maybe what you really meant was that the people using ICC apps aren’t also using floating point processing?

Yes, specifically I meant that there are just a few scene-referred image manipulation programs (like Krita and Affinity Photo), and they use OCIO.
The others are digital compositors (Nuke, Fusion or even Natron) that also use OCIO but aren’t geared specifically towards photo work.
The rest are raw development programs. You can find ICC there, but they are built to produce display-referred imagery and aren’t really equipped for complex manipulation or compositing, beyond grading and preparing the raw photo for output.
Using a more modern approach in those applications (which is what I think we’re discussing here) would allow them both to produce beautiful display-referred images and to produce solid scene-referred material from cameras.

@gez I understand where you are coming from and am not criticizing you on that. In fact, I appreciate what you are doing here. What I am trying to say is that patience and charity are required for the discussion to flourish, especially where there is a gulf in exposure and skill sets. After all, we all have to start from somewhere :wink:.

Somehow, I don’t see the “you”s being exchanged back and forth as very constructive. The ICC and OCIO approaches are vastly different. I just need more concrete examples rather than just terminology, as I remarked right after @anon11264400’s first post. [Edit: He has since messaged me with some good suggestions.] In sum, I hope that we can make this more enjoyable to read for the passers-by who might happen on this thread.


A strictly logical consideration, not meant to sound hostile: descriptions of how OCIO apps do work with floating point camera-captured images do not constitute prescriptions for how ICC apps should work with floating point camera-captured images. This is the kind of logical consideration that parents remind their kids about all the time.

If people on this forum gain experience using OCIO, such experience might encourage adding OCIO to existing ICC-only apps, which I think would be wonderful. It also might provide useful pointers for improving how ICC profile apps handle floating point camera-captured scene-referred images. It might even convince some people to abandon using ICC profile applications when processing floating point camera-captured images, which would make @gez and @anon11264400 very happy :slight_smile: . So there is much to gain from learning more about OCIO.

Returning to the refrain “ICC profile apps also make this possible”: a major portion of the Krita article on scene-linear painting is devoted to the color-mixing benefits of using linear RGB. Working with linearly encoded RGB in an ICC app has exactly the same benefit as doing so in an OCIO app, because the math is the same.
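
For instance, a tiny numpy sketch of averaging, the operation at the heart of blurs, scaling and soft brushes; the math is identical no matter which system manages the colour:

```python
import numpy as np

def srgb_encode(x):
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

row = np.array([0.0, 1.0, 0.0, 1.0])  # alternating black/white, linear

# Average adjacent pixels (a 1-D box blur) on the linear values...
blur_linear = (row[:-1] + row[1:]) / 2

# ...and the same average on the sRGB-encoded code values.
blur_encoded = (srgb_encode(row)[:-1] + srgb_encode(row)[1:]) / 2

print(srgb_encode(blur_linear))  # ~0.735: the correct half-light gray
print(blur_encoded)              # 0.5: too dark, the classic gamma error
```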

Also, the same limitations listed in the Krita article on “which operations work” when using OCIO to operate on linearly encoded RGB, also apply when using ICC apps to operate on linearly encoded RGB. Nothing different here.

One big difference in OCIO apps vs ICC apps clearly is the tone-mapped view transform. The Krita article on using OCIO gives a nice explanation of how the view transform works, and it seems like it might be useful in some cases. However, I’m not at all convinced of the value of always putting a view transform in the way of letting the user see the un-tone-mapped data.

PhotoFlow’s filmic tone-mapping code can be used to simulate such a view transform, even if not (yet?) as conveniently as in an OCIO app. So I’ve been experimenting with using the PhotoFlow filmic tone mapping as a “viewer”, keeping it at the top of the layer stack. All that happened with my first sample image is that the “viewer” distorted the displayed tonality in ways that would make getting the image I wanted not just difficult but impossible.

One is always putting a view on the data. In the case of attempting to view a scene referred reference under a display referred output model, it is worsened by the following facts:

  • No single pixel “lives” at the code value it will in the final image.
  • A potentially large number of values aren’t displaying correct colour ratios due to display referred limits.
  • The view isn’t applying other potentially mandatory transforms to make the image WYSIWYG within the context, such as desaturation, gamut mapping, etc. This isn’t solved via a “soft proofing” styled approach due to the above points.

A strictly display referred legacy post-camera transform pipeline does solve this, but it loops us all the way back to the “Why linear?” question I posed above in the logical progression to scene referred manipulations. It also, of course, brings us into the domain of non-data values beyond the encoded limits, etc.

None of this, nor the design of a view based model of management such as OCIO, suggests a fixed view transform. On the contrary, it supports multiple different views for different needs as required.
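
For example, the displays and their views can be enumerated straight from a config with the OCIO Python bindings (the config path here is hypothetical):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("/path/to/config.ocio")  # hypothetical

for display in config.getDisplays():
    for view in config.getViews(display):
        print(display, "->", view)  # e.g. "sRGB -> Filmic", "sRGB -> Raw"
```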

This is pure rubbish. Even the cited link is of questionable merit, given that it is undated and comes from a vendor offering a colour management solution that may be impacted by ACES adoption.

ACES-related discussions are an aside here, but it’s worth addressing absolutely erroneous claims. ACES has some extremely knowledgeable imaging technicians behind it, so their experience is worth listening to. If one reads the ACES explanation from one of its engineers, one will see that Look Modification Transforms are applied in the scene referred domain, which calls into question the entire body of this poster’s claim.


Uncharted Territory vs. well-cartographed maps. :slight_smile:

That’s the point.

Remember that it’s not only about the existence of a view transform, but about the goal of keeping the pixels in the reference: linear, scene-referred values, avoiding unnecessary colorspace/model conversions that could instead be rolled through the view without altering the light ratios from the scene.

Since the beginning, to me this argument has sounded more like “destructive vs. non-destructive” editing than “ICC vs. OCIO”.

Any non-destructive editor by definition keeps the original pixels. The point at which one goes from scene-referred to display-referred values depends on the order of the tools inserted in the pipeline, but this is never a “point of no return”…


Scene-referred implies always keeping radiometrically linear values in the reference; therefore it goes beyond merely non-destructive, as non-destructive may still imply a nonlinear reference model.