Unbounded Floating Point Pipelines

Treating what Troy and I have been saying as an “attack” on libre software shows how far you’re missing the point.
You feel that you need to “defend” yourself and your choices instead of realizing that you’ve been given information about an alternative way of doing things you hadn’t seen (because you’re so focused on a legacy workflow designed around the needs of the print industry). You could take advantage of something that is free, available, and working, and use it to make better libre tools. But that’s your call.

It’s not that I’m defending free/libre software. I just think your attacks on ICC profile color management are pointless and misinformed. No software does everything. I’m 100% convinced that OCIO-based software is incredibly useful. That doesn’t mean all editing software should use OCIO color management. It also doesn’t mean ICC profile color management is not suitable for high bit depth floating point editing.

I only mention free/libre software because that’s the software that I use. But let’s put free/libre software to one side.

What do you think about After Effects and Photoshop? Both of these use ICC profiles.

The point I am still not getting, and I guess I am not alone here, is: at which stage of the processing does OCIO become a useful/better alternative to ICC?

Applying LUTs is possible without OCIO, so this cannot be an argument…

What practical benefits does it bring to the editing of scene-referred data?

It keeps the view isolated from the underlying constant of scene-referred data. A huge chunk of “creative” looks are actually repeated elements of a proper view transform. This permits one to focus on the manipulation / compositing etc. without having creative / technical transforms contaminate the reference.

A transfer function is merely a portion of a view transform, but it is certainly not sufficient for more complex camera transforms, nor useful if you are going to various outputs. Individual views allow those to be developed and reused.
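As a minimal numpy sketch of the idea, assuming two simple stand-in view curves (a naive sRGB encode and a made-up log rolloff, not the actual OCIO/ACES transforms): the scene-referred reference is never modified, and each view is a separate transform applied only on the way to a particular display.

```python
import numpy as np

scene = np.array([0.01, 0.18, 1.0, 4.0, 16.0])  # scene-referred, unbounded

def view_srgb(rgb):
    """Naive sRGB-like view: clip to the display range, then apply the OETF."""
    c = np.clip(rgb, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def view_filmic(rgb, white=16.0):
    """Made-up filmic-style view: log rolloff that maps `white` to 1.0."""
    return np.log2(1.0 + rgb) / np.log2(1.0 + white)

print(view_srgb(scene))    # one display output
print(view_filmic(scene))  # another view, developed once and reused
print(scene)               # the reference data is untouched either way
```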

Try Nuke.

They are absolute crap for scene-referred data. The artist workflow is confusing, inefficient and slow. They are very convenient when it comes to display-referred imagery, but fall short with scene-referred material.
I said earlier that Krita (with its shortcomings and some anchors inherited from its display-referred origins) is ahead of Photoshop in that regard.
But still, any image manipulation software that has to rely on a scene-to-display transform prior to manipulation (Photoshop after Camera Raw or Lightroom, but also Krita) is limited compared to a pure scene-referred workflow.
So you can see my problem is not with Libre Software at all.

Yeah, at the super low price of $4500, or you can rent it for only $1200/quarter… For an application whose primary focus is VFX, not photography.

Here’s the thing, you seem like you’re smart and all, but if you want to convince people of something, you need to tailor your argument towards that group of people. Nuke is probably a non-starter here, and since you seem to have trashed all our other software (and terminology), it doesn’t seem like you’re going to convince too many people. Couple that with the smug and condescending tone, and that’s probably why this thread is coming up on 200 posts but doesn’t seem to have gone anywhere.

We’re all here to learn things, but if you’re not willing to illuminate the way with thoughtful comments and insight, but prefer to bludgeon, then what is the point of posting at all?


Try Fusion then, which is free.
(although you can get a non-commercial license of Nuke too)

Or Affinity Photo, which costs $50

Blender and Krita already have OCIO and you can experience the processing detached from the view transform (although in the case of Krita the experience is still a bit limited).

Are you completely sure of that? A scene like yours exposed properly can go up 4 stops without clipping to yellow with a proper view transform. I can easily show you.

That’s where the problem lies. You suggest that putting the tonemap operation earlier or later in your stack is what makes the difference, but the thing is that once you insert that operation your actual pixels are adjusted to the display range, so whatever you do next is done on information that has already been altered by the tonemapper.
When you have a view detached from the processing pipe, your view transform doesn’t alter the pixels, yet you get the correct visual feedback from your operations.
With the workflow you describe it’s either working blind or clipping your output.
Could you please explain how you edit your imagery for a scene-referred output while simultaneously seeing what you are doing during editing? Does it involve temporarily turning operations on and off in your processing stack?
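A small numpy illustration of that difference, assuming an arbitrary Reinhard-style curve as the tonemap (not any particular application’s operator):

```python
import numpy as np

scene = np.array([0.18, 2.0, 8.0])            # scene-referred pixels
tonemap = lambda x: x / (1.0 + x)             # stand-in tonemap curve
exposure = lambda x, stops: x * 2.0 ** stops  # a later edit

# Workflow A: the tonemap op sits in the stack, so the later edit
# operates on data already squeezed into the display range.
baked = exposure(tonemap(scene), +2)          # -> [0.61, 2.67, 3.56], clips on output

# Workflow B: detached view -- the edit happens on scene data and the
# view is only applied when sending pixels to the screen.
edited = exposure(scene, +2)
preview = tonemap(edited)                     # visual feedback, within display range
# `edited` is still scene-referred and could be exported (e.g. as EXR) directly.
```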

Wrong.

Again, see first post.

I am against made up terms. Some folks asked questions, I answered.

Use what you want. I really don’t care.

Because with regard to useless made-up terminology, some of us end up trying to explain things to folks that have an interest. That task is complicated when burdened with awful concepts and terminology. Sadly that crosses over into the complete dearth of Libre / Open Source software that can negotiate the issues.

If you want to call it smug and condescending to go over real issues that other approaches have solved, so be it.

Because Nuke isn’t used to manipulate photographic sources.

As an aside, I did say try; there is a free beer trial edition that allows someone to actually muck with the conceptual model.

Again, you seem to have skipped my main point, which is that if you want your argument to be successful, then you need to tailor it to your audience (which is us).

With that in mind:

  • Nuke is pretty much a non-starter, and “free beer” is the (much) less important of the software freedoms.
  • Saying you are against “made-up terms” (and the other less kind words you used) without actually defining what the terms may or may not mean isn’t helpful.
  • You do care what terms I use; otherwise you wouldn’t be here calling them nonsense and other things.

Nuke was used as an example, based on the fact that it does deal with photographic / physically plausible models. This is the reason I said try: someone who is capable of, in this case, writing an entire piece of software such as Photoflow would probably find it very useful.

Again, I don’t really care what software you want to use. I don’t!

I do care about terminology. I am not going to rehash easily sourced terms and re-explain how display referred gamuts work because others far more wise have done so, and the information is readily available.

Ding! This one sentence put it in perspective for me, and I was able to approximate a scene-referred edit in rawproc, my hack raw converter, by doing these things:

  1. Nullified all gamma transforms in my ICC profiles, making them 1.0.
  2. Limited my colorspace operations to 1) assignment of the camera profile to the raw/linear image at file open, and 2) conversion to a linear-gamma sRGB profile for display and output. I may make a linear-gamma version of my display profile when I get home.
  3. Simulated a view transform at the end of my processing chain with available tools. I don’t have a LUT tool yet, so I used my gamma and blackwhitepoint tools.

With that, I was able to create a decent output image by just working on the floating point linear data. Now, my tools all currently have a white-point basis, but I can envision modes where they work on the scene-defined dynamic range. Display in rawproc is a bit unique; you can select any tool to display the image at that point of processing, but in this experiment that only works correctly if the last tool in the chain, the last tool of the “view transform”, is selected.
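As a rough numpy sketch of what those three steps amount to (tool names and numbers are illustrative, not rawproc’s actual tools or defaults):

```python
import numpy as np

img = np.random.rand(4, 4, 3) * 8.0   # stand-in for linear camera data (steps 1-2)

# ... edits on the scene-referred, linear floating point data ...
img = img * 1.5                        # e.g. a simple exposure adjustment

def view(x, black=0.0, white=8.0, gamma=2.2):
    """Improvised view transform at the end of the chain (step 3)."""
    scaled = (x - black) / (white - black)   # black/white point
    scaled = np.clip(scaled, 0.0, 1.0)       # clamp only for display
    return scaled ** (1.0 / gamma)           # display gamma

display = view(img)   # what goes to the screen
# `img` itself stays linear and scene-referred throughout the chain.
```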

So, in any of these workflows, I believe you need at least one color gamut transform if you have a camera input, as that color gamut will be larger than that of any display device and an intelligent conversion (rendering intent) has to be applied. In a scene-referred workflow, that conversion occurs at the end of the pipeline.
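As a sketch of that single gamut conversion at the end of the chain, using a hypothetical camera-to-linear-sRGB matrix (placeholder values; a real one would come from the camera input profile):

```python
import numpy as np

# Placeholder camera-to-linear-sRGB matrix (rows sum to 1.0 so white maps to white)
camera_to_srgb = np.array([[ 1.60, -0.40, -0.20],
                           [-0.20,  1.40, -0.20],
                           [ 0.05, -0.35,  1.30]])

scene_rgb = np.array([0.18, 0.50, 2.00])   # linear camera RGB, scene-referred
srgb_linear = camera_to_srgb @ scene_rgb   # the one gamut transform, at the end

# Out-of-gamut results (negative or above the display maximum) still need a
# rendering-intent / clipping decision before display encoding:
srgb_display = np.clip(srgb_linear, 0.0, 1.0)
```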

Now, this experiment doesn’t have the benefit of all the scene-referred tools and such, but it proved the concept for me.

It appears to me that the key issue with ICC tools and profiles is that gamma and gamut are coupled. But, the profiles don’t have to be used that way. And, right now, I can’t find a way to accommodate my camera calibration in any other format, so ICC is it for me for the time being.

This is a contradictio in terminis; any term is made up. My point being: who is right here? Acknowledging that the other party might be correct is essential for a fruitful discussion. The impression I get from reading through this heated topic (and I am by no means an expert, but very eager to learn) is that there are a lot of matter-of-fact statements that things should be A, and that B makes no sense. And anyone who doesn’t see this exactly is not yet as smart as they should be… That is rather condescending and, above all, not really constructive.

Please be aware that there are people who want to learn, and this requires education. Just telling me that what I know is wrong, or that I use an old-fashioned method or anything, is not making me want to know more.

Edit: as an example, when you say

I am not going to rehash (…) because others far more wise have done so, and the information is readily available.

It would be more helpful if you would actually point me in the right direction to a web page, instead of deflecting.


@anon11264400 Instead of recommending Nuke, you might want to recommend Natron, which is an open-source Nuke-like compositing editor.


Anyway, I think there is a bit of telephone going on here, in that over the years the display-referred way of working and the scene-referred way of working have developed their own terminology (both based on the original CIE terminology), which then drifted in common use (in somewhat different directions), so that when people from different sides try to communicate you end up with a comedy of errors.
To help alleviate this, people should point to their sources and other helpful materials; in my case that would be the ACES docs, the Scene Linear Painting subsection of the Krita docs, the OCIO website, and probably some stuff I am forgetting right now since I just came home from work.


I also have difficulties understanding your point here… the tone-mapping layer that @Elle (and I) added on top of the processing pipeline has the same role (although probably in a simplified way) as what you call the “view”.

The workflow being non-destructive, one can add as many adjustments as needed below the tone-mapping layer. Those adjustments will manipulate the scene-referred values, because the tone-mapping does not alter the original data; it only applies a transform on-the-fly before sending the pixels to the screen.

Finally, whether or not to apply any adjustment to the already tone-mapped (and no longer scene-referred) data is an artistic choice of the user. My understanding is that OCIO also applies a number of manipulations to the pixel values in its view, and not just a simple non-linear transfer function.
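As a rough numpy sketch of that arrangement, with purely illustrative layers and curves: the tone-mapping layer sits on top and is only evaluated on the way to the screen, while the adjustment layers below it keep operating on the scene-referred values.

```python
import numpy as np

scene = np.array([0.10, 0.18, 3.0, 9.0])   # scene-referred pixel values

adjustments = [
    lambda x: x * 1.3,                     # exposure layer
    lambda x: (x / 0.18) ** 1.05 * 0.18,   # mild contrast tweak around middle grey
]
tonemap = lambda x: x / (1.0 + x)          # top tone-mapping "view" layer

edited = scene.copy()
for layer in adjustments:                  # layers below the tone-mapping layer
    edited = layer(edited)

preview = tonemap(edited)                  # applied on-the-fly for the screen
# `edited` stays scene-referred; only `preview` is display-referred.
```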

Again, could you explain what the conceptual difference is between the two approaches, leaving aside the obvious fact that in OCIO the view is separated from the processing pipeline and is in some sense “standardised”?

Thanks!

@Thanatomanic and @dutch_wolf hit the nail on the head. All terminology in some sense is “made up”. The particular terms that a given group might “make up” depends in part on that group’s historical starting points, and there can be a great deal of confusion when two groups with different backgrounds and assumptions try to communicate.

On the one hand, OCIO color management from the beginning was designed to work with scene-referred data, which by its very nature can and usually does contain RGB channel values that are much above 1.0 and sometimes below 0.0.

So in this view there is nothing to “unbound” because nothing was ever “bounded”. But very often there is a need to clip/clamp. And given the nature of the data that’s being worked with, necessarily the encoding uses floating point rather than integer precision.
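As a tiny numpy illustration of that encoding point (the numbers are arbitrary examples): floating point carries scene-referred values above 1.0 and below 0.0, while an integer encoding of the classic 0..1 device range has to clamp them.

```python
import numpy as np

scene = np.array([-0.05, 0.18, 1.0, 6.0], dtype=np.float32)

# 16-bit unsigned integer encoding of the 0..1 range: out-of-range values are clamped
encoded = (np.clip(scene, 0.0, 1.0) * 65535).astype(np.uint16)
decoded = encoded / 65535.0
print(decoded)                    # [0.    0.18  1.    1.  ] -- highlights and negatives gone

# Half-float (the usual EXR channel encoding) keeps the full scene-referred range
print(scene.astype(np.float16))   # [-0.05  0.18  1.    6.  ]
```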

On the other hand, ICC profile color management started out (around 1998) dealing exclusively with input and output devices, where by definition the channel values are confined to the range 0.0 to 1.0 floating point equivalent, though the original specs didn’t even allow floating point values, but instead used integer values. The original specs also didn’t allow negative XYZ values, which meant that one couldn’t actually make an accurate XYZ camera input profile without creating a profile that violated the ICC specs.

When the ICC realized that these restrictions were making it impossible to use ICC profile color management in the type of workflows used to make movies, the specs were changed. Here’s a link to one of the early (2006) documents proposing appropriate changes:

Quoting:

However, there is increasing interest in the use of floating-point color encodings, for exchange and as working spaces. Practically, such encodings can be thought of as unbounded [emphasis added], and it is therefore impossible to support them using current [emphasis added - the specs have changed] ICC profiles.

In the context of ICC profile color management, the changes in V4 specs indeed were an “unbounding” - the ICC’s terminology, not made up by myself or by Marti Maria - of previously bounded channel values, along with a change from only allowing integer encoding to allowing floating point encoding.

To summarize:

“Unbounded” is an ICC word that indicates that the ICC specs have been changed to accommodate floating point scene-referred channel values. Back in the late 1990s and in the V2 specs, the ICC wasn’t concerned with scene-referred channel values, only with device channel values. Hence the V2 specs deal with integer encoding and “clamped by definition” channel data.

“Unbounded” doesn’t have any meaning in the context of the development of OCIO color management because there was no prior assumption that scene-referred channel data was ever bounded in the first place.

Given that OCIO and current ICC specs both allow encoding floating point scene-referred channel data, now both types of color management face the same issues of deciding when, where, how, and why to “clip/clamp” RGB channel data.

Speaking of unfamiliar terminology and “clipping/clamping” channel values, @anon11264400 and @gez , in OCIO terminology, do these two terms mean the same thing?


The most obvious difference I can see is the inefficiency of the artist workflow.
In your model you start blind (not WYSIWYG) unless you place a tonemap op. If you place a tonemap op, you have to make sure that every op you insert afterwards is moved below the tonemap op. And the output of those operations depends on their position in the stack (if you placed them above the tonemap and adjusted the parameters, you’ll have to modify those parameters if you move them below).
If you want to save the result of your editing as say scene-referred exr you have to remove or turn off the tonemap op, et cetera.
That’s quite a difference if you ask me.

Elle, that’s perfectly clear. However, it’s extremely difficult to find a single application that uses ICC V4 and deals with scene-referred data properly, while there are many (including some libre ones) that chose OCIO for that purpose and already work.
So the question remains: why hack a spec that exists on paper only and has near zero penetration in the industry and artist communities (having to deal with the many loose ends), when you have an open source library that is being widely used with success?

The main goal of OCIO is that it gives consistent user experience across all supporting applications.

In my opinion it is not necessary, but the main issue here is that every tonemapping algo is non-standard and just a creative pixel manipulation, so it’s only good for a fast preview, nothing more.

Well, instead of saying “hacking a spec” (which has somewhat negative connotations), let’s say instead that various developers of various free/libre image editing softwares are working to take advantage of floating point processing in ICC profile applications. This is new territory that we are exploring.

I don’t think there is a high level of familiarity on this forum with OCIO. So to enable us to make our own decisions about whether “floating point ICC” should be scrapped entirely, or whether it might be nice to add OCIO to a given software (as Krita has), or whether perhaps we just want to change/improve the way we use floating point channel values when editing our ICC profile color-managed images, we need concrete information and also example workflows.

So having someone help to educate us about using OCIO would be very nice. The first step might be to suggest some software to get started with. But it seems Blender, Krita, and Natron aren’t sufficiently good to use? And Nuke and Fusion are not “free/libre” but only “free beer” and also require registration for using their “free beer” versions.

Yes, the movie industry makes use of OCIO applications. But the movie industry also uses ICC profile applications.

I don’t have statistics. But the claim that ICC profile color managed editing applications have “near zero penetration in the . . . artist communities” seems to me to be unlikely. How many digital artists use OCIO software for their artwork, and what type of artwork are they producing? Does “artist communities” include photographers? Are most artists who use Krita using OCIO instead of ICC, given that Krita supports both?

I’d very much like to have some links to venues that showcase work done by digital artists using OCIO (and especially photographers if there are such), along with an idea of what specific softwares these artists are using. After all, a major goal of everyone on this forum is making nicer pictures, more efficiently and with less work if possible.