Unbounded Floating Point Pipelines

The real question is: is ACES, or scene-referred without an IDT, supported/possible from raw for all cameras?

It looks like it’s only possible to use an ACES workflow if the camera’s manufacturer provides support for ACES.
https://www.usa.canon.com/internet/portal/us/home/explore/product-showcases/cameras-and-lenses/aces-compatibility-software

https://groups.google.com/forum/m/#!topic/academyaces/CijK3EGJPpE
“I’m interested in converting Nikon RAW NEF files into ACES”

"
Understandably, there’s a desire to use footage from cameras that aren’t
classified as “motion picture” cameras, or we haven’t had detailed
conversations with the manufacturers about. In those cases we’d encourage
you to contact those manufacturers and tell them about your desire to use
ACES with their products. We’re very happy to work with any manufacturer to
integrate ACES.

In some cases we may have the ability to create reasonable IDTs without the
direct support of the manufacturer"

Why does a raw file need an IDT?

I think that this can be taken as the first stone of our new building. So let’s take for granted that we agree on the advantages of manipulating pixels in linear, scene-referred representation.

However, we are still hitting a wall when we come to the point of sending the linear pixel values to the output device. Here again, I think we agree on the general concept that a view is needed to map the scene-referred values into the dynamic range of the output device.

That said, let’s see if we can find a concrete starting point. Would you agree with the following statement?
In a non-destructive workflow, an ICC conversion from the linear scene-referred working colorspace to the output display profile is a very specific, and possibly over-simplified, kind of view transform.
Here, non-destructive workflow means that the user always manipulates the linear pixel values, which are then converted to the display profile on-the-fly.
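
To make “on-the-fly” concrete, here is a minimal numpy sketch; the gamma-2.2 preview below is a deliberately over-simplified stand-in for a real display profile, not anyone’s actual implementation:

```python
import numpy as np

def expose(linear_rgb, stops):
    # Exposure adjustment in scene-referred linear space: a simple multiply.
    return linear_rgb * (2.0 ** stops)

def to_display(linear_rgb):
    # Deliberately over-simplified "view": clip to [0, 1] and apply a 2.2 gamma.
    # This stands in for the ICC linear -> display conversion; it is preview only.
    return np.clip(linear_rgb, 0.0, 1.0) ** (1.0 / 2.2)

master = np.array([[0.18, 0.18, 0.18],
                   [2.50, 1.20, 0.40]])   # scene-referred values, free to exceed 1.0
master = expose(master, +1.0)             # edits touch the linear master only
preview = to_display(master)              # display conversion happens on-the-fly
```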

Yes.

Even data forwarded to output without an ICC is a form of a view transform with similar problematic limitations, as both end up being a display referred dump[1].

[1] There has been much talk about the potential for a parametric V4 ICC leprechaun for scene referred data. No such official sighting has been corroborated by authorities, and said leprechaun is still considered at large.

An example of this is the default view in Blender (and the reason why Filmic Blender was created).
Thanks to OCIO it was possible to replace that poor default view with something better, and it’s a mechanism that makes it easy to implement other views and other devices as needed.
Back to Blender’s crappy default: it just dumps the [0,1] portion of the scene data to the display and applies the non-linear gamma function.
The result is what I showed earlier with the swatches example, and it can be confirmed with GIMP, Photoshop, Darktable and all the programs that dump that data to the screen.
Now, compare that to what a camera usually does (even a point-and-shoot camera that produces JPEGs): there is ALWAYS a transform that compresses the camera’s DR into the display range and re-arranges the tone distribution in a pleasing/natural way.
Even Darktable starts with an automatic base curve similar to what camera models apply (unless you override it and turn the base curve off).
Nothing wrong with that, because one expects to get a picture as close as possible to the real scene that was shot. Some artists will prefer to turn off the base curve and go for a personalized tonemap, but the issue remains: you don’t want the data just dumped to the display. You want some degree of wysiwyg.
OCIO view transforms allow that, and in a way that doesn’t require you to actually modify the scene-referred data. That information remains available, so image operations produce a physically plausible result (which is otherwise adversely affected when the display transform is baked into the pixels).
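
To illustrate the difference, here is a small numpy sketch; the Reinhard-style roll-off is only a stand-in for a real camera/filmic view, not the actual Filmic Blender transform:

```python
import numpy as np

def srgb_oetf(x):
    # Standard sRGB transfer function, applied after clipping to display range.
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def naive_view(scene_linear):
    # What a "dump to display" view does: everything above 1.0 is simply clipped,
    # so all highlight detail beyond display-white is destroyed in the preview.
    return srgb_oetf(scene_linear)

def compressing_view(scene_linear):
    # Stand-in for a camera-style view: a simple Reinhard-like curve that rolls
    # highlights off smoothly, so scene values above 1.0 still show gradation.
    return srgb_oetf(scene_linear / (1.0 + scene_linear))

highlights = np.array([0.8, 1.0, 2.0, 4.0, 8.0])   # scene-referred intensities
print(naive_view(highlights))        # 1.0, 2.0, 4.0 and 8.0 all collapse to 1.0
print(compressing_view(highlights))  # distinct values: highlight detail survives
```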

Garbage is information that will produce undesired results when processed, or that won’t be displayed properly. Conversely, a view that doesn’t display valid information properly can be treated as garbage too :smile:
I guess the concept has different meanings depending on what you expect from your processing pipeline, but what I do know is that scene-referred images are a proper digital representation of your scene. In practical terms that means having a “virtual scene” you can shoot with one or many “virtual cameras” (the view/display), and in every case the output will be correct.
Imagine a scene-referred image as a virtual scene you can take with you. Want an sRGB jpeg from it? Want a rec.2020 tiff? Done. Want an HDR image for your 1000-nit TV? Also possible. All from the same master.
Sure, you may argue that the same is attainable from a RAW image, but think about convenience: can you edit a RAW straight away, or do you have to go through the development process again and again for different outputs? The first advantage of a scene-referred workflow is that convenience.
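
As a hedged sketch of the “one master, many outputs” idea, using the OpenColorIO v1 Python bindings; the config path and the colourspace names (“scene_linear”, “sRGB”, “rec2020”) are assumptions that depend entirely on the OCIO config in use:

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")   # hypothetical config file
master = [0.18, 0.18, 0.18, 2.5, 1.2, 0.4]           # flat R,G,B scene-linear pixels

# Same scene-referred master, different outputs: only the processor changes.
for output in ("sRGB", "rec2020"):
    processor = config.getProcessor("scene_linear", output)
    out = processor.applyRGB(list(master))           # work on a copy; master stays untouched
    print(output, out)
```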

Next: different cameras (forget about cinema for a moment, let’s talk about photography): You and your buddy are going to take pictures at a party. You like Canon, he prefers Nikon.
Your model has a DR of 12.5 stops, his only 11 and some. Different primaries, different looks. But you want the party photos to be consistent.
With the usual display-referred workflow using raw development software this is usually a headache, especially because most software aims to mimic the vendor’s secret sauce as a starting point, so you have to manually compare shots side by side trying to produce a unified look, which is tedious, error prone, and never turns out really well unless you put an insane amount of work into it.
Scene-referred in a common reference space, through the same view transform, greatly mitigates that. Not to mention how much easier it is to obtain a unified grade from there (basically you use the same look on the different shots and everything stays consistent, making fine-tuning the only remaining step).
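
A rough numpy sketch of why the common reference space helps; the camera matrices below are made-up placeholders, not real Canon or Nikon data:

```python
import numpy as np

# XYZ (D65) -> linear Rec.709 / sRGB primaries.
XYZ_TO_REC709 = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

# Placeholder camera -> XYZ matrices: made-up numbers, NOT real vendor data.
CAM_A_TO_XYZ = np.array([[0.70, 0.20, 0.10],
                         [0.30, 0.60, 0.10],
                         [0.00, 0.10, 0.90]])
CAM_B_TO_XYZ = np.array([[0.60, 0.30, 0.10],
                         [0.20, 0.70, 0.10],
                         [0.00, 0.20, 0.80]])

def to_reference(camera_linear_rgb, cam_to_xyz):
    # Camera-native linear RGB -> common scene-referred reference (linear Rec.709).
    return camera_linear_rgb @ (XYZ_TO_REC709 @ cam_to_xyz).T

shot_a = to_reference(np.array([[0.18, 0.18, 0.18]]), CAM_A_TO_XYZ)
shot_b = to_reference(np.array([[0.18, 0.18, 0.18]]), CAM_B_TO_XYZ)
# Both shots now live in the same reference space, so the same view transform
# and the same grade apply to both; only fine-tuning remains.
```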

I’m not saying that you CAN’T do that with a display-referred workflow, but it doesn’t take much to realize how a proper scene-referred model with a detached view is beneficial.

p.s.: I would like to study the examples posted above by @Carmelo_DrRaw and @ggbutcher but I think it would be better to do it in separate threads. This one got too long and broad.
Could you please move them to specific threads so we discuss them there in depth?


That’s another point that needs some further clarifications. The FLOSS RAW processors that we are usually discussing in this forum (at least RawTherapee, Darktable, PhotoFlow, and I guess also RawProc) all offer the possibility to generate neutral renderings of RAW files. This can be based either on custom-made camera profiles obtained from color-checker shots (or something equivalent), or on the standard matrix camera profiles provided by Adobe. In both cases, although with different accuracies, the processed RAW file provides colorimetrically correct colors (hope this is the right term).

The neutral rendering is the default one in PhF, and it can be selected with a couple of mouse clicks in RT and DT. In this case, the three editors give practically the same rendering for the same RAW file, at least at the level of display output. PhF is more geared toward linear RGB editing (by default, the interpolated RAW image is converted to linear Rec.2020), while RT and DT go through an internal RGB -> Lab conversion.

I have copied the relevant part of the post in the PhotoFlow specific thread: Scene-referred editing with PhotoFlow - #28 by Carmelo_DrRaw

Yes, rawproc essentially offers the equivalent of dcraw’s -o 0 -g 1 1, which delivers an RGB image array with neither the conversion out of the camera colorspace nor the linear-to-perceptual transform applied. It also allows one to assign a camera-calibrated profile to this data.

Oh, of note: I inverted the dcraw semantics for -W, from “don’t autobright, default do” to “autobright, default don’t”. So, rawproc doesn’t autobright unless you specifically tell it to do so.

All of the above contributes to delivering linear data to the rawproc workspace. So, my understanding is that this data would then require application of an exposure tool with adjustment to put a known middle gray patch in the image at 0.18. Then, the data could be called scene-referred.
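
Something like this minimal numpy sketch of that exposure step (the gray-card region is hypothetical; in practice you would sample the actual patch in your shot):

```python
import numpy as np

linear = np.random.default_rng(0).uniform(0.0, 0.05, (4, 4, 3))  # stand-in for real raw data
gray_patch = linear[1:3, 1:3, :]        # hypothetical region covering the middle-gray card

gain = 0.18 / gray_patch.mean()         # exposure scaling needed
scene_referred = linear * gain          # middle gray now sits at ~0.18
```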

Yes, I’m aware of that. What I wanted to highlight with that comment is what users get: It’s either a vendor-like base curve as a starting point, or it’s a non-wysiwyg view.
In the case of the former, the data is already altered; in the case of the latter, you get visual feedback that is not what you expect, and unless you’re really aware of how the program processes the data, it’s not immediately obvious what the next right step is.
With a view transform you solve both problems automatically and simultaneously.

If it’s already confusing to adjust a single photo when your view isn’t wysiwyg, think about matching shots from different sources.

In a proper scene-referred workflow you only need your linear data scaled so middle gray is pegged at EV0, and what you get from the view is what you expect (something close to what you saw through your viewfinder when you captured the scene).

That being said, it is clear that non-destructive raw development tools are closer to a proper scene-referred workflow than other tools (like GIMP or Photoshop) that were designed to deal with display-referred imagery.
The basics are covered: processing linear RGB at 32-bit float precision without clipping intensity is of course the way to go, but you still need to deal with the from-reference and to-reference transforms, you still need a proper view transform, and you have to make sure the operations are geared towards scene-referred linear RGB (without flipping the actual pixels to non-linear or switching to different color models).
The from-reference and to-reference transforms can certainly be done through OCIO. It has been suggested here that ICC transforms could be used for that too, although there are few to no real-world examples of floating-point transforms using ICCs (and this is only a part of the discussion).
Then we have the view transform. Also covered by OCIO.
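
For example, a hedged sketch of applying an OCIO display/view transform for preview only, again with the OCIO v1 Python bindings and a config-dependent “scene_linear” name assumed:

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")        # hypothetical config file

view = OCIO.DisplayTransform()
view.setInputColorSpaceName("scene_linear")               # name depends on the config
display = config.getDefaultDisplay()                      # e.g. an sRGB monitor
view.setDisplay(display)
view.setView(config.getDefaultView(display))              # e.g. a "Filmic" view

processor = config.getProcessor(view)
preview = processor.applyRGB([0.18, 0.18, 0.18])          # what lands on screen
# The scene-referred buffer itself is never modified; only the preview is transformed.
```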

“Neutral” would be in the colour space primaries of your camera.

When experimenting here, it is going to be far easier to start with REC.709 based primaries as a majority of views are designed for REC.709 based primaries. For example, the Filmic set was intended as an introduction to the scene referred domain, and as such, many folks aren’t aware of primaries and the target of REC.709 was chosen for ease.

Exactly.

That is, if you had a spot meter and plopped it into the scene where your +/- EV0 point would be according to your exposure, the grey card would ideally meter 0.18 when loaded from a linearized TIFF via dcraw -T -4.

Of course, actual white balance will matter here if one is seeking R=G=B channels, and that would require shooting with a D65 white point, or using a manual Bradford adaptation matrix for the input, which could be integrated into the OCIO transform for the particular camera and reference space needs.
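
For reference, a small numpy sketch of building such a Bradford adaptation matrix (here from D50 to D65, using the standard 2-degree observer white points):

```python
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def bradford_adaptation(src_white_xyz, dst_white_xyz):
    # 3x3 matrix that adapts XYZ values from the source white to the destination white.
    lms_src = BRADFORD @ src_white_xyz
    lms_dst = BRADFORD @ dst_white_xyz
    scale = np.diag(lms_dst / lms_src)
    return np.linalg.inv(BRADFORD) @ scale @ BRADFORD

D50 = np.array([0.96422, 1.0, 0.82521])
D65 = np.array([0.95047, 1.0, 1.08883])
adapt_d50_to_d65 = bradford_adaptation(D50, D65)
# This matrix could be folded into the camera -> reference matrix (or the OCIO
# transform) so that R=G=B lands on neutral for the chosen reference white.
```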


So, I’ve noticed quite a few deleted posts in this topic. While I understand that a user should have control over their own posts and information, it can be disruptive to understanding the flow of the discussion and it breaks the context that much of the conversation may depend on. This makes it particularly hard for others, or future others, to follow the topic (which is especially hard in a topic that is already above my head, and quite long).

This also breaks the normal transparency that is a nice way to see how a conversation may develop or flow.

Pretty please consider this carefully before deleting things - maybe edit them or add more context to them if needed instead? I feel like there’s really good information here, even if there are some misunderstandings, and it would really suck to lose that information for others (even future Pat when he finally is able to wrap his head around color stuff).