Scene-referred editing with OCIO-enabled software

I’m opening this thread to show examples of some concepts discussed in the thread “Unbounded Floating Point Pipelines”

The idea of this thread is to use freely available tools to illustrate those concepts and scene-referred workflows, so in this first post I’ll show a basic setup of Blender for quick tests. Other tools and examples will come later.
Questions are welcome.

Blender is a 3D modeling and rendering FLOSS software. It’s not designed for photography, but it has a compositor built in that can be used to illustrate features and procedures that can be relevant for photo editing as well. It uses OCIO for most of its color management.

Some caveats: Blender/Cycles uses a linear Rec.709 reference internally. OCIO provides the infrastructure needed for using any reference, but some legacy code in Blender makes assumptions based on a Rec.709/sRGB output, so at the moment it is not possible to use wider-gamut references, although there is ongoing work to remove that limitation.
For the same reason, ACES in Blender is out of the question, so don’t take the following information as a reference for the ACES workflow.

Setup:
We’re going to use Blender’s compositor and Blender’s colour management (based on OCIO).

The first step is switching from Blender’s default (legacy) renderer to Cycles. This is not strictly necessary if we’re only going to edit external images, but it’s needed for rendering 3D images with a more physically plausible renderer.

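For those who prefer scripting, the same switch can be made from Blender’s Python console; a minimal sketch (assuming an otherwise default scene):

```python
import bpy

# equivalent of picking Cycles in the render engine selector
bpy.context.scene.render.engine = 'CYCLES'
```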

Next, we’re going to find the colour management section under the “Scene” panel on the right side.
The default config has a simplistic view that is inadequate for proper scene-referred work.
The drop-down selector with the “View” label allows you to switch between the views defined in the OCIO config*.
There are also looks designed for each view.
For these tests we’ll use the “Filmic” view, designed by @anon11264400 as a replacement for the default view transform. Filmic sports desaturation and a dynamic range similar to the response of a filmic cinema camera.
Note that this is not the original OCIO config designed by @anon11264400 but a modification made by the Blender developers (the original Filmic Blender OCIO config includes a Log view and has false colour implemented as a look). It’s available on Troy’s GitHub page:

*) Inside Blender’s install directory, under Blender/Version/datafiles/colormanagement, you can find the default OCIO config and LUTs. That’s the place to put third-party configs or your own.
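If you’d rather script the colour management settings than click through the panel, roughly the following works from Blender’s Python console (the available view and look names depend on the OCIO config that is installed, so treat these values as examples):

```python
import bpy

scene = bpy.context.scene
scene.display_settings.display_device = 'sRGB'   # display defined in the OCIO config
scene.view_settings.view_transform = 'Filmic'    # the view transform used in these tests
scene.view_settings.look = 'None'                # looks are optional, defined per view
scene.view_settings.exposure = 0.0               # scene-linear exposure offset, in stops
scene.view_settings.gamma = 1.0
```

Blender should also pick up the OCIO environment variable, so pointing it at a third-party config.ocio before launching is an alternative to replacing the files under datafiles/colormanagement.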

With Filmic set up as the view, we can switch to the compositor:

On the top row, there’s a workspace selector. Change it to Compositing.
Then tick the “Use Nodes” checkbox so that node-based compositing is activated.
When that is done, an input node corresponding to the current render layer appears, connected to the Composite output node.
We’ll add a Viewer node with Add > Output > Viewer and connect it to the Render Layers node (just like the existing Composite node).
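As an aside, the same node setup can be created from the Python console; a rough sketch, assuming the default node names of a fresh scene:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                           # same as ticking "Use Nodes"
tree = scene.node_tree

render_layers = tree.nodes['Render Layers']      # created automatically with the default tree
viewer = tree.nodes.new('CompositorNodeViewer')  # Add > Output > Viewer
tree.links.new(render_layers.outputs['Image'], viewer.inputs['Image'])
```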


If we click on the image icon by the “Image” label in the lower left, we can change the input for the image viewer. We can select “Viewer Node” as the source, which means that we’ll see whatever the viewer node is fed.
Nothing is shown so far, because the render layer is empty. Press F12 to render the default cube and you should see the rendered image in the viewer (if the render is shown in the same window as the node tree, just press Esc when rendering is done to go back to the node editor).

If you click on the rendered image, you can see the values of the pixels. On the left side you can find the scene values; on the right side, the “CM” values (that is, the values after the view transform).
Experiment with changing the view and reading the values. The scene values will remain the same, while the values on the right will depend on the view.
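If you want to reproduce the right-hand “CM” numbers outside Blender, they are conceptually just the scene values pushed through the display/view transform of the active OCIO config. A hedged sketch using the OCIO 1.x Python bindings (the config path and the colourspace/display/view names are assumptions that depend on the config you load):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile('/path/to/config.ocio')

transform = OCIO.DisplayTransform()
transform.setInputColorSpaceName('Linear')   # the scene-referred reference
transform.setDisplay('sRGB')                 # display device
transform.setView('Filmic')                  # view transform

processor = config.getProcessor(transform)
scene_value = [0.18, 0.18, 0.18]             # a scene-referred pixel
print(processor.applyRGB(scene_value))       # roughly what the "CM" column shows
```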

You can try the same with an external image:
Just drag and drop a scene-referred image* onto the node editor area and connect it to the viewer, replacing the Render Layers node.
Experiment with the different views, with the exposure slider, etc.

*) Blender seems to have some trouble dealing with certain TIFF files. For the purposes of this test it is advisable to use floating-point EXRs.
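The drag-and-drop step has a scripted equivalent too; a small sketch (the file path is hypothetical, and the 'Viewer' node name assumes it was created as above):

```python
import bpy

tree = bpy.context.scene.node_tree
img = bpy.data.images.load('/path/to/scene_referred.exr')  # hypothetical path

image_node = tree.nodes.new('CompositorNodeImage')
image_node.image = img
tree.links.new(image_node.outputs['Image'], tree.nodes['Viewer'].inputs['Image'])
```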

OK. This was a brief introduction to setting up Blender for our tests. Get familiar with the rather unfamiliar UI and feel free to ask questions.
When you’re ready we can move forward with other examples.


Cool! I’m starting to think through what I’d do to rawproc, my hack software. To start, one key question I haven’t found the answer to in my reading: Do OCIO gamut transforms go through CIE XYZ, or are they direct, reference->view?

Hi @gez - your screenshots and descriptions of what to do are very helpful.

Continuing from here:

Here’s how Blender looks on my screen after following your instructions:

Is there a way to make the image in the lower left panel show more than the tiny bit of the image that it shows right now?

How does one “bake in the LUT” and export as a PNG the image that I saved from Krita as an EXR file?

Is the word “Filmic” enough to guarantee that the filmic mapping is the one from the Krita instructions? It seems Krita has more places in the UI to specify “use this file to accomplish that OCIO task”:

the config file
the input color space
the display device
view
look
the components
exposure
gamma

Whereas the only things to pick in the Blender UI seem to be display device, view, exposure, and gamma.

Always custom designed.

For example, if you provide the sRGB OETF and use it as the to_reference transform, the primaries are D65 Rec.709, and an image you tag as such will be display linear.

If you load an EXR, and tag the image as “Linear”, it is loaded 1:1 and the primaries would be “as is”.

Views are likewise, and can be complex chains of transforms as required. It is up to the configuration designer[s] to assert that the configuration is coherent.
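To make that concrete, here is a rough sketch of how such a colourspace definition can be built with the OCIO 1.x Python bindings (the names and the LUT file are made up for illustration, not taken from any shipping config):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config()

# the scene-linear reference space; no transform, values pass through 1:1
linear = OCIO.ColorSpace(name='Linear')
config.addColorSpace(linear)

# an "sRGB" space whose to_reference transform undoes the sRGB OETF,
# so an image tagged as sRGB lands in the reference as display-linear values
srgb = OCIO.ColorSpace(name='sRGB')
srgb.setTransform(
    OCIO.FileTransform(src='srgb_to_linear.spi1d',
                       interpolation=OCIO.Constants.INTERP_LINEAR),
    OCIO.Constants.COLORSPACE_DIR_TO_REFERENCE)
config.addColorSpace(srgb)
```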

@gez Thanks for taking the time!

Check this brief introduction to OCIO in Nuke:

As was already discussed in the other thread, non-free software is out of the question here, so I’ll leave this video, which clearly shows how views are implemented in Nuke (and how different OCIO configurations and views can be selected there), so you don’t have to install proprietary software in order to see it at work.
Not a forced view transform, but a mechanism to use whatever view you want/need for each project.

Other interesting things, like the OCIO nodes provided by Nuke, are covered too, which illustrates the possibilities for interoperation between OCIO-based applications.
(For instance, imagine you design a complex look or just an ASC CDL grade in Blender, then take it to other applications like Krita or any other OCIO-enabled app and produce the exact same result.)

@gez - My apologies, I don’t know which thread to post this question to.

In the proposed OCIO workflow for editing photographs:

in the production of the scene-referred interpolated raw file, if I understand correctly, the proposed OCIO workflow requires that the interpolated raw file be scaled “to make middle gray to meet 0.18”. Please correct me if I’m mistaken on this point.

Is it also the case that the white balance of the interpolated raw file must reflect the actual “color of the light” at the time the image was taken, perhaps determined using a white balancing device, perhaps by selecting a preset such as “daylight” or “tungsten”?

Yes, in order to set the “on exposure” reference value for the scene. But that doesn’t mean that your resulting image has to be “on exposure”. The exposure slider in the viewer helps you there.

I’d say that the most desirable situation is to balance your image so it matches your colourspace’s white point. However, indirect lighting and multiple light sources will of course always play a role in shifting colours.
It’s a good question and I suspect that there isn’t a single answer, as scenes vary enormously (a scene lit with red lightbulbs comes to mind, and I’m sure that’s not something you’d want to balance).

Hmm, OK, thanks @gez for verifying the scaling. But what about the white balance? Is the image supposed to be white balanced to the color of the light at the time the photograph was taken in order to be used in the OCIO scene-referred workflow?

I was expanding the reply. Check above.

The particular case I had in mind was first, there’s a clear “color of the light” to which normally one would white balance the image. And second, the photographer has already decided on artistic grounds to use a completely different white balance.

Is it required in the proposed OCIO workflow to white balance to the color of the light and then “reverse” that white balance and then apply the artistic white balance - three steps to the desired white balance, instead of just one step of applying the desired white balance during the interpolation process?

The reason I ask is by analogy. You are very clear that scaling is important for the initial generation of the scene-referred image. This seems to be the case even if the user perhaps might not be interested in producing a final image that’s tone-mapped from the scaled image.

I’m asking “by analogy to scaling” if a realistic white balance is also required, assuming such is possible given the actual scene lighting.

You pointed to some common complications and considerations about white balancing - some light sources are difficult or impossible to white balance away and might be part of the mood of the scene. So I’m guessing it’s OK to go ahead and apply the artistic white balance during raw processing.

But instead of guessing what’s OK and what’s not OK in the described OCIO workflow, I’d rather know what the usual or recommended OCIO workflow actually is.

So is it OK to apply the artistic white balance during interpolation? Or is it necessary to white balance to match the original scene?

Bear-of-little-brain is thinking that, if the essential operation of scene referencing is to align the linear data to a middle gray reference of 0.18, then all three channels need to participate. Doing this with a known middle-gray patch in the scene will effectively produce a corrected white balance.
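A tiny numpy sketch of that idea, with a made-up grey-card sample, just to show what “all three channels participate” means in practice:

```python
import numpy as np

grey_card = np.array([0.21, 0.18, 0.14])  # hypothetical linear RGB sampled from the grey card
scale = 0.18 / grey_card                  # one scale factor per channel

img = np.random.rand(4, 4, 3).astype(np.float32)  # stand-in for the scene-linear image data
img_balanced = img * scale  # aligns middle grey to 0.18 and neutralizes the card in one multiply
```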

OK, @gez has clarified that for the described OCIO workflow, the raw file should be white balanced to the color of the ambient light illuminating the scene, and also scaled such that middle gray is equal to 0.18. @gez - thanks! for clarifying.

Starting with my first “use an OCIO workflow” raw file, my best guess is that the “color of the light” is “Daylight” as the light is a mixture of late afternoon sunlight, sky light, and window light.

The scene is backlit, with part of the scene outdoors (seen through a window), and part indoors. I was using a long macro lens, so most of the scene (both indoors and outdoors) is actually bokeh. The final “ICC-profile-processed” output image can be seen here, if anyone is curious: Pictures in progress

So, having white balanced the raw file to Daylight in accord with @gez’s recommendation, my next question is: “Where in the scene should the gray card go to get a reading of middle gray?”

The first hurdle to overcome is that I didn’t put a gray card anywhere in the scene when I took the picture. But I’ve read in several places that grass is fairly close to middle gray.

So just now I put a gray card in the indoor garden, on a rock just under some spider plant leaves, and photographed the gray card and leaves to see “how close” the vegetation in the indoor garden is to middle gray. Here’s the result, which is not too bad:

Below is a screenshot showing the interpolated raw file that I’m trying to process using the recommended OCIO workflow:

  • Left: The “normalized” raw file white balanced to Daylight.

  • Center: Putting middle gray in the band of sunlit grass, and scaling, using Sample Points 1 and 2 (sunlit grass outside the window - the bright band of bokeh through the center of the scene behind the garden ornament) as a stand-in for middle gray.

  • Right: Putting middle gray in the indoor garden, and scaling, using Sample Points 3 and 4 (spider plants in the indoor garden) as a stand-in for middle gray.

Of course most of the colors in the rightmost image are blown out in the screenshot, but I’m guessing that’s very often the case with scene-referred images in the recommended OCIO workflow.

So which is the properly scaled image to save to disk from the raw processor as an exr file to give to Krita or Blender? The center image (middle gray on the outdoor grass) or the rightmost image (middle gray on the indoor garden spider plant leaves)?

It will depend on how you generated the file. A typical dcraw -T -4 linear file will be anywhere from -1 to -3 or more stops away, and needs its code values adjusted to the “at exposure” values that the camera worked with. That is, loading a linear raw file and trying to deduce Lab values from it directly will be incorrect; you would need to scale the values up such that the camera raw output dumped to a linear integer TIFF corresponds to the assumed code value of Lab L = 50, which is 0.184187 linear.
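A hedged numpy sketch of that scaling step (the file name, the sample coordinates and the choice of imageio are assumptions; any reader that delivers the linear data untouched will do):

```python
import numpy as np
import imageio

# 16-bit linear TIFF from `dcraw -T -4`, normalized to 0..1
img = imageio.imread('linear_dcraw.tiff').astype(np.float32) / 65535.0

# value sampled where the grey card (or a stand-in such as sunlit grass) sits
grey_sample = img[1024, 768].mean()

# scale so the sample lands on 0.184187 linear (Lab L = 50);
# typically a push of one to three or more stops
scale = 0.184187 / grey_sample
img_scene = img * scale   # unbounded scene-referred data; save as a float EXR
print('applied %.2f stops' % np.log2(scale))
```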

The ornament itself is fairly irrelevant, practically accidental. The point of the image is the play of lines and colors, dark areas and light. I feel very exposed writing the preceding two sentences, given derogatory comments that have been cast my direction in this “currently three threads and counting” discussion of how and why a photographer might benefit from using OCIO. So if anyone wants to say “that’s a garbage image produced by a garbage workflow”, I will ask you to please not bother.

I always align “one shot only” in-camera exposures to avoid clipping highlights. Fortunately in this image the shadows are not excessively noisy and can take quite a bit of stretching.

So in this case (sufficiently clean shadows), you are suggesting scaling the image so that the indoor leaves are at middle gray, yes? Even though in my final image (the image posted to my website page) these leaves are intentionally darker than middle gray?

I think by “exposure” in the above sentence you mean the in-camera exposure, yes? Most definitely setting middle gray on the leaves during the in-camera exposure would have blown out the highlights.

Again, my standard procedure is to set single-shot in-camera exposures specifically to avoid blowing out the highlights. When possible I make ev-bracketed shots. For this image possibly I could have bracketed because the camera was already on a tripod. But the light itself was changing rapidly, which in turn was radically rearranging the background bokeh. I did try to catch a second shot but it was too late, the nice patterns were gone.

Why a half-float? Why not 32f?

Where is this exposure slider? I figured out how to add a “Curves” node in Blender but not an Exposure operation. But I think you mean something else? Could you post a screenshot?

So you are saying that a technically correct scene-referred image does? does not? require setting middle gray in the image (I mean by scaling, not while making the actual camera exposure) on some technical basis?

I can sort of see what you and @anon11264400 are talking about regarding scaling images to middle gray picked on some as yet obscure (to me) basis when you are aligning multiple images/frames/scenes in a movie.

But so far I am completely lost as to what constitutes a technical - as opposed to artistic - reason for setting “middle gray” in a single still image (bracketed still shots count as one single image).

For “artistic” reasons, well, middle gray is just a point on the grayscale. Sometimes “middle gray” occupies hardly any of an image, merely small streaks and patches on the way from lighter to darker and back.

GIMP didn’t do anything funny to the data.

My apologies, I don’t understand a single word of what you just wrote. Who? assumed what? that would lead to knowing where middle gray should be when one scales the image? Are we back to the idea that the dynamic range of the camera itself somehow determines the required amount of scaling? Maybe @gez could translate what you wrote?

Very, very likely not blowing out. Very likely exceeding the exceptionally limited dynamic range of a raw dump to display.

Every camera will encode differently.

Shoot a middle grey card according to the exposure zone you want creatively. If you are going to strategically underexpose the shot, put the grey card in the corrected position in the scene.

Essentially once you know how dcraw -T -4 encodes for your camera at a given ISO, it won’t change. The same scaling will apply.

Setting the leaves at middle gray “in camera” would have required adding 4 more stops to the in-camera exposure, which would have blown out the highlights.

How many OCIO transforms are in the pipeline described by @gez for Blender in this thread, and by @anon11264400 for Krita (https://docs.krita.org/Scene_Linear_Painting), between opening the image in OCIO-capable software, and seeing it displayed on the screen?

I’m not very familiar with OCIO, but there seem to be several:

  • from image to screen
  • the filmic mapping
  • the “contrast” mappings
  • at least in Blender, a linear mapping (that clips) is provided

So far this discussion of OCIO workflows seems to assume that the user is:

  • editing a linear sRGB image
  • on a monitor that has been calibrated to have sRGB/Rec709 primaries
  • with the assumption that the monitor’s tone response is sufficiently close to “gamma=2.2” to not cause significant tonal distortion in the displayed image
  • and with sRGB output in mind.

In other words, complications in an OCIO workflow caused by . . .

  • a user wanting to edit in a color space with a wider color gamut
  • on a screen that isn’t calibrated to sRGB/Rec709 primaries and/or for which “gamma=2.2” is not sufficiently close to the monitor’s calibrated/profiled tone response
  • and wanting to output in some color space other than sRGB

. . . so far have not been considered.

Which of the OCIO transforms in the OCIO pipelines for Blender and Krita are color space independent, such that they don’t need to be modified to accommodate editing an image in, say, the Rec.2020 color space, using a custom-calibrated and profiled “wide gamut” display that has a color gamut that’s close to (but perhaps not an exact match for) AdobeRGB1998?

Which of the OCIO transforms require that “someone” (the user? the developer?) provide a new OCIO LUT transform if:

  • the image isn’t an sRGB image
  • the display isn’t calibrated to match sRGB/Rec709 primaries and gamma=2.2 tone response
  • and very possibly the desired output space isn’t some “tone response curve” variant of sRGB and/or isn’t a “tone response curve” variant of the display’s actual color space?

I’m on a bus right now and I can’t elaborate, but as a preview I’ll just say that the answer to each one of your sentences is “no” :grin:
I’ll clarify as soon as I have a keyboard handy.

You’re confusing a particular OCIO-based implementation with the characteristics of OCIO workflows.
The Filmic view is just one view transform among many. Looks are optional; the only things intrinsic to the OCIO workflow are the to-reference and from-reference transforms. As simple as that, and not really different from ICC CM with a “working space” in that regard.

Absolutely not. Blender’s OCIO config was designed for an sRGB output and a linear Rec.709 reference, but you may have as many OCIO configs as you like, with whatever reference and output you choose. It’s up to the artist.
(Check the Nuke video I posted earlier.)

Such complications don’t exist.
The software developer or the artist can provide different configurations to address each situation; OCIO doesn’t force a specific colorspace.
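For illustration, a config is free to declare several displays and views side by side; roughly, with the OCIO 1.x Python bindings (all the names below are made up):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config()
# ... colourspaces for the reference, the views and the displays would be added here ...

# the same scene-referred data can be viewed through any of these;
# nothing in OCIO itself ties the workflow to sRGB
config.addDisplay('sRGB', 'Filmic', 'Filmic sRGB')
config.addDisplay('sRGB', 'Standard', 'sRGB')
config.addDisplay('Wide Gamut Monitor', 'Filmic', 'Filmic Wide Gamut')
config.setActiveDisplays('sRGB, Wide Gamut Monitor')
config.setActiveViews('Filmic, Standard')
```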