Please, lets keep the discourse civil. There is no need to be rude.
Well, for starters, I wasn’t exactly expecting “none” as the answer. I sort of thought that maybe some of the LUTs actually were usable in situations where the reference isn’t Rec.709 and the output isn’t sRGB. So now I learned something. Thanks!
Well, I rather suspect that a lot of what I want to do when editing my images can just as well be done using OCIO as ICC, given the right config files and LUTs. And now I know that “the right config files and LUTs” isn’t just a matter of taking care of the monitor profile.
Again, I’m not hostile to OCIO, I think the ACES/OCIO/etc way of handling images is fascinating. I get an odd little glimpse into that world every now and again because I subscribed to the openexr dev list quite a while back, to garner info on how GIMP should handle exr files, and never unsubscribed. And periodically I peruse the internet to learn a little more.
I am rather hostile to the degree to which @anon11264400 and @gez have repeatedly bashed ICC profile color management. And I wish to never hear the word garbage or leprechaun in this discussion again, though I suspect that’s a hope in vain. But these specific hostilities are entirely aside and apart from an educational discussion of what OCIO is and how to use OCIO to accomplish various editing tasks.
It seems the entire pipeline of LUTs, from start to finish, does require tailoring to the color space of the image (the “reference” color space, yes?) and to the color space of the monitor. In the case of the filmic LUTs, the monitor space is the same as the output color space with respect to the primaries, if not the TRC. And in the case of an output color space that doesn’t share primaries with the monitor color space, it seems one or more additional LUTs are required, yes?
It might help people better understand OCIO workflows if you could provide a short description (one-liner) of what the various LUTs in the filmic_blender zip file actually do, in terms of “transforms from something to something, for some purpose”.
On this forum we have already had quite a few discussions about display calibration/profiling, something which already confuses a lot of users.
The good thing is that once the calibration/profiling procedure is completed correctly, the user is left with an ICC profile that can be used directly with our FLOSS image editors. All that is needed is to set that ICC as the “display profile” and load the associated LUT into the video card (something which is well documented).
If we are now going to add another complex configuration layer on top of this, I am afraid we will discourage a lot of users from following this path. Bear in mind that this is not criticism, nor an attempt to discredit the OCIO workflow… it is just common sense.
We need a plan.
Worry about those that can see the problems of shoehorning scene referred data into a display referred chain. Either they will see that or they won’t.
In many instances, REC.709 references are all that are required given that many folks can’t see or understand the needs / limitations / complexities of an alternate reference space.
When designing Filmic, the goal was to be as straightforward as possible, for experienced pixel pushers. The video Andrew Price did has nearly a million views. Of the many emails I have received, mostly from experienced imagers with large amounts of paid professional experience hours racked up, very few were asking about display profiles, nor alternate reference spaces. That is, those are very important aspects, yet a non-issue for most people coming to terms with a scene referred reference space model.
No, not at all. I’m not ready to make another OCIO config file just yet. I’m still trying to figure out simple stuff such as “when @gez says do this or that in Blender, what’s he talking about?” and “why exactly do I need to put a gray card in my scene?”.
Obviously I’m also working on figuring out things such as “how specific are the various LUTs to a given working/display/output space?”, to which I now know the answer is “very”.
By comparison, in PhotoFlow (using ICC profile color management), the filmic curves operation only requires linear RGB, isn’t specific to any pre-specified primaries, and can be fine-tuned “on the fly” using the sliders, with defaults very close to what the filmic_blender LUTs do. But this isn’t the way OCIO works, which was not obvious to me. Well, I was pretty sure there was no “fine tuning on the fly”, but the “filmic LUT is specific to input/output color spaces” part did surprise me.
So yes, this OCIO stuff is rather different from what I’m used to. I did make a custom OCIO config a long time ago, that went from Rec.2020 to my custom monitor profile. But it’s not good any more because I’ve long since updated my monitor profile. In ICC profile color management updating one’s monitor profile doesn’t require updating other stuff, because of the XYZ/LAB PCS.
Right now, in rawproc, I’m probably going to build an OCIO-transform tool that can be inserted like any other tool in the processing pipeline. True to rawproc philosophy, it’ll be there to use or misuse as one wishes…
With that, I’m going to keep my ICC code intact, ostensibly for the following purposes:
- colorspace-convert my camera-gamut image to Rec709, or Rec2020 when I can learn to make ocio.configs.
- post OCIO-transform to bring it to the calibrated display gamut. My home displays are pretty much sRGB, but I have three horrid displays at work; I intend to calibrate them when I can come in off-hours with my colorimeter.
I can probably get away with loading raws using srgb/linear, instead of raw/linear then colorspace-converting to Rec709.
Since OCIO modifies the image array in-place, I don’t think I’ll have to make changes to the image library. Just get a pointer to the image array and call processor->applyRGB(imgptr);
I think this setup will be fairly easy to insert, and will give me sufficient capability to learn and compare with the olden ways. I did @gez’s Blender exercise and it was instructive regarding how to integrate. I compiled OpenColorIO last night, easy-peasy compared to LensFun, so I’m shelving lens correction for a bit in exchange for some immediate gratification.
With DisplayCal you can do that easily from your existing device icc profile (the 3D LUT Maker).
Well, if your software already has display correction via ICC, I guess you could hook the output from the OCIO device and correct it.
It wouldn’t be too different from what other applications do when converting from the “working space” to the screen space.
So let’s say your OCIO device is sRGB (as an example, it could be anything you want). You could take the display-referred output through your ICC screen correction (from sRGB to your display) and that’s it.
The other option is to produce a 3D LUT and chain it to the OCIO view in the config, so the output is already corrected for your specific device.
It may seem a bit more complicated than just having your screen profile installed, but I’m going to go out on a limb and say that if you managed to produce your own screen profile, it shouldn’t be particularly difficult to create the LUT you need for your OCIO config.
That being said, though, I admit that fiddling with text files is scary for some people and the current implementations could be more friendly in terms of UI.
So, there are configuration file entries to specify primaries and white point for a transform; or, generate a LUT from them using one of the apps?
Have you ever seen a BIND configuration? DNS lookup server from the olden days. I’ve done those; nothing in a text file scares me…
Depends on what you mean here.
If you are talking about taking something from_reference, it is relatively trivial via a couple of matrix transforms. The matrices are encoded as four sets of four values, row by row. Assuming a matrix or 1D LUT is invertible, OCIO will create the inverted direction automatically. In the case of a view transform, it is prudent to make sure your from_reference stanza is filled in, as it is used most frequently in view output transforms and will be far more performant.
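For the record, such a stanza might look something like the sketch below. The colorspace name is just an example; the numbers are the standard BT.2087 Rec.2020↔Rec.709 matrices, padded out to the 4×4 form OCIO expects:

```yaml
colorspaces:
  - !<ColorSpace>
    name: Linear Rec.2020
    bitdepth: 32f
    isdata: false
    # Rec.2020 -> Rec.709 (the reference), row by row
    to_reference: !<MatrixTransform> {matrix: [ 1.6605, -0.5876, -0.0728, 0,
                                               -0.1246,  1.1329, -0.0083, 0,
                                               -0.0182, -0.1006,  1.1187, 0,
                                                0,       0,       0,      1]}
    # Rec.709 -> Rec.2020, filled in explicitly so the from_reference
    # direction (the one view transforms hit most often) stays fast
    from_reference: !<MatrixTransform> {matrix: [ 0.6274, 0.3293, 0.0433, 0,
                                                  0.0691, 0.9195, 0.0114, 0,
                                                  0.0164, 0.0880, 0.8956, 0,
                                                  0,      0,      0,      1]}
```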
If you know what to do with matrices, then it is probably as easy as digging up a matrix transform example and substituting in your own values. Beware: wider reference spaces bring their own nightmares, including posterization along the saturation axis, and with that comes the need for desaturation of high-emission values, along with other potential sweeteners including gamut mapping.
If on the other hand you are talking about adding in a display profile, there are more than a few examples out there as to how to tack on a custom view via a 3D LUT or via the aforementioned matrix approach, whichever suits your needs.
Start simple. Build up. Be warned… Not all is as simple and trivial as it may have seemed with limited dynamic ranges…
I’m back home from my vacation, so I had the chance to produce a couple of tests showing how the view transform may affect even simple photographic work.
I produced a couple of RAW shots of an IT8 target, converted them to TIFF with dcraw using the parameters -T -4 -w -q 3 -n 100, then loaded the TIFF into Blender and scaled the exposure to match middle gray at 0.18 linear.
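That exposure-scaling step is really just one multiply. A minimal sketch, with made-up numbers (a gray-card patch reading 0.09 in the linear TIFF):

```python
def exposure_scale(gray_card_value, target=0.18):
    """Return the multiplier that maps the measured gray-card value
    to middle gray (0.18) in a scene-linear image."""
    return target / gray_card_value

def apply_exposure(pixels, k):
    """Scale every scene-linear RGB component by k - the same kind of
    simple multiply operation used for white balance tweaks."""
    return [[c * k for c in px] for px in pixels]

# The gray card reads 0.09 in the linear TIFF, so everything gets
# doubled to land middle gray on 0.18.
k = exposure_scale(0.09)
img = apply_exposure([[0.09, 0.09, 0.09], [0.45, 0.30, 0.10]], k)
```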
I wasn’t very satisfied with the white balance from the camera, so I tweaked it a bit with a multiply operation.
The examples are single shots only. No bracketing, no exposure stacking.
In one of them I intentionally shot and exposed the chart in the shade, so the background was blown out and clipped (with a linear scene value of around 4).
Even in this example, with a relatively low dynamic range, it’s easy to see how having a proper view transform with desaturation helps produce a more natural image, while the default view from Blender (which clips everything above 1) produces a really bad result in the highlights.
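To make the difference concrete, here is a toy sketch: a hard clip throws away everything above 1.0, while any smooth shoulder keeps the gradation between, say, scene values 2.0 and 4.0. (The Reinhard curve below is only a stand-in; the real filmic curve is more elaborate.)

```python
def clipped_view(x):
    """Blender's default display transform, in essence: anything above
    1.0 in scene-linear is chopped to display white."""
    return min(max(x, 0.0), 1.0)

def shoulder_view(x):
    """A smooth highlight shoulder (simple Reinhard curve, used here
    only as a stand-in for the actual filmic curve)."""
    return x / (1.0 + x)

# Scene-linear 2.0 and 4.0 both land on flat white after clipping,
# but the shoulder still separates them, so highlight detail survives.
```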
It’s important to mention that the filmic view still had a lot of headroom available: If I stacked exposures to capture more detail on highlights, the view would preserve those details that are chopped off by a simplistic view transform.
In the false colour view, the orange parts show highlights that are affected by the desaturation but not clipped. All that area could have had more detail if I had produced a wider dynamic range image.
Here is a link to the other shot, in RAW, TIFF (from dcraw) and EXR (from Blender, scaled scene-referred).
Scene-Referred Test - Color Chart
Scene Referred EXR from a single shot
Scene-referred editing with PhotoFlow
After tearing my hair out inserting the OpenColorIO library into rawproc’s build system (Only Likes Shared Libraries, gah!!), I’ve finally got a crude ociotransform tool to work. Right now, it only does OCIO::ROLE_SCENE_LINEAR -> OCIO::ROLE_DEFAULT, which with @anon11264400’s Blender config.ocio does a “Linear sRGB/709” to “sRGB” transform, and it looks right. Since the chromaticities are the same, the only real change is with gamma.
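On “the only real change is with gamma”: one nitpick worth knowing is that sRGB is not a pure power function. The encode has a short linear toe; a minimal per-channel sketch:

```python
def srgb_oetf(linear):
    """Encode one linear-light component (0..1) with the sRGB transfer
    function: a linear segment below 0.0031308, a 2.4 power law (offset
    and scaled) above it."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055
```

This is why middle gray (0.18 linear) lands around 0.46 in the encoded image, not at 0.18 ** (1/2.2).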
I’m working on a Surface tablet using msys2, and my success came when I finally did the following: 1) install the msys2 OCIO package, and 2) turned off -static in the rawproc link line. All the other stuff I’m linking to is just .a files, so it stays static. I also started to unpack the OCIO public configs, but after about 1.5G of ACES stuff I aborted that and got @anon11264400’s Blender config, much, much smaller. I’m hoping Linux goes smoother, except I’ll have problems cross-building for Windows unless I pack the .dlls in the installer. Not my preference.
Next step is to figure out what parameters to expose in the tool pane. For sure, input and output colorspaces, I think, but I need to determine how transforms are built in the config files, because I don’t think I need to build them in the tool.
So, I turn off CMS (input.cms=0), open the raw with:
input.raw.libraw.colorspace=raw input.raw.libraw.gamma=linear input.raw.libraw.cameraprofile=Nikon_D7000_Sunlight.icc
and insert as the first tool a colorspace convert to sRGB-elle-V4-g10.icc. Then, insert the ociotransform tool, and the gamma gets its due. Here’s a screenshot, ignore the ociotransform panel, I copied the colorspace panel code and I haven’t deleted the old widgets yet:
OCIO on the Surface is a pig, ~6 sec per transform for my 16MP D7000 images, so I multithreaded it and the time decreased to ~1.5 sec, much better. Same general approach as multithreading cmsDoTransform in LittleCMS.
I started to figure out how to apply one of the LUTs to get a scaled display image, but I’m out of time, need to sleep now so I don’t sleep through my morning meeting.
The build thing is a bit of a roadblock, but I’ll definitely keep the code at least as a conditional compile.
(yes, I know, a lot of programming “language” above, but I think the coders will appreciate the story)
Gut the ICC and load the untransformed pixels as they would be fed in Linear. The ICC will be the largest performance hurdle for the load.
The output will be slower if you are set to the sRGB OETF due to the lookup nature of the from_reference direction. Change that to any well defined transform with a to_reference and from_reference and it will claw the cycles down to milliseconds.
A simple matrix transform can be made for the Nikon, via the Adobe matrices in dcraw. The ACES transforms are large due to the 3D LUTs.
[Play Raw] Luminosity masks
I am bumping this old thread to let you know that I have introduced some basic OCIO infrastructure in PhotoFlow.
Currently it is possible to insert the “Filmic Blender” OCIO transform (using one of the supplied “looks” to control the contrast of the rendering) between the output of the processing pipeline and the conversion to the display device.
The filmic display transform is particularly suited to displaying on screen the output of a scene-referred pipeline, where pixel values can greatly exceed the maximum brightness of the display device. In such cases, the filmic view allows one to “squeeze” the scene dynamic range into the range covered by the display. In addition to a filmic tone mapping curve that compresses the highlights in a pleasing and natural way, the view transform also introduces some desaturation and gamut compression in the highlights, mimicking the typical response of film emulsion.
@gez please correct me if anything I said above is wrong…
Here is an example of the display of an HDR image with about 24 stops of dynamic range.
First is the pixel data sent to screen directly through an ICC transform to the monitor profile:
next is the same image displayed through the “filmic” OCIO config, with no further processing:
Here is the specific OCIO config for this example:
The experimental OCIO code can be found in the ocio branch on GitHub. Pre-compiled packages from this branch can be downloaded from here: https://github.com/aferrero2707/PhotoFlow/releases/tag/unstable
The work is far from being completed, but should already give an idea of the potential of this approach on scene-referred edits…
As far as I know this is not entirely right, or at least not optimal. In general, an OCIO config is defined around a linear/scene-referred color space, which in the case of Blender (due to limits of the rendering engines) is a linear space with the sRGB/Rec.709 primaries. It might also define some other linear or working spaces (ACES does this with ACEScg, for example). When working with this, it is preferred to work in the OCIO-defined linear space(s). So in your case above, the “working color space” option should also be taken from the OCIO config, and the “display profile” should be disabled (since it is already set with the display setting of the OCIO settings).
I hope this made sense! And thanks for your work, even considering the above nitpicks I do think this is good enough to start some experimenting of myself!
This can be defined with the roles option in the OCIO config (note: the roles list there is a bit outdated; there is a compositing_linear role as well, which can be found in the ACES 1.0.3 config).
The way it currently works in PhF is the following:
- the pixel values are converted from linear Rec.2020 (or whatever colorspace they are in at the end of the pipeline) to linear Rec.709
- the pixels are processed through the OCIO transform
- the output is converted from Rec.709 with gamma=2.2 (which is the output encoding of the filmic looks) to the actual display profile
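In code, the three steps above amount to a matrix multiply, the OCIO view, and the ICC hand-off. A rough sketch with the BT.2087 Rec.2020→Rec.709 matrix, where both the filmic view (which outputs gamma-2.2-encoded Rec.709) and the ICC step are only stand-ins, not the real PhF implementations:

```python
# Rec.2020 -> Rec.709, linear light (BT.2087 matrix); each row sums
# to ~1.0, so reference white is preserved.
M2020_TO_709 = [[ 1.6605, -0.5876, -0.0728],
                [-0.1246,  1.1329, -0.0083],
                [-0.0182, -0.1006,  1.1187]]

def rec2020_to_rec709(rgb):
    """Step 1: matrix down to the config's linear Rec.709 reference."""
    return [sum(M2020_TO_709[i][j] * rgb[j] for j in range(3))
            for i in range(3)]

def fake_filmic_view(rgb):
    """Step 2: stand-in for the OCIO filmic view + look. Its output is
    already gamma-2.2 encoded, per the filmic config. (This uses a
    Reinhard shoulder plus the 2.2 encode, NOT the real filmic curve.)"""
    return [(max(c, 0.0) / (1.0 + max(c, 0.0))) ** (1.0 / 2.2)
            for c in rgb]

def to_display(rgb):
    """Step 3: stand-in for the ICC conversion from Rec.709 g2.2 to the
    actual monitor profile (identity here, i.e. an sRGB-like display)."""
    return rgb

def display_chain(rgb_linear_2020):
    return to_display(fake_filmic_view(rec2020_to_rec709(rgb_linear_2020)))
```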
It is indeed sub-optimal, because the gamut is squeezed to Rec.709 for the OCIO processing, but at least it is correct from the color management point of view. Refinements will hopefully come in the not-so-far future…
Definitely sub-optimal, since not all OCIO configs will work with that setup (for example, there is no option to input linear Rec.709 into an ACES config, at least without adjusting it). Still, I think it is good enough for some experiments with trying to get a per-shot look (Filmic Blender is in this regard actually a bit better/easier to work with than ACES, due to having its equivalent of the RRT in the look instead of the output).
Yes, the reference space is different for each OCIO config, and therefore it needs to be somehow hard-coded. I am going from the working colorspace to linear Rec.709 in the filmic case, and I will have to go to linear ACES2065-1 (if I’m not wrong) in the ACES case.
Does it make sense?
Ah, yes, that makes sense, and I do agree that it will be hard to get around something hard-coded in a RAW editor, since we don’t have those nicely defined log spaces that we can use as input. Still, I think that for a proper OCIO workflow the conversion should happen earlier in the pipeline (note: this is my personal understanding of OCIO).
Still I would like to thank you for your work on this, having at least basic support will make it much easier to experiment with different options!
Although every manufacturer has their own log space, so it can get a bit confusing.
I shall wait for the Windows binary.