Scene-referred editing with PhotoFlow

Following the lively ongoing discussion about the advantages of scene-referred editing and OCIO-based processing pipelines, I would like to start a thread specific to my PhotoFlow editor, as I have the impression that it is already quite well equipped for this kind of approach.

PhotoFlow differs from most other FOSS RAW editors in that its pipeline is not based on the Lab colorspace. Instead, the choice of the color model and of the specific color space used for editing is left to the user, and can be changed on the fly by inserting colorspace conversion layers anywhere in the processing pipeline. Currently PhotoFlow supports the RGB, Lab and CMYK models.

Moreover, many of the available tools are already adapted to work best on linear RGB pixel values, without any clamping or assumptions about the range of possible values. For example, all the built-in RGB colorspaces are based on V4 ICC profiles with parametric TRCs (linear or not), and the conversions are performed at 32-bit floating-point precision.
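To make the “no clamping” point concrete, here is a minimal sketch (plain NumPy, not actual PhotoFlow code) of an exposure adjustment on unbounded linear RGB floats; values above 1.0 are preserved rather than clipped:

```python
import numpy as np

# Hypothetical scene-referred buffer: linear RGB, 32-bit float, values may exceed 1.0
img = np.array([[0.18, 0.18, 0.18],
                [2.50, 1.20, 0.75]], dtype=np.float32)

def apply_exposure(rgb, stops):
    """On scene-referred data, exposure is a plain multiplication; no clamping."""
    return rgb * (2.0 ** stops)

brighter = apply_exposure(img, 1.0)  # one stop up: 0.18 -> 0.36, 2.5 -> 5.0
```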

The two main items I would like to clarify and improve are the following:

  • which tools are missing and/or not properly implemented with respect to scene-referred editing?
  • what needs to be added in order to provide a correct scene → display transform? What would be needed to support custom monitor profiles? What about preparing images for printing instead of on-screen display?

I am ready to invest time to introduce OCIO in the pipeline, but I would like to clarify exactly what and where…

Is OCIO needed in the scene-referred editing part, or is the current approach based on parametric and linear RGB ICC profiles adequate?
How should OCIO transforms be tweaked/extended when producing images for printing, possibly allowing the user to optimize the output for a specific printer/paper combination?

Probably the best way to start will be to take the examples that @gez will post in the other thread that focuses on Blender, and see how they can be adapted to PhotoFlow…

What do you think?

Top man! Excellent. I shall have a try at this in due course (bit busy at present).
It would be nice if whatever tools emerge don’t need the user to be a colour science expert!

That’s great!

For me, a first step for PhotoFlow could be to export the scene-referred data in an .exr container.

That way it would be possible to run some tests with the .exr image in Blender or other programs that support OCIO.

Is there a good link explaining what OCIO is? Wikipedia has nothing. :frowning:

No .exr output possible (yet). Does Blender understand floating-point TIFFs?

http://opencolorio.org/

https://docs.blender.org/manual/en/dev/data_system/files/media/image_formats.html

Above suggests not.

I just used Blender to open a floating point tiff exported by PhotoFlow. It did open. But it wasn’t displayed properly in the little image window and it didn’t “render” when hitting F12. Whereas for an exr file everything worked as expected. Maybe @anon11264400 or @gez can give a more definitive answer.

@Elle - thanks for checking! For some initial tests, we can use RT or DT to convert from floating-point TIFF to EXR, at least until I get proper support for EXR output in PhF.

The questions as written somehow seem to imply that you are thinking about replacing part or all of what the user currently can do using strictly ICC color management, with an exclusively OCIO pipeline. Is this the case? Or do I misunderstand?

also GIMP, Krita, and ImageMagick

No, not at all, don’t worry :wink:
My question is how we can use OCIO to eventually get better display or print output from a linear-gamma RGB image with arbitrary floating point values.

I haven’t had a shred of time to look at PhotoFlow on a case-by-case basis; however, I can give a loose outline. Note that all of these would operate at the tail end of the chain:

  • As a bare minimum, a view transform on the tail end of the data chain.
  • Add an exposure variable for the view.
  • Add a power function with fulcrum for the view, where fulcrum is the value that the power function operates around. Divide, power, multiply (see the sketch after this list).
  • A look slot, for the custom looks.
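
As a rough illustration of the “divide, power, multiply” operation from the list above, here is a minimal sketch in Python; the function name and the 0.18 middle-grey fulcrum are my own assumptions, not an existing PhotoFlow or OCIO API:

```python
import numpy as np

def power_with_fulcrum(rgb, power, fulcrum=0.18):
    """Divide, power, multiply: a power function that pivots around a chosen
    value (e.g. middle grey), so the fulcrum itself is left unchanged."""
    rgb = np.asarray(rgb, dtype=np.float32)
    return ((rgb / fulcrum) ** power) * fulcrum

# 0.18 stays at 0.18; values above/below the fulcrum move apart as contrast increases
contrast = power_with_fulcrum([0.02, 0.18, 1.0], power=1.4)
```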

Those are obviously the bare minimum. To make the algorithms actually useful:

  • Make sure that the operations work under scene referred values. Where possible, make them radiometrically correct. Exposure etc.
  • Offer transform control on the UI elements. For example, it is sometimes desirable to have the UI roll linear values directly, while other times it may be useful to have an OCIO nonlinear technical transform. This is one way, and the UI would convert to reference values. The same applies for curves, etc.

Post view transform[1], one would follow the traditional ICC chain of late binding to proof etc. Note that currently there is no interoperability between ICC workflows and OCIO, so it would require the pixel pusher to make the proper choice as to what ICC is appropriate.
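
For what it’s worth, here is a hedged sketch of how such a view transform at the tail of the chain might be applied through the OCIO 1.x Python bindings; the config path is hypothetical, and the display/view names come entirely from whatever OCIO config is loaded:

```python
import PyOpenColorIO as OCIO

# Load a config explicitly (or use OCIO.GetCurrentConfig() to honour $OCIO)
config = OCIO.Config.CreateFromFile("config.ocio")  # hypothetical path

# Scene-referred reference -> display + view (e.g. a filmic view for an sRGB display)
transform = OCIO.DisplayTransform()
transform.setInputColorSpaceName(OCIO.Constants.ROLE_SCENE_LINEAR)
display = config.getDefaultDisplay()
transform.setDisplay(display)
transform.setView(config.getDefaultView(display))

processor = config.getProcessor(transform)

# Flat list of RGB floats in the scene-referred reference space
pixels = [0.18, 0.18, 0.18, 2.5, 1.2, 0.75]
display_referred = processor.applyRGB(pixels)
# display_referred is what would then be handed to the ICC chain (proofing, print, ...)
```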

OCIO V2 will receive ICC support for the situations where someone has used an ICC toolchain for profiling a display, versus a more traditional LUT format.

An ICC will work fine assuming the definition matches the OCIO output. For example, the canonical sRGB transform is rarely applicable, given that displays implement a pure 2.2 power function. This would require a native 2.2 power function ICC with REC.709 primaries, akin to Elle’s in her GitHub. The same applies for printing, assuming the display and OCIO configuration offered the correct variables.

[1] There are some technical views that may require an adjustment post view, applied on scene referred data. This may be challenging to implement in a layers based approach. Cross bridges as they come…

And I’d add that the same applies to late-binding CMYK transforms. ICC is probably the most adequate way to go from the OCIO display-referred output to the CMYK device space, provided that an ICC profile is created for the OCIO output.

According to Graeme Gill, a pure 2.2 power function isn’t obtainable with a real display; some sort of compromise is always required in the shadows:

About Gamma
http://argyllcms.com/doc/gamma.html

Please note that “in the shadows” is not a trivial part of the image as viewed on the screen.

Also, it is possible to make a custom OCIO LUT that is customized to one’s actual monitor profile, instead of simply assuming the monitor is an sRGB monitor. @anon11264400 wrote an awesome step-by-step tutorial:

https://wiki.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling

The tutorial is extremely well-written and easy to follow. The only place that I found difficult was modifying the OCIO config file, which took quite a bit of reading through the OCIO docs before I figured it out.

There is a section in the tutorial on “targeting” sRGB. A different LUT is required if, for example, the user wants to target AdobeRGB or Rec.2020. So it’s one LUT per targeted color space, requiring a different OCIO configuration for each targeted color space.

When I followed the tutorial I “targeted” Rec.2020, and was able to get Blender to display an image, with the image looking exactly the same in Blender using OCIO color management as it did in GIMP using ICC profile color management.

Could you unpack a bit the phrase “assuming the display and OCIO configuration offered the correct variables”? Would this require generating an OCIO LUT from the RGB working space (vocabulary question: “reference space?” “target space”?) to the printer profile? or maybe to the monitor profile? Or are you talking about “baking a LUT” as per this page:

http://opencolorio.org/userguide/baking_luts.html

I have done a fair bit of testing on modern displays, and a pure 2.2 comes pretty darn close to the hardware. The point I was making here is that a pure 2.2 power function is far superior to tagging an output as sRGB due to the difference in the OETF in the linear toe portion.

The difference between the OETF linear toe and the standard 2.2 power transform in the DAC is huge by comparison.
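
That shadow difference is easy to check numerically; here is a quick comparison of the piecewise sRGB encoding against a pure 2.2 power function (standard sRGB constants, nothing specific to PhotoFlow or OCIO):

```python
def srgb_oetf(x):
    """Piecewise sRGB encoding: linear toe below 0.0031308, offset 2.4 power above."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * (x ** (1.0 / 2.4)) - 0.055

def pure_22(x):
    """Pure 2.2 power encoding, matching what typical display hardware decodes."""
    return x ** (1.0 / 2.2)

for linear in (0.001, 0.002, 0.005, 0.02, 0.18):
    print(linear, round(srgb_oetf(linear), 4), round(pure_22(linear), 4))
# Near black the two encodings differ by a factor of two or more,
# while for mid-tones and highlights they nearly agree.
```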

The OCIO chain will dump to whatever transform is supplied. In the case of a typical OCIO display cluster designed for sRGB displays, this means that targeting a 2.2 power function is the ideal generic target.

Given such a target, it will result in broken output if someone subsequently takes that data and views it tagged with an sRGB ICC, whereas a pure 2.2 power function ICC with REC.709 primaries will yield a 1:1 match.

Hmm, I was asking about printing. The entire sentence (quoting you) reads “Same applies for printing assuming the display and OCIO configuration offered the correct variables.” What does “same applies” mean with respect to printing? It wasn’t obvious, at least not to me, what “same applies” might mean in the context of printing.

@gez is better to answer this as he is as close to an offset / spot print expert as anyone I have spoken to. The general principles are pretty easy though.

If we think about WYSIWYG, and assuming we keep this simple and consider a typical sRGB display, we know that it is displaying a pure 2.2 via the hardware DAC and REC.709 primaries. If the OCIO views are designed properly, the display grouping for sRGB will be designed for that.

This means that if we consider the data in the reference space as scene referred, and the “final” output as the rendered output of that in the display referred domain, we want to print that output as closely as possible to what we are seeing[1].

This means that the output, after all post-view adjustments etc., is the desired buffer to be fed through the ICC chain for print.
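
To illustrate that ordering, here is a minimal sketch using Pillow/LittleCMS as a stand-in ICC engine; the file and profile names are hypothetical, and the buffer is assumed to already be an 8-bit display-referred image produced by the view transform:

```python
from PIL import Image, ImageCms

# Display-referred output of the OCIO view transform (hypothetical file)
display_img = Image.open("view_transform_output.tif")

# Late-binding ICC conversion: display profile -> printer/paper CMYK profile
display_profile = ImageCms.getOpenProfile("display.icc")        # hypothetical
printer_profile = ImageCms.getOpenProfile("printer_paper.icc")  # hypothetical

print_ready = ImageCms.profileToProfile(
    display_img,
    display_profile,
    printer_profile,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    outputMode="CMYK",
)
print_ready.save("print_ready.tif")
```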

In the case of a reference being wider gamut and a view transform being a different set of rendering facets, the proofing would be based on whatever series of transforms have happened for that particular output view.

[1] Ignoring rendering intents and such for the sake of clarity.

Yes. Also I’d say that gamut warnings and even rendering intents (except maybe the ones dealing with white point adaptation, which are a more complex case) seem like something that could easily be implemented on the OCIO view with 3D LUTs, leaving an ICC-managed conversion needed only for going from the display-referred output to the printer space.
It’s just a thought, but maybe something interesting to investigate.

What would be the practical advantage of skipping ICC for the display conversion, given that rendering intents, soft-proofing and gamut warnings are already implemented in the current ICC-based approach?

For example, how does the OCIO approach to gamut compression compare with the perceptual intent tables of cLUT ICC profiles?

I mentioned that as a thought experiment only. Some aspects that are currently done using ICC could be replicated with OCIO, but I guess ICC has the advantage, especially in the field of transforms to the CMYK model. It’s far more convenient to use it, since it was designed for that.
OCIO will deliver a well-defined scene-referred colorspace and view, and that’s compatible with a screen correction using ICC and with output colorspace transforms using ICC (like the mentioned case of CMYK, where ICC is probably the most adequate, proven and streamlined method).

You could create looks with 3D LUTs simulating perceptual or colorimetric intents for an output, gamut warnings, etc. An example of such technical looks is the false colour look in Troy’s Filmic Blender (in Blender 2.79 it was implemented as a view, but it was originally a look on top of the log view).
The disadvantage in this case, and probably the main reason ICC seems a better fit for those tasks, is that you would have to produce the LUTs for every transform, while ICC does that using profiles, which is far more convenient and standardized, at least in the print industry (and screen correction).

In case it wasn’t clear in the other thread, I wasn’t arguing against ICC per se (I don’t think @anon11264400 was either, but I won’t speak for him). What I argue against is putting ICC in the processing pipe, flipping colorspaces, models, etc. back and forth. That’s what clashes with a scene-referred model.
If ICCs are going to be used for input/output transforms only, with the convenience that color profiles bring when converting between display-referred images, I’m not against that at all.
ICC can be replaced by OCIO when it comes to input transforms, provided that the colorspaces used are defined in the OCIO config, but preparation for print is probably better left to ICCs.