Scene-referred editing with PhotoFlow

OCIO uses displays to select among various idealized colorimetric outputs. Within each display is a grouping of views, the various output renditions designed for that display.
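For reference, the displays/views grouping looks roughly like this in a config file (an illustrative fragment only; the display, view, and colorspace names here are made up):

```yaml
displays:
  sRGB:
    - !<View> {name: Filmic, colorspace: filmic_srgb}
    - !<View> {name: Raw, colorspace: raw}
  DCI-P3:
    - !<View> {name: Filmic, colorspace: filmic_p3}
```

Each view simply names a colorspace defined elsewhere in the same config, so a display is essentially a curated menu of output transforms.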

The reference is fixed, and roles / transforms describe the various to / from contexts.

As far as I can tell that won’t work, since not all OCIO configs have Rec.2020 configured, and even if one does, it might not be named consistently. (Also, as @anon11264400 points out, there are multiple views, and in some cases people want to switch between them.)

To use display profiles it is better to convert the profile to a LUT and add it to the config (as explained here: https://wiki.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling )

@dutch_wolf @anon11264400 - Would it be possible to perform such an operation on-the-fly, using code from ArgyllCMS? In this case, the user would just need to specify the display ICC profile in the options, and the 3D LUT would be generated by the software itself. This would have the additional advantage of guaranteeing consistency between the source profile assumed by the LUT and the one actually being used in the software.

I’ve been puzzling over the concept of WYSIWYG as used by @gez. I think what this means is as follows:

  • If the output color space in the OCIO config file actually matches the calibrated and profiled screen on which the image is viewed,

  • Then the user will always and only see the actual colors produced by the OCIO LUTs and other editing operations that are performed to transform the original image as encoded in the reference color space.

So it seems to me that if some other color space such as Rec.2020 is assumed by the OCIO pipeline as the output color space, then what the user sees on the screen isn’t what the user will get when the processed image file from the OCIO pipeline is saved to disk and opened with some other software, unless of course the user is actually looking at a Rec.2020 monitor.
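To make the mismatch concrete, here is a small sketch using the usual published BT.709-to-BT.2020 conversion matrix (rounded coefficients, D65 white point), showing that the same linear RGB triplet means a different color in the two spaces:

```python
# Sketch: converting linear Rec.709 RGB to linear Rec.2020 RGB with the
# standard 3x3 matrix (rounded published coefficients, D65 white point).

M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def rec709_to_rec2020(rgb):
    """Multiply a linear RGB triplet by the 3x3 conversion matrix."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020]

# Pure Rec.709 red is only a partially saturated red in Rec.2020 terms,
# so displaying data encoded for one space on a monitor of the other
# visibly shifts the colors:
print(rec709_to_rec2020([1.0, 0.0, 0.0]))  # [0.6274, 0.0691, 0.0164]
```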

As far as I can tell, not really, since there is no machine-readable indication of which color space a view actually uses. A view might be sRGB, or it might not; different views can also have different intents (such as a false-color view of the “brightness”).

I think this is one of the weak points of OCIO, and a clear indicator that it was designed for groups with larger budgets who could afford things like DCI-P3 and full sRGB monitors (probably with built-in LUTs).

I am not sure that we should aim at providing all possible types of views. Instead, and at least for the beginning, I think the software should provide a small set of views targeted to photography work and realistic rendering of natural scenes. Hence, the OCIO configuration and the definition of the views should be shipped together with the software, so that the source colorspaces are known and well under control…

Does this make any sense?

No, because the application doesn’t know the context within OCIO.

OCIO was designed as a compartmentalized package, which means it was designed to be used as a whole. Within that, there is currently no context within the design to pass the metadata required for an ICC protocol.

It would simply require setting the output referred context to the appropriate ICC that matches the particular OCIO context.

Exactly. Two displays, Apple P3 and sRGB 2.2 standard would suffice as entry points I suspect.

[quote=“Elle, post:74, topic:7039”]
the concept of WYSIWYG . . .

If the output color space in the OCIO config file actually matches the calibrated and profiled screen on which the image is viewed,
[/quote]

Yes, what @dutch_wolf says.

The thing is, some photographers, perhaps not all, are quite used to editing colors that can’t be displayed on their screen. For example, for a photographer preparing an image for printing on the printer sitting next to the monitor, what’s on the screen is merely a guide. The real goal is making a nice print.

If I had a high-end Epson or Canon fine art printer beside my monitor (don’t I wish :slight_smile: ), or if I were sending a print to an establishment that offers such printing, I’d be very upset if my processing pipeline confined all my colors to the gamut of my calibrated and profiled monitor, because the actual output color gamut is that of the printer, not of the monitor screen.

How many people on this forum have Apple P3 monitors? How many have monitors with good sRGB presets? Well, that would be the people using wide gamut monitors. How many people using wide gamut monitors are willing to confine their output to the sRGB color gamut?

Every single person with an Apple product since late 2015[1]?

I believe you jumped the shark.

[1] Which assuredly forms a disproportionate number of folks interested in art / graphic design / photography work.

That does make sense[1] at least to me.


@elle DCI-P3 is relatively wide gamut compared to sRGB, IIRC, so working on wide-gamut monitors is possible. I don’t think there are a lot of OCIO configs out there with an AdobeRGB display/view, but it should be possible to make them. Also, with the help of different views[2] it should be possible to soft-proof, and with a false-color view it should be possible to show out-of-gamut colors. Of course, setting all this up is quite a bit harder than setting up an ICC workflow (which again shows that OCIO was developed for studio work: a small group sets it all up and the rest just use it).


[1] People who know what they are doing can then set the OCIO environment variable to override this default (and in that case, of course, disable the ICC output transform; anyone who knows how to do that should also know how to get a profile into an OCIO config).
[2] Pretty sure this is how they check whether a print to film will look good (since there are still theaters with only a film projector, not a digital one).
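A false-color gamut check of the sort mentioned above can be sketched very simply: convert to the destination space, then flag pixels that land outside [0, 1]. This is illustrative code, not how any particular OCIO view is implemented:

```python
# Sketch of a false-color out-of-gamut view: any pixel with a channel
# outside [0, 1] in the destination space cannot be represented there,
# and would be painted with a warning color instead of its real value.

def out_of_gamut_mask(image):
    """image: iterable of (r, g, b) linear triplets in the destination space."""
    return [any(c < 0.0 or c > 1.0 for c in px) for px in image]

pixels = [(0.2, 0.5, 0.9),    # in gamut
          (1.3, 0.4, 0.1),    # red channel above 1.0: out
          (0.6, -0.05, 0.3)]  # negative green: out
print(out_of_gamut_mask(pixels))  # [False, True, True]
```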

Replying in passing: DisplayCAL has a LUT generator option. I wonder if it is usable in an OCIO transform?

Yes.

Appending an ICC chain post-view transform, as outlined by @Carmelo_DrRaw, would work fine as well, again assuming the contexts match.

@dutch_wolf - your reply to my concerns is helpful.

@anon11264400 - I don’t know whether you intend to hurt my feelings with your rather nasty way of responding to just about everything I say. But your way of speaking is personally hurtful and doesn’t contribute anything positive to the discussion. You seem like an educated and articulate person. Try to find a neutral way to address what people say.

I don’t.

Nowhere in this entire thread has there been a hint of what you suggested regarding limiting a potential gamut to sRGB. Your statement was essentially a non-sequitur.

Okay, doing a little reading while dialed into a soul-sucking teleconference: if DisplayCAL can generate a LUT that transforms from a specified colorspace (NOT CIE XYZ) to the calibration colorspace, that file could be used in the colorspace definition for the display in the ocio.config file.

http://opencolorio.org/userguide/config_syntax.html
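If that works, the display’s colorspace definition might look something like this (a sketch only; the names, the LUT file name, and the exact transform chain are assumptions on my part, following the config syntax linked above):

```yaml
colorspaces:
  - !<ColorSpace>
    name: calibrated_display
    family: display
    bitdepth: 32f
    from_reference: !<GroupTransform>
      children:
        # scene-linear reference -> display encoding first...
        - !<ColorSpaceTransform> {src: linear, dst: sRGB}
        # ...then the calibration LUT generated by DisplayCAL
        - !<FileTransform> {src: calibration.cube, interpolation: tetrahedral}
```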

I would like to see if I understand correctly the first step, i.e. how to generate a scene-referred image with PhotoFlow, using a single RAW file.

The steps that I would suggest for this are the following:

  • open the RAW file
  • adjust the exposure as needed, to put the “0.18” gray point in the right place
  • select the “standard” matrix camera profile, select a linear output colorspace, and do not clip the output values. The “standard” matrix profile is “neutral”, in the sense that it provides correct and natural colors without any specific “look”
  • export the resulting image to 32-bit floating-point TIFF, again selecting a linear output profile
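The exposure step above is, on scene-linear data, just a multiplicative gain; a minimal sketch (the function name and values are illustrative, not PhotoFlow code):

```python
# Sketch: on linear data, "setting the 0.18 gray point" is a plain multiply,
# which preserves the ratios between all pixels (no curves involved).

def expose_to_middle_grey(pixels, measured_grey, target_grey=0.18):
    """Scale linear values so that measured_grey lands on target_grey."""
    gain = target_grey / measured_grey
    return [value * gain for value in pixels]

# A grey patch metered at 0.09 needs one stop (2x) of gain:
print(expose_to_middle_grey([0.09, 0.09, 0.09], measured_grey=0.09))  # [0.18, 0.18, 0.18]
```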

Optionally, if the lens is supported, it is also possible to correct for the lens vignetting directly on the linear pixel values in the camera colorspace.

@gez @anon11264400 - is this reasoning correct, or am I missing something?

Thanks!

I think that the colorspace gamut has to match the reference colorspace gamut in the OCIO.config file, or match a “to-reference” transform.

The TIFF used for input in @gez’s Blender demo, produced using dcraw, has sRGB primaries assigned to the data, per the man page. That would work with Rec.709-referenced OCIO configs, as the color primaries in both specifications are the same.
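For anyone following along: sRGB and Rec.709 share the same primaries and white point but use different encoding curves, so linear data carries over directly while encoded data does not. A quick sketch with the standard published formulas:

```python
# sRGB piecewise encoding (IEC 61966-2-1) vs the Rec.709 camera OETF
# (ITU-R BT.709). Same primaries, different curves.

def srgb_encode(x):
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def rec709_oetf(x):
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

# The same scene-linear middle grey encodes to different code values:
print(round(srgb_encode(0.18), 3), round(rec709_oetf(0.18), 3))  # 0.461 0.409
```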

Correct.

That’s correct, although, as we already discussed, it is not mandatory for every case.
We propose sticking to Rec.709 primaries, at least for these first tests, for the sake of simplicity, as it allows trying existing OCIO configs (like Filmic Blender) without further modifications.
Otherwise, a different configuration would have to be created (either changing the reference to Rec.2020 or producing a transform from linear Rec.2020 to the existing Rec.709 reference), which adds an extra layer of complexity that is better left aside for the moment, until everything else is properly understood.

It looks OK, yes. Just keep in mind that if the unbounded-gamut thing is involved, you’d want to mark that checkbox for clipping negatives.
Does the raw developer use LibRaw?
On the other hand, the black compensation on save seems like something you don’t want activated.

Regarding the lens correction, theoretically I think it’s fine, but you’ll have to make sure that the operation is designed for scene-referred RGB and won’t mess with your light ratios.
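A sketch of why the order matters: vignetting correction is a per-pixel multiplicative gain, which preserves scene light ratios only when applied to linear values (illustrative code, not PhotoFlow’s implementation):

```python
# Vignetting correction as a multiplicative gain on linear camera data.
# Applied before any non-linear encoding, it preserves ratios between pixels.

def correct_vignetting(linear_value, falloff):
    """falloff: fraction of light lost at this pixel, e.g. 0.25 = one quarter."""
    return linear_value / (1.0 - falloff)

# Two pixels whose true ratio is 2:1, both dimmed 25% by the lens:
a, b = 0.30 * 0.75, 0.15 * 0.75
a_c, b_c = correct_vignetting(a, 0.25), correct_vignetting(b, 0.25)
print(a_c / b_c)  # 2.0 (the light ratio survives the correction)
```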


I did a further test using the example image from your OCIO thread: Scene-referred editing with OCIO enabled software - #43.

I have processed the IT8 target image as outlined above, saving the result in linear Rec.709. The result matches very closely what you obtained with

dcraw -T -4 -w -q 3 -n 100

There is just a slight difference, which is probably due to a small difference in the saturation points assumed when normalizing the RAW values.

My TIFF (together with your original TIFF from dcraw) can be downloaded from here: Filebin | ibhc7xtz1dz4dssw

No, it is based on RawSpeed (the same library Darktable uses).

The vignetting correction is applied to linear pixel values in the camera colorspace, before any ICC conversion, which I’m pretty sure is the correct procedure (and the one expected by the LensFun code).