Scene-referred editing with PhotoFlow

I see these numbers being thrown around in the OCIO threads and I think to myself: my DSLR probably has 8 stops at best :persevere:. The discussion on exposure blending sure is helpful!

That's probably the source of the misunderstanding. The sRGB version is not linear; it was added only to show that V4 ICC profiles can correctly handle HDR data. Could you use the Rec.2020 version from the last posted link, which also contains the RAW files?

That sounds too low, even for entry level DSLRs. What camera do you have?

Curious how an sRGB nonlinear OETF is applied to values beyond the display-referred range, given that it is strictly a display-referred encode.

Any chance of an EXR instead of TIFFs for this to avoid file handler inconsistencies?

Well, the concept of dynamic range isn't precisely defined, because the required amount of detail isn't defined. For a camera that generates values proportional to scene luminance, 12 bits give 12 stops of theoretical dynamic range, but how many are usable? The bottom few stops have values 1, 2, 4, 8, 16, … If we want shadows that contain readable detail, that certainly excludes the bottom couple of stops, and probably two more.
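To make that arithmetic concrete, here is a small illustrative Python sketch of how many code values each stop of a linear 12-bit encoding gets - the top stop takes half of all code values, and the bottom stop is left with a single one:

```python
# Code values available per stop in a linear 12-bit encoding.
# The brightest stop spans half the code values; each stop below
# gets half as many, down to a single value in the deepest stop.
bits = 12
for stop in range(bits):                 # stop 0 = brightest
    first = 2 ** (bits - 1 - stop)       # first code value of this stop
    count = first                        # number of values in this stop
    print(f"stop {stop:2d}: {count:4d} code values, starting at {first}")
```

With only 1, 2, or 4 distinct values, the bottom stops cannot carry readable shadow detail, which is exactly the gap between theoretical and usable dynamic range.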

I will prepare it later this evening.

I think maybe you didn't understand my question, or maybe I didn't ask the question clearly enough.

Consider two photographs:

  • A photograph of the sun taken on a clear summer day at a latitude of roughly 45 degrees (as an aside, hopefully nobody does this without taking suitable precautions).
  • A photograph of a flame emitted by the typical birthday cake candle.

If one is scaling the image file intensities to equal the intensities in the actual scenes, the photograph of the sun will require a hugely greater scaling factor than the photograph of a candle flame.

If one is scaling the image file proportionately for some specific technical or artistic reason, then after scaling (and, by wild coincidence, possibly even before scaling) the maximum intensities in both files might be exactly the same.
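To put rough numbers on the first case: using order-of-magnitude luminances (the figures below are purely illustrative), the two scale factors differ by roughly five orders of magnitude, even though both files may top out at the same encoded value:

```python
# Rough, order-of-magnitude scene luminances in cd/m^2 (illustrative only).
sun_luminance    = 1.6e9   # solar disc on a clear day
candle_luminance = 1.0e4   # small candle flame

# Suppose both exposures happen to place their brightest pixel at 1.0
# in the scene-referred file; mapping file values back to scene
# luminance then needs wildly different scale factors.
print(sun_luminance / 1.0)               # ~1.6e9
print(candle_luminance / 1.0)            # ~1.0e4
print(sun_luminance / candle_luminance)  # ~1.6e5, the gap between the two
```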

There are camera sensor comparison sites that test for dynamic range.

This site - Nikon D7000 Review - Imatest Results - finds my Nikon D7000 has about 10 stops of dynamic range.

You can compare cameras at this site.

Thanks for the link, @David_Wilson. The Nikon D7000 saves raw at 12 or 14 bits/channel/pixel. They don't say what they tested the raw at, but I assume 14 bits. The JPEG comes out of the same sensor (obviously) so I suppose that is also from 14-bit inputs. As they rightly say, the useful dynamic range varies according to acceptable noise.

Interestingly, the density response is somewhat non-linear. I expect a standard curve could be applied to make it linear. In my less-scientific tests, a D800 is more linear.

I would like to wrap up the discussion regarding OCIO and custom display profiles, as I would like to draft an OCIO-based view in the next few days.

@Elle @gez @anon11264400 - do I understand correctly that the easiest way to implement this would be to use an OCIO config that goes from and to some well-defined, standard colorspaces (for example, use aces_1.0.3 and go from ACEScg to Rec.2020), and then use ICC to go from there to the specific display profile?
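If that is the right reading, the OCIO half of the chain might look roughly like this - a minimal sketch using the OCIO v1 Python bindings, where the config path and color space names are placeholders that depend on the actual config (aces_1.0.3 prefixes its names, e.g. "Output - Rec.2020"):

```python
import PyOpenColorIO as OCIO  # OCIO v1 Python bindings

# Placeholder path and names; adjust to the config actually shipped.
config = OCIO.Config.CreateFromFile('aces_1.0.3/config.ocio')
processor = config.getProcessor('ACES - ACEScg', 'Output - Rec.2020')

# Apply the OCIO leg of the chain to a flat list of RGB floats.
pixels = processor.applyRGB([0.18, 0.18, 0.18])

# From here the buffer is Rec.2020 data; the final step to the
# monitor profile would be handled by the ICC machinery (e.g. lcms),
# outside of OCIO.
```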

OCIO uses displays to select various colorimetric idealized outputs. Within each display is a grouping of views, the various renditions designed for that display.

The reference is fixed, and roles / transforms describe the various to / from contexts.
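A minimal sketch of how an application might enumerate that structure, using the OCIO v1 Python bindings (the config file name is illustrative); note that a view only points at a color space name inside the config, nothing more:

```python
import PyOpenColorIO as OCIO  # OCIO v1 Python bindings

config = OCIO.Config.CreateFromFile('config.ocio')  # illustrative path

for display in config.getDisplays():
    print('display:', display)
    for view in config.getViews(display):
        # Each view resolves to a color space defined in the config,
        # but nothing states what that space means outside OCIO.
        cs = config.getDisplayColorSpaceName(display, view)
        print('  view:', view, '->', cs)
```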

As far as I can tell that won't work, since not all OCIO configs have Rec.2020 configured, and even when one does, it might not be named consistently (also, as @anon11264400 points out, there are multiple views, and in some cases people want to switch between them).

To use display profiles it is better to convert the profile to a LUT and add it to the config (as explained here: https://wiki.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling )

@dutch_wolf @anon11264400 - Would it be possible to perform such an operation on-the-fly, using code from ArgyllCMS? In this case, the user would just need to specify the display ICC profile in the options, and the 3D LUT would be generated by the software itself. This would have the additional advantage of guaranteeing consistency between the source profile assumed by the LUT and the one actually being used in the software.
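ArgyllCMS is a C library and command-line suite rather than something scriptable directly, so the sketch below substitutes Little CMS (via Pillow's ImageCms) purely to illustrate the idea: sample a lattice in the assumed source space, push it through the ICC transform, and write a .cube file that an OCIO FileTransform could load. The profile file names are placeholders:

```python
import numpy as np
from PIL import Image, ImageCms

GRID = 17  # a common 3D LUT lattice size

# Build a GRID^3 lattice of 8-bit RGB samples; red varies fastest,
# matching the .cube file convention.
steps = np.linspace(0, 255, GRID).round().astype(np.uint8)
b, g, r = np.meshgrid(steps, steps, steps, indexing='ij')
lattice = np.stack([r, g, b], axis=-1).reshape(1, -1, 3)
src = Image.fromarray(lattice, mode='RGB')

# 'rec2020.icc' and 'monitor.icc' are placeholder profile names.
transform = ImageCms.buildTransform('rec2020.icc', 'monitor.icc',
                                    'RGB', 'RGB')
dst = np.asarray(ImageCms.applyTransform(src, transform)) / 255.0

# Emit the sampled transform as a .cube 3D LUT.
with open('display.cube', 'w') as f:
    f.write(f'LUT_3D_SIZE {GRID}\n')
    for rr, gg, bb in dst.reshape(-1, 3):
        f.write(f'{rr:.6f} {gg:.6f} {bb:.6f}\n')
```

A real implementation would sample at higher precision than 8 bits, but the flow - lattice in, ICC transform, LUT out - is the same.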

I've been puzzling over the concept of WYSIWYG as used by @gez. I think what this means is as follows:

  • If the output color space in the OCIO config file actually matches the calibrated and profiled screen on which the image is viewed,

  • Then the user will always and only see the actual colors produced by the OCIO LUTs and other editing operations that are performed to transform the original image as encoded in the reference color space.

So it seems to me that if some other color space such as Rec.2020 is assumed by the OCIO pipeline as the output color space, then what the user sees on the screen isn't what the user will get when the processed image file from the OCIO pipeline is saved to disk and opened with some other software - unless, of course, the user is actually looking at a Rec.2020 monitor.

As far as I can tell, not really, since there is no machine-readable indication of what color space it might actually be (so a view might be sRGB, or it might not be; also, different views can have different intents, like a false-color view of the "brightness").

I think this is one of the weak points of OCIO and a clear indicator that it was designed for groups with larger budgets, who could afford things like DCI-P3 and full-sRGB monitors (probably with built-in LUTs).

I am not sure that we should aim at providing all possible types of views. Instead, at least for the beginning, I think the software should provide a small set of views targeted at photography work and realistic rendering of natural scenes. Hence, the OCIO configuration and the definition of the views should be shipped together with the software, so that the source colorspaces are known and well under control…

Does this make any sense?

No, because the application doesn't know the context within OCIO.

OCIO was designed as a compartmentalized package, which means it was designed to be used as a whole. Within that, there is currently no context within the design to pass the metadata required for an ICC protocol.

It would simply require setting the output referred context to the appropriate ICC that matches the particular OCIO context.

Exactly. Two displays, Apple P3 and sRGB 2.2 standard would suffice as entry points I suspect.
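In practice that mapping could be as small as a lookup table in the application. A hypothetical sketch (the display names and profile file names below are invented for illustration):

```python
# Hypothetical mapping from the OCIO display names a shipped config
# might use to the ICC profile handed to the final display transform.
DISPLAY_TO_ICC = {
    'Apple P3': 'displayp3.icc',  # placeholder profile file names
    'sRGB 2.2': 'srgb22.icc',
}

def icc_for_display(display_name):
    # Fall back to the sRGB profile when the OCIO display has no
    # known ICC counterpart.
    return DISPLAY_TO_ICC.get(display_name, 'srgb22.icc')
```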

[quote="Elle, post:74, topic:7039"]
the concept of WYSIWYG . . .

If the output color space in the OCIO config file actually matches the calibrated and profiled screen on which the image is viewed,
[/quote]

Yes, what @dutch_wolf says.

The thing is, some, perhaps not all, photographers are quite well used to editing colors that can't be displayed on their screen. For example, for a photographer preparing an image for printing on the printer sitting next to the monitor, what's on the screen is merely a guide. The real goal is making a nice print.

If I had a high-end Epson or Canon fine art printer beside my monitor (don't I wish :slight_smile: ), or if I were sending a print to an establishment that offers such printing, I'd be very upset if my processing pipeline confined all my colors to the gamut of my calibrated and profiled monitor, because the actual output color gamut is that of the printer, not the monitor screen.

How many people on this forum have Apple P3 monitors? How many have monitors with good sRGB presets? Well, that would be the people using wide gamut monitors. How many people using wide gamut monitors are willing to confine their output to the sRGB color gamut?

Every single person with an Apple product since late 2015[1]?

I believe you jumped the shark.

[1] Which assuredly forms a disproportionate number of folks interested in art / graphic design / photography work.