Unbounded Floating Point Pipelines

@gez and @anon11264400 probably a stupid question, but what would an OCIO/ACES workflow look like for an application that uses LAB internally[1] (like darktable)? Would you need to do a LAB-to-reference transform first (before going to display), or are there other options? And if so, what would that look like?

(note that LAB is extremely wide since it is directly derived from XYZ using only a reference white point, which also determines L=100, but in CIELAB you can have L>100 without losing meaning)

[1] Or any other non-RGB color space for that matter
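
To make the question concrete, I imagine the first step would be something roughly like the sketch below (plain numpy; the D65 white and Rec.709 primaries are assumed purely for illustration, the real reference white/primaries and any chromatic adaptation would be whatever the application and the OCIO config actually use):

```python
import numpy as np

# Assumed reference white for the Lab encoding (D65 here, purely illustrative).
Xn, Yn, Zn = 0.9505, 1.0000, 1.0890

def lab_to_xyz(L, a, b):
    """Standard CIELAB -> XYZ; note that L may exceed 100 without any problem."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def f_inv(t):
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
    return np.array([Xn * f_inv(fx), Yn * f_inv(fy), Zn * f_inv(fz)])

# Standard XYZ (D65) -> linear Rec.709 matrix; an ACES config would use its own
# reference primaries (AP0/AP1) here instead.
XYZ_TO_REC709 = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

lab_pixel = (120.0, 20.0, 30.0)            # an "HDR" Lab value with L > 100
xyz = lab_to_xyz(*lab_pixel)
linear_rgb = XYZ_TO_REC709 @ xyz           # scene-linear RGB, ready for a view transform
print(xyz, linear_rgb)
```

In other words: Lab back to XYZ, XYZ into the config’s linear reference RGB, and only then the display/view transform. Whether that is the right way to bolt OCIO onto a Lab-internal application is exactly my question.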

My apologies, but I’m stuck on this point. Bashing is one thing, not useful. Distinctions are another, possibly useful.

Are my distinctions as I drew them on the xy chromaticity diagram with the Rec.709 primaries - considered quite apart from my terminology, which I will freely admit was not very good - valid or not? If not, why not?

Hmm, my screen exceeds the sRGB color gamut by a fair bit in some places (red, yellow, green, blue-green and blue) and falls short in others (violet-blue, magenta).

In GIMP, at floating point, a lot of colors outside the sRGB color space do display just fine on my screen. I don’t understand the sentences you just wrote. So getting down to a concrete example, are you saying that colors sent to the screen should be clipped to the color gamut of the RGB working space?

One reason I ask is because the workflow in my “Autumn colors” tutorial referenced earlier is actually quite color space independent. The only edit that uses the color space primaries is that I used channel mixer to increase chroma. But I could just as easily have used GEGL’s “saturation”, which operates in LCh, and then the entire workflow could be done in any RGB working space and produce the same results.

The other reason I ask is I have two “output displays” in mind when I edit: sRGB for the web and a wide-gamut printer-paper profile for printing.

Are you saying the proper workflow is to edit for sRGB output for the web, in the sRGB color space, clamping and clipping all the way instead of waiting until the end to bring colors back into the sRGB color gamut?

And then start over in a wider gamut RGB working space and edit for the printer? Even though the final image “pre soft proofing” will be exactly the same in both color spaces IF I leave colors unclipped?

So the proper procedure is to clip out the “non-data” as it occurs and definitely also before it’s sent to the screen.

And edit the image twice, once for sRGB and once again for print output, in a larger color space suitable for holding printable colors. EVEN THOUGH in the specific case I ask about the final image will look exactly the same pre-soft-proofing.

This makes very little sense to me.

Sorry, we cross-posted. But as the results are identical regardless of the color space, I still don’t see that it matters.

Well, I won’t try to explain them again. The regions are well-defined mathematically on the chart. And it seems to me that what I call “out of gamut” is exactly what you call “non-data”.

Is it possible to create one’s own LUT? Or is editing mostly a matter of trying existing LUTs to see which one makes a nicer-looking image?

If it is possible to create one’s own LUT, what’s the basic procedure?
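
From what I can tell, the mechanical part is just writing out a lattice of samples in a plain-text format such as .cube, which OCIO (and most grading tools) can apparently load. Something like the sketch below, where the “look” itself is a made-up placeholder tone curve:

```python
import numpy as np

def my_look(rgb):
    """Placeholder 'creative look' (a simple rolloff-style tone curve).
    In a real LUT this is where the actual color transform would go."""
    return rgb / (rgb + 0.4) * 1.4

size = 33  # lattice resolution; 33 is a common choice
with open("my_look.cube", "w") as f:
    f.write("TITLE \"my look\"\n")
    f.write(f"LUT_3D_SIZE {size}\n")
    # .cube convention: the red index varies fastest, then green, then blue
    for b in range(size):
        for g in range(size):
            for r in range(size):
                rgb = np.array([r, g, b], dtype=float) / (size - 1)
                f.write("{:.6f} {:.6f} {:.6f}\n".format(*my_look(rgb)))
```

Presumably the hard part is deciding what transform to bake in, not writing the file. Also, a lattice like this only covers the 0–1 input domain by default, which I suppose is why shaper/1D LUTs and log encodings keep coming up in scene-referred discussions.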

Ah, well I don’t think this is a translation issue so much as an issue of unfamiliar terminology used to refer to a familiar concept. Instead of “emissive” I use “additive” or just “RGB color mixing”. But really “emissive” is probably more accurate. But different groups of people use different technical and non-technical terminology. And sometimes it’s hard to tell whether a given word is meant one way or another.

Interesting, since, like scene-referred, LAB has no upper bound (theoretically at least), so although non-linear it can be used for HDR[1] (but as you said probably not for a traditional scene-referred workflow). Thinking about it, the only way it would work is to use XYZ as the reference (so an “RGB” workspace with the primaries 1,0,0; 0,1,0 and 0,0,1 in XYZ), since LAB to/from XYZ is a trivial operation (and XYZ is linear).

But I am just spitballing here, so probably completely wrong. To be honest, in a scene-referred workflow you[2] would probably just use something like darktable/RawTherapee to get a demosaiced file in a floating-point file format with a well-known color space, and then use something like Blender/Natron/Krita to edit the actual picture.

[1] As seen in my earlier post with @gez’s EXR file in darktable
[2] Generic you

I am still following the thread. For those still a bit confused with the terminology, I found that some of them are defined here: http://opencolorio.org/FAQ.html. I also found the notes here useful, though I am not a dev: http://opencolorio.org/developers/index.html (e.g., usage examples). This is mostly for “casuals” like myself.


If I remember correctly, your tutorial was still sRGB-centric, at least when applied to the official GIMP version…

If we suppose we apply a similar edit in a wider colorspace, in which the saturation boosts do not result in negative channel values, wouldn’t it be preferable to postpone any local saturation correction to the very end, when preparing the image for a specific output medium (display, printer, etc…), and fine-tune the adjustment to the target output colorspace?

I would say that reasonable targets for a modern photographic editing workflow are both wide-gamut displays (Rec.2020?) and high-quality color printing. When producing sRGB images for the web, I think that a proper gamut warning functionality, as well as clear indicators for negative and above-1.0 values, are the best way to judge whether the colors in the edited image fit within the sRGB gamut…
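
Such an indicator could be as simple as the following rough sketch (numpy, assuming a floating-point image already converted to the candidate output colorspace):

```python
import numpy as np

def out_of_gamut_mask(img):
    """True for every pixel with any channel below 0.0 or above 1.0,
    i.e. colors that would not survive the trip to the target space unclipped."""
    return np.any((img < 0.0) | (img > 1.0), axis=-1)

# Hypothetical H x W x 3 float image already in the output colorspace:
img = np.array([[[0.2, 0.5, 0.9],
                 [1.3, 0.4, -0.1]]])   # the second pixel is out of gamut
print(out_of_gamut_mask(img))
```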

Yes, it was sRGB-centric because it was a tutorial for GIMP.

Regarding deferring adding local saturation until the end, the problem is that chroma and lightness interact when it comes to colorfulness, and also saturation affects the appearance of brightness. So changes to tonality will entail changes to chroma - saving the chroma changes for last would mean redoing the tonality changes. For example, I’m working on a painting of an orange. It flat-out amazes me how turning the “results so far” to a grayscale image completely obliterates the appearance of brightness caused by the orange color, even though the relative luminance is the same. As @briend says, “colors influence colors”.

Even if I had been working in a larger color space, the extreme “chroma” move would still have likely produced out of gamut colors, or in Troy’s terminology “non-data”. But the extremity of that particular move simply doesn’t matter as a mask is immediately added. This has the benefit of keeping all the chroma edits on one layer, unless there were areas for which decreased chroma is desired. But in this particular image, that wasn’t the case.

What would matter is if the chroma layer had its RGB channel values clipped as “non-data”. If that happened, then one would need to be very careful about adding chroma to the chroma layer, as clipping the RGB channel values in the chroma layer would cause hue shifts.
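
A quick numeric illustration of why per-channel clipping shifts hue (made-up values, plain Python):

```python
import colorsys
import numpy as np

rgb = np.array([1.3, 0.5, 0.1])      # out-of-range result of a strong chroma move
clipped = np.clip(rgb, 0.0, 1.0)     # per-channel clip to the display range

# Clipping only the red channel changes the R:G:B ratios, and therefore the hue:
print(rgb / rgb.max(), clipped / clipped.max())
print(colorsys.rgb_to_hsv(*rgb)[0], colorsys.rgb_to_hsv(*clipped)[0])  # hues differ
```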

Hmm, regarding HD monitors, this post over on the ArgyllCMS forums is a bit scary from a color-management point of view (see also the preceding and following posts in the discussion):

@anon11264400 and @gez … OK, I have a question … OCIO/ACES sound interesting. Would you both be willing to work on tutorials for a complete workflow with the two tools? From device calibration through editing to print?

Maybe also some tutorials on how to port ICC/LCMS-based apps to OCIO? We could put them on https://pixls.us/ or you could host them on your site(s).

Then we could compare the ICC/LCMS workflows we know with your proposed workflows.

Thank you in advance!


Interesting, since, like scene-referred, LAB has no upper bound (theoretically at least), so although non-linear it can be used for HDR

Scene referred workflows are a simple byproduct of a series of questions that anyone with experience can answer:

  1. Why am I seeing odd fringing when mixing / manipulating / compositing? Because the internal model requires operating on energy values, not nonlinearly encoded values (see the sketch after this list). Result: All values must represent linear ratios of energy.
  2. If we require a linear model, we must toss out the display referred model due to its upper limit, and basic photography represents a range of values where such a limit is a problem. Result: A zero to infinity float representation is mandatory.
  3. If we harness a fully scene referred linear model as a result of the above, we must detach the internal data from the view (Model / View architecture). Result: Model / View architecture with a consistent ground truth of scene referred linear reference of fixed primaries.
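
To make point 1 concrete, here is a trivial sketch (plain numpy, sRGB transfer functions assumed) of how mixing nonlinearly encoded values diverges from mixing the underlying linear energies:

```python
import numpy as np

def srgb_encode(x):
    """sRGB OETF (the usual piecewise ~2.2 'gamma' encoding)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def srgb_decode(x):
    """Inverse of the above, back to linear light."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

a_lin, b_lin = 0.05, 0.80                  # two linear-light intensities

# Physically plausible 50/50 mix: average the energies themselves.
mix_linear = (a_lin + b_lin) / 2.0

# Display-referred mistake: average the nonlinear code values, then decode.
mix_encoded = srgb_decode((srgb_encode(a_lin) + srgb_encode(b_lin)) / 2.0)

print(mix_linear, mix_encoded)             # the second result is noticeably darker
```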

A consistent physically plausible manipulation / compositing model requires this, and all algorithms, to work under this construct.

One must not confuse models here, as the two models of [output | display | device] referred and scene referred are vastly different in implications. A reasonable person attempting to gain the benefits of a scene referred workflow will detach the very essence of WYSIWYG design when attempting to shoehorn it under a display referred workflow.

I found more supplementary info that might help this discussion:
– http://acescentral.com/uploads/default/original/1X/38d7ee7ca7720701873914094d6f4a1d4ca031ef.pdf
– http://www.mdpi.com/2313-433X/3/4/40/html

If I understand this correctly, then a darktable-like LAB workflow is neither scene nor display referred: it is not scene referred since it isn’t linear, but neither is it display referred since it doesn’t use a defined “white”[1] (i.e. there is no maximum above which the color model no longer makes sense) nor is it confined by any primaries. Since CIE L*a*b* is a valid color model it seems a bit strange that it doesn’t seem to adhere to either of these principles, or am I making stuff up?

[1] Of course L=100 a=0 b=0 is the same as the reference white point used in constructing the LAB color space from XYZ, but this is relatively arbitrary (since XYZ is linear it is possible to scale the white point arbitrarily, and so you can make any Y equal to L=100)
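
To put a number on the “no maximum” point: with the standard CIELAB formula the luminance corresponding to an L* above 100 is still perfectly well defined, e.g.:

```python
# Y/Yn = ((L + 16) / 116) ** 3 for L above the ~8 cutoff of the piecewise CIELAB formula
for L in (50.0, 100.0, 150.0):
    print(L, ((L + 16.0) / 116.0) ** 3)
# L=100 gives Y/Yn = 1.0 exactly; L=150 gives roughly 2.9 times the reference luminance
```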

Again, can’t stress this enough…

1.0 means nothing in a scene referred workflow. Nothing. Zero. Zilch. Nada. Zippo. It isn’t a magic number[1].

The whole notion that 1.0 means anything in such a model is a byproduct of model confusion.

It is display referred. Don’t conflate UI with how the reference is established, even if the architecture design is confused.

[1] In some circumstances, such as a single illumination source and some “diffuse white” object such as PTFE, a specific value in a scene referred model may represent a rough idea of an albedo value. The statement applies in the general context of a photograph.

Use a linear gamma encoding. Oh, sorry, wrong terminology. Use linear RGB. You can do this using ICC profile color management.

You can do this using floating point processing in an ICC profile color-managed editing application.

What does detaching the internal data from the view mean? Is this what @Carmelo_DrRaw was referring to in the following quote?

This is confusing, since per the definition (as far as I understand it) a display referred model is bound by its black and white points, while a CIE L*a*b* model isn’t bound in the same way: L* values higher than 100 are allowed (and do make sense), so the only thing the white point is used for is to determine the grayscale[1] (what color a*=b*=0 is for L*>0)

So according to that initial definition CIE L*a*b* can’t really be display referred, unless I am missing something here, but I wouldn’t know what

[1] Even ACES still uses a reference illuminant/white point (D60 according to Wikipedia)

Assume creative. How does one create the LUT?

Also, what other ways to affect the final image are available? What layer blend modes and such? What kinds of edits that aren’t LUT edits can be done? I don’t know how to ask this question, sorry. I’m thinking of the photographic editing operations available as nodes in Blender, but honestly without an example of using OCIO to process an image, it’s hard to know how to even ask “how/what”.
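
The closest I have found so far is the snippet pattern from the OCIO developer page linked earlier, which (if I read it correctly) boils down to something like this in Python. I’m assuming the v1 Python bindings here, and the colorspace names "linear" and "sRGB" are placeholders that have to exist in the particular config:

```python
import PyOpenColorIO as OCIO

# Load a config; OCIO.GetCurrentConfig() would pick up $OCIO from the environment instead.
config = OCIO.Config.CreateFromFile("config.ocio")   # path is a placeholder

# Ask the config for a processor between two of its colorspaces.
processor = config.getProcessor("linear", "sRGB")

# Apply it to a flat [R, G, B, ...] float list (one middle-grey pixel here).
pixels = [0.18, 0.18, 0.18]
print(processor.applyRGB(pixels))
```

As far as I understand, the blend modes and nodes themselves remain the host application’s job (Blender, Natron, Krita, etc.); OCIO only supplies the colorspace conversions and views wrapped around them.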