Unbounded Floating Point Pipelines

If I remember correctly, your tutorial was still sRGB-centric, at least when applied to the official GIMP version…

Suppose we applied a similar edit in a wider colorspace, in which the saturation boosts do not result in negative channel values. Wouldn’t it be preferable to postpone any local saturation correction to the very end, when preparing the image for a specific output medium (display, printer, etc…), and fine-tune the adjustment to the target output colorspace?

I would say that reasonable targets for a modern photographic editing workflow are both wide-gamut displays (Rec.2020?) and high-quality color printing. When producing sRGB images for the web, I think that proper gamut-warning functionality, as well as clear indicators for negative and above-1.0 values, is the best way to judge whether the colors in the edited image fit within the sRGB gamut…
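
A rough illustration of what such an indicator could look like (the thresholds, the toy image and the magenta marker are just assumptions for the sketch):

```python
import numpy as np

def gamut_warning_mask(rgb, low=0.0, high=1.0):
    """Boolean mask of pixels with any channel below `low` or above `high`."""
    return np.any((rgb < low) | (rgb > high), axis=-1)

# Tiny 2x2 "image": in-gamut, negative, over-range, and exactly-white pixels.
img = np.array([[[0.2, 0.5, 0.8], [-0.1, 0.3, 0.4]],
                [[1.2, 0.9, 0.7], [1.0, 1.0, 1.0]]])

mask = gamut_warning_mask(img)
print(mask)                       # [[False  True] [ True False]]

# Simple warning overlay: paint flagged pixels with a solid marker colour.
preview = img.copy()
preview[mask] = [1.0, 0.0, 1.0]   # magenta marker, as in typical soft-proof warnings
```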

Yes, it was sRGB-centric because it was a tutorial for GIMP.

Regarding deferring adding local saturation until the end, the problem is that chroma and lightness interact when it comes to colorfulness, and saturation also affects the appearance of brightness. So changes to tonality will entail changes to chroma; saving the chroma changes for last would mean redoing the tonality changes. For example, I’m working on a painting of an orange. It flat-out amazes me how turning the “results so far” into a grayscale image completely obliterates the appearance of brightness caused by the orange color, even though the relative luminance is the same. As @briend says, “colors influence colors”.

Even if I had been working in a larger color space, the extreme “chroma” move would still likely have produced out-of-gamut colors, or in Troy’s terminology “non-data”. But the extremity of that particular move simply doesn’t matter, as a mask is immediately added. This has the benefit of keeping all the chroma edits on one layer, unless there were areas for which decreased chroma is desired. But in this particular image, that wasn’t the case.

What would matter is if the chroma layer had its RGB channel values clipped as “non-data”. If that happened, then one would need to be very careful about adding chroma to the chroma layer, as clipping the RGB channel values in the chroma layer would cause hue shifts.
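
A small sketch of why per-channel clipping shifts hue while scaling the whole triplet does not (the over-range pixel value is made up for illustration):

```python
import numpy as np

out_of_gamut = np.array([1.3, 0.4, 0.1])    # hypothetical over-range pixel on the chroma layer

clipped = np.clip(out_of_gamut, 0.0, 1.0)   # per-channel clip: channel ratios change
scaled = out_of_gamut / out_of_gamut.max()  # scale the whole triplet: ratios preserved

# Channel ratios stand in for hue here: clipping alters the R:G:B balance,
# scaling the whole triplet does not.
print(out_of_gamut / out_of_gamut.max())    # [1.    0.308 0.077]
print(clipped / clipped.max())              # [1.    0.4   0.1  ]  <- hue has shifted
print(scaled / scaled.max())                # [1.    0.308 0.077]  <- hue preserved, just darker
```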

Hmm, regarding HD monitors, this post over on the ArgyllCMS forums is a bit scary from a color-management point of view (see also the preceding and following posts in the discussion):

@anon11264400 and @gez … Ok I have a question … ocio/aces sound interesting. Would you both be willing to work on tutorials for a complete workflow with the 2 tools? From calibration of the devices over editing to print?

Maybe some tutorials on how to port ICC/LCMS based apps to ocio? We could put them on https://pixls.us/ or you host them on your site(s).

Then we could compare the ICC/LCMS workflows we know with your proposed workflows.

Thank you in advance!

Interesting, since scene-referred LAB has no upper bound (theoretically at least), so although it is non-linear it can be used for HDR.

Scene referred workflows are a simple byproduct of a series of questions that anyone with experience can answer:

  1. Why am I seeing odd fringing when mixing / manipulating / compositing? Because the internal model requires operating on energy values, not nonlinearly encoded values (see the sketch after this list). Result: All values must represent linear ratios of energy.
  2. If we require a linear model, we must toss out the display referred model due to the limit, and basic photography represents a range of values where such a limit is a problem. Result: A zero to infinity float representation is mandatory.
  3. If we harness a fully scene referred linear model as a result of above, we must detach the internal data from the view (Model / View architecture). Result: Model / View architecture with a consistent ground truth of scene referred linear reference of fixed primaries.
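
A minimal sketch of the first point, assuming sRGB encoding and a 50/50 mix of black and white (the values and the mix are purely illustrative): averaging the encoded values gives a different, darker result than averaging the underlying linear energy, which is exactly the kind of mismatch that shows up as fringing in composites.

```python
import numpy as np

def srgb_decode(v):
    """sRGB-encoded value -> linear energy (IEC 61966-2-1 EOTF)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    """Linear energy -> sRGB-encoded value."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1.0 / 2.4) - 0.055)

black, white = 0.0, 1.0

naive = (black + white) / 2                                          # mixing the encoded values
energy = srgb_encode((srgb_decode(black) + srgb_decode(white)) / 2)  # mixing linear energy

print(naive)    # 0.5
print(energy)   # ~0.735 — the energy-correct 50/50 mix, re-encoded for display
```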

A consistent physically plausible manipulation / compositing model requires this, and all algorithms, to work under this construct.

One must not confuse models here, as the two models of [output | display | device] referred and scene referred are vastly different in implications. A reasonable person attempting to gain the benefits of a scene referred workflow will detach the very essence of WYSIWYG design when attempting to shoehorn it under a display referred workflow.

I found more supplementary info that might help this discussion:
http://acescentral.com/uploads/default/original/1X/38d7ee7ca7720701873914094d6f4a1d4ca031ef.pdf
http://www.mdpi.com/2313-433X/3/4/40/html

If I understand this correctly, then a darktable-like LAB workflow is neither scene- nor display-referred: it is not scene-referred since it isn’t linear, but neither is it display-referred since it doesn’t use a defined “white”[1] (e.g. there is no maximum above which the color model doesn’t make sense anymore), nor is it confined by any primaries. Since CIE L*A*B* is a valid color model it seems a bit strange that it doesn’t seem to adhere to either of these principles, or am I making stuff up?

[1] Of course L=100 a=0 b=0 is the same as the reference white point used in constructing the LAB color space from XYZ, but this is relatively arbitrary (since XYZ is linear it is possible to scale the white point arbitrarily, and so you can make any Y equal to L=100).
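
A minimal sketch of the standard CIE 1976 L* formula that illustrates both halves of that footnote (the luminance values fed in are just examples):

```python
import numpy as np

def cie_L_star(Y, Yn=1.0):
    """CIE 1976 L* from relative luminance Y, given a chosen white luminance Yn."""
    t = Y / Yn
    delta = 6.0 / 29.0
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)
    return 116.0 * f - 16.0

print(cie_L_star(1.0))              # 100.0 — Y equal to the chosen white gives L* = 100
print(cie_L_star(2.0))              # ~130  — nothing caps L* once Y exceeds that white
print(cie_L_star(2.0, Yn=2.0))      # 100.0 — rescale the "white" and the same Y maps back to 100
```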

Again, can’t stress this enough…

1.0 means nothing in a scene referred workflow. Nothing. Zero. Zilch. Nada. Zippo. It isn’t a magic number[1].

The whole notion that 1.0 means anything in such a model is a byproduct of model confusion.

It is display referred. Don’t conflate UI with how the reference is established, even if the architecture design is confused.

[1] In some circumstances, such as a single illumination source and some “diffuse white” object such as PTFE, a specific value in a scene referred image may represent a rough idea of an albedo value. The statement applies in the general context of a photograph.

Use a linear gamma encoding. Oh, sorry, wrong terminology. Use linear RGB. You can do this using ICC profile color management.

You can do this using floating point processing in an ICC profile color-managed editing application.
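
As a toy illustration of why the floating point part matters (the pixel values and the +2 stop adjustment are made up for the sketch): on unbounded 32-bit float, linear data can be pushed well past 1.0 without losing anything.

```python
import numpy as np

# Hypothetical linear, scene-referred pixel values stored as 32-bit float.
linear = np.array([0.02, 0.18, 0.85, 3.5], dtype=np.float32)

# +2 stops of exposure: multiply the linear energy values by 2**2.
exposed = linear * 2.0 ** 2

print(exposed)         # [ 0.08  0.72  3.4  14. ] — values above 1.0 are preserved
print(exposed.dtype)   # float32: nothing is clamped until a display transform is applied
```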

What does detaching the internal data from the view mean? Is this what @Carmelo_DrRaw was referring to in the following quote?

This is confusing, since by definition (as far as I understand it) a display-referred model is bound by its black and white points, while a CIE L*A*B* model isn’t bound in the same way: L* values higher than 100 are allowed (and do make sense), so the only thing the white point is used for is to determine the grayscale[1] (what color a*=b*=0 is for L*>0).

So according to that initial definition CIE L*A*B* can’t really be display-referred, unless I am missing something here, but I wouldn’t know what.

[1] Even ACES still uses a reference illuminant/white point (D60 according to wikipedia)

Assume creative. How does one create the LUT?

Also, what other ways to affect the final image are available? What layer blend modes and such? What kinds of edits that aren’t LUT edits can be done? I don’t know how to ask this question, sorry. I’m thinking of the photographic editing operations available as nodes in Blender, but honestly without an example of using OCIO to process an image, it’s hard to know how to even ask “how/what”.
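
In case a concrete sketch helps frame the question: one common way to create a creative LUT, independent of any particular application, is to sample a lattice of input RGB, push each sample through whatever transform stack defines the look, and write the result out as a .cube file, which OCIO can then reference from a FileTransform. The “look” below, the lattice size and the file name are purely illustrative assumptions.

```python
import numpy as np

def creative_transform(rgb):
    """Toy 'look' used purely as an example: a mild tone tweak plus a saturation push."""
    rgb = rgb ** 0.9                                      # illustrative tone adjustment
    mean = rgb.mean(axis=-1, keepdims=True)
    return np.clip(mean + (rgb - mean) * 1.2, 0.0, 1.0)   # illustrative saturation boost

def bake_cube(path, size=33):
    """Sample a size^3 lattice and write a .cube 3D LUT (red varies fastest)."""
    grid = np.linspace(0.0, 1.0, size)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in grid:
            for g in grid:
                for r in grid:
                    out = creative_transform(np.array([r, g, b]))
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

bake_cube("toy_look.cube")   # hypothetical file name; load it via an OCIO FileTransform / look
```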

The answer focused on the software you mentioned.

Lab has a notion of diffuse white, hence the effort by Dr. Mark Fairchild to design an HDR-CIELAB model. Again though, nonlinear encodings are sub-optimal for image manipulation / compositing.

I think the following is quite interesting for this discussion: ACES2065-1, found in the ACES docs.
If you look in Appendix A, section 4.2.2, the colorspace is actually defined for negative values.

Also, reading that document, it seems to be advised to set R=G=B=0.18 as the value of a neutral gray card (see for example section 5.3.1).

Going all the way to Annex B: provided everything was captured with the reference image capture device (RICD) under a D60 illuminant, values for diffuse white are also defined.


I mentioned darktable since that is currently the only software I know of that uses floating point CIE L*A*B*, so it was intended more as an example, and as can be seen above even ACES has a notion of diffuse white :wink:

That is AP0, the archival encoding primaries. Noise floor issues etc. Not for manipulation. AP1 is designed for that.

Ideal, perfect diffuse white, which is really hard to find in reality, has a reflectance of 100%, and you align your exposure using an 18% reflectance card as middle gray. It’s an exposure reference, not something akin to the values captured by your camera or created in your imaging software.
You’re capturing light ratios; they come from diffuse reflection, specular reflections or direct emissions.
In the case of the latter, what do you think 1.0 means?
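
A back-of-the-envelope sketch of that exposure convention (the 0.18 anchoring and the example reflectances are assumptions for illustration): with an 18% card pinned to 0.18, ideal diffuse white lands at 1.0, roughly 2.5 stops above middle grey, while specular reflections and direct emissions can land anywhere above that.

```python
import math

# Assumed convention for the sketch: an 18% reflectance card is anchored at 0.18.
middle_grey_value = 0.18
middle_grey_reflectance = 0.18
exposure_scale = middle_grey_value / middle_grey_reflectance   # 1.0 under this convention

def scene_value(reflectance_or_ratio):
    """Scene-referred value for a given diffuse reflectance (or emission ratio)."""
    return reflectance_or_ratio * exposure_scale

print(scene_value(1.00))                    # 1.0  — ideal 100% diffuse reflector
print(scene_value(0.90))                    # 0.9  — a more realistic "white" object
print(scene_value(40.0))                    # 40.0 — e.g. a light source in frame
print(math.log2(scene_value(1.0) / 0.18))   # ~2.47 stops from middle grey up to diffuse white
```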

Seems to also be true for AP1 (ACEScg[1]); see section 4.3 of S-2014-004, found on the same page:


I know, but @anon11264400’s statements seem to imply that a scene referred workflow doesn’t have any preferred meaning of 1.0,1.0,1.0, which is seemingly contradicted by the ACES standard (where that is roughly diffuse white under D60).

[1] Which uses the same primaries as ACEScc, but since that is a log-encoded space I think we can safely ignore it in this discussion.

Is it?

Did you read my footnote above?

Yes I did, but the fact that things get weird if a non-reference white point is used is also true for display referred data, and ACES AP0 seems to be specifically built around an RICD for which the illuminant is defined to be D60; in that case 0.18 and 1.0 do have specific meanings (I would like to add that R=G=B=1.0 in AP0 is also R=G=B=1.0 in AP1). It is true that the ACES spec does allow people to deviate from this, in which case this meaning is lost, but since, generally speaking, the cinema industry has full control of the lighting, my guess would be that most aim to follow the advice in the spec (since it makes working with CGI a bit easier).
Ergo, according to your footnote it is unusual for there to be an albedo value; to my reading of the specs this is actually the expected way of working, so there usually is an albedo value.
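
For what it’s worth, the neutral-preservation part of that claim can be checked directly. The AP0 → AP1 matrix below is the one commonly quoted from the ACES reference transforms (the specific coefficients are reproduced from memory, so treat them as an assumption); the point is that its rows each sum to 1, which is exactly why R=G=B=x in AP0 stays R=G=B=x in AP1.

```python
import numpy as np

# AP0 -> AP1 conversion as commonly quoted from the ACES reference transforms
# (coefficients reproduced here as an assumption, for illustration only).
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

print(AP0_TO_AP1.sum(axis=1))            # each row sums to ~1.0 (both spaces share the ACES white)

neutral_ap0 = np.array([1.0, 1.0, 1.0])
print(AP0_TO_AP1 @ neutral_ap0)          # ~[1. 1. 1.]: AP0 neutrals map to AP1 neutrals
```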

Don’t conflate an intensity of a pixel emission value with some meaning.

A perfectly diffuse reflector would reflect a scene referred value of 1.0 if we apply a 0.18 convention for a scene referred middle grey. However, we cannot take a capture and deduce that all values of 1.0 indicate that within the scene the object is a perfectly diffuse 100% reflectance object.

Further still, we shouldn’t be conflating encoding values with working scene referred image manipulation models. This leads down that familiar path of madness.

I would add that when someone says “white point” as a term, it typically references an achromatic chromaticity coordinate. Here, however, we are speaking of intensity, which can lead to a confusion of terminology.