Interesting, since scene referred LAB has no upper bound (theoretically at least), so although non-linear it can be used for HDR.
Scene referred workflows are a simple byproduct of a series of questions that anyone with experience can answer:
- Why am I seeing odd fringing when mixing / manipulating / compositing? Because the internal model requires operating on energy values, not nonlinearly encoded values. Result: All values must represent linear ratios of energy.
- If we require a linear model, we must toss out the display referred model due to its hard upper limit, and basic photography represents a range of values where such a limit is a problem. Result: A zero to infinity float representation is mandatory.
- If we harness a fully scene referred linear model as a result of the above, we must detach the internal data from the view (Model / View architecture). Result: Model / View architecture with a consistent ground truth of a scene referred linear reference of fixed primaries.
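The fringing point above can be demonstrated numerically. The sketch below, using the standard sRGB transfer functions, compares a 50/50 mix performed directly on encoded values against the same mix performed on linear energy; the function names are illustrative, not from any particular library:

```python
# Why averaging nonlinearly encoded values goes wrong: mix in encoded
# space vs. mix in linear energy, using the sRGB transfer functions.

def srgb_to_linear(v):
    # sRGB decoding (electro-optical direction)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # sRGB encoding (inverse of the above)
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# A 50/50 mix of two encoded values (white and black):
encoded_mix = (1.0 + 0.0) / 2  # 0.5 -- far too dark

# The same mix performed on linear energy, re-encoded only for display:
linear_mix = linear_to_srgb((srgb_to_linear(1.0) + srgb_to_linear(0.0)) / 2)

print(encoded_mix, round(linear_mix, 4))  # 0.5 vs. roughly 0.7354
```

The two results disagree substantially; applied per-channel across an image, that disagreement is exactly the darkened edges and odd fringing the question describes.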
A consistent, physically plausible manipulation / compositing model requires this construct, and all algorithms must work under it.
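The Model / View split can be sketched in a few lines. Here the model holds unbounded scene referred linear floats, and a view transform maps them into a bounded display range; the Reinhard-style curve is an illustrative stand-in, not any particular system's view transform:

```python
# Model / View sketch: the model is unbounded scene referred linear data;
# the view transform maps it onto a bounded display range. The curve used
# here (Reinhard-style) is only a stand-in for a real view transform.

def view_transform(energy, exposure=1.0):
    e = energy * exposure      # manipulations happen on linear energy
    return e / (1.0 + e)       # map [0, inf) onto [0, 1) for the display

scene_values = [0.18, 1.0, 4.0, 16.0]   # scene referred: no upper bound
display_values = [view_transform(v) for v in scene_values]

# The model keeps 16.0 intact; only the view compresses into display range.
assert all(0.0 <= d < 1.0 for d in display_values)
print(display_values)
```

The point of the architecture is that the ground truth (the scene referred floats) is never destroyed by the view; swapping the view transform changes what you see, not what you have.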
One must not confuse models here, as the two models of [output | display | device] referred and scene referred are vastly different in their implications. A reasonable person attempting to gain the benefits of a scene referred workflow will destroy the very essence of WYSIWYG design when attempting to shoehorn it under a display referred workflow.