In principle I agree with you - other Color Management (CM) systems could be substituted
for ICC-based ones. But let's consider the situation here:
This is about CM of displays, not about application CM in general.
There are only two widely deployed CM systems supported in operating systems: ICC and WCS (Windows Color System). I don't know of any applications that make use of the latter; all color-managed applications I am aware of use ICC for display CM. (I don't count proofing workflows such as video, where a display is calibrated to emulate a single specific colorspace, since that is not managing color - it is a fixed workflow.)
Correct me if I'm wrong, but OCIO is aimed at color workflows (i.e. film rendering and compositing), and while I'm sure it could be adapted to work as a display device profiling system, it is not optimized for that role (CMYK etc.?) and is poorly supported in it. (i.e. I know of no widely used display calibration and profiling systems that output OCIO profiles.)
In terms of Wayland, I don’t think that the aim should be to put a full, general purpose color management/workflow system into it - in fact this rather goes against the Wayland idea of making the client (i.e. application) responsible for rendering.
Now you could propose that a new "neutral" system be invented to implement CM in Wayland, but this would be a huge task to create, implement and maintain, and would make porting the CM of existing applications much harder. (Also see the appropriate xkcd about standards!)
In contrast, systems that already have CM also have ICC display profiles, applications already deal with them, and there is a ready-to-go ICC CMM implementation that is widely used and supported to drop into a Wayland implementation (lcms2).
The ICC spec is rather large and complex, so libraries/programs that can read devicelink profiles suddenly have a larger attack surface. This is especially a problem for compositors, since they are the root of trust in the Wayland world.
Agreed - but this is a responsibility that Wayland takes on by insisting that it does the compositing
in a fashion that divorces the client from knowledge about what output it is on. (But I have some reservations about this conclusion - it needs more investigation.)
For this and other reasons I think it would be mandatory that people from the image editing/creation software world are involved in this discussion. To frame it, I will try to set down what I think such a protocol should look like.
I'm not sure this is the pertinent angle. What's more pertinent is getting across that CM is vital to certain applications, and that CM means support both for CM tools (to set up CM environments) and for CM-enabled applications.
In my proposal the compositor internally should composite in a linear/scene-referred color space with the Rec. 2020 primaries. Legacy/sRGB content would be converted to this color space, while basic applications would render directly into it. The compositor would then use a shader to render this composite down to the screen color space, and composite the advanced applications in that space. In this case profiling can just use the advanced-application route; only for calibration do we need to add a way to change the calibration LUT from the Wayland protocol.
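To make the conversion step in that proposal concrete, here is a minimal sketch of taking an sRGB pixel into a linear Rec. 2020 space. The matrix is the standard BT.709/sRGB-to-BT.2020 primary conversion (per ITU-R BT.2087); the function names are illustrative, not part of any protocol:

```python
# Sketch: converting an sRGB pixel into a linear Rec.2020 compositing
# space, as the proposal above describes. Names are illustrative only.

def srgb_to_linear(c):
    # Inverse of the sRGB transfer function (IEC 61966-2-1).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Linear sRGB/BT.709 -> linear BT.2020 primary conversion (ITU-R BT.2087).
M = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def srgb_to_linear_rec2020(rgb):
    lin = [srgb_to_linear(c) for c in rgb]
    return [sum(M[i][j] * lin[j] for j in range(3)) for i in range(3)]
```

A real compositor would do this per-pixel in a shader, but the arithmetic is the same.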
I fear this is far too complex and prescriptive. I don't think that any particular colorspace should be baked into the compositor, and it is not actually necessary to tie CM to the compositing space (see my suggestion at the end).
But I have been thinking about Wayland, and it occurred to me that there is a highly analogous display attribute that they must have had to come to grips with: display resolution (aka density). And indeed, yes, they have a means of dealing with HiDPI. It's a little clunky, but it's what they have adopted, and so it provides a precedent for CM that (hopefully) would meet less resistance.
The way Wayland works allows for a (spatial) transform between pixel buffer and surface. The Wayland compositor uses that definition to rotate & scale pixels before they are composited together.
In a situation with a set of mixed-resolution outputs and legacy applications, the application renders to fixed-DPI buffers, and the compositor then scales things so that (say) windows end up a similar size as they move or cross from one output to another. Naturally this is not optimal in regards to quality, so the approach a HiDPI-aware application takes is to create buffers with an orientation and DPI that result in a null transformation from the compositor, so that the pixels can be composed directly to the output.
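The HiDPI mechanism just described can be caricatured in a few lines. This is a deliberate simplification (real Wayland uses integer buffer scales set via wl_surface.set_buffer_scale); the point is only that matching buffer and output properties yields a null transform:

```python
# Simplified sketch of Wayland's HiDPI handling: the compositor scales a
# buffer by output_scale / buffer_scale. When the application supplies a
# buffer whose scale matches the output, the transform is the identity
# and pixels pass through untouched.

def compositor_scale(buffer_scale, output_scale):
    return output_scale / buffer_scale

def is_null_transform(buffer_scale, output_scale):
    return compositor_scale(buffer_scale, output_scale) == 1.0
```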
This translates pretty directly to CM, if you substitute a device color profile for DPI.
Assume the compositor has a profile assigned to each output.
A legacy application won't label its buffer's colorspace, so the compositor could either assume the source is the same as the output (a "CM off" mode, for speed, and the same as current Wayland behavior) or assume that unlabelled sources are sRGB (a global "CM on" mode - yay, a color managed desktop!).
CM-aware applications would tag their buffers with the appropriate color profile and let the compositor do the CM conversion. Alternatively, if they want to take charge of the CM themselves (because they need more control over CM details, or because they are converting from color spaces like CMYK that aren't supported by Wayland), they would do the conversion themselves and label the buffer with the same profile as the main output it resides on, resulting in a null transform.
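The per-buffer decision this implies in the compositor might look something like the following sketch. All names are illustrative; in practice profiles would be compared by identity or hash, and "sRGB" stands for a built-in default profile:

```python
# Sketch: choosing what conversion the compositor applies to a buffer,
# mirroring the HiDPI logic above. Illustrative only.

def compositor_conversion(buffer_profile, output_profile, assume_srgb):
    if buffer_profile is None:
        # Legacy client: buffer carries no profile tag.
        if assume_srgb:
            return ("convert", "sRGB", output_profile)  # global "CM on"
        return ("none",)  # "CM off": treat source as already in output space
    if buffer_profile == output_profile:
        # CM-aware client that pre-converted to the output space:
        # null transform, pixels pass straight through.
        return ("none",)
    # CM-aware client leaving the conversion to the compositor.
    return ("convert", buffer_profile, output_profile)
```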
(i.e. if HiDPI applications are able to find out the DPI of the output their buffer mainly resides on, then the same path should let a CM application find out the profile of the output its buffer resides on.)
[ I think the space in which the Wayland compositor composes pixels for each output could possibly be managed by a distinct extension, perhaps one that simply provides a device space <-> linear light curve. ]
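Such an extension might need no more state than a per-output transfer curve. As a purely illustrative stand-in, a simple power-law curve shows the shape of the interface (a real extension would more likely carry a sampled LUT):

```python
# Sketch: a per-output device space <-> linear light mapping, modelled
# here as a simple power-law (gamma) curve. Illustrative only; a real
# extension would likely exchange a sampled LUT instead.

class OutputTransfer:
    def __init__(self, gamma=2.2):
        self.gamma = gamma

    def to_linear(self, device_value):
        # Device space -> linear light.
        return device_value ** self.gamma

    def from_linear(self, linear_value):
        # Linear light -> device space.
        return linear_value ** (1.0 / self.gamma)
```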