The first item is, I would say, compositor/DE specific, and I think adding the second one shouldn’t be too hard, at least from a Wayland perspective. Although it could be done as a standalone extension, I do think it is a good idea to add it to the color management one (since most applications that care about this will probably want to do color management as well).
What value would be needed (an absolute one, like “diffuse white is xx cd/m²”, or more like “signal value x.y is diffuse white, which is zz cd/m²”)?
There needs to be a way of setting the default buffer colorspace and intent, rather than assuming sRGB with no specified intent or HDR handling.
My impression from reading through the Wayland protocols is that it is not their style to retrieve lists of things. Instead they rely on notifications when things change. (I can’t say I am 100% sure of this, though.)
I don’t see that an “HDR profile” format is necessary. Since it is not defined, it is hard to implement! (And it would be a whole exercise to define one in a way that has a hope of being future proof. Best avoided if possible.)
I don’t understand what “color_space_from_display” is meant to do. If a compositor wants to provide default display profiles based on EDID data, then that’s a Compositor implementation detail.
get_primary_display isn’t needed. A client can track the output a surface is on using wl_surface.enter/leave events. An output priority order would assist in knowing which output to maintain maximum fidelity on, and which to leave to Compositor color management.
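To illustrate the tracking side, here is a hypothetical sketch of the client-side bookkeeping (not real libwayland calls; the struct and function names are mine, and only the state that a wl_surface.enter/leave listener would drive is shown):

```c
#include <stddef.h>

/* Hypothetical bookkeeping for wl_surface.enter/leave: keep the
 * outputs a surface is currently on, in the order they were entered.
 * Real code would call on_enter/on_leave from the surface listener;
 * outputs are opaque IDs here so the policy logic stands alone. */
#define MAX_OUTPUTS 16

struct output_set {
    unsigned ids[MAX_OUTPUTS];
    size_t count;
};

static void on_enter(struct output_set *s, unsigned output_id)
{
    if (s->count < MAX_OUTPUTS)
        s->ids[s->count++] = output_id;
}

static void on_leave(struct output_set *s, unsigned output_id)
{
    for (size_t i = 0; i < s->count; i++) {
        if (s->ids[i] == output_id) {
            for (size_t j = i + 1; j < s->count; j++)
                s->ids[j - 1] = s->ids[j];
            s->count--;
            return;
        }
    }
}

/* "First in" policy: target the output the surface entered first.
 * A priority order from the compositor could override this choice.
 * Returns 1 and stores the ID if the surface is on any output. */
static int primary_output(const struct output_set *s, unsigned *out)
{
    if (s->count == 0)
        return 0;
    *out = s->ids[0];
    return 1;
}
```

The point being: the client already ends up with an ordered list of outputs per surface; the only missing input is which one the user (or compositor policy) considers most important.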
I’m not clear what the purpose of the set_secondary_surface is.
The proposal has no means of downloading display profiles, downloading standard profiles, specifying buffer intents & HDR handling, or retrieving output priority order.
There is no companion protocol for temporarily or permanently installing display profiles, setting the default buffer color space and intent, or setting output priority order.
That would be in direct conflict with color management. You can’t have some other app. taking exclusive access to the per channel hardware LUTs in a way that is not coordinated with the color management. The first step with that protocol should be a notification that color management is about to be disabled. I don’t see it as a desirable path for calibration, since it doesn’t coordinate with the color management.
I think I’ve explained a couple of times how ICC profiles can be used. If you can see problems with that approach, please explain your concerns.
Not entirely sure what you mean here. Do you mean a default for non-CM apps? In that case it would be a compositor setting and not part of the protocol, since those apps won’t be using it. For CM apps I don’t think we can assume a default. Now that you mention it, I do need to have a way to set the intent (at least for ICC).
It is inspired by the way the tablet protocol communicates available tools.
Probably not; I will remove it in the next version.
These 2 points go together: an app would use get_primary_display and then pass the result to color_space_from_display to get the color space. I could probably do this in one call, but if we go with an external interface for configuration I need something for that, and there is still no guarantee that a wl_output maps to only one display (e.g. if one display is cloning another, the Wayland spec doesn’t require a second wl_output for it). So all in all we can’t use wl_output for this.
To assist the compositor in case it needs to render to something other than the primary display (see the cloned-display example above). Strictly speaking not necessary, but I thought it a neat idea.
For display profiles, that is what color_space_from_display is for (it might be named a bit awkwardly). We probably do need something for the standard profiles, indeed. Intents I need to add; the same goes for HDR handling (although I still need some more info on what to add). As for display priority, let me think about that one; I would say it is compositor-internal, but we might need a way to communicate it.
Currently no. I am still thinking about what that should look like and whether it needs to be Wayland (or D-Bus, or maybe even a UNIX domain socket).
Okay, clear; it was just something I encountered during my searches.
Just that no one else seems to be doing it, so there is no software support for it and no example profiles. Although, to be honest, no one seems to be calibrating/profiling HDR screens at all at the moment, so we are probably doing a first here.
Yes, with color management enabled, all clients should get color management by default. That brings immediate benefits for non-color-aware applications and GUIs (and even color-aware applications that only want to do their own color management when it’s needed) when connected to wide gamut and HDR displays. So the Compositor needs a setting, and the color management application(s) need an API for configuring it.
Right. I just remember it being stated as a principle of Wayland protocol design.
Right, but specifying an API doesn’t solve the underlying problem. A surface can be on more than one output. Wayland now has a mechanism for clients to track this and keep an up-to-date list of what outputs a surface is on. All that the client needs for doing its own color management is to decide which output profile to convert to (and tag the buffer with.) The simple situation is when there is only one output. Slightly trickier is a window being dragged from one output to another, where it could apply a first-in or last-out policy. But mirroring etc. can’t be resolved without information from the user, so an output priority list is a way for this to be specified. The Compositor provides this information to the client, while the color management application(s) need an API to configure the compositor. I’d imagine a Compositor would provide a default priority, and the implementer can try to be as smart about that as possible.
If more than one buffer per surface was added to Wayland, then the color aware clients would need modification anyway to deal with it, as well as having a full list of outputs per surface.
Providing multiple buffers per surface to allow for multiple high quality color conversions is a good idea for the future. I suspect it is probably piling too much onto an initial implementation though, given that 99% of the time it should be enough to do high quality conversions for just one output.
Hmm. It came across as getting some kind of handle to the color spaces, rather than downloading an ICC profile.
In my view a Wayland color management protocol won’t fly unless the companion color management protocol is developed and tested at the same time. The communication means is less relevant, only that it is a Wayland-type protocol for configuring the Compositor. They communicate about common objects, and have coherence requirements.
Yes, but I suspect this is an attempt to provide a mechanism for things like “Night Light”, without understanding how all this interacts with other uses of the VideoLUT hardware.
The Wayland devs didn’t seem keen on such open ended hardware access mechanisms either.
HDR screens are being calibrated a great deal in the Video world - but as I mentioned, it’s not simple due to the bodgy HDR standards and the TV makers implementations.
But I don’t have a great concern about this aspect - I’m pretty sure it can all be worked out in the implementation. (After all, DisplayCAL is using ArgyllCMS profiling and ICC linking mechanisms with HDR displays, without ArgyllCMS being at all HDR aware. As far as a profile is concerned, an HDR display is just one with a brighter than usual white. It’s when using the ICC profiles that it gets more interesting.)
I think it’s pretty clear at this point how the protocol has to look. The CM interface has to enumerate all color spaces it knows of and allow the client to create color spaces. The surface has to enumerate which color spaces the surface was converted to on the last present in order of importance. Surfaces have to get tagged with a color space and intent. For non-tagged surfaces sRGB and an arbitrary intent is assumed.
This ignores HDR, profiling, and calibration. For HDR I believe it’s a much better idea to wait for Intel to get their HDR support out of the door so we can actually see what’s needed.
I’ll try to find some time to actually implement this in weston.
Just realized I was being stupid. I was pretty sure I had read that wl_outputs were non-overlapping, but rereading the spec/main protocol I couldn’t find that back, and it is clear from the intent of the protocols (including the wl_surface.enter and wl_surface.leave events, as well as the fact that wl_output has a make and model) that there should be one output per display and that a surface can be “mapped” to multiple of them (you can have multiple wl_surface.enter events without any leave in between). So now I am thinking about extending wl_output with a cm_output; prioritizing an output would then simply be done by declaring that a surface is using a color space associated with one of the outputs.
It is, but you can use that handle to get the ICC profile (I use a handle here since creating a color space from an ICC profile would create a similar handle).
Agree, which is the reason I am developing this in a full git repo instead of using a gist or diffs on wayland_protocol. I probably should start on the accompanying protocol soon.
The “doing a first” was more a reference to the ICC stuff, which I will leave to you since you know that far better than I do.
For non-tagged surfaces, sRGB with perceptual/relative colorimetric intent should be the suggested way to go, but of course this will be compositor-dependent.
Ignore calibration and profiling at your own peril; they will at least require a unique ID per display (currently wl_output gives make/model, which is close but not enough). On top of that, it would be good to already think about how HDR fits into all this, so that the driver designers can take into account what we need from them.
I think what @swick means is that Intel is working on hooking up the Linux DRM/KMS bits to support HDR, not that HDR isn’t out there (please correct me if I am wrong). I do agree that this means we need to think now about how to hook that up to userspace, and since from a color management perspective HDR is just another color space, the best way to do so is via the CM spec.
Yes, but the evidence so far is that it is intended to be a hack, since anything else would involve implementing actual color management.
I don’t think they need to be coupled. In fact, if the Intel work is a hack from the color management point of view, then adding a CM framework can be done quite independently; once it is in place, the Intel work can make use of it, turning it from a hack into something that is color managed.
I want to see what the interaction with the kernel will look like and which requirements it has. No point in specifying something that won’t actually work.
Anyway, updated the protocol. Copied parts from @dutch_wolf and took some inspiration from the presentation-time protocol. I’m pretty happy with the overall design (although it might make sense to link a color space to a wl_output) but the text description definitely needs more work.
I have a question as a user. If I buy an HDR TV now and connect my Linux (Ubuntu/X11) notebook to it with an HDMI cable, what will happen?
No HDR content, that’s what I’m quite sure about.
Now the options I have are:
It will not work at all, black screen or something like that.
The colours are off, because the TV is expecting HDR content.
The content will look like on an old TV, because the TV knows that it is not HDR content and does some internal conversion.
Looks good to me; only, I have no idea how the application will know the output/display color space in case it wants to render directly in display space. (Also not entirely sure what the color space feedback is for.) In regard to using an array or an fd for communicating ICC profiles: generally speaking a profile will be a couple of KiB in size (for example, the v2 sRGB profile on color.org is about 3 KiB, while the v4 version is ~60 KiB). I think that starts to become a bit too large for a byte array, but I’m not an expert.
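On the fd option: the existing Wayland pattern for blobs of this size is fd passing, as wl_keyboard already does for keymaps. A Linux-specific sketch (the function name is mine, and this is only one plausible approach) of packing an ICC blob into a sealed memfd that could then be sent over the wire:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Put an ICC profile blob into a sealed memfd so the fd can be sent
 * to the compositor (the same pattern wl_keyboard uses for keymaps).
 * Returns the fd, or -1 on error. Linux-specific (memfd_create). */
static int icc_to_fd(const void *data, size_t size)
{
    int fd = memfd_create("icc-profile", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, (off_t)size) < 0 ||
        write(fd, data, size) != (ssize_t)size) {
        close(fd);
        return -1;
    }
    /* Seal the memfd so the receiver can mmap it without having to
     * worry about the sender resizing or rewriting it afterwards. */
    fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
    return fd;
}
```

The sealing step is what makes fd passing safe here: the compositor can map the profile read-only and trust that its size and contents won’t change underneath it.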
The hack part should only be the Wayland part, since I don’t think we want CM in kernel space (the driver still needs to tell the GPU to turn on HDR, which in turn tells the display to turn on HDR). I do think the DRM/KMS (kernel) parts need to be well designed.
On anything made for consumers it will default to sRGB, so the third option. Anything made for professionals might default to a wide SDR color space (e.g. AdobeRGB), so option 2, but not because the TV is expecting HDR; just different colors.
So, as an application programmer, I think I’d want to have two choices:
Send an image to render with an attached color profile that can be used to convert to the respective hardware display profile(s). If a particular profile is told to me by the operating system, that’s okay, I can convert prior to display.
Send an image to display without an attached color profile that will just be displayed in whatever colors the hardware interprets the values to be.
I expect to be dealing with Qt, GTK, wxWidgets, or some other library that deals with the mechanism being discussed, but I thought I’d interject with this to see where it’s heading…
Actually, that would be 3 choices (when this protocol is finished/implemented):
Send with an attached profile (+ intent) and the compositor will transform to the display (should mostly be used by anything that handles wide-gamut colors but doesn’t care about accuracy)
Send with a specific profile, namely that of one of the outputs you are on; in this case the compositor should blit your image directly to the display (buffer). If the output changes, the compositor should let the program know and in the meantime do a best-effort transform to the new output (until the program updates the surface)
No profile: the compositor will interpret it as sRGB + perceptual intent
And yes most programmers will probably use this via their toolkit of choice not directly!
The reason for a blit here is that it is the only compositing operation that won’t introduce color artifacts.
It is not known in advance on which outputs the surface will be shown and therefore which color spaces it will be converted to. The best that can be done is to give feedback about which color spaces the surface was converted to in the last presented frame (with a priority).
I should probably add the corresponding wl_output to the zwp_color_space_feedback_v1.converted event.
Ah, so an application would first set the surface to a working color space and then, when committing, listen for a zwp_color_space_feedback_v1.converted event to get the output color space if needed? That is kind of ingenious, but it is not enough information for profiling/calibration (both need to know which screen they are actually running on, and the wl_output.geometry make/model is probably not accurate enough in the case of 2 or more identical monitors). Note that if we go the route of using an external interface (probably D-Bus) for calibration/profiling, only an ID that can be communicated to this external interface would be needed.
And any application that doesn’t really care about accuracy can just ignore this. Also, reading this again, it doesn’t even have to set an initial color space at all; that opens some interesting possibilities.