Wayland color management

Like I said, I’m ignoring HDR, calibration and profiling for now. I’m confident that we can work them out one after the other.

Anyway, I sent an RFC to wayland-devel ([RFC wayland-protocols 0/1] Color Management Protocol).

A couple of reasons:

  • Due to its complexity, this Wayland protocol will only support RGB profiles. If your application handles CMYK or other, even more exotic color spaces (an ICC profile can handle up to 15 channels, although above 5–6 it becomes somewhat inefficient), you will want to do your own transforms, preferably directly to the output space (for accuracy reasons).
  • I personally would allow a compositor to use an intermediate compositing space, as long as an application isn’t using an output color space. This will make mixing SDR and HDR content easier and will prevent color artifacts in alpha blending, but it will potentially lose some color accuracy, so applications that need that accuracy will want to render directly to the output space.

Ignoring HDR for now; I understand it should be easy enough to add (according to @gwgill an ICC profile can support HDR, and I see no reason to doubt him; I’m just not sure whether the wider industry will agree with him. If everything outside Wayland/Linux ends up using something else for HDR profiles, we will probably need to follow).
But calibration and profiling will be needed for a complete CM workflow[1], and without them this is a lot less useful, since I (and others) would still need X11/Windows/macOS to actually profile and calibrate (and doing profiling/calibration directly via DRM/KMS would be equally inconvenient). The worst case would be if every compositor does its own thing here.
Also, due to the complexities involved, this is not something you want to put inside the compositor itself.


[1] There is a reason high-end monitors (look at the Eizo CG series, for example) include calibration sensors.

I do believe that we can create a separate wayland protocol for that. Feel free to give it a try.

The replies on the ML are helpful, too. The person working on getting HDR to work for Intel graphics answered and shared his Weston repo, which I wasn’t aware of, so I’ll focus on that and on implementing the protocol for now.

I wonder if you could ask to just continue the discussion here and not split it up :slight_smile:

Indeed, I saw some. I think I have answers to those as well; I just need to figure out how to reply to them when all I have is the archive (I’m not subscribed, since I am only really interested in the CM/HDR stuff and not so much in the rest).

I will look into the separate protocol for calibration soon. Would I be able to propose something that draws a rectangle on a specific wl_output at a specific place? It wouldn’t need full surface or even buffer support, since a patch will always be a uniform color anyway (so either just 3 integers, or maybe one, provided we can attach a packing format).

I have a few more questions regarding ICC profiles (@gwgill seems to be an expert here).

The Three-component matrix-based Display profile guarantees that, before the tone reproduction curve is applied, the colors are in (display output) linear space. I could not find any such guarantee for N-component LUT-based Display profiles, so one has to apply the whole lutAToBType pipeline before blending. Is that right?

The other question is whether you can have both a matrix-based and a LUT-based pipeline in a single profile, so the compositor could use the matrix pipeline if it has to, while a client uses the LUT pipeline for higher quality.

The Three-component matrix-based Display profile makes the guarantee that before applying the tone reproduction curve the colors are in (display output) linear space.

Depends on whether the profile is used as source or destination. If it’s used as source, a lookup through the profile transforms device space (RGB) to PCS (profile connection space, CIE XYZ D50 in this case). If it’s used as destination, then it’s the other way around (CIE XYZ D50 to device RGB).

one has to apply the whole lutAToBType pipeline before blending

AtoB direction is device to PCS and for display profiles models the (measured) device behavior. In typical display profile usage (as destination, not as source), you are interested in the BtoA (PCS to device) direction instead.

The problem at hand is doing efficient blending of two images in device space, and that requires converting them to a common linear space. Ideally that would be the display space with the EOTF applied, but only the matrix-based display profile seems to have the EOTF as a concept.

Another thing: how does the vcgt tag change all of this?

You should probably treat the profile as a black box in this regard. For the purposes of blending you would use the profile as intended - convert from device space to PCS, and if you want linear light mixing, convert to XYZ PCS assuming floating point maths. You could then convert that to some other primary coordinate if you wish, but beware of clipping.

It doesn’t. It’s not part of the formal ICC profile.

Unfortunately that confirms my suspicion, at least for LUT-based pipelines. The reason that’s not desirable is that the hardware driving the display can scan out from multiple framebuffers and do blending on the fly, which is much more efficient than doing the same on the GPU. It requires a transformation from display space to a linear space though, for which the hardware has a built-in pipeline (degamma, matrix transform, gamma [Kernel Mode Setting (KMS) — The Linux Kernel documentation]).

It seems possible to map at least the ICC matrix pipeline to the CRTC.

So is it possible to have a matrix and a lut pipeline in the same ICC profile?

Great.

Right, but if they are all in the same colorspace (as they should be when blending), then you don’t need perfect linearity; an approximation would suffice to be an improvement over blending in gamma-encoded space, i.e. 1D LUTs created from the ICC dev->XYZ. (From experience you probably need to dot the XYZ with the full colorant XYZ value of each channel so that the delta is maximized.) As long as you do the blending with sufficient precision (i.e. 12 - 16 bits/component) and invert the 1D LUTs before display, the color should remain unchanged when not blended, and with a reasonable correspondence to linear light mixing. (This would not work so well in printing color spaces, but most conventional displays are additive. Not sure how additive WOLED is though.)

No need for the matrix if this is all in display space. Won’t work between different displays on the same graphics card of course - a full conversion is needed (i.e. link the display ICC profiles and transform the pixels using the CPU or GPU).

Yes, but why? An ICC profile is not a device colorspace transform - it’s the definition of a colorspace. A device transform is possible when you link two profiles. Doing any part of the transform using a hardware matrix will limit your gamut and gamut clipping/mapping options, which is why my suggestion for a Wayland color extension allows the application to convert to display space, with compositor conversions as a lower quality fallback when a surface is mapped to more than one output.

In general the CRTC should be reserved for calibration (i.e. ‘vcgt’) support, because in many graphics cards it has higher precision in its output than the frame buffer (i.e. even my ancient NVIDIA cards have 8 bit frame buffers and 10 bits out of the D/A to the VGA), and this should be taken advantage of in linearizing the perceptual response of the display to frame buffer values.

XYZ PCS display profiles can have both cLUT and Matrix tags, the matrix tags being a fallback. No-one will thank you for using the Matrix pipeline if the cLUT one is available though - the person making the profile has made it a cLUT profile for a reason.
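Whether a given profile actually carries both pipelines can be checked straight from the bytes: per the ICC specification, a profile is a 128-byte header followed by a tag table (a big-endian u32 tag count, then 12-byte entries of signature/offset/size). A sketch of such a check in C (illustration only, not a hardened parser):

```c
#include <stdint.h>
#include <string.h>

/* Return 1 if the ICC byte stream contains a tag with the given 4-char
 * signature, scanning the tag table that follows the 128-byte header. */
static int icc_has_tag(const uint8_t *icc, size_t len, const char sig[4])
{
    if (len < 132)
        return 0;
    uint32_t count = (uint32_t)icc[128] << 24 | (uint32_t)icc[129] << 16 |
                     (uint32_t)icc[130] << 8  | (uint32_t)icc[131];
    for (uint32_t i = 0; i < count; i++) {
        const uint8_t *entry = icc + 132 + 12 * i;
        if (132 + 12 * (i + 1) > len)
            return 0;
        if (memcmp(entry, sig, 4) == 0)
            return 1;
    }
    return 0;
}
```

Probing for ‘A2B0’ versus the matrix/TRC signatures (‘rXYZ’, ‘rTRC’, etc.) tells you which of the two pipelines are present.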

What’s the status here? :slight_smile:

Quite some discussion is ongoing on the wayland mailing list, including some stuff regarding HDR (which, as far as I can see, will be integrated with the color management; at least that is currently the idea).

I still need to start on the calibration protocol, which will become necessary, but I do have some ideas for it.

Hi @dutch_wolf, this is Ankit here. I am trying to add the HDR metadata extension to swick’s proposed color-management protocol.

I have a doubt regarding how the PQ/HLG EOTF curves can be encoded in ICC profile files.

Please excuse my ignorance here, and correct me if my understanding is wrong.
I can see that in Little-CMS there are APIs to build a tone curve using standard parametric curves.

cmsToneCurve* cmsBuildParametricToneCurve(cmsContext ContextID,
                                          cmsInt32Number Type,
                                          const cmsFloat64Number Params[]);

This cmsToneCurve can then be used to encode the required function in an ICC profile.

The question is, when we retrieve this curve from an ICC profile, how can we be sure whether the curve is HLG or PQ? This information is required to be sent as HDR metadata from HDR video content, through the application, to the Weston compositor.

I haven’t been following the last couple of weeks (there is work to do instead. The SpyderX is still waiting for a driver.)

When I last left it, a major sticking point was the idea that you could implement color management without any facility to actually do calibration or make profiles.

A very revealing view expressed by several commentators was that it was an advantage to have to switch from Wayland to some special utility that takes control of the whole display at the drm/kms level for profiling! I say revealing, because it reveals to me that these comments are from people who have never done a profile in their life, have no idea what a color management application looks like, and at a technical level don’t understand what a profile is and how it is used. (Dare I say “bike-shedding”?) And this in itself wouldn’t solve the problem of implementing calibration on a running Wayland system either! (And no, I’m never, ever going to write a Linux drm/kms level color calibration and profiling application.)

I think that @gwgill can better answer this question. I am just wondering why the compositor needs to know whether the curve is PQ or HLG; AFAIK it just needs to know that there is a curve and what shape it has. For compositing you need to linearize anyway, so the only reason I can think of is if you want to display something full screen. Or am I missing something?

I don’t have the spec at hand, but AFAIK the HDR InfoFrames explicitly encode the transfer function (EOTF) as either PQ or HLG.

Probably missing something here, but as far as I know HDR comes in with a PQ or HLG curve, and we want to compose it with SDR data, which means both the HDR and the SDR need to be converted, preferably to something scene-linear. Then this scene-linear data needs to be tone-mapped and color-converted and pushed to the screen, where, depending on the screen, we need another PQ or HLG curve. Personally (but as I said, I might be missing something) I think we only need the shape of the curve and don’t need to know whether it is PQ or HLG (with the exception of the output, but that is under the compositor’s control anyway). Working with only the shape should also be a lot more flexible for when/if in the future they decide we need something else besides PQ/HLG.

I’m sorry, but you can. Displays advertise their primaries and white point; most are not actually sRGB. That alone is a huge improvement.

Nobody said it’s an advantage to do it that way, merely that it’s possible, because a proper compositor should guarantee that if a framebuffer with the same color space as the output is being displayed, the values in the framebuffer will appear unchanged on the display.

Nobody is against a wayland protocol for profiling but a wayland compositor without a profiling protocol is still useful!

And again: there is no software calibration with a proper Wayland compositor, which is why the workaround of profiling via KMS is possible.

Right, I’m also not convinced that the compositor has to extract and name the EOTF from an ICC profile for HDR. I just tried to explain where the question is coming from.
