Wayland color management

@gwgill @swick

I was wondering: would it be acceptable if the calibration curve and display profile could be set via a D-Bus protocol instead of via Wayland directly? This would also give some flexibility in implementation[1], and I think D-Bus already has some notion of privileged vs non-privileged operations. It would mean, though, that we have to standardize a D-Bus protocol alongside the Wayland protocol (we don’t want every compositor re-inventing the wheel for this).

I think this would give a better separation between the configuration of CM and CM itself.

[1] ‘simple’ compositors (like those based on wlroots, e.g. sway) could implement it inside the compositor, while more complex setups (where there is a full DE running) could put it in a configuration daemon or something similar
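To make the idea concrete, here is a purely illustrative sketch of what such a D-Bus interface could look like (every interface and member name below is invented for the sake of discussion; nothing like this exists or is being proposed):

```xml
<!-- Hypothetical D-Bus introspection sketch; all names are invented -->
<node>
  <interface name="org.example.ColorManager1">
    <!-- Install per-channel VideoLUT calibration curves for an output
         (a privileged operation, enforceable via polkit or similar) -->
    <method name="SetCalibration">
      <arg name="output" type="s" direction="in"/>
      <arg name="vcgt" type="a(qqq)" direction="in"/>
    </method>
    <!-- Install an ICC display profile for an output, passed as a fd -->
    <method name="SetProfile">
      <arg name="output" type="s" direction="in"/>
      <arg name="profile_fd" type="h" direction="in"/>
    </method>
    <!-- Lets tools like profilers know when their change took effect -->
    <signal name="ProfileChanged">
      <arg name="output" type="s"/>
    </signal>
  </interface>
</node>
```

A wlroots compositor could implement this interface itself, while a full DE could back it with a configuration daemon, which is exactly the implementation flexibility mentioned above.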

I agree that having a dbus protocol for display calibration and/or rendering intent makes a lot of sense. However, I don’t think it’s necessary to standardize that protocol alongside the wayland protocol. The reference implementation in weston should probably just use the existing static file configuration machinery.

The right time to get involved there is when GNOME Shell and KDE Plasma want to implement color management.

@gwgill One thing that might prove difficult when measuring a display is the mapping between the physical display and the advertised color space of a Wayland output, which would be required for the null transformation.

I still think that bypassing the compositor altogether is a good idea especially since the verification step should make sure that everything works right in the end.

In any case, I don’t think measuring should be a blocker for a wayland color management protocol.

I have two issues with that. The first is that for calibration and profiling these need to be set in real time[1]; this can of course be done with static files (read on change), but that is not ideal. The second issue is that KDE, GNOME and others will each develop their own protocol, which will be a problem for tools like ArgyllCMS, DisplayCAL, etc.

Anyway, now that I have your attention, I would like to try to put some of this into an actual wayland protocol, but I have a hard time finding information on the xml format, what types can be sent over the wire, and so on. Is there any information on that?

[1] AFAIK calibration always happens before profiling, profiling needs the calibration curves loaded, and after profiling we want to load the new profile

I don’t have a strong opinion here but imo it makes more sense to work out the wayland parts and deliver a reference implementation in weston first.


I think I have something. I’m not entirely satisfied with how the supported color spaces are communicated, but the surface part is, I think, mostly how it should be.

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="color_management_unstable_v1">

  <copyright>
    Copyright © 2019 Erwin Burema

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice (including the next
    paragraph) shall be included in all copies or substantial portions of the
    Software.
  </copyright>

  <interface name="zcm_manager_v1" version="1">
    <description summary="Manager to get information from compositor and attach buffer to CM">
      This interface is for the compositor to announce support for color managed surfaces.
      If this interface is supported by the compositor, any surface that doesn't use this
      protocol is assumed to be in sRGB, regardless of buffer size.

      Surfaces that do use this protocol have two options. The first is to use a compositor
      supported color space, in which case the compositor is responsible for rendering to the
      final display space; no accuracy is guaranteed here, since the compositor is allowed to
      use an intermediate compositing color space. The second is to render directly to a
      display color space, using this protocol to query what that is; in this case accuracy
      is guaranteed for the primary display, and the compositor should make a best effort to
      show mostly correct colors on any secondary displays the surface is visible on, without
      guaranteeing accuracy there.

      In the context of this protocol, "primary display" means the main display the surface
      is visible on and "secondary display" means any other display the surface is also
      visible on (either because the primary display is mirrored or because the surface is
      split between two screens). It is the compositor's task to determine the primary
      display of a surface - potentially with the help of user configuration - and to send
      the right events when this changes.
    </description>

    <request name="destroy" type="destructor">
      <description summary="destroy the cm manager object">
        Destroy the cm manager.
      </description>
    </request>

    <request name="cm_surface">
      <description summary="Make surface color managed">
        Make a surface color managed.
      </description>
      <arg name="id" type="new_id" interface="zcm_surface_v1" />
      <arg name="surface" type="object" interface="wl_surface" />
    </request>

    <request name="cm_request_color_spaces">
      <description summary="Get supported color spaces from compositor">
        Clients can send this request to get a description of the color spaces supported
        by the compositor.
      </description>
    </request>

    <event name="cm_supported_color_spaces">
      <description summary="A fd to a file providing information on supported profiles">
        A JSON file with a list of supported profiles the compositor can render to; it
        should also include the display profiles. Sent either as a response to
        cm_request_color_spaces or when the information changes.

        The JSON file should include a tag that the compositor will use to identify color
        profiles, a description which may be a file location of an ICC profile or a set of
        primaries plus curve information (needed since HDR currently can't be communicated
        via ICC profiles), and potentially a common name for programs to identify
        well-known color spaces.
      </description>
      <arg name="cm_config" type="fd" summary="JSON cm configuration fd" />
    </event>
  </interface>

  <interface name="zcm_surface_v1" version="1">
    <description summary="Interface for color managed surface">
      With this interface the surface can set its color tags as defined in the JSON
      configuration. It can also use this interface to get the tag of the primary output.

      If a surface uses a tag that is the same as its primary output's, the surface must
      be blended as late as possible before output to the display. Otherwise the
      compositor is allowed to use an intermediate compositing space.
    </description>

    <request name="cm_set_color_space">
      <description summary="Set color space">
        Set the color space for the attached surface.
      </description>
      <arg name="cm_cp_tag" type="string" summary="Color space tag as defined in cm config" />
    </request>

    <request name="cm_get_primary_display">
      <description summary="Get primary display tag">
        Request to get the primary display tag.
      </description>
    </request>

    <event name="cm_primary_display">
      <description summary="Current primary display of surface">
        The current primary display of the surface. Can be sent either as a response to
        cm_get_primary_display or when the primary display changes, although in the latter
        case only when the surface was rendering to the previous primary display.
      </description>
      <arg name="cm_primary_display_tag" type="string" summary="primary display tag" />
    </event>

    <request name="cm_set_secondary_surface">
      <description summary="To assist the compositor in rendering to secondary screens">
        A secondary surface can be set to assist a compositor when dealing with secondary
        screens or complex compositing tasks.

        This surface must be the same size as the primary surface, must not already be
        attached to a cm_surface object, and cm_tag must not be a display tag.
      </description>
      <arg name="surface" type="object" interface="wl_surface" summary="Secondary surface" />
      <arg name="cm_tag" type="string" summary="Color space tag for secondary surface" />
    </request>
  </interface>
</protocol>

Well, OCIO is not the best choice for automated color conversion. OCIO doesn’t define any machine-readable, interchangeable color spaces. It is good for its main purpose: doing color conversion for the current display under total control of the user. OCIO is nice, but it is surely not usable for predefined “profiles”.

That is the main problem :slight_smile: We should either modify the ICC standard (+ LCMS?) to support HDR, or invent yet another format for profiles.

That’s exactly what I’m pushing for: by default the compositor should expect all apps to just render in sRGB. 99% of developers will never know/care about color management. We must not expect app developers to care about that. It will never happen.

I have just checked: it looks like Windows’ ICM system has no connection to the DirectX surfaces API. I called SetICMMode() for the HDC, then created an sRGB surface with DXGI (and, later, scRGB), and DirectX didn’t do any conversion to the display profile. I don’t really know how it is expected to work. It just passes the data straight through to the display without any color conversions, even though I explicitly tell it the data is sRGB.

I’ll try to expand on it. Graeme’s point was that one cannot use an intermediate color space representation if one wants to get good rendering quality.

The point is, when a color management system converts data into the display’s color space, it can use different “Rendering Intents”. These intents define how colors outside the display’s color range will be fit into the destination display space. The simplest way to fit two color spaces is to clip non-displayable colors, basically dropping them. More sophisticated approaches, like “Perceptual”, compress the source color range (basically offsetting absolute color values) to fit all the source colors into the destination space. The user will not notice the offset (due to the eye’s adaptation processes), but the image will not have dull flat-filled areas of non-representable colors.

So, if you use “intermediate color space” approach, then the only intent you can use is “clipping”, which creates bad results most of the time. If you want to use “perceptual” intent, you should convert directly from the source image color space to the display color space.
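As a toy numeric sketch of that difference (illustrative only: real perceptual intents operate in a perceptual space such as Lab, not per-channel RGB as here, and the knee and source-range values are made up):

```python
def clip_to_gamut(rgb):
    """Clipping-style fit: drop anything the display can't show.
    Distinct out-of-gamut colors collapse to the same value, giving
    the flat filled areas described above."""
    return [min(max(c, 0.0), 1.0) for c in rgb]

def compress_to_gamut(rgb, knee=0.8, src_max=1.5):
    """Perceptual-style fit: leave values below the knee untouched and
    smoothly squeeze [knee, src_max] into [knee, 1.0], so distinct
    source colors stay distinct (at the cost of shifting them all)."""
    return [c if c <= knee else
            knee + (1.0 - knee) * (c - knee) / (src_max - knee)
            for c in rgb]

# Two different out-of-gamut reds: clipping maps both to the same
# displayable color (detail lost); compression keeps them apart.
red_a, red_b = [1.2, 0.5, 0.2], [1.4, 0.5, 0.2]
```
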

Yes, “sRGB-like” was just a short name for “a color space with primaries and EOTF something like in sRGB” :slight_smile:

The point I wanted to make was unrelated though: as far as I know, it is impossible to describe an HDR color space with an ICC profile. Though it needs some investigation.

Yes, exactly. GUI elements, like menus and buttons, are expected to be painted on sRGB surface, but the canvas with actual image data is expected to be painted on a separate surface with a different color tag.

I mean the app should have two surfaces: one with sRGB tag for GUI elements (for which the compositor does color management) and the other one for actual image data (compositor passes it directly to the display).

There is no color management for DXGI surfaces in Windows, even though MS claims there is. You create an sRGB surface, and DirectX passes the data directly to the GPU without any conversions. Calling SetICMMode for the corresponding HDC doesn’t do anything.

Ouch, do you have any information published about that anywhere? I have a lot of questions then :slight_smile:

  1. Does LCMS actually handle this tag?
  2. Does it mean that TRC curves will be normalized by this luminanceTag range?
  3. How will the ranges be connected when linking to SDR?

For the most part, I have absolutely no idea whatsoever what you Techie-Types are talking about when you’re down in the detail ditches. I do, however, “feel the love” :heavy_heart_exclamation: now emanating from this ‘discourse’.

It’s very reassuring that, seemingly, we’re well on our way to avoiding the Colour Chaos which may have occurred had this collaborative discussion not taken place. Bravo and thank-you!


Agreed, it was just an example showing that ICC is not the only way to do it.

Agreed, if we can get ICC to support HDR that would simplify quite a lot. It is actually quite interesting that ICC is in one way far too powerful (it supports color spaces with up to 16 channels, while we are only interested in 3-channel RGB) but in another way not powerful enough (no HDR support).

Did Mir have any colour management code? Could any of it be used?

Even if it had, it wouldn’t really be useful here, since this isn’t about any particular compositor’s implementation of CM but about how applications can communicate with those compositors.

Okay, I have been working on this for a bit; if it works out I might start on a dbus protocol proposal.

@swick does this look sane? (If it does I might also post this to the wayland mailing list.)


No, it was a request from a customer.

I’m not sure what you mean. AFAIK no CMM has explicit provision for mixing SDR and HDR profiles. In the case of the scRGB profile, I experimented with baking in a tone mapping curve so I could use it as an input profile with a standard CMM. For full flexibility some additions are needed when specifying link options to a CMM.
(Or are you talking about the luminanceTag ? It’s a standard ICC tag.)

If SDR to HDR brightness is specified in (say) cd/m^2, then the HDR luminanceTag would be used to compute the scaling factor.

Simplest is to scale white to a given brightness. HDR to SDR needs a tone curve.

TV HDR is currently pretty messed up because, to handle it intuitively, it needs a known “diffuse white” reference value, but the standards the TV industry rushed out are based on a mastering absolute brightness specification. In a mastering suite you can specify a standard ambient light level and display absolute brightness, but in the real world people adjust their TVs to suit the ambient conditions. If there were a defined diffuse white, then the tone mapping could know to preserve linearity from that level down, while being free to compress specular highlights and light-source levels much more aggressively. Mapping SDR to HDR is pretty simple then - map SDR white to HDR diffuse white. The way it seems to be shaping up in practice is that implementors are simply assuming something like 100 cd/m^2 is the diffuse white, while nothing in the standards actually specifies this.
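A minimal arithmetic sketch of the diffuse-white idea above, with all numbers assumed purely for illustration (100 cd/m^2 as the de-facto diffuse white, 1000 cd/m^2 as what a display profile’s luminanceTag might report):

```python
SDR_WHITE_NITS = 100.0   # assumed diffuse-white level (not in any standard)
HDR_PEAK_NITS = 1000.0   # assumed display peak, e.g. from a luminanceTag

def sdr_to_hdr_nits(v):
    """Map a linear SDR value in [0, 1] to absolute luminance in cd/m^2,
    pinning SDR white (v = 1.0) to the diffuse-white level."""
    return v * SDR_WHITE_NITS

def highlight_headroom():
    """Fraction of the display's range above diffuse white that stays
    free for specular highlights and light sources."""
    return 1.0 - SDR_WHITE_NITS / HDR_PEAK_NITS
```

With these assumed numbers, SDR content occupies only the bottom tenth of the display’s absolute range, leaving the rest for highlights, which is exactly why linearity can be preserved from diffuse white down.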

Sorry for taking so long to answer.

I’ve taken some time to write a rough protocol of how I think things should work. It does ignore a few problems (they are described in the protocol description as FIXMEs).

The way the protocol works for the client is this: you listen to wl_surface.enter/leave and the preferred colorspace output event. You decide which colorspace to render to and tell that to the compositor.

The compositor has a bunch of surfaces with their colorspaces and if necessary does gamut mapping to convert them to the output colorspace.

From what I’ve gathered the general idea seems to be acceptable. Does anyone disagree?

Further, here are the issues that still have to be solved:

    FIXME should the zwp_colorspace_v1::information event contain the
          well-known name of a colorspace?

          Right now the client has to infer from the ICC profile that
          an output is e.g. sRGB.

    FIXME should we accept all ICC profiles? Probably not.

    FIXME how should the ICC profile get transmitted? fd passing or
          as "array" (involves endianness).

    FIXME should we let the client attach a rendering intent hint to
          a surface?

    FIXME what about tone mapping?

Any input would be appreciated!

@gwgill @dutch_wolf @Dmitry_Kazakov


Just took a quick look, but I think it is a bad idea for clients to set arbitrary ICC profiles as the color space. Firstly, ICC profiles support more than just RGB, also CMYK and color spaces described with up to 16 channels. Secondly, most programs that care about accurate rendering want to render directly to the display anyway (so those just want to know the display profile, and afterwards the compositor shouldn’t touch the buffers anymore). The programs that want to render wide gamut but don’t care about “perfect” accuracy are much better served by a way to learn what the compositor is capable of and then tag the surface accordingly.

Something like wayland-colormanager/cm_wayland_protocol.xml at master · eburema/wayland-colormanager · GitHub makes, I think, more sense.

Another thing I am worried about (though this might just be my understanding being incomplete) is that I can’t find any guarantee that a wl_output will map to one and only one display. (So I didn’t use it in the above.)

I agree. That’s the second FIXME.

The question I have is what a suitable subset would look like. Maybe something like RGB Device Connection Space? I don’t know enough about ICC profiles.

They can just assign the colorspace they queried from the wl_output to the wl_surface.

I’m not sure if I understand you here. They can just create an ICC profile that describes their wide gamut colorspace and assign that to the wl_surface.

The compositor already has to be able to do conversion between two arbitrary color spaces (two displays with ICC profiles loaded; a surface with the first color space has to be displayed on the second display). I don’t see why we should limit the color spaces to some arbitrary, more or less popular ones.
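As a toy sketch of that conversion path (the matrix below is the well-known linear-sRGB-to-XYZ D65 one; a real compositor would build each matrix from the profile’s primaries and white point, apply the transfer curves, and do proper gamut mapping, all of which this skips):

```python
# Linear sRGB -> XYZ (D65), rounded published values.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_inv3(m):
    # Closed-form cofactor inverse of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

def convert(rgb, src_to_xyz, dst_to_xyz):
    # Source space -> XYZ -> destination space: pure matrix math on
    # linear values, i.e. a relative colorimetric conversion.
    return mat_vec(mat_inv3(dst_to_xyz), mat_vec(src_to_xyz, rgb))
```

The same `convert` works for any pair of matrix-described RGB spaces, which is why requiring support for arbitrary (RGB) profiles is not a big extra burden on the compositor.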

That’s actually a really good point. There are protocols which seem to make that assumption, but I’ll have to take another look.


This sounds technically feasible, but I’m not clear enough about the different related Wayland protocols (i.e. xdg_surface, xdg_shell etc.) to have a feeling for what approach is appropriate. It would have to be a Wayland-like protocol over dbus, since there will be a lot of common elements (references to outputs, color profiles, surfaces etc.)

Sure, but they are closely related. A profiler will make use of much of the color management protocol in its operation.

I don’t like the sound of using a daemon. Installing calibration curves needs a mechanism to know when they are installed, to facilitate reliable verification or high res calibration, where the calibration curves are changed dynamically with each patch measurement so as to be able to exploit the VideoLUT output bit depth.

The profiler needs to be able to dynamically load profiles & calibration curves to do its job, and there’s no point in creating a CM protocol if it can’t be configured and tested. A CM protocol without the APIs to calibrate, profile and install the profiles is only a half implementation, and simply isn’t worth doing.

This is a bad idea from many perspectives, but I won’t repeat my explanations from the Wayland list here.

That’s rather like writing a compositor, but never looking at the output on a real display - i.e. it’s an academic exercise.

Another way of putting this is that there is no point implementing a protocol extension if it is never tested, and the application that most fully exercises a color management protocol is the profiling application.


I’ve put a brain dump on a Wayland Color Management protocol here. It’s a rough set of ideas at this point, rather than something very formal. It will need a bit more research into the “Wayland way of doing things” to turn it into an .xml.
