Wayland color management


(Sebastian Wick) #101

Thanks for taking the time to answer.

I just want to make something clear: nobody is suggesting that the compositor takes over all color management tasks. The client is free to do whatever it wants as long as the surfaces in the end are in a specific color space and the compositor is told about it.

In your PDF example you could still render everything into a floating point frame buffer with e.g. CIE LAB color space. The compositor would only convert it into the display color spaces in the end.
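To make that final conversion step concrete, here is a minimal sketch (plain Python, D65 white point, matrix-only sRGB output, no rendering-intent handling) of turning a CIE L\*a\*b\* pixel into display RGB. A real compositor would use the actual display profile rather than the hard-coded sRGB matrix assumed here:

```python
def lab_to_xyz(L, a, b, white=(0.95047, 1.0, 1.08883)):
    # Inverse of the CIE L*a*b* encoding, relative to the given white point (D65 here).
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        d = 6 / 29
        return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)
    return tuple(w * finv(f) for w, f in zip(white, (fx, fy, fz)))

def xyz_to_linear_srgb(x, y, z):
    # Standard XYZ -> linear sRGB matrix (D65); stands in for the display's
    # own profile, which is what the compositor would actually use.
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b
```

As a sanity check, L\*a\*b\* (100, 0, 0) is the white point and comes out as roughly (1, 1, 1) in linear sRGB.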

Thanks, that’s helpful. I’ll take a look at collink. However this just sounds like we need policy decisions for the compositor. That’s something we can handle.

I guess that’s what I was proposing earlier. I’m not entirely sure why the fidelity would be compromised and you don’t expand on that.

Well, that’s the point we disagree on. It would help if you expand on why you think that’s the case.

This is just weird. Are you now arguing that the compositor should somehow interfere with the color profiling step? The color profiling is best done when you have full control over all the hardware so you can do the best possible measurements. If the compositor then doesn’t make correct use of that measurement it’s a bug in the compositor.

I completely disagree here.

Secondly, calibration/profiling is just another App. It will have a full on GUI, and will want to access the display in both special and normal ways (calibrate, profile, verify).

That would all work just fine. You would press the “calibrate” button, the compositor would ask the user to temporarily give control over the display to the application, you draw whatever you need, and in the end the compositor takes back control and everything looks like before.

I want to see actual arguments here and not “I’ve done this for 20 years like that”.

I honestly don’t care if you laugh or not. This is a fundamental design decision and so far we have found solutions to all problems without breaking it.

That’s simply wrong. You just have to take a look at existing compositors to know that what you’re saying is not the case.

This goes both ways.

(Graeme W. Gill) #102

In general displays aren’t “like” any particular colorspace - they are what they measure to be. i.e. if they are color managed, they are always defined by a profile.

That would mean an application could never take advantage of a wide gamut display, and could never do a good perceptual mapping from the source space to the display.

Perhaps you meant that you would like a default conversion that assumes sRGB source space for non color aware applications or the GUI elements that you render ?

Why should it be special ? Shouldn’t there be any number of color managed applications and windows displayed on a screen at once ?

But the GUI may want to use transparency on color managed applications for various transition effects etc. Yes the window should be opaque for correct color.

Agreed that a color aware application needs to know the display profile to create a high quality conversion.

I’m not clear on what you mean by that. You are using a color managed API and then using the null profile trick to disable color management for MSWindows ?

Right, but Krita is an application - you aren’t responsible for the system GUI rendering.

(Graeme W. Gill) #103

Those are the implications of two constraints: 1) the application doesn’t know which display it is rendering for, and 2) high quality color output is desired.

Please re-read what I wrote. No this isn’t workable, because how will the conversion from L*a*b* to the display space know how to execute the different intents for different pixels in the raster ? How will it know the source gamut for perceptual intent conversions ?

I don’t think you grasp the complexity of this. You really don’t want something like collink in the compositor if there is any possible way of avoiding it, and even if you did, it wouldn’t satisfy other application requirements.

But in any case this, or the more practical support of general device link conversions, is not needed if the client can be given a hint as to which display profile it should prefer to render to, similarly to the HiDPI case.

I expanded on it at length. See the explanation of intents and how you need both the source and destination gamuts.

I’ve already explained this. If you don’t understand the explanation, then please indicate what you don’t follow.

It’s about the color workflow - how the color values get transferred and transformed on their way to the viewer’s eyeballs. What I’m worried about is the whole processing pipeline, and making sure that when a particular buffer of pixels is declared as being in the display’s colorspace as defined by the given profile, that buffer is in fact processed in exactly the same way for display as it was when it was profiled.

So saying that application colors are processed through the compositor but that the calibration and profiling tools should take control of the raw display interface, and take it on trust that the compositor won’t differ in how the pixels are processed, is inviting unnecessary disaster. For instance: let’s say that my calibration and profiling tools stay as they are, and set up calibration using the hardware VideoLUT and then profile the display. Then the user switches to Wayland, but Wayland doesn’t load the VideoLUT from the ‘vcgt’ tag in the display profile (or Wayland has some other rendering tweak/feature). The profile is then not valid. How can I verify it’s not valid ? - I need to run the color profiling tools in verification mode through the Wayland compositor to do this.

To put it another way - any valid color calibration & profiling tool has to have the capability of changing the system calibration and profile, because that’s its final task for the user. This capability means that it can set the color workflow up correctly for calibration and profiling, and gives it the extremely valuable assurance that the color processing for profiling is the same one that will be used for rendering, meaning that the profile will be valid, and it is trivial to verify that the profile is correct.
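The LUT half of that pipeline is easy to illustrate. A minimal sketch, assuming a per-channel 1-D calibration curve like the one stored in a ‘vcgt’ tag (the table below is made up for illustration); verification amounts to checking that a submitted pixel value actually comes out of the display transformed this way:

```python
def apply_videolut(value, lut):
    # Apply a per-channel 1-D calibration curve with linear interpolation,
    # the way a compositor (or the GPU's hardware LUT) would.
    # value is in [0, 1]; lut is a list of output samples in [0, 1].
    pos = value * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A made-up calibration curve; a real one comes from the 'vcgt' tag
# of the installed display profile.
lut = [0.0, 0.22, 0.47, 0.73, 1.0]
```

If the compositor fails to load this curve (or applies a different one), every prediction the profile makes is off, which is exactly the failure mode described above.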

Summary :- from a Color Management point of view, suggesting that the calibration and profiling tools should access the display using a completely different mechanism than the workflow the profile will be used in, is the exact opposite of the best way of doing it.

You should really take a look at one of the commercial profiling applications. No, they don’t issue instructions and then switch to a blank screen - they have instructions on the screen, and graphics of exactly where to place the instrument, etc. No, I don’t think it is normal for color profiling tools to have to provide their own GUI rendering library to display their GUI, just because they want the color management set in a particular way.

I’ve expended a lot of time with detailed explanations, but you don’t seem to want to spend the time researching or understanding.

[ Strangely enough, I’ve not spent 20 years doing the same thing over and over again, I’ve spent it in a constant search for better ways of doing things. ]

Which is perfectly fine - here is another problem requiring a solution :- find a way to let color calibration and profiling applications install calibrations and profiles, so that they can perform the function users expect of them.

I see it as fundamental. Please explain why you think otherwise.

(Graeme W. Gill) #104

I’m confident HDR spaces can be characterized successfully with ICC profiles (as successfully as is possible, given that many HDR displays do too much processing and therefore make any static characterization a compromise). A display profile usually records the absolute brightness in the ‘lumi’ tag.

Yes. An ICC CMM will need some tweaking to cope with linking HDR profiles. For SDR -> HDR there needs to be a brightness intent parameter (where SDR 1.0 maps to). For HDR -> SDR there needs to be a suitable tone reproduction operator.

[ I experimented with this a little, some time ago when playing with creating an scRGB profile. ]
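Those two linking directions can be sketched in a few lines. The brightness anchor and the tone operator below (a simple Reinhard curve) are illustrative choices on my part, not anything the ICC spec mandates:

```python
def sdr_to_hdr(y, sdr_white_nits=203.0, hdr_peak_nits=1000.0):
    # SDR -> HDR: the "brightness intent" parameter decides where SDR 1.0
    # lands. Here we anchor SDR white at sdr_white_nits and return a value
    # normalized to the HDR display's peak brightness (both values are
    # illustrative assumptions).
    return y * sdr_white_nits / hdr_peak_nits

def hdr_to_sdr(y):
    # HDR -> SDR needs a tone reproduction operator; a simple Reinhard
    # curve y / (1 + y) is used purely as an illustration. It compresses
    # an unbounded input range into [0, 1).
    return y / (1.0 + y)
```

With these choices, SDR white (1.0) maps to 203/1000 of the HDR peak, and arbitrarily bright HDR values stay below SDR 1.0 instead of clipping.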

(Andreas Schneider) #105

Please stop getting personal (and I mean everyone participating)! We are all here with the best intentions. This is a complex topic and a lot of people want to learn from this thread. Getting personal drives people away and this is counterproductive!

Please keep explaining things to each other, don’t point fingers. If something is unclear explain it again!

(Graeme W. Gill) #106

That’s not how it is coming across from some people.

Sorry, I’ve spent my whole working day doing nothing but composing emails about Wayland & Color Management. I have explained the same points about half a dozen times in half a dozen different ways, and I have reached my limits for a while, unless we are able to move on.

(Sebastian Wick) #107

I’ve taken some time to read up on rendering intent. Some posts make a lot more sense now.


A better approach I think is the hybrid one we were talking
about: Give the client enough information to decide which display
it should optimize color rendering for. When the compositor needs
to display the surface on some other display, it can use a simpler
bulk color conversion to do so. Optimal color rendering can at least
be achieved on one display (hopefully enough to satisfy the demanding
color user), while still allowing the compositor to handle
window transitions, mirroring etc. without requiring huge
re-writes of applications. This is the analogy to current HiDPI handling.

I completely agree.


@gwgill @swick

I was wondering: would it be acceptable if the calibration curve and display profile could be set via a dbus protocol instead of via wayland directly? This should also give some flexibility in implementation[1], and I think dbus already has some idea of privileged vs non-privileged operations. It would mean, though, that we have to standardize a dbus protocol alongside the wayland protocol (we don’t want every compositor re-inventing the wheel for this).

I think this would give a better separation between the configuration of CM and CM itself.

[1] ‘simple’ compositors (like those based on wlroots, such as sway) could implement it inside the compositor, but more complex setups (where there is also a full DE running) could put it in a configuration daemon or something
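A hypothetical shape for such a dbus interface (every name here is invented for illustration; nothing is standardized):

```xml
<!-- Hypothetical interface, for illustration only -->
<interface name="org.example.ColorManager1">
  <!-- Load a per-channel calibration curve into the output's VideoLUT;
       a privileged operation the compositor may gate behind a prompt -->
  <method name="SetCalibration">
    <arg name="output" type="s" direction="in"/>
    <arg name="red" type="aq" direction="in"/>
    <arg name="green" type="aq" direction="in"/>
    <arg name="blue" type="aq" direction="in"/>
  </method>
  <!-- Install an ICC profile for an output, passed as a file descriptor -->
  <method name="SetProfile">
    <arg name="output" type="s" direction="in"/>
    <arg name="icc_fd" type="h" direction="in"/>
  </method>
</interface>
```

Passing the curves inline and the profile as an fd keeps the privileged write path in one place, while reading the current state could stay on the wayland side.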

(Sebastian Wick) #109

I agree that having a dbus protocol for display calibration and/or rendering intent makes a lot of sense. However, I don’t think that it’s necessary to standardize that protocol alongside the wayland protocol. The reference implementation in weston should probably just use the existing static file configuration machinery.

When Gnome Shell and KDE Plasma want to implement color management, that will be the right time to get involved there.

(Sebastian Wick) #110

@gwgill One thing that might prove to be difficult when measuring a display is the mapping between the physical display and the advertised color space of the wayland output, which would be required for the null transformation.

I still think that bypassing the compositor altogether is a good idea especially since the verification step should make sure that everything works right in the end.

In any case, I don’t think measuring should be a blocker for a wayland color management protocol.


I have two issues with that. The first is that for calibration and profiling these need to be set in realtime[1]; this can of course be done with static files (re-read on change), but that is not ideal. The second issue is that KDE, GNOME and others will each develop their own protocol, which will be a problem for tools like argyllcms, displaycal, etc.

Anyway, now that I have your attention, I would like to try to put some of this in an actual wayland protocol, but I am having a hard time finding information on the XML format (what types can be sent over the wire, stuff like that). Is there any information on that?

[1] AFAIK calibration always happens before profiling, the profiling needs the calibration curves loaded, and after profiling we want to load the new profile

(Sebastian Wick) #112

I don’t have a strong opinion here but imo it makes more sense to work out the wayland parts and deliver a reference implementation in weston first.



I think I have something. I’m not entirely satisfied with how to communicate the supported color spaces, but the surface part is mostly how it should be, I think:

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="color_management_unstable_v1">

  <copyright>
    Copyright © 2019 Erwin Burema

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice (including the next
    paragraph) shall be included in all copies or substantial portions of the
    Software.
  </copyright>

  <interface name="zcm_manager_v1" version="1">
    <description summary="Manager to get information from compositor and attach buffer to CM">
      This interface is for the compositor to announce support for color managed
      surfaces. If this interface is supported by the compositor, any surface
      that doesn't use this protocol is assumed to be in sRGB, regardless of
      buffer size.

      Surfaces that do use this protocol can either be in a color space
      supported by the compositor, in which case the compositor is responsible
      for rendering to the final display space; no accuracy is guaranteed in
      this case, since the compositor is allowed to use an intermediate
      compositing color space. The second option is to render directly to a
      display color space, using this protocol to query what that is; in this
      case accuracy is guaranteed for the primary display, and the compositor
      should make a best effort to also show mostly correct colors on any
      secondary display the surface is visible on, but no accuracy is
      guaranteed there.

      In the context of this protocol, "primary display" means the main display
      the surface is visible on, and "secondary display" means any other
      display the surface is also visible on (either because the primary
      display is mirrored or because the surface is split between two screens).
      It is the compositor's task to determine what the primary display of a
      surface is - potentially with the help of user configuration - and to
      send the right events should this change.
    </description>
    <request name="destroy" type="destructor">
      <description summary="destroy the cm manager object">
        Destroy the cm manager.
      </description>
    </request>
    <request name="cm_surface">
      <description summary="Make surface color managed">
        Make a surface color managed.
      </description>
      <arg name="id" type="new_id" interface="zcm_surface_v1" />
      <arg name="surface" type="object" interface="wl_surface" />
    </request>
    <request name="cm_request_color_spaces">
      <description summary="Get supported color spaces from compositor">
        Clients can send this request to get a description of the color spaces
        supported by the compositor.
      </description>
    </request>
    <event name="cm_supported_color_spaces">
      <description summary="A fd to a file providing information on supported profiles">
        A JSON file with a list of supported profiles the compositor can render
        to, which should also include the display profiles. Sent either as a
        response to cm_request_color_spaces or when the information changes.

        The JSON file should include a tag that the compositor will use to
        identify color profiles, a description which may be a file location of
        an ICC profile or a set of primaries + curve information (this is
        needed since currently HDR can't be communicated via ICC profiles), and
        potentially a common name for programs to identify common color spaces.
      </description>
      <arg name="cm_config" type="fd" summary="JSON cm configuration fd" />
    </event>
  </interface>

  <interface name="zcm_surface_v1" version="1">
    <description summary="Interface for color managed surface">
      With this interface the surface can set its color tag as defined in the
      JSON configuration. It can also use this interface to get the tag of the
      primary output.

      If a surface uses a tag that is the same as its primary output's, the
      surface must be blended as late as possible before output to the display.
      Otherwise the compositor is allowed to use an intermediate compositing
      space.
    </description>
    <request name="cm_set_color_space">
      <description summary="Set color space">
        Set the color space for the attached surface.
      </description>
      <arg name="cm_cp_tag" type="string" summary="Color space tag as defined in cm config" />
    </request>
    <request name="cm_get_primary_display">
      <description summary="Get primary display tag">
        Request to get the primary display tag.
      </description>
    </request>
    <event name="cm_primary_display">
      <description summary="Current primary display of surface">
        The current primary display of the surface. Can be sent either as a
        response to cm_get_primary_display or when the primary display changes,
        although in the latter case only when the surface was rendering to the
        previous primary display.
      </description>
      <arg name="cm_primary_display_tag" type="string" summary="primary display tag" />
    </event>
    <request name="cm_set_secondary_surface">
      <description summary="To assist the compositor in rendering to secondary screens">
        A secondary buffer can be set to assist a compositor when dealing with
        secondary screens or complex compositing tasks. This surface must be
        the same size as the primary surface, must not already be attached to a
        cm_surface object, and its cm_tag must not be a display tag.
      </description>
      <arg name="surface" type="object" interface="wl_surface" summary="Secondary surface" />
      <arg name="cm_tag" type="string" summary="Color space tag for secondary surface"/>
    </request>
  </interface>
</protocol>
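For illustration, the JSON configuration delivered by cm_supported_color_spaces might look something like this (the field names are guesses based on the description above, not a defined schema):

```json
{
  "color_spaces": [
    {
      "tag": "srgb",
      "name": "sRGB",
      "description": "/usr/share/color/icc/sRGB.icc"
    },
    {
      "tag": "display-1",
      "name": "Built-in display",
      "description": {
        "primaries": {
          "r": [0.680, 0.320],
          "g": [0.265, 0.690],
          "b": [0.150, 0.060],
          "white": [0.3127, 0.3290]
        },
        "transfer": "PQ"
      }
    }
  ]
}
```

The first entry points at an ICC profile on disk, while the second uses primaries plus a transfer curve name, covering the HDR case the protocol text mentions.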

(Dmitry Kazakov) #114

Well, OCIO is not the best choice for automated color conversion. OCIO doesn’t define any machine-readable interchangeable color spaces. It is good for its main purpose: do color conversion for the current display under total control of the user. OCIO is nice, but is surely not usable for predefined “profiles”.

That is the main problem :slight_smile: We should either modify the ICC standard (+ LCMS?) to support HDR, or invent yet another format for profiles.

That’s exactly what I’m pushing for: by default the compositor should expect all the apps to just render in sRGB. 99% of developers will never know/care about color management. We must not expect that app developers will care about that. It will never happen.

I have just checked: it looks like Windows’ ICM system has no connection to DirectX surfaces API. I have called SetICMMode() for HDC, then created an sRGB surface with DXGI (and, later, scRGB) and DirectX didn’t do any conversion to the display profile. I don’t really know how it is expected to work. It just passes the data through directly to the display without any color conversions, even though I explicitly tell it is sRGB data.

I’ll try to expand on it. Graeme’s point was that one cannot use an intermediate color space representation if one wants to get good rendering quality.

The point is, when a color management system converts data into the display’s color space, it can use different “rendering intents”. These “intents” define how colors outside the display’s color range are fit into the destination display space. The simplest way to fit two color spaces is to clip non-displayable colors, basically dropping them. More sophisticated approaches, like “perceptual”, compress the source color range (basically offsetting absolute color values) to fit all the source colors into the destination space. The user will not notice the offset (due to the eyes’ adaptation processes), but the image will not have dull, flat-filled areas of non-representable colors.

So, if you use “intermediate color space” approach, then the only intent you can use is “clipping”, which creates bad results most of the time. If you want to use “perceptual” intent, you should convert directly from the source image color space to the display color space.
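The difference can be illustrated on a single out-of-gamut channel value. The compression curve below is invented for illustration and far cruder than a real perceptual intent, which works on whole gamuts rather than per channel:

```python
def clip_intent(v):
    # Colorimetric clipping: every out-of-gamut value collapses to the
    # gamut boundary, producing the flat filled areas described above.
    return min(max(v, 0.0), 1.0)

def perceptual_intent(v, source_max=1.4):
    # Toy "perceptual" mapping: compress the whole source range
    # [0, source_max] into [0, 1] so relative differences survive.
    # Needing to know source_max is exactly why perceptual conversion
    # requires the source gamut, not just the destination.
    return v / source_max
```

Note that clipping maps 1.2 and 1.4 to the same value (detail lost), while the compressed mapping keeps them distinct.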

(Dmitry Kazakov) #115

Yes, “sRGB-like” was just a short name for “a color space with primaries and EOTF something like in sRGB” :slight_smile:

The point I wanted to make was unrelated, though: as far as I know, it is impossible to describe an HDR color space with an ICC profile. Though that needs some investigation.

Yes, exactly. GUI elements, like menus and buttons, are expected to be painted on an sRGB surface, but the canvas with actual image data is expected to be painted on a separate surface with a different color tag.

I mean the app should have two surfaces: one with sRGB tag for GUI elements (for which the compositor does color management) and the other one for actual image data (compositor passes it directly to the display).

There is no color management for DXGI surfaces in Windows, even though MS claims there is. You create an sRGB surface, and DirectX passes the data directly to the GPU without any conversions. Calling SetICMMode for the corresponding HDC doesn’t do anything.

(Dmitry Kazakov) #116

Outch, do you have any information published about that anywhere? I have a lot of questions then :slight_smile:

  1. Does LCMS actually handle this tag?
  2. Does it mean that TRC curves will be normalized by this luminanceTag range?
  3. How will the ranges be connected when linking to SDR?

(Hevii Guy) #117

For the most part, I have absolutely no idea whatsoever what you Techie-Types are talking about when you’re down in the detail ditches. I do, however, “feel the love” :heavy_heart_exclamation: now emanating from this ‘discourse’.

It’s very reassuring that, seemingly, we’re well on our way to avoiding the Colour Chaos which may have occurred had this collaborative discussion not taken place. Bravo and thank-you!


Agreed, it was just an example that ICC is not the only way to do it.

Agreed, if we can get ICC to support HDR that would simplify quite a lot. It is actually quite interesting that ICC is in one way way too powerful (it supports color spaces with up to 16 channels, while we are only interested in 3-channel RGB) but on the other hand not powerful enough (no HDR support).


Did Mir have any colour management code? Could any of that be used?


Even if it had, it wouldn’t really be useful for this, since this isn’t about any particular compositor’s implementation of CM but about how applications can communicate with those compositors.