Wayland color management

Thanks for taking the time to answer.

I just want to make something clear: nobody is suggesting that the compositor takes over all color management tasks. The client is free to do whatever it wants as long as the surfaces in the end are in a specific color space and the compositor is told about it.

In your PDF example you could still render everything into a floating point frame buffer with e.g. CIE LAB color space. The compositor would only convert it into the display color spaces in the end.
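
To make that concrete, here is a minimal sketch of what such a final-step conversion could look like, using Little CMS (lcms2). The profile file name, the buffer dimensions and the choice of relative colorimetric intent are placeholders, not a proposal for how a compositor has to do it.

/* Sketch: a client renders into a floating-point CIE Lab buffer and the
 * compositor converts it to the display's space as the very last step.
 * "display.icc" and the buffer dimensions are placeholders. */
#include <lcms2.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    const int width = 256, height = 256;

    /* Client-side frame: one cmsCIELab triple (double L, a, b) per pixel. */
    cmsCIELab *lab = calloc((size_t)width * height, sizeof(*lab));
    /* Compositor output frame in the display's RGB space, 16 bit per channel. */
    uint16_t *rgb = calloc((size_t)width * height * 3, sizeof(*rgb));

    cmsHPROFILE lab_profile = cmsCreateLab4Profile(NULL);       /* D50 Lab */
    cmsHPROFILE display     = cmsOpenProfileFromFile("display.icc", "r");
    if (!display)
        return 1;

    cmsHTRANSFORM xform = cmsCreateTransform(lab_profile, TYPE_Lab_DBL,
                                             display, TYPE_RGB_16,
                                             INTENT_RELATIVE_COLORIMETRIC, 0);

    /* The compositor-side "final conversion": Lab -> display RGB. */
    cmsDoTransform(xform, lab, rgb, (cmsUInt32Number)(width * height));

    cmsDeleteTransform(xform);
    cmsCloseProfile(display);
    cmsCloseProfile(lab_profile);
    free(rgb);
    free(lab);
    return 0;
}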

Thanks, that's helpful. I'll take a look at collink. However, this just sounds like we need policy decisions for the compositor. That's something we can handle.

I guess that's what I was proposing earlier. I'm not entirely sure why the fidelity would be compromised, and you don't expand on that.

Well, that's the point we disagree on. It would help if you expanded on why you think that's the case.

This is just weird. Are you now arguing that the compositor should somehow interfere with the color profiling step? The color profiling is best done when you have full control over all the hardware, so you can do the best possible measurements. If the compositor then doesn't make correct use of that measurement, it's a bug in the compositor.

I completely disagree here.

Secondly, calibration/profiling is just another App. It will have a full on GUI, and will want to access the display in both special and normal ways (calibrate, profile, verify).

That would all work just fine. You would press the "calibrate" button, the compositor asks the user to temporarily give control over the display to the application, you draw whatever you need, and in the end the compositor takes back control and everything looks like before.

I want to see actual arguments here, not "I've done it like that for 20 years".

I honestly don't care if you laugh or not. This is a fundamental design decision, and so far we have found solutions to all problems without breaking it.

That's simply wrong. You just have to take a look at existing compositors to know that what you're saying is not the case.

This goes both ways.

In general displays aren't "like" any particular colorspace - they are what they measure to be. I.e. if they are color managed, they are always defined by a profile.

That would mean an application could never take advantage of a wide gamut display, and could never do a good perceptual mapping from the source space to the display.

Perhaps you meant that you would like a default conversion that assumes an sRGB source space for non-color-aware applications, or for the GUI elements that you render?

Why should it be special? Shouldn't there be any number of color managed applications and windows displayed on a screen at once?

But the GUI may want to use transparency on color managed applications for various transition effects, etc. Yes, the window should be opaque for correct color.

Agreed that a color aware application needs to know the display profile to create a high quality conversion.

I'm not clear on what you mean by that. You are using a color managed API and then using the null profile trick to disable color management for MSWindows?

Right, but Krita is an application - you aren't responsible for the system GUI rendering.

Those are the implications of the constraints that 1) the application doesn't know which display it is rendering for, and 2) high quality color output is desired.

Please re-read what I wrote. No, this isn't workable, because how will the conversion from L*a*b* to the display space know how to execute the different intents for different pixels in the raster? How will it know the source gamut for perceptual intent conversions?

I don't think you grasp the complexity of this. You really don't want something like collink in the compositor if there is any possible way of avoiding it, and even if you did, it wouldn't satisfy other application requirements.

But in any case, this (or the more practical support for general device link conversions) is not needed if the client can be given a hint as to which display profile it should prefer to render to, similarly to the HiDPI case.

I expanded on it at length. See the explanation of intents and how you need both the source and destination gamuts.

I've already explained this. If you don't understand the explanation, then please indicate what you don't follow.

It's about the color workflow - how the color values get transferred and transformed on their way to the viewer's eyeballs. What I'm worried about is the whole processing pipeline, and making sure that when a particular buffer of pixels is declared as being in the display's colorspace as defined by the given profile, the buffer is in fact processed in exactly the same way for display as it was when it was profiled.

So saying that application colors are processed through the compositor, but that the calibration and profiling tools should take control of the raw display interface and take it on trust that the compositor won't differ in how the pixels are processed, is inviting unnecessary disaster. For instance: let's say that my calibration and profiling tools stay as they are, set up calibration using the hardware VideoLUT, and then profile the display. Then the user switches to Wayland, but Wayland doesn't load the VideoLUT from the 'vcgt' tag in the display profile (or Wayland has some other rendering tweak/feature). The profile is then not valid. How can I verify it's not valid? I need to run the color profiling tools in verification mode through the Wayland compositor to do this.
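
To illustrate the vcgt point: the calibration curves live inside the display profile, and a compositor that honours them has to read that tag and load the resulting LUT into the hardware. A minimal sketch with lcms2 follows; the file name and LUT size are placeholders, and how the LUT actually reaches the hardware (e.g. DRM gamma properties) is left out.

#include <lcms2.h>
#include <stdint.h>
#include <stdio.h>

#define LUT_SIZE 256

int main(void)
{
    cmsHPROFILE profile = cmsOpenProfileFromFile("display.icc", "r");
    if (!profile)
        return 1;

    /* lcms2 exposes the 'vcgt' tag as three tone curves (R, G, B). */
    cmsToneCurve **vcgt = cmsReadTag(profile, cmsSigVcgtTag);
    if (!vcgt) {
        fprintf(stderr, "no vcgt tag: identity calibration assumed\n");
        cmsCloseProfile(profile);
        return 0;
    }

    /* Expand the curves into the kind of per-channel LUT that would be
     * loaded into the display hardware. */
    uint16_t lut[3][LUT_SIZE];
    for (int c = 0; c < 3; c++)
        for (int i = 0; i < LUT_SIZE; i++)
            lut[c][i] = cmsEvalToneCurve16(vcgt[c],
                            (uint16_t)(i * 65535 / (LUT_SIZE - 1)));

    /* If this step is skipped, a profile that was built against this
     * calibration is no longer valid -- the failure mode described above. */
    (void)lut;
    cmsCloseProfile(profile);
    return 0;
}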

To put it another way - any valid color calibration & profiling tool has to have the capability of changing the system calibration and profile, because that's its final task for the user. This capability means that it can set the color workflow up correctly for calibration and profiling, and gives it the extremely valuable assurance that the color processing used for profiling is the same one that will be used for rendering, meaning that the profile will be valid, and that it is trivial to verify that the profile is correct.

Summary :- from a Color Management point of view, suggesting that the calibration and profiling tools should access the display using a completely different mechanism than the workflow the profile will be used in is the exact opposite of the best way of doing it.

You should really take a look at one of the commercial profiling applications. No, they don't issue instructions and then switch to a blank screen - they have instructions on the screen, graphics of exactly where to place the instrument, etc. And no, I don't think it is normal for color profiling tools to have to provide their own GUI rendering library to display their GUI, just because they want the color management set up in a particular way.

I've expended a lot of time on detailed explanations, but you don't seem to want to spend the time researching or understanding.

[ Strangely enough, I've not spent 20 years doing the same thing over and over again; I've spent it in a constant search for better ways of doing things. ]

Which is perfectly fine - here is another problem requiring a solution :- find a way to let color calibration and profiling applications install calibrations and profiles, so that they can perform the function users expect of them.

I see it as fundamental. Please explain why you think otherwise.

I'm confident HDR spaces can be characterized successfully with ICC profiles (as successfully as is possible, given that many HDR displays do too much processing and therefore make any static characterization a compromise). A display profile usually records the absolute brightness in the 'lumi' tag.

Yes. An ICC CMM will need some tweaking to cope with linking HDR profiles. For SDR → HDR there needs to be a brightness intent parameter (where SDR 1.0 maps to). For HDR → SDR there needs to be a suitable tone reproduction operator.

[ I experimented with this a little, some time ago when playing with creating an scRGB profile. ]
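
As a rough illustration of the two directions described above: the 203 cd/m² SDR reference white and the extended-Reinhard curve below are illustrative choices, not something any spec or the ICC mandates.

#include <stdio.h>

/* SDR -> HDR: the "brightness intent parameter" is simply where SDR 1.0
 * lands in absolute terms. */
static double sdr_to_nits(double sdr_linear, double sdr_white_nits)
{
    return sdr_linear * sdr_white_nits;
}

/* HDR -> SDR: a tone reproduction operator rolls off highlights instead of
 * clipping them. Extended Reinhard is about the simplest example; it maps
 * hdr_peak_nits to 1.0. */
static double nits_to_sdr(double nits, double sdr_white_nits, double hdr_peak_nits)
{
    double x = nits / sdr_white_nits;          /* relative to SDR white */
    double p = hdr_peak_nits / sdr_white_nits; /* peak in the same units */
    return x * (1.0 + x / (p * p)) / (1.0 + x);
}

int main(void)
{
    printf("SDR 1.0 -> %.0f cd/m^2\n", sdr_to_nits(1.0, 203.0));
    printf("1000 cd/m^2 -> SDR %.3f\n", nits_to_sdr(1000.0, 203.0, 1000.0));
    return 0;
}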

Please stop getting personal (and I mean everyone participating)! We are all here with the best intentions. This is a complex topic and a lot of people want to learn from this thread. Getting personal drives people away, and that is counterproductive!

Please keep explaining things to each other; don't point fingers. If something is unclear, explain it again!


That's not how it is coming across from some people.

Sorry, I've spent my whole working day doing nothing but composing emails about Wayland & Color Management. I have explained the same points about half a dozen times in half a dozen different ways, and I have reached my limits for a while, unless we are able to move on.

I've taken some time to read up on rendering intent. Some posts make a lot more sense now.

https://lists.freedesktop.org/archives/wayland-devel/2019-January/039852.html

A better approach I think is the hybrid one we were talking
about: Give the client enough information to decide which display
it should optimize color rendering for. When the compositor needs
to display the surface on some other display, it can use a simpler
bulk color conversion to do so. Optimal color rendering can at least
be achieved on one display (hopefully enough to satisfy the demanding
color user), while still allowing the compositor to handle
window transitions, mirroring etc. without requiring huge
re-writes of applications. This is the analogy to current HiDPI handling.

I completely agree.


@gwgill @swick

I was thinking: would it be acceptable if the calibration curve and display profile could be set via a dbus protocol instead of via wayland directly? This should also give some flexibility in implementation[1], and I think dbus already has some idea of privileged vs. non-privileged operations. It would mean, though, that we have to standardize a dbus protocol alongside the wayland protocol (we don't want every compositor re-inventing the wheel for this).

I think this should give a better separation between configuration of CM and CM itself.


[1] 'simple' compositors (like those based on wlroots, like sway) could implement it inside the compositor, but more complex setups (where there is also a full DE running) could put it in a configuration daemon or something
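
For what it's worth, here is a sketch of what such a service could look like on the compositor/daemon side, using sd-bus. The bus name, object path, interface and method (org.example.ColorManager1, SetProfile) are made up for illustration, and real privilege checks (e.g. via polkit) are omitted.

#include <systemd/sd-bus.h>
#include <stdint.h>
#include <stdio.h>

static int method_set_profile(sd_bus_message *m, void *userdata, sd_bus_error *err)
{
    const char *output, *icc_path;
    int r = sd_bus_message_read(m, "ss", &output, &icc_path);
    if (r < 0)
        return r;
    (void)userdata; (void)err;

    /* A real daemon/compositor would load the profile, apply the vcgt
     * calibration to the output and start advertising the new profile. */
    printf("install profile %s for output %s\n", icc_path, output);
    return sd_bus_reply_method_return(m, NULL);
}

static const sd_bus_vtable cm_vtable[] = {
    SD_BUS_VTABLE_START(0),
    /* args: output name, path to ICC profile; no return value */
    SD_BUS_METHOD("SetProfile", "ss", "", method_set_profile,
                  SD_BUS_VTABLE_UNPRIVILEGED),
    SD_BUS_VTABLE_END
};

int main(void)
{
    sd_bus *bus = NULL;
    if (sd_bus_open_user(&bus) < 0)
        return 1;
    if (sd_bus_add_object_vtable(bus, NULL, "/org/example/ColorManager1",
                                 "org.example.ColorManager1", cm_vtable, NULL) < 0)
        return 1;
    if (sd_bus_request_name(bus, "org.example.ColorManager1", 0) < 0)
        return 1;

    for (;;) {
        if (sd_bus_process(bus, NULL) > 0)
            continue;
        sd_bus_wait(bus, UINT64_MAX);
    }
}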

I agree that having a dbus protocol for display calibration and/or rendering intent makes a lot of sense. However, I don't think that it's necessary to standardize that protocol alongside the wayland protocol. The reference implementation in weston should probably just use the existing static file configuration machinery.

When Gnome Shell and KDE Plasma want to implement color management, that will be the right time to get involved there.

@gwgill One thing that might prove to be difficult when measuring a display is the mapping between the physical display and the advertised color space of the wayland output, which would be required for the null transformation.

I still think that not bypassing the compositor altogether is a good idea, especially since the verification step should make sure that everything works right in the end.

In any case, I don't think measuring should be a blocker for a wayland color management protocol.

I have two issues with that. The first is that for calibration and profiling these need to be set in real time[1]; this can of course be done with static files (read on change) but is not ideal after calibration. The second issue is that KDE, GNOME and others would each develop their own protocol, which will be a problem for tools like argyllcms, displaycal, etc.

Anyway, now that I have your attention: I would like to try to put some of this in an actual wayland protocol, but I have a hard time finding information on the XML format, what types can be sent over the wire, stuff like that. Is there any information for that?


[1] AFAIK calibration always happens before profiling, the profiling needs the calibration curves loaded, and after profiling we want to load the new profile.

I don't have a strong opinion here, but imo it makes more sense to work out the wayland parts and deliver a reference implementation in weston first.

https://wayland.freedesktop.org/docs/html/ch04.html

I think I have something. I'm not entirely satisfied with how to communicate the supported color spaces, but the surface part is, I think, mostly how it should be.

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="color_management_unstable_v1">

  <copyright>
    Copyright © 2019 Erwin Burema

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice (including the next
    paragraph) shall be included in all copies or substantial portions of the
    Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
    THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    DEALINGS IN THE SOFTWARE.
  </copyright>
  
  <interface name="zcm_manager_v1" version="1">
      <description summary="Manager to get information from compositor and attach buffer to CM">
          This interface is for the compositor to announce support for color managed surfaces. 
          
          If this interface is supported by the compositor, any surface that doesn't use this protocol is assumed to be in sRGB, regardless of buffer size.
          
          Surfaces that do use this protocol can either be in a compositor-supported color space, in which case the compositor is responsible for converting to the final display space; in this first case there is no guarantee of accuracy, since the compositor is allowed to use an intermediate compositing color space. The second option is to render directly to a display color space, using this protocol to query what that is; in this case accuracy will be guaranteed for the primary display, and the compositor should make a best effort to also show mostly correct colors on any secondary displays the surface is visible on, but no guarantee of accuracy shall be given for those.
          
          In the context of this protocol, primary display means the main display the surface is visible on, and secondary display means any other display the surface is also visible on (either because the primary display is mirrored or because the surface is split between two screens). It is the compositor's task to determine what the primary display of a surface is - potentially with the help of user configuration - and to send the right events should this change.
          
          THIS INTERFACE IS EXPERIMENTAL!
      </description>
      
      <request name="destroy" type="destructor">
        <description summary="destroy the idle inhibitor object">
            Destroy the cm manager.
        </description>
      </request>
      
      <request name="cm_surface">
          <description summary="Make surface color managed">
              Make a surface color managed
          </description>
          <arg name="id" type="new_id" interface="zcm_surface_v1" />
          <arg name="surface" type="object" interface="wl_surface" />
      </request>
      
      <request name="cm_request_color_spaces">
          <description summary="Get supported color space from compositor">
              Clients can send this request to get a description of the color space supported by the compositor
          </description>
      </request>
      
      <event name="cm_supported_color_spaces">
          <description summary="A fd to a file provding information on supported profiles">
              A JSON file with a list of supported profiles the compositor can render, should also include the display profiles. Send either as a response to a cm_request_color_spaces or on change of information
              
              The JSON file should include a tag that the compositor will use to identify color profiles, a description which may be a file location of an ICC profile or a set of primaries + curve information (this is needed since currently HDR can't be communicated via ICC profiles) and potentially a common name for programs to identify common color spaces.
          </description>
          
          <arg name="cm_config" type="fd" summary="JSON cm configuration fd" />
      </event>
      
  </interface>
 
  
  <interface name="zcm_surface_v1" version="1">
      <description summary="Interface for color managed surface">
          With this interface the surface can set its color tags as defined in the JSON configuration. It can also use this to get the tag of the primary output.
          
          If a surface uses a tag that is the same as its primary output's, the surface must be blended as late as possible before output to the display. Otherwise the compositor is allowed to use an intermediate compositing space.
      </description>
      
      <request name="cm_set_color_space">
          <description summary="Set color space">
              Set the color space for the attached surface
          </description>
          <arg name="cm_cp_tag" type="string" summary="Color space tag as defined in cm config" />
      </request>
      
      <request name="cm_get_primary_display">
          <description summary="Get primary display tag">
              Request to get primary display tag.
          </description>
      </request>
      
      <event name="cm_primary_display">
          <description summary="Current primary display of surface">
              The current primary display of the surface. Can be sent either as a response to cm_get_primary_display or when the primary display changes, although in the latter case only when the surface was rendering to the previous primary display.
          </description>
          <arg name="cm_primary_display_tag" type="string" summary="primary display tag" />
      </event>
      
      <request name="cm_set_secondary_surface">
          <description summary="To assist the compositor in rendering to secondary screens">
              A secondary buffer can be set to assist a compositor when dealing with secondary screens or complex compositing tasks.
              
              This surface must be the same size as the primary surface, must not already be attached to a cm_surface object and cm_tag must not be a display tag.
          </description>
          <arg name="surface" type="object" interface="wl_surface" summary="Secondary surface" />
          <arg name="cm_tag" type="string" summary="Color space tag for secondart surface"/>
      </request>
      
  </interface>
  
</protocol>
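
To give an idea of what client-side usage of this draft would look like, here is a sketch assuming a client-protocol header generated from the XML above with wayland-scanner. The header name and the "srgb" tag are placeholders, and error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <wayland-client.h>
/* Generated by wayland-scanner from the draft XML above; the header name
 * is whatever you tell the scanner to produce. */
#include "color-management-unstable-v1-client-protocol.h"

static struct wl_compositor *compositor;
static struct zcm_manager_v1 *cm_manager;

static void registry_global(void *data, struct wl_registry *registry,
                            uint32_t name, const char *interface, uint32_t version)
{
    if (strcmp(interface, wl_compositor_interface.name) == 0)
        compositor = wl_registry_bind(registry, name, &wl_compositor_interface, 1);
    else if (strcmp(interface, zcm_manager_v1_interface.name) == 0)
        cm_manager = wl_registry_bind(registry, name, &zcm_manager_v1_interface, 1);
}

static void registry_global_remove(void *data, struct wl_registry *registry, uint32_t name)
{
}

static const struct wl_registry_listener registry_listener = {
    registry_global,
    registry_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    if (!display)
        return 1;

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);
    wl_display_roundtrip(display);

    if (!compositor || !cm_manager) {
        fprintf(stderr, "compositor does not support the draft protocol\n");
        return 1;
    }

    /* Make a surface color managed and declare its color space by tag.
     * The tag string would come from the JSON configuration advertised by
     * the compositor; "srgb" is a placeholder. */
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    struct zcm_surface_v1 *cm_surface = zcm_manager_v1_cm_surface(cm_manager, surface);
    zcm_surface_v1_cm_set_color_space(cm_surface, "srgb");

    wl_display_flush(display);
    return 0;
}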

Well, OCIO is not the best choice for automated color conversion. OCIO doesn't define any machine-readable, interchangeable color spaces. It is good for its main purpose: doing color conversion for the current display under total control of the user. OCIO is nice, but it is surely not usable for predefined "profiles".

That is the main problem :slight_smile: We should either modify the ICC standard (+ LCMS?) to support HDR, or invent yet another format for profiles.

That's exactly what I'm pushing for: by default the compositor should expect all apps to just render in sRGB. 99% of developers will never know/care about color management. We must not expect app developers to care about that. It will never happen.

I have just checked: it looks like Windows' ICM system has no connection to the DirectX surfaces API. I called SetICMMode() for the HDC, then created an sRGB surface with DXGI (and, later, scRGB), and DirectX didn't do any conversion to the display profile. I don't really know how it is expected to work. It just passes the data through directly to the display without any color conversions, even though I explicitly tell it the data is sRGB.

I'll try to expand on it. Graeme was trying to say that one cannot use an intermediate color space representation if one wants to get good rendering quality.

The point is, when the color management system converts data into the display's color space, it can use different "rendering intents". These "intents" define how colors outside the display's color range will be fit into the destination display space. The simplest way to fit two color spaces is to clip non-displayable colors, basically dropping them. More sophisticated approaches, like "perceptual", compress the source color range (basically offsetting absolute color values) to fit all the source colors into the destination space. The user will not notice the offset (due to the eye's adaptation processes), but the image will not have dull, flat-filled areas of non-representable colors.

So, if you use the "intermediate color space" approach, then the only intent you can use is "clipping", which creates bad results most of the time. If you want to use the "perceptual" intent, you have to convert directly from the source image color space to the display color space.
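
At the API level the difference is just the intent passed when the transform is built, but the perceptual transform is only possible while the CMM still has the real source profile (and thus the source gamut) in hand; that is exactly what an intermediate-space pipeline throws away. A small sketch with lcms2, with placeholder profile names:

#include <lcms2.h>

int main(void)
{
    cmsHPROFILE src     = cmsOpenProfileFromFile("wide-gamut-source.icc", "r");
    cmsHPROFILE display = cmsOpenProfileFromFile("display.icc", "r");
    if (!src || !display)
        return 1;

    /* Colorimetric conversion: out-of-gamut source colors get clipped to
     * the display gamut. */
    cmsHTRANSFORM clip = cmsCreateTransform(src, TYPE_RGB_8, display, TYPE_RGB_8,
                                            INTENT_RELATIVE_COLORIMETRIC, 0);

    /* Perceptual conversion: the whole source gamut is compressed into the
     * display gamut. This is only possible because the CMM has both the
     * source and the destination profile here. */
    cmsHTRANSFORM perceptual = cmsCreateTransform(src, TYPE_RGB_8, display, TYPE_RGB_8,
                                                  INTENT_PERCEPTUAL, 0);

    unsigned char in[3] = { 255, 0, 0 };   /* fully saturated source red */
    unsigned char out_clip[3], out_perc[3];
    cmsDoTransform(clip, in, out_clip, 1);
    cmsDoTransform(perceptual, in, out_perc, 1);

    cmsDeleteTransform(clip);
    cmsDeleteTransform(perceptual);
    cmsCloseProfile(src);
    cmsCloseProfile(display);
    return 0;
}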

Yes, "sRGB-like" was just a short name for "a color space with primaries and EOTF something like in sRGB" :slight_smile:

The point I wanted to make was unrelated though: as far as I know, it is impossible to describe an HDR color space with an ICC profile. Though it needs some investigation.

Yes, exactly. GUI elements, like menus and buttons, are expected to be painted on an sRGB surface, but the canvas with the actual image data is expected to be painted on a separate surface with a different color tag.

I mean the app should have two surfaces: one with an sRGB tag for the GUI elements (for which the compositor does color management) and another one for the actual image data (which the compositor passes directly to the display).

There is no color management for DXGI surfaces in Windows, even though MS claims there is. You create an sRGB surface, and DirectX passes the data directly to the GPU without any conversions. Calling SetICMMode for the corresponding HDC doesn't do anything.

Ouch, do you have any information published about that anywhere? I have a lot of questions then :slight_smile:

  1. Does LCMS actually handle this tag?
  2. Does it mean that the TRC curves will be normalized by this luminanceTag range?
  3. How will the ranges be connected when linking to SDR?

For the most part, I have absolutely no idea whatsoever what you Techie-Types are talking about when you're down in the detail ditches. I do, however, "feel the love" :heavy_heart_exclamation: now emanating from this 'discourse'.

It's very reassuring that, seemingly, we're well on our way to avoiding the Colour Chaos which may have occurred had this collaborative discussion not taken place. Bravo and thank-you!


Agreed, it was just an example that ICC is not the only way to do it.

Agreed, if we can get ICC to support HDR that would simplify quite a lot. It is actually quite interesting that ICC is in one way way too powerful (it supports color spaces with up to 16 channels, while we are only interested in 3-channel RGB) but on the other hand not powerful enough (no HDR support).

Did MIR have any colour management code? Could any of that be used?

Even if it had, it wouldn't really be useful for this, since this isn't about any particular compositor's implementation of CM but about how applications can communicate with those compositors.