Wayland color management

Actually, device links are rather more system independent than device profiles, since they simply represent pixel color conversions.

Hmm. I think that may have been in response to my suggestion. But that was made on the assumption that the client has no knowledge of which output a surface will be mapped to. If it turns out that clients do have some knowledge (as implied by the Wayland HiDPI support), then things get simpler, and I don’t think device links or encodings other than RGB are needed.

Right, that’s what I meant by a proofing workflow. General CM allows on-the-fly conversion of colorspaces rather than setting up something fixed, i.e. you can display images with disparate colorspaces all on the same display.

Agreed, but removing this from the compositor would depend on clients being able to know which parts of a surface are on which output, being notified if the mapping changes, and being given the chance to update the corresponding buffer as it is moved. The impression I was given by the Wayland devs, though, was that it was a feature of Wayland that surfaces can be moved around without the client having to re-render them.

Right, but I think the Wayland implementation is looking for zero processing as a best case, so mandating colorspace conversions as part of compositing (even if executed efficiently on the GPU) is going to look unappealing. In my sketch the client can render in the display space (same as a HiDPI-aware client), allowing the compositor to do a direct copy with zero processing, not making their best case any worse. (But perhaps I’m not completely understanding your suggestion?)

I think this was touched on in a previous thread ? I agree this is not caused by OS X.

Ah, that makes sense. Yes, theoretically OCIO can do that, but it is indeed set up to do an Origin → composite/render → output transform (so it always goes through the render/composite color space; in theory that is, in practice the OCIO library will take some shortcuts if it can do an Origin → Output transform in one go).
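
For what it’s worth, here is a minimal sketch of that shortcut using the OCIO (v2) Python bindings. The colorspace names are assumptions and depend entirely on the config in use; this is only meant to show the shape of the API, not a definitive pipeline.

```python
# Sketch assuming the OpenColorIO v2 Python bindings and an active $OCIO config.
import PyOpenColorIO as ocio

config = ocio.GetCurrentConfig()

# Conceptually two hops: Origin -> composite/render space -> output.
to_render = config.getProcessor("ACEScg", "scene_linear")   # hypothetical colorspace names
to_output = config.getProcessor("scene_linear", "sRGB")

# In practice OCIO can collapse this into a single Origin -> Output processor:
direct = config.getProcessor("ACEScg", "sRGB")
cpu = direct.getDefaultCPUProcessor()
rgb = cpu.applyRGB([0.18, 0.18, 0.18])   # apply the combined transform to one pixel
```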

Indeed, and in hindsight I would say that is an unfortunate decision that might need to be revisited in light of CM. Although most of the time I don’t split an image over multiple monitors, so only an update when the majority of the view plane changes would be necessary; I think we might be able to reach some sort of compromise here.

I think that in light of HDR/high-gamut monitors becoming mainstream, compositors will need to do something like my proposal anyway (at least the first part), or else all the legacy stuff will look rather wonky. In the past this wasn’t a problem, since high-gamut monitors were expensive and non-mainstream, so everything non-CM looking off was an acceptable trade-off for the people needing those monitors; it is not an acceptable trade-off for the mainstream! HDR/high-gamut aware apps will already be rendering in something like (linear) Rec.2020/DCI-P3/ACEScg, so making the compositing space comparable to that will cut down on unnecessary transforms.

Add to that that monitor space might not be linear (especially for cheap monitors), in which case compositing in that space might not be the best idea. Also, I think the only thing that can be directly copied will be fullscreen apps, since most things will be rendered as overlapping windows, so they first need to be copied to an intermediate compositing buffer (blitting) before copying that buffer to the framebuffer (although this is pretty cheap on modern GPUs); adding a shader to that won’t add much, especially on modern HW (yes, even Intel integrated stuff, and by modern I mean the last decade or so). See the sketch below for the kind of per-pixel work such a shader would do.
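
To make that concrete, here is a minimal sketch (in Python/numpy rather than an actual GPU shader, purely for illustration) of the per-pixel work the compositing blit would pick up: a 3×3 matrix from the compositing primaries to the display primaries, followed by the display encoding. The matrix value and the sRGB-style encoding are placeholders; in a real compositor they would come from the display’s profile.

```python
import numpy as np

# Placeholder: 3x3 matrix from the (linear) compositing primaries to the
# display's (linear) primaries. In practice this would be derived from the
# display profile; the identity is used here only as a stand-in.
M_COMPOSITE_TO_DISPLAY = np.eye(3)

def encode_display(linear):
    """sRGB-style encoding, used as a stand-in for the display's transfer curve."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def composite_blit(linear_pixels):
    """What the extra shader pass amounts to: matrix, then encode (per pixel)."""
    display_linear = linear_pixels @ M_COMPOSITE_TO_DISPLAY.T
    return encode_display(display_linear)

# Example: a small buffer of linear compositing-space pixels.
buf = np.array([[0.0, 0.5, 1.0], [0.25, 0.25, 0.25]])
print(composite_blit(buf))
```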

Thanks for your response by the way, it helps me clear these things up and is definitely a learning experience!

I think what is interesting here is that Android started to use scRGB too. So Windows and Android are both using it now. Also, it is compatible with sRGB, which makes it attractive :slight_smile:


scRGB is nice for storage but horrible for compositing: you do not want to deal with the negative values that arise from how its colors are defined (see here for all the gory details)
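
As a quick illustration of where the negatives come from (a sketch assuming the colour-science Python package; the exact numbers depend on the library’s whitepoint/adaptation settings): scRGB keeps the sRGB primaries, so any color outside the sRGB gamut, e.g. a saturated Rec.2020 green, needs channel values outside [0, 1].

```python
# Sketch assuming the `colour-science` package (pip install colour-science).
import colour

# A pure Rec.2020 green, expressed in linear Rec.2020 RGB.
bt2020_green = [0.0, 1.0, 0.0]

# Convert to linear RGB with sRGB primaries (which is what scRGB uses).
# Because the color lies outside the sRGB gamut, some channels come out negative.
srgb_linear = colour.RGB_to_RGB(bt2020_green, "ITU-R BT.2020", "sRGB")
print(srgb_linear)  # expect negative red/blue components
```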

Probably a stupid idea, but anyhoo…

If the Wayland folk don’t want to disturb the simple abstraction of the surface, could an alternate approach that respects that be a “shim” layer in front of the device drivers that does the CM transform?

Edit: Would probably be far more palatable to the Wayland architecture to petition for acceptance of the profile data for passing, rather than asking it to include the transform…


I totally missed the emergence of Wayland. :blush: Have been following this thread. It has done a good job in filling us in on what is involved in colour management.

On its Wikipedia entry, the word “color” is only used once but at least it is paired with the word “management”. :stuck_out_tongue:

Agreed, but I was given the impression that this aspect was totally non-negotiable, and they weren’t even prepared to talk about it.
(At least, not for something so unimportant as Color Management, which they’d never even heard of :slight_smile: )

The thing is that this aspect is typically a property of the window manager. If a window manager presents multiple displays as one large virtual surface, then users are free to position windows however they please, including straddling displays. So a modern graphics system that is properly architected and implemented will work flawlessly in this scenario.

As best I can gather (and I’d like to find the key quotes from the originators), Wayland has at least these three basic principles:

  1. Clients don’t know where their surfaces are, so that window positioning can be done by the window manager independently and asynchronously to the application.

  2. (Unlike X11) client applications do all the rendering, and the Wayland server does the compositing of buffers/surfaces together.

  3. High fidelity display is vital - no flickering (“Every pixel is perfect” mantra).

My current conclusion is that, due to the nature of reality, these three principles are in conflict (“pick any two”), so one of them has to go, or at least be weakened. But the Wayland developers aren’t at this point - they still believe that displays are interchangeable enough that they can cleanly separate compositing from rendering, and client from display location, without having to do anything too complicated in the compositor to fudge over the gaps in this assumption.

I also think the cracks in this assumption are beginning to appear with HiDPI (which is “important”, unlike Color Management :slight_smile: ), so they’ve fudged all three principles to paper over the problem: 1) HiDPI-aware clients do have some idea what output their surfaces are on, 2) the client is doing some of the compositing work by re-rendering at the output DPI, and 3) if the client is unable to re-render, due to lack of implementation or the knowledge/arrangement of buffers making up the surface, then you get compromised fidelity on some outputs, with the compositor doing the scaling.

Maybe. But will they just paper over the problem with hacks, or will they face the reality that the architecture currently has fundamental flaws?
These are people who don’t seem to know anything about Color Management, and when faced with wide gamut and HDR they seem likely to attempt to painfully re-invent it from scratch, making all the same mistakes that were made 30-40 years ago (i.e. hacky fixed tweak curves, “just calibrate everything to be the same”, and maybe by the end “Oh - if you profile each display you can then mix and match colorspaces on the fly!”).

But you are right - HDR, like HiDPI, is another reality that will challenge their assumptions. (They seem to have successfully ignored wide gamut up to now - not a surprise when there seem to be no Linux/X11 desktops that are color managed, so I’m guessing people either put up with garish colors, tweak their themes, or set their wide gamut display to emulate sRGB).

I’m not so sure that Wayland is intended to work that way. My impression is that the compositor is not intended for application use; it’s intended for window manager use (or rather, for window-manager-like application use). Implicitly, an application renders its graphics into output (display) space, and the compositor then does composition in output space (but I could be wrong about that - I haven’t delved into Wayland deeply enough yet to be sure, and they have a bunch of compositor support for dealing with color encodings, i.e. RGB, YCC etc.)

Agreed - but this is a level of refinement. Many systems (and even photo editors) have put up with compositing in display space for a very long time, and got away with it. And I seriously wonder if the Wayland compositor needs to know anything about the display chromaticities - if the application were to take care of rendering into the display space (which it really has to, if Wayland is not to take on rendering like X11 does), then all the compositor needs to know to composite correctly (i.e. the same as light mixing) is the translation to and from linear light for each channel, and this could be an option (“HiFi compositing or fast?” switch).
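
To illustrate the “translation to and from linear light” point, a minimal Python sketch (the sRGB curve is used here only as a stand-in for whatever per-channel transfer the display actually has) comparing alpha blending done directly on encoded values versus decode → blend → re-encode:

```python
import numpy as np

def srgb_decode(v):
    """Encoded -> linear light (standard sRGB EOTF, as a stand-in)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    """Linear light -> encoded."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

fg, bg, alpha = 1.0, 0.0, 0.5   # white over black at 50% coverage

naive = alpha * fg + (1 - alpha) * bg                                    # blend encoded values
hifi = srgb_encode(alpha * srgb_decode(fg) + (1 - alpha) * srgb_decode(bg))  # blend in linear light

print(naive, hifi)   # 0.5 vs ~0.735 - "light mixing" needs the linear path
```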

Yes, but no. What I mean by directly copy is that the pixels don’t need to be re-scaled, rotated or color transformed. They still need compositing - i.e. window overlaps and blending resolved. This is something they have advocated for “HiDPI aware applications”.


Yes, but you don’t want this. This is equivalent to “making all the displays the same”, and has the usual problems of gamut handling. i.e.

You make the abstraction space small (i.e. sRGB) and then you can’t take any advantage of wide gamut or HDR.

- or -

You make the abstraction space large (i.e. ProPhoto etc.), and then applications have no idea what gamut to map to so you can only clip (and you would need to clip well - i.e. maintain hue), and your desktop will still be garish.

i.e. you permanently cripple the color handling, prevent the implementation of rendering intents, and introduce a fixed performance overhead. Calibration and profiling need special handling, since the application device value limits don’t correspond to the actual display device limits.
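
To illustrate the “clip well, i.e. maintain hue” point above, a minimal sketch (working on linear RGB; the numbers and method are only illustrative, real gamut mapping is considerably more involved): naive per-channel clipping changes the channel ratios of an out-of-gamut color, i.e. shifts its hue, whereas scaling the color toward an achromatic anchor keeps the ratios intact.

```python
import numpy as np

def clip_per_channel(rgb):
    """Naive gamut 'mapping': clamp each channel independently (hue can shift)."""
    return np.clip(rgb, 0.0, 1.0)

def clip_preserving_ratios(rgb):
    """Scale toward a gray anchor so channel ratios (and hence hue) stay fixed."""
    rgb = np.asarray(rgb, dtype=float)
    gray = rgb.mean()                    # crude achromatic anchor, for illustration only
    hi = rgb.max()
    if hi <= 1.0:
        return rgb
    t = (1.0 - gray) / (hi - gray)       # largest step toward the color that stays <= 1
    return gray + t * (rgb - gray)

out_of_gamut = [1.4, 0.9, 0.2]               # more than the display can show
print(clip_per_channel(out_of_gamut))        # [1.0, 0.9, 0.2] - R/G ratio changed -> hue shift
print(clip_preserving_ratios(out_of_gamut))  # ratios relative to gray preserved
```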

I suspect in that context “Color Management” just means dealing with different color encodings - i.e. pixel depth, RGB vs. YCC etc. It doesn’t mean Color Management in the accepted sense.


My point is that it looks more like an afterthought compared to what is being discussed here; hence, this thread’s relevance.

This approach makes a lot of sense.

(Might reply to some of the other stuff as well later but need to go to work soon, but this is something I have been thinking about)

I think this is the biggest one, and it all boils down to their definition of “every pixel perfect” being somewhat different from mine: I would accept the occasional flickering (and I suspect it would be quite minor and only for applications that do the CM mostly themselves) in return for proper colors.

And yes, I know that is anecdotal evidence, but we also know that lots of professional artists have used high-gamut monitors where only CM-using applications showed the correct colors, with the rest of the stuff looking garish[1], so accepting trade-offs in favor of CM is a long and proud tradition.


[1] We really don’t want to do this for the mainstream though, so some other trade-off has to be made

My thought was to find a way for the assigned profile to follow the image through Wayland to the individual display shims, where it would be used to convert the image to the display gamut/tone. Two things would be gained: 1) keep the Wayland effort out of color management, and 2) defer the transform to the segments of the surface that were allocated to the individual displays, each with its particular profile.
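
As a rough sketch of point 2 (using Pillow’s ImageCms/littleCMS bindings purely for illustration; the profiles, regions and function names here are assumptions, not anything Wayland actually provides): each shim would take the segment of the surface that lands on its display and run it through assigned-profile → that-display’s-profile.

```python
# Illustrative sketch only - assumes Pillow (PIL) built with littleCMS support.
from PIL import Image, ImageCms

def shim_convert(surface: Image.Image, assigned_profile: str,
                 display_profile: str, region: tuple) -> Image.Image:
    """Convert just the part of the surface that falls on one display.

    `region` is the (left, upper, right, lower) crop shown on that display;
    the profile paths are whatever was attached to the image and the output.
    """
    transform = ImageCms.buildTransform(assigned_profile, display_profile,
                                        "RGB", "RGB")
    segment = surface.crop(region)
    return ImageCms.applyTransform(segment, transform)

# Hypothetical usage: one call per display the surface straddles.
# left_half  = shim_convert(img, "image_assigned.icc", "display1.icc", (0, 0, 960, 1080))
# right_half = shim_convert(img, "image_assigned.icc", "display2.icc", (960, 0, 1920, 1080))
```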

No worries; I still don’t know enough about the specifics of compositing to fully appreciate the (in)viability of my noodling… :slight_smile:

Dear god, and these people are designing and building the “next generation” unix display architecture?

I hope they can be convinced to see why they need it, or Unix graphics arts tools are in trouble.

Maybe asking the devs of such tools to contact them would help?

I just posted this to the mailing list:
https://lists.freedesktop.org/archives/wayland-devel/2019-January/039826.html


Hi,

I can’t use Wayland because it is missing colour management. As I’m
not a developer I can’t fix it myself. The only thing I can do is to
invite you, the Wayland developers, to the Libre Graphics Meeting
2019:

https://libregraphicsmeeting.org/2019/

The Libre Graphics Meeting (LGM) is an annual international convention
for the discussion of free and open source software used with
graphics.

LGM 2019 is taking place from May 29 to June 2 in Saarbruecken, Germany.

Here you could discuss with the GIMP, darktable, Scribus, Krita, …
developers what they need to make these programs work with colour
management and Wayland.

Regards,
Tobias


Nice, Tobias. Is there any way that you could issue an addendum to that post? It might be more effective with a few “what’s in it for me (Wayland developer)” points added. Currently there doesn’t seem to be a clear, compelling reason for any of these folks to show up. Give them something to be excited about so that they want to attend.

The reason this came to be is that the skills needed to make a GUI system that takes advantage of modern hardware capabilities are not entirely the same as those needed to understand color science, and until fairly recently the first could ignore the second to some extent, so long as some basic facilities were provided[1]:

  1. an answer to the question “on which screen am I running?”, and
  2. “please load this LUT into the graphics card”.

Neither of these is possible under the current design of Wayland; add to that that HDR/wide-gamut displays will soon become mainstream, and you have the current mess.


[1] As can be seen in how it currently works under X11/Xorg and Windows (but not macOS, unless using GTK/Cairo), this isn’t that great either, but it is workable.

From here Wayland and Weston 1.2.0 released [LWN.net]

 - Color management: Richard Hughes worked on color mangement for
   Wayland and implemented two schemes in Weston: a simple cms plugin
   that reads a profile from weston.ini and a more advanced plugin
   that integrates with colord.  Here's a screenshot of how that in
   turn integrates with the GNOME control center:

I’m not a dev, a colour expert or even a Linux user. I just recognised “colord” and was hoping that it is good news. Sorry if I’m just adding noise.

If you permit a pure amateur into this thread: would this be of use when describing the importance of CM?
https://fedoraproject.org/wiki/Features/ColorManagement

Have fun!
Claes in Lund, Sweden

Must be the same guy as the one I referenced two posts up :slight_smile:


AFAICT that is GNOME-specific and effectively only takes care of LUT loading (putting the VCGT information into the GPU; I’m not even sure if it includes actual Wayland bits). For a lot of tools (ArgyllCMS among them) we need a DE/compositor-agnostic Wayland protocol.


I am just an amateur myself as well, to be honest.