Wayland color management

@swick, @Dmitry_Kazakov thanks for your input!

I am not going to reply directly to any comments since this post will probably be long enough as it is :wink:

First things first: compositors should include some form of color management. How they do it (using LCMS, OCIO, hand-coded shaders or whatever) is something that should be left to the compositor creators (of course we can give advice on that, but implementation is not what this discussion should be about). Secondly, a full CMS includes the capability to calibrate and profile; this includes telling the compositor what the calibration curves are and which profile to use. So even in the case where we bypass the compositor to measure, it would still be necessary to have a protocol to tell the compositors this information, and even if this protocol isn’t in wayland (but dbus) it should be developed in tandem with the wayland protocol, or else the wayland protocol will be a lot less useful.

Also, for various reasons it is preferable not to do any compositing more complex than bit blitting in a display color space (it is not guaranteed to be linear or well behaved), but as both @gwgill and @Dmitry_Kazakov have pointed out to me, demanding a certain color space for this is not the best approach (since the compositor may only be bit blitting and can thus do so directly in display space).

For certain applications we will need to have ‘direct’ access to the display color space. An example here is soft proofing: proper soft proofing can only be done if the output space is known and is the space being rendered to. The problem is that we currently don’t know which screen we are running on, and it might actually be more than a single screen. I think the best solution here is to declare one of the screens as ‘primary’ and either give apps the capability to provide a secondary buffer that will use the lazy/basic path, or let the compositor do a “best effort” display-to-display transform.

So let’s go to my new proposal (a rough sketch of the per-surface state it implies follows the list):

  • Legacy/sRGB: buffers that are in sRGB, either because they don’t need more or because they are legacy
  • Basic (effectively what @Dmitry_Kazakov calls lazy): the application tags its buffer with a color space; there probably needs to be a way for the compositor to tell which are supported (and we probably need to mandate a certain minimum set, but that is more of an implementation detail)
  • Advanced (this might be implemented as some sort of ‘pass-through’ tag on the above basic buffer): something that needs direct access to the display space. This main buffer will be blitted into the ‘primary screen’ buffer; the app can optionally provide a ‘basic/lazy’ secondary buffer for advanced compositing (shadows, alpha-transparent overlays, etc.) and other screens. If this buffer is not provided, the compositor should do a best-effort display-to-display transform for any non-primary displays (and probably render a black rectangle underneath for advanced compositing purposes)
  • Profiler/Calibrator: needs a way to put color data directly in display space for measuring, and needs to be able to update the compositor on any changes made to the calibration curve and/or display profile
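To make the four cases a bit more concrete, here is a rough, purely illustrative sketch (in C, all names invented, not a protocol proposal) of the per-surface state a compositor might end up tracking:

```c
/* Hypothetical sketch only: the per-surface colour state implied by the
 * four cases above. None of these names exist in any real protocol. */

enum color_mode {
    COLOR_MODE_SRGB,        /* legacy: untagged buffers assumed to be sRGB    */
    COLOR_MODE_TAGGED,      /* basic/lazy: buffer carries a colorspace tag    */
    COLOR_MODE_PASSTHROUGH, /* advanced: buffer already in display space      */
    COLOR_MODE_MEASUREMENT  /* profiler/calibrator: raw access for patches    */
};

struct surface_color_state {
    enum color_mode mode;
    const char *colorspace_tag;  /* e.g. "sRGB", used when mode == TAGGED     */
    int primary_output;          /* output a PASSTHROUGH buffer targets       */
    void *fallback_buffer;       /* optional basic/lazy buffer for the other
                                    outputs and for compositing effects       */
};
```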

This is actually pretty close to the wayland dev proposal by Nielsen, with the exception that it includes profiling/calibration (to some extent) and doesn’t use device link profiles. The main reason no device link profiles are used is that not all applications can use them, for example Blender and Natron (since they are built on OCIO instead of ICC), and (if I understand that proposal correctly) any advanced stuff the device link does (like soft proofing) will only be visible on the ‘primary display’, which will show up as a difference between displays (which we all agree we want to minimize). That’s besides the fact that ICC profiles are rather complex binary files and I think most libraries/apps able to read them haven’t been fuzz tested or investigated for security issues.

If I understand it correctly it should be “trivial” to add HDR to the above (please correct me if I am wrong)

The problem is that you cannot properly describe an HDR color space with a normal ICC profile. From a technical perspective, the problem is that ICC cannot describe values higher than 1.0, which are the norm in scene-referred color spaces. From a theoretical point of view (I’m not a pro in ICC details, so I might be wrong), the problem is that ICC does not operate with “nits” values. Instead, it expects diffuse white to be the maximum available color brightness.

For example, the PNG-HDR standard provides an ICC profile for the p2020-pq color space (for backward compatibility), but if you pass this profile to a color management system and try to convert colors to e.g. p2020-linear, all the color values will be clipped at 1.0. So the implementer should recognize this profile by a special tag and shape the colors accordingly.
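As an illustration of what I mean (an untested sketch; the profile file names are made up), this is roughly how the clipping shows up when you run such a profile through LittleCMS:

```c
/* Sketch of the clipping issue described above. "rec2020-pq.icc" and
 * "rec2020-linear.icc" are hypothetical placeholder file names; the point
 * is that a conventional ICC transform works in the 0.0-1.0 range, so
 * scene-referred values above diffuse white do not survive. */
#include <lcms2.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE in  = cmsOpenProfileFromFile("rec2020-pq.icc", "r");
    cmsHPROFILE out = cmsOpenProfileFromFile("rec2020-linear.icc", "r");
    if (!in || !out)
        return 1;

    cmsHTRANSFORM xf = cmsCreateTransform(in, TYPE_RGB_FLT,
                                          out, TYPE_RGB_FLT,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);

    float pixel_in[3]  = { 0.95f, 0.95f, 0.95f };  /* near the PQ code-value max */
    float pixel_out[3];
    cmsDoTransform(xf, pixel_in, pixel_out, 1);

    /* With a LUT-based PQ profile the output is confined to [0,1], which is
     * exactly the clipping described above, even though the scene-referred
     * value is far brighter than diffuse white. */
    printf("linear RGB: %f %f %f\n", pixel_out[0], pixel_out[1], pixel_out[2]);

    cmsDeleteTransform(xf);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
    return 0;
}
```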

So, if you want to implement a “color management engine” for a compositor, you should define the term “profile” in a somewhat broader way than just “an ICC profile”.
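Something like the following (hypothetical, all names invented) is what I mean by a broader definition of “profile”:

```c
/* Hypothetical sketch of a broader "profile" description for a compositor:
 * ICC is just one of several ways to say what a buffer's colors mean. */
#include <stddef.h>

enum profile_kind {
    PROFILE_ICC,            /* an ICC blob, for classic display-referred work  */
    PROFILE_NAMED,          /* a well-known tag, e.g. "sRGB" or "BT.2100 PQ"   */
    PROFILE_PARAMETRIC_HDR  /* primaries + transfer function + luminance data  */
};

struct color_profile {
    enum profile_kind kind;
    union {
        struct { const void *data; size_t size; } icc;
        const char *name;
        struct {
            double red_xy[2], green_xy[2], blue_xy[2], white_xy[2];
            double min_nits, max_nits;  /* absolute luminance, unlike ICC v2/v4 */
            int transfer;               /* e.g. PQ or HLG, as an enum elsewhere */
        } hdr;
    } u;
};
```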

EDIT:
There is also the problem of “feature completeness” of this engine. The ICC pipeline has quite a lot of options and features, and I am 99% sure there will be people who would report wish-bugs for some nice things like color proofing, blackpoint compensation and so on. I think that the main requirement here is that the compositor, at the very least, must not prevent people from implementing these features inside the apps. Pass-through mode might help with it at least partially.

EDIT2:

Yes, unless ICC is the only means of describing color, then yes, HDR can be implemented quite easily. Though the devil is in the details: there is no commonly accepted standard for describing HDR color :slight_smile: Microsoft used enum tags for that, which is extremely bad: one cannot choose a custom color space for the buffer. There is not even a tag for pass-through mode (DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709 is technically pass-through, because DXGI does no color management after compositing).


Yes, passing a null profile might work. Though the term “profile” should be defined quite carefully (see my other post).

My understanding of ICC profiles comes from using LittleCMS, which apparently allows more ‘room to maneuver’ than the V2 or V4 ICC specifications assert. ICCMax might be the useful consideration space for this.

ACES workflows also probably require some recognition of a null transform, because they want to bake in all the ‘make pretty’ scaling transforms.

Well we both know that ICC is not the only way to describe color (just see OpenColorIO for a different example). Note that I don’t think we should use any form of ICC profile to communicate with the compositor (with maybe the exception of communicating the display profile of the ‘primary’ screen to an advanced application, and even that could be done by defining a ‘display’ tag + something like colord to query the profile)

So I think we should define a set of tags, although I don’t think that hard coding the meaning of these (and their number) is a good idea, since that is one way to end up in the Windows situation (not flexible enough)

EDIT: Not communicating with ICC profiles does NOT mean that the compositor itself can’t implement the proposed workflow with LCMS internally (in whole or in part); it just means we need another way (possibly out of band of wayland, maybe in a spec document) to communicate the colorspace information. It also does NOT mean that the compositor isn’t color managed.

PS. Sorry for the double negatives in the edit

I can contribute nothing to the technical aspects of this discussion.

This post from @Hevii_Guy and the quote of @asn, though, is super important I feel (and @Hevii_Guy did an outstanding job of enumerating steps and a path forward). Successful outreach and engagement is vital to framing the problem in real-world terms and benefits for users, and we should help raise awareness and understanding.

I am hoping that some of these things are happening on this topic now, and I welcome any further discussions on how we can publicly reach out further (we can start a new topic so as not to derail this one).


Fair 'nuf. At risk of putting Marti Maria, the LCMS author, out on a stake for grilling, I think there might be room for discussion of “LCMS3”, where color profiles are handled generically, and tools developed for including DCPs, and what we’re discussing here. I personally like the dcamprof JSON convention…

Whatever the convention, IMHO a consistent definition of a generic color profile format that can accompany an image array from start to finish, including both display management and file embedding, is needed if color management is to become more widely implemented across all the programs written. I think this information is as important as the width and height of the image; all are needed to properly interpret and manipulate a digital image.


This sounds like it has potential, actually. We probably still want to use names/tags in the wayland protocol, but then either have a well defined location for the JSON files (e.g. ${XDG_CONFIG_DIR}/color/profiles/${TAG}.json ) and/or a dbus protocol. Then again, pushing a JSON file around over wayland might not be a bad idea either. (s/json/xml/g if that is a better idea)
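For example, resolving a tag to such a file could be as simple as this (a hypothetical sketch; I’m assuming the XDG_CONFIG_HOME variable from the basedir spec, nothing here is specified anywhere yet):

```c
/* Sketch of resolving a colorspace tag to a profile description file,
 * using the (hypothetical) ${XDG_CONFIG_HOME}/color/profiles/${TAG}.json
 * convention mentioned above. */
#include <stdio.h>
#include <stdlib.h>

/* Writes the candidate path for `tag` into `buf`; falls back to ~/.config
 * when XDG_CONFIG_HOME is unset, as the basedir spec suggests. */
static void profile_path_for_tag(const char *tag, char *buf, size_t len)
{
    const char *xdg  = getenv("XDG_CONFIG_HOME");
    const char *home = getenv("HOME");

    if (xdg && *xdg)
        snprintf(buf, len, "%s/color/profiles/%s.json", xdg, tag);
    else
        snprintf(buf, len, "%s/.config/color/profiles/%s.json",
                 home ? home : "", tag);
}
```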

Fully agree although no idea how this would work in practice currently

I hope you all will be at the LGM!

Would love to, sadly for various reasons that are way outside the scope of this thread I can’t

Done. Feel free to improve on the opening post. :slight_smile:


You’re reading things that aren’t there. It isn’t about X11 vs. anything else (it’s a long time now since I did X11 server development - most people who use my current software are on MSWindows, OS X or Android, but my familiarity with UNIX and X11 is what motivated me to support Color Management tools on Linux), it’s about the fundamentals of device dependent color, and the possible technical tradeoffs in dealing with it.

I’m not quite sure what you mean by this. I guess most people on this forum are fairly familiar with the use cases, and the application developers in particular have an understanding of how they translate to technical implementation, so the focus in this discussion has been on the Wayland situation.

i.e. don’t assume that somehow the focus here is on blindly re-producing color management the “way it’s always been done” - this is very far from the truth. As far as I’m aware, all technical possibilities are on the table, and have largely been discussed or can and will be discussed again. If you think something has been missed, then suggest it, and we’ll see if it is really novel or not :slight_smile:

Yes and no. Yes, it’s good to brainstorm without too many constraints, but at the end of the day it’s also pointless to propose things that are technically infeasible or would require heroic levels of work or cooperation.

Yes. But there is more to it. A high quality conversion takes account of the user’s intent. “Intent” in the Color Management sense means what is best for the user’s application in terms of the tradeoffs made in converting from one colorspace to another: things like how the white point should be handled, how any black level differences should be handled, how differences in gamut should be handled, (for HDR) how differences in maximum brightness/dynamic range should be handled, and what speed/quality tradeoff should be made. There is no fixed list of what these intents are or how they are performed - they are specific to an application and the march of technical development.
(i.e. See ArgyllCMS collink options for a hint of the range of this - and this is just one application.)
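As a small illustration of what “intent” means at the CMM level (a sketch using LittleCMS as one concrete example; the file names are placeholders and error handling is omitted for brevity):

```c
/* The same pair of profiles yields different transforms depending on the
 * rendering intent and options such as black point compensation. */
#include <lcms2.h>

cmsHTRANSFORM make_transform(const char *src_icc, const char *dst_icc,
                             int perceptual, int bpc)
{
    cmsHPROFILE src = cmsOpenProfileFromFile(src_icc, "r");
    cmsHPROFILE dst = cmsOpenProfileFromFile(dst_icc, "r");

    cmsUInt32Number intent = perceptual ? INTENT_PERCEPTUAL
                                        : INTENT_RELATIVE_COLORIMETRIC;
    cmsUInt32Number flags  = bpc ? cmsFLAGS_BLACKPOINTCOMPENSATION : 0;

    /* Gamut mapping for perceptual intent is baked in at *creation* time,
     * which is why both source and destination must be known here.        */
    cmsHTRANSFORM xf = cmsCreateTransform(src, TYPE_RGB_FLT,
                                          dst, TYPE_RGB_FLT,
                                          intent, flags);
    cmsCloseProfile(src);
    cmsCloseProfile(dst);
    return xf;
}
```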

In all this, the source of the color is just as important as the destination. From an application point of view, the source colorspaces can be quite varied. i.e. imagine a PDF that contains many different graphical elements that end up composed together. PDF has a rich graphical description language, and one that is Color aware. Source colorspaces can be multiple different RGBs, CMYK, multi-channel print spaces (i.e. CMYKOG etc.), device independent (XYZ, L*a*b*), named colors (i.e. the Pantone palette) etc.
So creating a high quality conversion from any and all of these source colorspaces with regard to possible intents and application purpose is not something that you can hand off from an application to something else, at least not without it being just as capable and executing the same way. (And this is putting aside the amount of work required to re-architect current applications.)

Now technically the execution (rather than setup) of such conversions could possibly be handed off to some other element (i.e. via a device link type specification), but if we think about the PDF example again, this is pretty unwieldy. You’d have to make the PDF renderer hand off the composition of the whole PDF output to that other element, in all the complexity of meeting the PDF document semantics. Sounds possible, but why would you want to do it, when you already have a working PDF renderer ?

The other approach that is then often suggested is to try and break apart these two aspects by introducing some sort of intermediate representation. But this ignores intent, particularly the issue of disparate gamuts. To have the normal range of gamut mappings available, both the source and final destination gamuts must be known at the point of creating the color transform. It could be done, but color fidelity would be compromised in ways that aren’t normally necessary.

So bottom line, (as far as I am aware), is that if high fidelity rendering is desired, applications have to be the element that makes the conversion between the source color spaces/descriptions and the final destination (display) colorspace.

[ This is highly analogous to the HiDPI case. If the compositor isn’t doing rendering itself, then the application is the element that has to render at HiDPI. ]

Specifically which proposal ?

I’ve seen a few floated.

I’ve been through this discussion before on the Wayland Dev. list. Take it from over 20 years of dealing with this sort of thing: it isn’t the right approach. Firstly, Color Management is hard, and one aspect that is really hard is figuring out what’s going on. Having different workflows for calibration/profiling and use of the profiles inevitably leads to the problem of one not matching the other. Secondly, calibration/profiling is just another App. It will have a full on GUI, and will want to access the display in both special and normal ways (calibrate, profile, verify).
This is an area ripe for more discussion though - if Wayland were to have the API for accessing display and other RGB profiles for instance (because it needs that info. itself to do conversions of buffers that are not already in the output colorspace), then installing the display profiles is part of the API. If part of the responsibility of the compositor in installing display profiles is to ensure that the display ‘vcgt’ tag is honored and that therefore the profile is valid for the display (and it can implement that any way it likes), and it does null profile conversions (i.e. skips conversions when profiles match), and there is some mechanism for displaying a surface on a specific output, then I think that calibration and profiling would be OK via Wayland.
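As an illustration of the ‘vcgt’ part (a sketch only, using LittleCMS because it happens to expose the tag; a compositor could of course do this any way it likes):

```c
/* Fill a per-channel 16-bit ramp of `size` entries (size >= 2) from the
 * profile's 'vcgt' tag, ready to be loaded into the hardware gamma ramp
 * ("VideoLUT"). Returns 0 if there is no vcgt tag (leave the ramp linear). */
#include <lcms2.h>

int load_vcgt_ramp(cmsHPROFILE profile, cmsUInt16Number *r,
                   cmsUInt16Number *g, cmsUInt16Number *b, int size)
{
    cmsToneCurve **vcgt = cmsReadTag(profile, cmsSigVcgtTag);
    if (!vcgt)
        return 0;

    for (int i = 0; i < size; i++) {
        cmsUInt16Number in = (cmsUInt16Number)((i * 65535) / (size - 1));
        r[i] = cmsEvalToneCurve16(vcgt[0], in);
        g[i] = cmsEvalToneCurve16(vcgt[1], in);
        b[i] = cmsEvalToneCurve16(vcgt[2], in);
    }
    return 1;
}
```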

See my above discussion on a PDF application. Are you prepared to expand Wayland’s raster spec to include up to 15 channels ? Is Ghostscript really going to re-architect their RIP to hand color conversions to Wayland ? (I suspect not - instead they would reluctantly switch to an intermediate colorspace approach, with the subtle color glitches that introduces.)

- but since Wayland seems to have eased the restrictions on clients knowing which output they should be rendering for (due to HiDPI), then I think with a little finessing there is a middle ground possible, where Wayland only has to support RGB (and other similar 3 channel) color conversions.

I’m afraid this just makes me laugh. You have to draw the line somewhere between reasonable security, and making a system so “secure” that it is hostile to the user, and they won’t use it. I guess Wayland doesn’t support full screen games then, because imagine what they could do - a malicious game could take over the whole screen and hold the user to ransom!

But more seriously, Wayland can’t have it both ways - either it has no API for establishing credentials to change system settings and it has no responsibility for any system settings, or if it’s responsible for some system settings it must provide a mechanism for establishing an application/users credentials for making changes to them.
[ Big picture - Linux has a shaky application programming interface at the best of times, and throwing away X11 (which has some level of coherence and completeness, even if it is becoming archaic) and then only partially replacing it with something else doesn’t improve things. ]

Sorry, it comes across as if there has been little progress in two years. Happy to keep explaining things (as I see them of course), but less happy about feeling like I’m not being taken seriously.

That’s not an architectural problem though, that’s the problem that the GUI itself isn’t color aware. The “big hammer” approach that OS X takes is simply to assume a default source colorspace when apps. don’t specify one for the buffers they transfer to a display, so that even color un-aware apps. get a rudimentary level of Color Management. Nothing is architecturally different - Color Aware apps can render to a specific display.
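The fallback itself is trivial; something like this (hypothetical compositor-side helpers, sketch only, reusing the hypothetical struct color_profile from an earlier sketch):

```c
/* If a client never tagged its surface, just treat the content as sRGB so
 * colour-unaware apps still get a rudimentary level of Color Management. */
struct color_profile;  /* hypothetical, see earlier sketch     */
struct surface;        /* opaque compositor object, hypothetical */

const struct color_profile *surface_get_profile(struct surface *s); /* NULL if untagged */
const struct color_profile *srgb_profile(void);                     /* built-in default  */

const struct color_profile *effective_source_profile(struct surface *s)
{
    const struct color_profile *p = surface_get_profile(s);
    return p ? p : srgb_profile();
}
```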

If Wayland ended up working in a similar way, that might be a good outcome.

Yes it does - in fact it has two CM systems, the legacy ICC based one, and WCS.

See above. Yes, app. developers are meant to apply some basic Color Management if they want their apps. to display correctly on wide gamut displays. Yes, not enough developers (even the Microsoft ones) know anything about Color Management. Yes, Apple made this more convenient than Microsoft.

I agree that giving the compositor some Color Management capabilities has many advantages. Architecturally though, it doesn’t have to. A system in which the compositor composes separately for each display and where applications render for a specific display and are color aware (and therefore know that they need to convert each source to the appropriate output, i.e. sRGB to HDR) is perfectly workable. Without additions it mightn’t meet the goals of fully taking advantage of display capabilities (i.e. video overlay planes etc.), and doesn’t meet a goal of being able to use the same raster on more than one display.

Sorry, I’m not aware of the context you are working in here. It wouldn’t surprise me if Microsoft have made a mess of HDR and Color Management. They had a burst of enthusiasm for Color Management when they created WCS, but since then enthusiasm seems to have faded, and I’m not sure how many (if any!) color scientists remain there to guide their system design.

I tend to agree, although I am not aware enough of the hardware capabilities of current display hardware, or their intended usage, so I’m not sure how that feeds into the compositor requirements.

Hi,

Display profiles may be LUTs, so you may need a LUT to do the conversion.

While you can support simple intents (i.e. gamut clipping) using an intermediate space, you can’t properly support more complex intents (i.e. “perceptual”) where both the source and destination colorspace gamuts need to be known for conversion setup.

Thanks for taking the time to answer.

I just want to make something clear: nobody is suggesting that the compositor takes over all color management tasks. The client is free to do whatever it wants as long as the surfaces in the end are in a specific color space and the compositor is told about it.

In your PDF example you could still render everything into a floating point frame buffer with e.g. CIE LAB color space. The compositor would only convert it into the display color spaces in the end.
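For example (a sketch only, using LittleCMS for illustration; the display profile path is a placeholder and error handling is omitted), the final step in the compositor could look like:

```c
/* Sketch of the intermediate-buffer idea proposed here: clients hand the
 * compositor a floating point CIE Lab buffer, and the compositor does one
 * final transform into each display's profile. */
#include <lcms2.h>

cmsHTRANSFORM lab_to_display(const char *display_icc)
{
    cmsHPROFILE lab     = cmsCreateLab4Profile(NULL);           /* built-in D50 Lab */
    cmsHPROFILE display = cmsOpenProfileFromFile(display_icc, "r");

    cmsHTRANSFORM xf = cmsCreateTransform(lab, TYPE_Lab_FLT,
                                          display, TYPE_RGB_FLT,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsCloseProfile(lab);
    cmsCloseProfile(display);
    return xf;
}
```

(gwgill’s objection, above, is that at this point the source gamut is no longer known, so only simple colorimetric intents remain possible.)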

Thanks, that’s helpful. I’ll take a look at collink. However this just sounds like we need policy decisions for the compositor. That’s something we can handle.

I guess that’s what I was proposing earlier. I’m not entirely sure why the fidelity would be compromised and you don’t expand on that.

Well, that’s the point we disagree on. It would help if you expand on why you think that’s the case.

This is just weird. Are you now arguing that the compositor should somehow interfere with the color profiling step? The color profiling is best done when you have full control over all the hardware so you can do the best possible measurements. If the compositor then doesn’t make correct use of that measurement it’s a bug in the compositor.

I completely disagree here.

Secondly, calibration/profiling is just another App. It will have a full on GUI, and will want to access the display in both special and normal ways (calibrate, profile, verify).

That would all work just fine. You would press on “calibrate” button, the compositor asks the user to temporarily give control over the display to the application, you draw whatever you need and in the end the compositor takes back control and everything looks like before.

I want to see actual arguments here and not “I’ve done this for 20 years like that”.

I honestly don’t care if you laugh or not. This is a fundamental design decision and so far we have found solutions to all problems without breaking it.

That’s simply wrong. You just have to take a look at existing compositors to know that what you’re saying is not the case.

This goes both ways.

In general displays aren’t “like” any particular colorspace - they are what they measure to be. i.e. if they are color managed, they are always defined by a profile.

That would mean an application could never take advantage of a wide gamut display, and could never do a good perceptual mapping from the source space to the display.

Perhaps you meant that you would like a default conversion that assumes sRGB source space for non color aware applications or the GUI elements that you render ?

Why should it be special ? Shouldn’t there be any number of color managed applications and windows displayed on a screen at once ?

But the GUI may want to use transparency on color managed applications for various transition effects etc. Yes the window should be opaque for correct color.

Agreed that a color aware application needs to know the display profile to create a high quality conversion.

I’m not clear on what you mean by that. You are using a color managed API and then using the null profile trick to disable color management for MSWindows ?

Right, but Krita is an application - you aren’t responsible for the system GUI rendering.

That is the implications of constraints 1) that the application doesn’t know which display it is rendering for 2) High quality color output is desired.

Please re-read what I wrote. No this isn’t workable, because how will the conversion from L*a*b* to the display space know how to execute the different intents for different pixels in the raster ? How will it know the source gamut for perceptual intent conversions ?

I don’t think you grasp the complexity of this. You really don’t want something like collink in the compositor if there is any possible way of avoiding it, and even if you did, it wouldn’t satisfy other application requirements.

But in any case this, or the more practical support of general device link conversions is not needed if the client can be given the hint as to which display profile it should prefer to render to, similarly to the HiDPI case.

I expanded on it at length. See the explanation of intents and how you need both the source and destination gamuts.

I’ve already explained this. If you don’t understand the explanation, then please indicate what you don’t follow.

It’s about the color workflow - how the colors values get transferred and transformed on their way to the viewers eyeballs. What I’m worried about is the whole processing pipeline, and making sure that when a particular buffer of pixels is declared as being in the displays colorspace as defined by the given profile, that in fact the buffer is actually processed in exactly the same way for display as it was when it was profiled.

So saying that application colors are processed through the compositor but that the calibration and profiling tools should take control of the raw display interface, and take it on trust that the compositor won’t differ in how the pixels are processed, is inviting unnecessary disaster. For instance: let’s say that my calibration and profiling tools stay as they are, and set up calibration using the hardware VideoLUT and then profile the display. Then the user switches to Wayland, but Wayland doesn’t load the VideoLUT from the ‘vcgt’ tag in the display profile (or Wayland has some other rendering tweak/feature). The profile is then not valid. How can I verify it’s not valid ? - I need to run the color profiling tools in verification mode through the Wayland compositor to do this.

To put it another way - any valid color calibration & profiling tool has to have the capability of changing the system calibration and profile, because that’s its final task for the user. This capability means that it can set the color workflow up correctly for calibration and profiling, and gives it the extremely valuable assurance that the color processing used for profiling is the same one that will be used for rendering, meaning that the profile will be valid, and it is trivial to verify that the profile is correct.

Summary :- from a Color Management point of view, suggesting that the calibration and profiling tools should access the display using a completely different mechanism than the workflow the profile will be used in, is the exact opposite of the best way of doing it.

You should really take a look at one of the commercial profiling applications. No, they don’t issue instructions and then switch to a blank screen - they have instructions on the screen, and graphics of exactly where to place the instrument, etc. No, I don’t think it is normal for color profiling tools to have to provide their own GUI rendering library to display their GUI, just because they want the color management set up in a particular way.

I’ve expended a lot of time with detailed explanations, but you don’t seem to want to spend the time researching or understanding.

[ Strangely enough, I’ve not spent 20 years doing the same thing over and over again, I’ve spent it in a constant search for better ways of doing things. ]

Which is perfectly fine - here is another problem requiring a solution :- find a way to let color calibration and profiling applications install calibrations and profiles, so that they can perform the function users expect of them.

I see it as fundamental. Please explain why you think otherwise.

I’m confident HDR spaces can be characterized successfully with ICC profiles (as successfully as is possible, given that many HDR displays do too much processing and therefore make any static characterization a compromise). A display profile usually records the absolute brightness in the ‘lumi’ tag.
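For example (a sketch, using LittleCMS simply because it exposes the tag), reading that absolute brightness back out looks like this:

```c
/* ICC display profiles can carry a 'lumi' tag; LittleCMS exposes it as a
 * cmsCIEXYZ whose Y component is the peak white luminance in cd/m^2 (nits). */
#include <lcms2.h>
#include <stdio.h>

void print_peak_luminance(cmsHPROFILE display_profile)
{
    const cmsCIEXYZ *lumi = cmsReadTag(display_profile, cmsSigLuminanceTag);
    if (lumi)
        printf("peak white: %.1f cd/m^2\n", lumi->Y);
    else
        printf("profile carries no luminance tag\n");
}
```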

Yes. An ICC CMM will need some tweaking to cope with linking HDR profiles. For SDR → HDR there needs to be a brightness intent parameter (where SDR 1.0 maps to). For HDR → SDR there needs to be a suitable tone reproduction operator.

[ I experimented with this a little, some time ago when playing with creating an scRGB profile. ]

Please stop getting personal (and I mean everyone participating)! We are all here with the best intentions. This is a complex topic and a lot of people want to learn from this thread. Getting personal drives people away and this is counterproductive!

Please keep explaining things to each other, don’t point fingers. If something is unclear explain it again!


That’s not how it is coming across from some people.

Sorry, I’ve spent my whole working day doing nothing but composing emails about Wayland & Color Management. I have explained the same points about half a dozen times in half a dozen different ways, and I have reached my limits for a while, unless we are able to move on.