They are also expensive. The point is more that these things exist; it was just given as an example of an option (to be fair, only an option for people who have way too much money to spend). And yes, for 99.9% of people it is complete overkill, and an EIZO monitor would be a much better way to spend their money.
Rather than critique the proposal in detail, let me point you to this, and you can check it off against each requirement and see how it scores. How does it compare to MSWindows, OS X and X11 ?
The only thing I’ll add is that there seems to be some unnecessary complexity to it too - I don’t see a need to specify arbitrary additive colorspaces in the protocol itself. If an application needs/wants to do that, it can pretty easily use lcms to construct a profile on the fly to do it.
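To make the "construct a profile on the fly" idea concrete, here is a hedged sketch of the core computation such a profile encodes: deriving the RGB-to-XYZ matrix for an arbitrary additive colorspace from its primary and white chromaticities. This is the same math lcms performs internally when building a matrix/TRC profile (e.g. via `cmsCreateRGBProfile`); the pure-Python version below is only for illustration, not a real lcms call.

```python
# Sketch: derive an RGB->XYZ matrix for an arbitrary additive colorspace
# from its primary and white-point chromaticities. Pure Python, no deps.
# This mirrors what a matrix/TRC ICC profile encodes; lcms can build such
# a profile on the fly from the same inputs.

def xy_to_xyz(x, y):
    """Chromaticity (x, y) to XYZ with Y normalized to 1."""
    return (x / y, 1.0, (1.0 - x - y) / y)

def det3(m):
    """Determinant of a 3x3 matrix (list of rows)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def rgb_to_xyz_matrix(red, green, blue, white):
    """Rows are X, Y, Z; columns are the scaled R, G, B primaries."""
    cols = [xy_to_xyz(*c) for c in (red, green, blue)]
    P = [[cols[j][i] for j in range(3)] for i in range(3)]
    W = xy_to_xyz(*white)
    d = det3(P)
    # Cramer's rule: per-primary scales so R+G+B at full drive hits white.
    S = []
    for i in range(3):
        Pi = [row[:] for row in P]
        for r in range(3):
            Pi[r][i] = W[r]
        S.append(det3(Pi) / d)
    return [[P[i][j] * S[j] for j in range(3)] for i in range(3)]

# Feeding in the sRGB primaries and D65 white reproduces the familiar
# standard sRGB matrix (top-left entry ~0.4124, middle row ~0.2126/0.7152/0.0722).
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      (0.3127, 0.3290))
```

The same function works for any additive space (DCI-P3, Rec.2020, a profiled monitor's measured primaries), which is why the commenter argues the protocol itself doesn't need to enumerate them.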
I’ve looked it over, but I don’t see the point of commenting much on it. You think you are right, and are pursuing an implementation regardless of my opinion or the reasons behind my opinion.
It would be far more rewarding for me to have someone with a better knowledge of Wayland and a reasonable understanding of what color management encompasses critique and improve my proposed protocol sketch. It appears that such a person doesn’t exist, though.
If my intention was to insult people you would know it! :-)
But as always, I’ll call things as I see them, and that may hurt some people’s feelings, I guess.
Tone mapping is none of your business: HDR screens use either the Dolby PQ OETF or the HLG OETF, so just provide something to apply a LUT and fill it with those curves. If the output supports 10 bits, write SDR between 0-255 and HDR between 0-1023. Get that working and we will have some ground to fight the surround-light/Bartleson-Breneman effect with a proper CAM, which is much more difficult than applying a profile. Meanwhile, people have a hardware backlight dial.
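For readers who haven't met these curves: the two OETFs named above are published in SMPTE ST 2084 (PQ) and ITU-R BT.2100 (HLG). Below is a hedged sketch that implements both from their spec constants and fills a 10-bit LUT as suggested; the LUT size of 1024 entries is an assumption for illustration.

```python
import math

# SMPTE ST 2084 (Dolby PQ) constants, expressed as the spec's rationals.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_oetf(y):
    """Luminance normalized to 10000 cd/m^2 in [0,1] -> PQ signal in [0,1]."""
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

# ITU-R BT.2100 HLG constants.
A = 0.17883277
B = 1.0 - 4.0 * A
C = 0.5 - A * math.log(4.0 * A)

def hlg_oetf(e):
    """Linear scene light in [0,1] -> HLG signal in [0,1]."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C

# Fill a 10-bit LUT with the PQ curve, as the comment above proposes.
pq_lut = [round(1023 * pq_oetf(i / 1023)) for i in range(1024)]
```

With that LUT in place, full-scale PQ (code 1023) corresponds to 10000 cd/m^2, and the SDR range maps into the low end of the curve.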
That was my initial thought as well, but: https://www.collabora.com/news-and-blog/blog/2020/11/19/developing-wayland-color-management-and-high-dynamic-range/#qcom1061
Thanks a lot for feeding me information
I totally get it if you or anyone else gets fed up entertaining my curiosity, given that I am only trying to understand the current state of things and my only contribution (at best) is reproducing that understanding in a way that’s (hopefully, maybe?) accessible to someone who hasn’t followed all of the past discussions on this. I am still hoping to keep the constructive discourse going, but obviously I am not demanding anything (how could I?).
I still feel like I am missing something here: as far as I can see, the Wayland color management proposal fulfills your requirements (or they are explicitly out of scope for the proposal), and the summary of your protocol sketch pretty much matches Wayland’s.
The requirement about being able to set the display color profile is not covered by Wayland’s protocol. It explicitly states that it does not cover that process, but assumes display profiles are already in place. To me that doesn’t mean there’s no way to set display profiles, just that this proposal doesn’t define how that happens. Basically, an implementation is free to handle that however it wants, or it may get standardized in a separate proposal.
As to your sketch: 1) is not covered; at least I didn’t see any provision in Wayland for a compositor not doing color management. However, that is basically the current, undesired “baseline” behaviour, right? 2) and 3) seem to be basically the same as the Wayland proposal to me. And 4) is not fully covered, in that there is not explicitly such a mode, but the application can set its output color space to the display color space, which gets you essentially the same behaviour. Except that the application needs to handle multiple displays itself, i.e. needs to split its output into two surfaces if it spans multiple monitors, to set different output profiles. As far as I understand, that mode is mostly necessary for profiling though, so I’d expect it to target only a single monitor at a time anyway.
Right now the discrepancies/critiques I did get are: 1. Wayland’s protocol doesn’t cover it all (especially how to configure/change display profiles). 2. The complexity of adding color spaces to the protocol. To me, 1. is orthogonal: it is obviously fundamental to profiling software, but nothing in the current proposal prevents a compositor implementation or another proposal from providing a solution for that. On 2., my understanding is that the compositor needs to be able to convert input from various sources into a common working space for blending, thus it needs to be able to handle the input color space, and the protocol lists a few of those that a compositor should be able to handle. Is your point that there are too many of those, as there are already tools which can convert between them, so the compositor doesn’t need to be able to handle all of them itself? I am clearly out of my depth here - I would expect you could even remove that from the protocol altogether and let the compositor announce to applications which color spaces it can handle - however I am probably missing something there.
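To illustrate the "convert input into a common working space" step mentioned above: for a matrix-based source colorspace, the compositor would undo the transfer function and apply the colorspace's RGB-to-XYZ matrix. The sketch below does this for sRGB input into a linear XYZ working space; the matrix and transfer function are the standard IEC 61966-2-1 values, while the choice of XYZ as the working space is just an assumption for illustration.

```python
# Sketch of a compositor-side conversion: a pixel tagged as sRGB is
# decoded to linear light and taken into a linear XYZ working space,
# where pixels from differently-tagged surfaces could then be blended.

SRGB_TO_XYZ = [
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
]

def srgb_decode(v):
    """sRGB transfer function: encoded [0,1] -> linear light [0,1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_pixel_to_working_space(r, g, b):
    """Encoded sRGB triplet -> (X, Y, Z) in the linear working space."""
    lin = [srgb_decode(c) for c in (r, g, b)]
    return tuple(sum(SRGB_TO_XYZ[i][j] * lin[j] for j in range(3))
                 for i in range(3))
```

Each additional colorspace the protocol enumerates means another decode-plus-matrix (or full ICC) path like this inside the compositor, which is exactly the complexity being debated.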
This is BAD. Not only does the profiling app need access to it, but advanced graphics apps need it too. This not being part of the standard cripples the standard.
Not really - the client needs to have full access to, and knowledge of, the surface with no transforms, because doing three conversions as described by you may lead to a great loss in precision (and in some cases HDR <-> SDR headaches). Personally, I’d also love it if an application could define an important region of the window to be color managed and leave the rest to default color management.
As for things “missing in the standard/left for the compositor to decide” - if there’s no standard way of doing things, you might imagine Krita (Qt-based) having nice colours on KDE but terrible ones on GNOME, while the other way around would be true for GIMP. You don’t leave a crucial part of a standard as “to be defined later” or “let implementations do their thing”.
In the current protocol you can get the (current) display profile from the compositor; if you then set the surface (aka your window, or part of your window if using a sub-surface) to that profile, the compositor must do a no-op identity transform. The other options (using a predefined color space, or assigning an ICC profile) are there mostly for e.g. web browsers, video players, simple image viewers, etc., where the app doesn’t care too much about 100% accuracy but still might deal with non-sRGB images.
All of the above should be possible including defining just part of a window (via sub-surface).
See the last paragraph of the section linked: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#wayland-color-pipeline
(There is some extra language around it in regards to alpha blending and multiple monitors, but any area that isn’t covered by another partially transparent window on the “main” display should not be touched by any transform.)
Sorry - I don’t currently have a lot of time (or energy) to replay past discussions on this and other lists at any length. One of the key problems it seems, is that a lot of the statements relating to this subject have little or no meaning to someone lacking the requisite life experience.
Then it seems to me that you are not reading what I wrote
The key point is that you can’t make a problem go away by saying it’s out of scope.
Anyone who has actually used a color managed system in anger knows that being able to color profile it and keep it in calibration is essential. It is not an option that can be waved away. The fact that all existing desktop application environments support this (and hence all existing applications that depend on color management expect this, and users expect this) is a hint as to the reality of the situation.
And while it’s perfectly reasonable to expect that implementation of a new facility will be incremental, anyone who has architected and participated in standardizing moderately complex systems knows the issues that result by not looking far enough ahead when architecting something. Too often you end up with a mess when you realize that early assumptions and simplifications are wrong, and that foundations need to be ripped up and redone, and/or horrible hacks have to be lived with because of momentum and sunk cost. If this was brand new territory, and everyone was on a learning curve, then a stumbling path to “perfection” is how it has to be, but this is not the case.
And that is a big problem. You haven’t got a modern Color Management system if there is no standard way of doing Color Management things. (Converting pixels from one colorspace to another is not a Color Management system.) Saying “do it some other way” is not going to happen. As the author of the only profiling system readily available on Linux, I’m 100% sure that I’m never going to hack distro- or compositor-specific code into my code to load profiles or attempt calibration and profiling, even if there were a way to do so. Just like any other application author, I want a well thought out standard API to do these things, so that it can be written once and will work on all Wayland based systems. I get that on all the existing operating systems I support.
Wrong. I could digress and point out that the major difference between Wayland and X11 is that X11 does the rendering on an application’s behalf - that is why it worked well as a remote display protocol - and that in contrast Wayland assigns that responsibility to the application, instead providing a smart “dumb frame buffer” to it. So logically it would follow that the application should have the responsibility for color managing the pixels, since that is part and parcel of the rendering process!
But most Wayland Color Management proposals have included a Compositor pixel color transform capability for two reasons:
It’s highly desirable in terms of making it easy for non color sensitive applications to be written that will work across the modern range of wide gamut and/or HDR etc. displays, without the authors having to be experts on, and clutter their code with extensive color management.
It’s forced by the wrong assumptions made in the initial architecture of Wayland: that pixels on different displays are interchangeable, and that the application can be kept ignorant of which display it is rendering to. This was the very basis of how Wayland handles compositing, so it’s not easy to fix or work around. Wayland can move an application’s window around between displays without the application’s involvement, so it is a problem when the application is doing, or has to do, the pixel color conversion.
But let’s come back to why the application has to be able to manage pixel color conversion itself. And once again, this is something that becomes evident when you have written color managed applications, such as a PostScript RIP or a soft proofing application, but may not mean anything to you if you haven’t:
The format of source color specifications is open ended. There are many standard ways of specifying a colorspace, but there are an infinite number of ways that aren’t standardized, and people keep inventing more. As I have previously mentioned, for a taste of this, go read the PostScript or PDF specifications related to color. Even something like ICC profiles allows for up to 15 channels of color information, so to support the subset of ICC-based color specifications within Wayland would mean that Wayland has to have a raster format capable of handling floating point rasters with up to 15 channels, named color “rasters”, etc.
The details of converting from one colorspace to another is open ended. There are some commonly used ways of doing this conversion, but when it comes to color sensitive applications, it can be very application specific, and people are constantly inventing new ways to do this. No clever technical tricks can hide this problem. In general, color conversions involve dealing with gamut differences between colorspaces, and the conversion needs to know the details of the source and destination colorspaces to setup specific flavors of conversions.
So a modern Color Management architecture has to allow applications to do their own pixel color conversion if it is to be comparable to existing systems. Making this possible while allowing Wayland to do the Compositing is one of the challenges. If Wayland had been architected with Color Management in mind, a lot of problems may have been avoided. (And Color Management was a well understood thing at the point in time that Wayland was created.)
See above. You can’t assume that the compositor can handle the pixel color conversion except in very carefully controlled situations. And once source colors have been converted to a particular display’s colorspace, they can be blended as they always have been. It does offer the possibility of improving the blending operation by allowing it to compose in a linear light mixing space - knowledge of the EOTF would allow simple per-channel curves to convert to a linear light mixing space and back, improving transparency behavior and anti-aliasing. I guess that may imply a change to Wayland implementations, if they are currently doing blending before windowing into particular displays, but then it’s hard to take advantage of display hardware acceleration if they were doing that.
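The linear-light blending idea above can be sketched in a few lines: decode with the display's EOTF, blend, re-encode with the inverse EOTF. Using sRGB as the example EOTF (an assumption; a real compositor would use the display's measured curve), the difference from naive blending of encoded values is easy to see.

```python
# Sketch: alpha blending in linear light vs. blending encoded values.
# sRGB curves stand in for "the display's EOTF" for illustration only.

def srgb_eotf(v):
    """Encoded sRGB [0,1] -> linear light [0,1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_inverse_eotf(l):
    """Linear light [0,1] -> encoded sRGB [0,1]."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def blend_linear(a, b, alpha):
    """Alpha-blend two encoded values in a linear light mixing space."""
    la, lb = srgb_eotf(a), srgb_eotf(b)
    return srgb_inverse_eotf(alpha * la + (1 - alpha) * lb)

# A 50/50 blend of black and white:
naive = 0.5 * 0.0 + 0.5 * 1.0        # 0.5, blending the encoded values
correct = blend_linear(0.0, 1.0, 0.5)  # ~0.735, the perceptually right result
```

The gap between 0.5 and ~0.735 is exactly the anti-aliasing and transparency artifact the paragraph refers to; the per-channel curves mentioned there are the `srgb_eotf`/`srgb_inverse_eotf` pair.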
For those who are a bit lost in the nuances of color, a strong parallel in the Wayland world is how it is coming to grips with displays with widely different DPIs. When the application is doing the rendering, trying to hide which display the application is rendering for becomes difficult when different displays have different DPIs. So some of the “pure vision” that Wayland started out with - that applications didn’t have to know what screen they were rendering to, and that “every pixel is perfect” - has had to give way to reality. Many of the same compromises are needed for Color Management. People don’t need to measure and re-measure their screen’s DPI though, since it doesn’t drift or change, and it’s relatively straightforward to measure.
So is this already functional (usable/compilable) software (a “fork”) that only needs to be merged or adopted/packaged by distros? Has anybody tested it? Or is it just an idea?
As will be evident from perusing the link, it is a sketch of a protocol. It’s the starting point for an implementation.
If it were at the point of merging, the discussion would be completely different …
Ubuntu is considering defaulting to Wayland for the 21.04 release. Would it be time to move to another distro?
Or flood Canonical with bugs or feature requests
But I assume that it won’t work until some ‘business partner’ starts complaining…
Oh, I’m betting that they will
Or stick to a LTS based distro, like *Ubuntu 20.04
There are multiple aspects to take into account:
- Xorg is not actively developed anymore; the only thing that gets regular commits is mostly the XWayland part (Fedora is changing the packaging of that for the next release).
- Ubuntu, like Fedora has already done, defaults to Wayland. That does not mean that Xorg is not available anymore; it just means that on a default installation, with the default desktop environment, it will start a Wayland session. It’s just a switch in the login screen to get an Xorg-managed session instead.
- Color management is not used by everyone, and so it is probably not that problematic for a lot of people.
- If you want people to start moving to a new system, you have to set it as the default, to start getting more feedback and bug reports and make it evolve.
I agree with all of that, but AFAIK the proprietary NVIDIA driver does not work well with Wayland, and the open NVIDIA driver does not work with OpenCL. So many of us are stuck with continuing to use X.
That’s my understanding too.
But I’ve never had to do anything. I guess X is still the default on Fedora?
I run Fedora on one of my hosts, and I thought it now defaulted to Wayland. I’ll try to check…
And Wayland is the default for GNOME, but X is the default for KDE. KDE plans to change to Wayland by default in the next release (Fedora 34).
In fact, Wayland on NVIDIA via EGLStreams has been possible since Fedora 29, but it is not activated by default.
And there is development in progress on both the NVIDIA and Xorg/Wayland sides to get this sorted out. See for example this news from Phoronix.