Wayland color management

What layers in Xorg? colord, for example, only provides information (and includes a LUT loader); it doesn’t do any color management. NOTHING does on Xorg. Oyranos, I think, tries to be a full color management system, but AFAIK nobody uses it. Or to put it differently: on Xorg the only way is to push pixels, hopefully with a CM lib (since I trust the developers of the CM lib a bit more than the other developers involved). Also, with a MacOS-style approach developers still need to do the right thing or otherwise everything falls over anyway (it is effectively still a library that just happens to be part of the OS) (also, since it is closed source, it is fucking opaque to begin with).

Yes, it would be great to have a unified color management system, but that would need to be built on top of this Wayland protocol. The biggest reason is that, for security reasons, we don’t want to do a lot of color management inside the compositor (remember, the Wayland compositor is also the root of trust from a security perspective!). (There are probably other reasons as well.)

You still need to educate every compositor creator, at the very least, and convince the Wayland devs to do the right thing (current proposals, for example, don’t allow for profiling or calibrating and rely on ICC device link profiles).

Sorry, I was afraid I was misunderstanding you here. Note that your beloved MacOS is built around ICC, right? (See here for documentation.) Also, I am still not sure what you meant with your original remark.

So we should just trust some other devs to do the right thing™? Although I agree LCMS needs to be improved/replaced, currently (besides OCIO) it is the best we have. Note that, as things currently stand, every compositor dev is going to use LCMS to implement this anyway, yes, even your full system color management stuff.

You know, that is effectively what my proposal boils down to, except that for security reasons I want to limit the amount of information the compositor needs to process from clients. (So everything that doesn’t want to be clever sends its buffers as linear Rec.2020, and we should have an easy-to-use, fast library to do this for them; advanced apps can go directly to display space, which needs some extra communication to figure out what the display space actually is.)

On top of this we can build a proper system-wide color management system, but we don’t want to push that inside the compositor!

You have the ICC profile with matrix and TRC handled by the CMS lib. Then the video card and the VCGT add more corrections. What is taking care of the VCGT? The GPU driver? The DE? I haven’t figured out this part, but it’s not just a matter of one CMS; there are at least two layers.

It seems we are going to have a very secure, unusable compositor.

I’m not asking for more ICC wizardry; I’m citing Apple as an example of an OS which has a built-in CMS deep inside. The original remark is: when I load an app on a multi-screen setup, I cannot check whether what the app considers the display profile is actually the profile of the right screen. Same thing when the app moves from one screen to another. I have seen weird stuff going on, and debugging color profiles is nearly impossible.

Sure, but it will be easier to monitor only one project and one team, instead of tracking every little project out there.

I don’t care if it’s in or before or after the compositor, I just want a single CMS/pipe doing all the color management for the whole OS in a predictable, reproducible, unified, debuggable, documented and parallelizable way. And it seems to me that the closer to the GPU you do it, the simpler it becomes.

I partly agree. Yes, it sucks that it does not use SIMD or the GPU, but you can call it from multiple threads in parallel, which means it supports multithreading. I can point you to the RawTherapee code where we do this, if you are interested.

Actually I am :slight_smile:

Ok,

If you use the flag cmsFLAGS_NOCACHE when creating the transform, as for example here, you can use the transform from multiple threads.
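For the record, a minimal sketch of what that looks like (this is not the actual RawTherapee code; the profile path, pixel format and rendering intent are only placeholders):

```c
/* Minimal sketch (not the RawTherapee code): create one lcms2 transform that
 * can safely be shared between threads by disabling the internal one-pixel
 * cache with cmsFLAGS_NOCACHE. Profile path, pixel format and intent are
 * only examples. */
#include <lcms2.h>

cmsHTRANSFORM make_shared_transform(const char *display_icc_path)
{
    cmsHPROFILE in  = cmsCreate_sRGBProfile();                       /* source: sRGB    */
    cmsHPROFILE out = cmsOpenProfileFromFile(display_icc_path, "r"); /* display profile */

    cmsHTRANSFORM xform = cmsCreateTransform(
        in,  TYPE_RGB_FLT,                 /* 32-bit float RGB in             */
        out, TYPE_RGB_FLT,                 /* 32-bit float RGB out            */
        INTENT_RELATIVE_COLORIMETRIC,
        cmsFLAGS_NOCACHE);                 /* no pixel cache -> safe to share */

    cmsCloseProfile(in);                   /* the transform keeps what it needs */
    cmsCloseProfile(out);
    return xform;                          /* cmsDeleteTransform() when done    */
}
```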

cmsDoTransform() lets you pass each stride (row) of an image. If you loop through the strides, you can multithread the for-loop. See here for a straightforward example:
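Roughly, the per-row loop then looks like this (a sketch, assuming the transform was created with cmsFLAGS_NOCACHE as above and that the image is packed float RGB; the parallelization here uses OpenMP):

```c
/* Sketch of the per-row approach: one cmsDoTransform() call per image row,
 * with the row loop parallelized via OpenMP. Assumes "xform" was created
 * with cmsFLAGS_NOCACHE (see above) and packed float RGB data, "width"
 * pixels per row. */
#include <lcms2.h>

void transform_image(cmsHTRANSFORM xform, const float *src, float *dst,
                     int width, int height)
{
    #pragma omp parallel for               /* rows are independent of each other */
    for (int y = 0; y < height; y++) {
        const float *src_row = src + (size_t)y * width * 3;
        float       *dst_row = dst + (size_t)y * width * 3;
        cmsDoTransform(xform, src_row, dst_row, width); /* width = pixel count */
    }
}
```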

VCGT is handled by the LUT loader; Xorg only provides a protocol that gives access to the video card LUT. (I think the ArgyllCMS LUT loader is called dispwin; colord-kde has its own and so does GNOME.) It actually amazed me that you didn’t know this; this is basic information that everybody dealing with color needs to know! (MacOS includes its own LUT loader. I think Windows does as well, but it is notoriously buggy, so most people use the one by Adobe.)

Bring that up with the Wayland devs; that is how they designed it. (For example, this is also an issue for streaming and screendumps, just to give some more security examples.)

With a proper protocol this should become quite a bit easier to debug, since I do agree it currently is a mess on Xorg. Also: a) the MacOS system is built around the ICC spec, so copying it would include ICC wizardry, which would be off the table; b) ICC is an industry standard that we need to support, and we also need to support OCIO and be future-proof for anything new that might yet come.

Except Wayland is only the protocol specification and Weston the reference implementation; what we are talking about is a protocol that each and every compositor should implement, so we are NOT dealing with one team! If we want to be dealing with one team, we first need a Wayland protocol that this one team can use to build up a system-wide color system, so that system can talk to each and every compositor unambiguously.

(also quite amazed you didn’t know this one)

We all want that, but to get it we need to understand what we are working with and realize that multiple teams are involved (the Wayland devs for the protocol, the compositor devs to implement said protocol, the graphics community to create a CMS that uses said protocol, and the graphics application devs to use said CMS correctly).

The only thing I know is what needs to be done to the pixel code values I want to push. I understand this stuff whenever I see an equation, or at least a block diagram, but I’m pretty much dumb outside of maths and physics. So, yeah, the mysteries of protocols and implementations are new to me, but on the other hand, I find that software engineers often prefer to bury themselves under dozens of pages of specs to avoid dealing directly with basic maths concepts that would need less paper but more thinking.

Thanks, I didn’t know any of this… :smile: You’ve summarized assertions from a lot of different places; that can be the challenge sometimes, how to round it all up.

Personally, I find LittleCMS to be one of the most capable abstractions of a complicated domain that I’ve worked with in my software career. I’d much rather have color folk work the color transforms than have GUI folks try to figure it out from scratch.

Question about the Apple architecture: If I were to take a window with a color-managed image render and split it across two displays, would the Apple CMS colorimetrically handle the pixels for each display upon which they rendered?

@gwgill

Even Android noticed that color management is important.

Enhance graphics with wide color content  |  Android Developers

The linked video might be exactly what you’re looking for …

There is also:

https://source.android.com/devices/tech/display/color-mgmt

As a physicist I do agree that software engineers often make things more difficult than is strictly necessary, but it isn’t always completely without reason, and from my work (as a small-e engineer at a large manufacturer of semiconductor equipment) I can tell you that physical items have pages and pages of specs as well (from technical drawings, to how they need to be tested, to what materials they are made of, etc., etc.). Anyway, the reason I assumed you knew this was that you seem to be an extremely capable developer (see the work on filmic), so I thought you had a passing interest in this sort of thing. So, to make it clear for you and everyone involved, let’s discuss what Wayland actually is.

Let’s compare Wayland to X for this exercise. In X we have X11, the protocol, and Xorg, the reference implementation. Due to the size of X11 and the flexibility of the protocol, everybody just uses Xorg to run their stuff on top of, so on top of the Xorg server runs a window manager (KWin, Mutter, etc.), which can potentially include a compositor or can use an external compositor (Compiz). In time, (some of) the developers of X11/Xorg realized it would be much more efficient if the server included the window manager and compositor, so they designed the Wayland protocol and the reference implementation Weston. Since in the Wayland world the window manager and the server are one and the same, different DEs couldn’t use a central server anymore and each implemented their own Wayland server (KWin and Mutter for KDE and GNOME respectively).

tl;dr
Wayland → X11
Weston → Xorg + compositor + window manager
KWin → Xorg + compositor + window manager
Mutter → Xorg + compositor + window manager
etc.

Not just the applications discussed here. Art and graphics tools in general like Krita, Scribus, Inkscape and many more will be impacted.

Do they realise they’d be driving off pretty much all of the pro level and many amateur graphics and art people?

Hi,

in principle I agree with you - other Color Management (CM) systems could be substituted
for ICC based ones. But let’s consider the situation here:

  1. This is about CM of displays, not about application CM in general.

  2. There are only two widely deployed CM systems supported in operating systems: ICC and WCS (Windows Color System). I don’t know of any applications that make use of the latter, all Color Managed applications I am aware of use ICC for display CM. (I don’t count proofing workflows such as video in this, where a display is calibrated to emulate a single specific colorspace, since this is not managing color - it is a fixed workflow.)

  3. Correct me if I’m wrong, but OCIO is aimed at color workflows (i.e. film rendering and compositing), and while I’m sure it could be adapted to work as a system device profiling system, it is not optimized to do so (CMYK etc. ?), and is poorly supported in this role. (i.e. I know of no widely used display calibration and profiling systems that output OCIO profiles.)

In terms of Wayland, I don’t think that the aim should be to put a full, general purpose color management/workflow system into it - in fact this rather goes against the Wayland idea of making the client (i.e. application) responsible for rendering.

Now you could propose that a new “neutral” system be invented to implement CM in Wayland, but this is a huge task to create, implement and maintain, and would make porting existing applications CM much harder. (Also see the appropriate xkcd about standards!)

In contrast, systems that have CM already have ICC display profiles, and applications already deal with them, and there is already a ready-to-go ICC CMM implementation that is widely used and supported to drop into a Wayland implementation (lcms2).

The ICC spec is rather large and complex, so libraries/programs that can read devicelink profiles suddenly have a larger attack surface. This is especially a problem for compositors, since those are the root of trust in the Wayland world.

Agreed - but this is a responsibility that Wayland takes on in insisting that it does the compositing in a fashion that divorces the client from knowledge about what output it is on (but I have some reservations about this conclusion - it needs more investigation).

For this and other reasons I think it would be mandatory that people from the image editing/creation software world are involved in this discussion. To frame this discussion I will try to put down what I think such a protocol should look like.

I’m not sure this is the pertinent angle. What’s more pertinent is getting across that CM is vital to certain applications, and that CM means support for both CM tools (to set up CM environments) and CM-enabled applications.

In my proposal the compositor internally should composite in a linear/scene-referred color space with the Rec.2020 primaries; legacy/sRGB content would be converted to this color space, while basic applications would render directly to it. Then the compositor would use a shader to render this composite down to the screen color space, and in that space it would composite in the advanced applications. In this case profiling can just use the advanced application route, and only for calibration do we need to add a way to change the calibration LUT[2] from the Wayland protocol.

I fear this is far too complex and prescriptive. I don’t think that any particular colorspace should be baked into the composer, and that it is not actually necessary to tie CM to compositing space (see my suggestion at the end.)

But I have been thinking about Wayland, and it occurred to me that there is a highly analogous display attribute that they must have had to come to grips with: display resolution (aka density). And indeed, yes, they have a means of dealing with HiDPI. It’s a little clunky, but it’s what they have adopted, and so it provides a path for CM that would (hopefully) meet less resistance.

The way Wayland works allows for a (spatial) transform between pixel buffer and surface. The Wayland compositor uses that definition to rotate & scale pixels before they are composited together.

For this situation with a set of mixed-resolution outputs and legacy applications, the application will render to fixed-DPI buffers, and the compositor then scales things so that (say) windows end up being similar sizes as they move or cross from one output to another. Naturally this is not optimal with regard to quality, so the approach a HiDPI-aware application would take is to create buffers with an orientation and DPI that result in a null transformation from the compositor, so the pixels can then be composed directly to the output.
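For reference, this HiDPI path is already in core Wayland; a client-side fragment (buffer and surface setup elided) looks roughly like this:

```c
/* The existing HiDPI path in core Wayland (wl_surface version 3+): a
 * scale-aware client renders its buffer at 2x and declares that, so the
 * compositor's buffer-to-surface transform becomes a null (1:1) one. */
#include <wayland-client.h>

void commit_hidpi_frame(struct wl_surface *surface, struct wl_buffer *buffer_2x,
                        int surface_width, int surface_height)
{
    wl_surface_set_buffer_scale(surface, 2);     /* buffer is 2x the surface size */
    wl_surface_attach(surface, buffer_2x, 0, 0);
    wl_surface_damage_buffer(surface, 0, 0,      /* damage in buffer coordinates  */
                             surface_width * 2, surface_height * 2);
    wl_surface_commit(surface);
}
```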

This translates pretty directly to CM, if you substitute a device color profile for DPI.

Assume the compositor has a profile assigned to each output.

A legacy application won’t label its buffer colorspace, and so the compositor could either assume the source is the same as the output (CM Off mode for speed and the same as current Wayland behavior) or assume that unlabelled sources are sRGB (CM global On mode - yay - a color managed desktop!)

CM aware applications would tag their buffers with the appropriate color profile, and allow the compositor to do the CM conversion, or if they want to take charge of the CM themselves (because they need more control over CM details, or if they are converting from color spaces like CMYK etc. that aren’t supported by Wayland), then they would do the conversion themselves, and label the buffer as the same as the main output it resides on, resulting in a null transform.
(i.e. if HiDPI applications are able to get the information about what DPI the buffer mainly resides on, then the same path should be possible for CM to know what output profile the buffer lies on).

[ I think the space in which the Wayland compositor composes pixels for each output could possibly be managed by a distinct extension, perhaps one that simply provides a device space ↔ linear light curve. ]
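To make the analogy concrete, here is a sketch (my own, under stated assumptions) of the “do the conversion yourself, null transform” path described above, using only APIs that exist today; the buffer/surface setup and the still-missing tagging request are elided:

```c
/* Sketch of the "null transform" path: a CM-aware client converts its pixels
 * to the main output's display profile itself (lcms2) and hands the
 * compositor an ordinary buffer. The missing piece is a protocol request to
 * tell the compositor "this buffer is already in output N's space" (and to
 * tag buffers with a source profile in the non-null case). */
#include <lcms2.h>
#include <wayland-client.h>

void commit_cm_frame(struct wl_surface *surface, struct wl_buffer *buffer,
                     cmsHTRANSFORM to_display,   /* source space -> output profile; the  */
                                                 /* transform's output format must match */
                                                 /* the wl_shm format, e.g. TYPE_BGRA_8  */
                                                 /* for WL_SHM_FORMAT_ARGB8888           */
                     const void *src_pixels, void *shm_pixels,
                     int width, int height)
{
    cmsDoTransform(to_display, src_pixels, shm_pixels,
                   (cmsUInt32Number)(width * height));

    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_damage_buffer(surface, 0, 0, width, height);
    wl_surface_commit(surface);
}
```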

No, I don’t think so. Typically a CMM will compute a link (using floating point maths) and then implement the link transform as efficiently as possible. Even using integer conversions, a link doesn’t lose much precision (typically about 1 bit of rounding). But in any case, a CM-aware application should be able to do the conversion itself using whatever color system it likes, to whatever precision it likes, and have the Wayland compositor transfer the pixels unchanged to the display.

I think your sketch is fine, but I suspect it won’t fly in the Wayland world from a performance perspective alone.

I was suggesting an ideal case. I think in practice such an idea would need enthusiasm from both organizations on a much wider basis than just color management, and would be something that at best
would take over 12 months to organize (how far ahead does each of these meetings commit to a city and venue?).

I have had similar thoughts after my attempt on the Wayland dev list, but let me refine such an approach a bit:

  1. Develop the Wayland protocol extensions, and test them with (private) patches to Weston.

  2. Pick one or two popular graphics-app-friendly Linux distros, and create patches for their Wayland composers.

  3. Port color tools and as many color-managed applications (i.e. UI libraries) as possible to use the extensions to Wayland.

  4. Persuade the distros to adopt the Wayland extensions with their distribution of the graphics applications.

It’s largely up to the application users and application authors to apply pressure to the distros.

Whether the extensions are officially adopted by the Wayland project becomes irrelevant. The distros would be the ones to apply pressure in that direction, though.

lcms does support multi-threading. GPU is a bit out of scope for a general purpose library, but is certainly in scope for a Wayland composer implementation. But remember “premature optimization is the root of all evil in programming” :-).

Yes, that’s how it works on all the current operating systems - MSWindows, OS X and Linux. It’s all done at the application level. Yes, OS X smooths things a little by letting you just label the colorspace so that it does the link/transform for you, but that’s it. If someone really wants to take the pain and mistakes away from App. programmers, then they need to build good CM support into the GUI libraries like Qt etc.

The color management setup takes care of that, i.e. ArgyllCMS dispwin, colord etc. It’s the same as on MSWindows and OS X: the VideoLUT tag is read from the display profile and sent to the hardware. The ICC profile is registered with the operating system. The display calibration & profiling tool took care of coordinating the VideoLUT calibration curves and the profile. That’s it.
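For anyone wondering what that amounts to in code, here is a hedged sketch of what such a LUT loader does (real loaders like dispwin also handle XRandR, missing vcgt tags, ramp sizes other than 256, and so on):

```c
/* Hedged sketch of a LUT loader: read the vcgt tag from the display profile
 * with lcms2 and push the resulting ramps to the video card LUT (here via
 * the X11 XF86VidMode extension). */
#include <lcms2.h>
#include <X11/Xlib.h>
#include <X11/extensions/xf86vmode.h>

int load_vcgt(Display *dpy, int screen, const char *profile_path)
{
    cmsHPROFILE prof = cmsOpenProfileFromFile(profile_path, "r");
    if (!prof)
        return -1;

    /* vcgt is stored as three tone curves (R, G, B). */
    cmsToneCurve **vcgt = (cmsToneCurve **)cmsReadTag(prof, cmsSigVcgtTag);
    if (!vcgt) {
        cmsCloseProfile(prof);
        return -1;                      /* profile has no calibration curves */
    }

    unsigned short ramp[3][256];
    for (int i = 0; i < 256; i++) {
        cmsUInt16Number in = (cmsUInt16Number)((i * 65535) / 255);
        ramp[0][i] = cmsEvalToneCurve16(vcgt[0], in);
        ramp[1][i] = cmsEvalToneCurve16(vcgt[1], in);
        ramp[2][i] = cmsEvalToneCurve16(vcgt[2], in);
    }
    cmsCloseProfile(prof);

    /* Hand the ramps to the video card LUT. */
    return XF86VidModeSetGammaRamp(dpy, screen, 256,
                                   ramp[0], ramp[1], ramp[2]) ? 0 : -1;
}
```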

Yes, there is a proposal for a Wayland Security extension, but last time I checked it was just a sketch of a proposal, and hadn’t been implemented. Something like that would be the path to providing VCGT access in a way that could be acceptable to the Wayland devs.

Yep - Marti has done a great job with it.

I think so, yes, since it has all the info it needs to correct each portion of the raster appropriately.
Nothing stops any of the GUI toolkits or a Wayland composer from doing the same if they are provided with the same information (i.e. tagged pixel buffers and color-profile-tagged outputs).

Interesting. I got the impression that there was some thought about running Android display output via Wayland, and if so I wonder what they will make of no color management support in Wayland. (Of course if they assume a single display, they can bodge around it.)

Thanks, display color management is clearer to me now than it was this morning, mainly from reading the post you’ve been constructing…

No - they thought I was joking or out of my mind when I told them that.

Exactly!

Incidentally, CAIRO under MacOS also prevents applications from directly pushing pixels to the screen. Pixel buffers are assumed to be sRGB and then converted to the display profile by the system CMM. It is not clear to me if this was intentional or a bug due to a misinterpretation of the MacOS API.

The direct consequence is that programs like DT and RT are forced to always assume an sRGB display as output device…

Hi,

Thanks for your detailed response

With regard to device links, I was talking more about this proposal by the Wayland devs that uses device link profiles to communicate with the compositor, which I think is a poor idea for my stated reasons. Regarding OCIO, this is a point where OCIO does show its origins in the film industry, in that it is mostly used on expensive monitors/projectors (with built-in LUTs), so the display profile will be sRGB or DCI-P3, and the designers have no interest in outputs other than monitors/projectors; for cheaper monitors the profile for the main output is baked into the config. The short of this is that most OCIO-enabled software has no idea what an ICC profile is. (I can think of only Krita that has both OCIO and ICC support.)

That is probably a better way to put it

That could work, so long as we get a way to properly tag buffers (so not only the color space, but also the rendering intent and, I think, black point compensation), but I think this pushes way too much of the CM into the compositor.

Why not? Doing your above proposal would mean doing a color management operation per buffer, not all of which can be pre-computed. In my proposal the sRGB to Rec.2020 (or other compositor-internal color space) conversion can easily be turned into a shader and thus run on the GPU; similarly for the Rec.2020 (or other internal compositor space) to display space conversion: since this will only be updated once in a while (after profiling), it should be possible to compile it down to a shader/LUT. True, we could JIT-compile color transforms to shaders to speed things up in the DPI-like workflow, but that would eat more resources than I would want to spend, or am I missing something?
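To illustrate what that shader would encode, here is the per-pixel math written as plain C (sRGB EOTF plus a BT.709→BT.2020 primary matrix; the matrix values are the rounded ones from ITU-R BT.2087, and a real compositor would run this as a GLSL shader or bake it into a 3D LUT):

```c
/* Illustrative C version of the per-pixel math a compositor shader would
 * encode: undo the sRGB transfer curve, then rotate the primaries from
 * BT.709/sRGB to BT.2020 (matrix values from ITU-R BT.2087). */
#include <math.h>

static float srgb_eotf(float c)                 /* encoded [0,1] -> linear light */
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

static void srgb_to_linear_rec2020(const float in[3], float out[3])
{
    /* BT.709 -> BT.2020 primary conversion (linear light). */
    static const float m[3][3] = {
        { 0.6274f, 0.3293f, 0.0433f },
        { 0.0691f, 0.9195f, 0.0114f },
        { 0.0164f, 0.0880f, 0.8956f },
    };
    float lin[3] = { srgb_eotf(in[0]), srgb_eotf(in[1]), srgb_eotf(in[2]) };

    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * lin[0] + m[i][1] * lin[1] + m[i][2] * lin[2];
}
```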


Pretty sure this is a defect in CAIRO under MacOS; having skimmed some of the documentation, MacOS does have the ability to just push pixels to the screen if you want to do that, although it is not recommended.

Incidentally, I am including support for OCIO configs in my ICC-based PhotoFlow editor: PhotoFlow/ at ocio · aferrero2707/PhotoFlow · GitHub
