Wayland color management

I have sketched out my ideas for a calibration protocol. It is probably not finished, but it contains the core of my idea: wayland-colormanager/cm_wayland_calibration.xml at master · eburema/wayland-colormanager · GitHub

Using EDID values may be a good fallback, but no displays are actually sRGB. They may be approximately so, but that is only good enough for non-critical work.

And you simply can’t test a color management extension without being able to fully exercise its capabilities, which you can’t do without being able to run calibrations, create and install profiles, and run verifications.

[ Wayland is a couple of decades behind other graphical display systems in this area, and yet rather than fully addressing it, half measures are being suggested. ]

It may operate that way, but nothing guarantees it, and making such an assumption makes the system fragile. And there is no need for this. But from a purely practical point of view the idea is a non-starter - if applications were easy to write directly against drm/kms, then Wayland and the GUI toolkits built on top of it wouldn’t exist.

I disagree. It’s half a protocol.

I’m not sure what you mean by that. Wayland must support display calibration if it is to be on par with the systems it aspires to replace, and there is no technical reason why it cannot support it.

Maybe I’m missing something, but the way the current color management protocol is laid out (as I understand it), it already needs to support applying the calibration embedded in a display profile’s ‘vcgt’ tag on a per-output basis (i.e. the whole display area), otherwise existing display profiles will be invalid. So Wayland or a compositor already needs an internal way to apply the calibration (the ‘vcgt’ contents). What is needed to enable calibration is simply exposing this functionality in a well-defined way. Profiling should already be supported, as I understand it, via a ‘null’ transform.
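To make the ‘vcgt’ handling concrete, here is a minimal sketch (not from this thread; it assumes the compositor uses Little CMS (lcms2) to parse profiles, and the function name is illustrative) of reading the calibration curves out of a display profile and expanding them into the per-channel ramp a compositor would need to apply:

```c
/* Minimal sketch, assuming lcms2: read the (de-facto standard) 'vcgt'
 * calibration curves from a display profile and expand them into a
 * 256-entry, 16-bit per-channel ramp. Error handling is kept minimal. */
#include <stdint.h>
#include <lcms2.h>

static int read_vcgt_ramp(const char *path, uint16_t ramp[3][256])
{
    cmsHPROFILE profile = cmsOpenProfileFromFile(path, "r");
    if (!profile)
        return -1;

    /* lcms2 exposes the 'vcgt' tag as an array of three tone curves (R, G, B). */
    cmsToneCurve **vcgt = cmsReadTag(profile, cmsSigVcgtTag);
    if (!vcgt) {
        cmsCloseProfile(profile);
        return -1;  /* no calibration present: caller should fall back to identity */
    }

    for (int c = 0; c < 3; c++)
        for (int i = 0; i < 256; i++)
            ramp[c][i] = cmsEvalToneCurve16(vcgt[c], (uint16_t)(i * 65535 / 255));

    cmsCloseProfile(profile);
    return 0;
}
```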

What does that mean in light of what I said above?


Exactly. This is a clear improvement and the best we can do for the 99% who don’t even know what an ICC profile is. It’s a complete solution for them.

I get that you have different requirements but I don’t know how often people have to tell you that work is done incrementally.

None of that has to be done programmatically. I can do reasonable tests without having to implement another wayland protocol.

Even if there are bugs that we’ll uncover after adding calibration support: they’re bugs. We can fix them. You make it sound like we have to deliver a perfectly working system at once or it’s all worthless.

It also has a couple of decades less work put into it.

And yes, half measures, quarter measures, whatever-fraction measures. That’s how we build things. Everyone. Literally everyone.

You’re fine with a system which can literally change at any moment, without warning or any way to detect it. Without any specification describing how different features interoperate. Where new versions suddenly behave differently.

Yes, nothing in the spec that I wrote says that this is a guarantee. I would still consider every normal desktop compositor which doesn’t guarantee it broken and it should get fixed instead of just working around it like your software does at the moment by measuring whatever the fuck X does instead of measuring the display.

I don’t know how often I have to repeat myself: nobody is suggesting that we don’t write a wayland protocol. This is a workaround.

It gives 99% of the users the best experience they can expect. You really should stop pretending that your use case is the only relevant one.

What exactly has to be calibrated from the software side of things? The VCGT? That’s under control of the compositor and as I said a normal desktop compositor should make it look like it doesn’t exist.

That’s actually a good point. The protocol currently doesn’t specify what to do with non-standard tags like vcgt. As they are not standard they should be ignored. The compositor will keep the vcgt under its control and on a working desktop compositor it will not affect anything from the outside.

There is no way around this. Profiles you generated on X could already be broken in a number of ways (well, broken in the sense that they only work correctly on X). Replicating what Xorg does is impossible because Xorg changed the way gamma and the vcgt are handled between versions, and the version is not encoded in the profile.

Theoretically, yes, but what happens when the compositor is doing some effect like adding transparency, or some redshift-like thing is active? All a profiling protocol would have to do is set a special “profiling” mode for a surface so the compositor knows that it must not add those effects.

@dutch_wolf, that would have been my idea for a profiling extension.

Note that while it is not part of the ICC standard, ‘vcgt’ has been the de-facto standard (for around two decades, originally invented by Apple) for storing the calibration that a profile needs alongside the profile itself.

This is not strictly related to X. It means when someone buys a colorimeter and uses the stock software that comes with it under Windows or macOS to calibrate and profile, they cannot simply copy the profile over to their Linux partition (running on the same machine obviously) and have the profile work as expected, because the vcgt will never get applied.
[ As a side-note, I don’t think calibration under X is as broken as you make it sound, which is imo specifically due to not many programs other than calibration software interacting with the respective APIs to set the calibration, although please understand that I am completely dispassionate towards X ]

Yep.

If we also let it set the video card LUT (I haven’t really added that since I am not sure how it would be properly formatted; we could also just use the profile for that, letting it load via the ‘vcgt’), it should also be usable for calibration.


For non-standard tags like ‘vcgt’, the compositor should ignore them on the normal color management side and only use the ‘vcgt’ when loading a new profile for a monitor (either via the calibration/profiling protocol, via the compositor’s user settings, or on startup) to set the video card LUT.

You’ll have incremented Wayland into the ’90s :frowning:

The point is that this is not a set of brand new ideas that have to evolve much - color management of graphical computer systems has been around for 20 years, and plenty of experience has been gained with existing systems. I’m trying to relay some of that information to you, from the experience of being at the coal face and having worked with graphic display systems for the whole of my professional career. Yes, it needs adapting to Wayland, but crippled steps take almost as much effort as full steps, and designing a holistic scheme will end up with a much better result than stumbling through tiny steps blindly.

Really? And how much more effort would it be to add the Wayland protocols for this and implement it the way it should be, right from the start, rather than writing hack code and having to throw it all away at some stage?

[ Hint - if there was an attempt to do this properly, then it’s worth me co-developing the calibration and profiling tools for cross testing everything, and at the end the whole thing is done, and everyone that needs color management can use it! ]

I’m not sure what you mean by that.

All I know is that a workable system needs to be able to make a list of display profiles available to the clients, and implement the ‘vcgt’ calibration curves so that the profiles are valid. So you need at least a Wayland implementation and a prototype client that makes use of the profile to exercise that functionality. But you can’t really test it, since you need to make a variety of profiles and calibrations and verify that they are being implemented correctly. Yes, you might be able to hack some testing up using tools from other systems, but from a color sensitive end user’s point of view, and a color sensitive application writer’s point of view, this is basically useless, since end users are not in a position to calibrate and profile their systems. And why waste development effort on hack tool testing? For about the same effort you could add the Wayland protocols to upload profiles, and then your testing is far easier, and you have a system that users can actually make use of.

Only if you are trying to figure things out. That’s not the case here. This is more like proposing that Wayland should support a pointing device rather than just keyboard, and proceeding to add mouse support for horizontal movement only. How much work is it to add both horizontal and vertical pointer support at the same time ?

I’m not sure what you mean. All I’m saying is that no-one guarantees what processing a system version does or does not implement between the point at which the CMM supplies pixel values and the colors that are emitted by the display device. Agreed that in practice, on many display systems, the CMM pixel values land in the frame buffer unchanged, are translated through per-channel CRTC LUTs in the same way, and are sent to the display cable. But nothing guarantees that, and nothing guarantees what the display does with it.

For instance, say a company has been selling graphics computers for many years with an assumption that displays have a gamma of 1.8. But at some stage it’s realized that in fact most displays have standardized on a gamma of about 2.2-2.4, and this mismatch is causing issues. One way they might have considered “fixing” the issue would have been to apply a gamma 0.4 “correction” curve in their display pipeline, so that applications get colors they expect. So every CRTC LUT uploaded as a calibration would have been modified by an additional hidden 0.4 gamma curve. The display system is perfectly consistent in its behavior, and is perfectly calibratable and profileable, but such profiles would not be interchangeable with a system that didn’t have this hidden 0.4 gamma correction curve.
This happens all the time with display controls and display setup. It’s even worse in the printing world, where a printer will have different rendering characteristics for each paper type and printing mode. So it is just a “happy accident” that many different graphical systems have a frame buffer to display cable characteristic that is the same. Nothing guarantees it, and a calibration and profiling system is safest if it doesn’t rely on such assumptions. And it doesn’t have to, as demonstrated by current such systems.
(This example is not so hypothetical, but since the company in question had implemented a color management system, they didn’t have to take that approach. Instead they could rely on simply setting appropriate default source colorspace profiles.)

Yes of course the behavior has to remain consistent for profiling to be useful. But nothing guarantees exactly what that behavior is, or that it be the same between disparate systems.

And that’s why people who need good color get the tools to calibrate and profile their displays - displays drift and age, controls get changed, color requirements change.

That’s always possible. It creates disruption. For instance, changing the behavior of APIs in an old system so that the CRTC curves become a combination of those set through all the APIs, rather than just the last one set, would create disruption!

The aim is not to measure the display, because that doesn’t characterize what is actually happening. The display is just one element in the display pipeline, and to work reliably a color management system has to characterize what’s actually happening from the point it has control (where the pixel values come out of the CMM) to the point the light from the display hits the retina of the user’s eye. The fewer assumptions one makes about that, the better for the reliability of the end result.

Why though ? It just makes more work in the end, when the path forward is not too difficult to see.

It doesn’t though. All users expect that it is possible to get better results than a default profile from the EDID, because for years it’s been possible for them to profile their displays on the top 3 most popular desktop systems, and have color aware applications make use of the profiles.

I’m not pretending anything. If Wayland wants to cater for things other than Photography, Publishing, Video Editing etc., then that’s fine. It should give up the pretense that it is the future of Linux graphical systems though.

Even though it’s not absolutely necessary, it’s highly useful to put a display in a good state before profiling it. By “good state” I mean setting things such as:

  • Its gamut (set to native)
  • Its white point
  • Its brightness
  • Its EOTF
  • Its black point for print proofing

In an ideal world this would all be electronically settable, and attached in some way to the profile, so that when a particular display profile is chosen, the display is automatically set to the corresponding calibration state to make the profile valid.

Many of these adjustments can be made in the display itself, and often the only practical means of making these adjustments is manually. (Technically they may be electronically settable, but the protocols are not often well documented and there are no standard APIs to access such things.)

One mechanism that is almost universally available and electronically settable for many of these adjustments, and that has standard APIs, is the CRTC VideoLUTs. So the de-facto approach is to create per-channel LUTs during the calibration phase, and embed them in the ICC profile. It doesn’t matter how this is implemented in the compositor - it could load the ‘vcgt’ tag into the CRTC, or it could implement it with GPU shader code. Either way, it is an expected aspect of a modern color managed display system.
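For illustration only, here is a hedged sketch (assuming a recent libdrm and reusing the ramp layout from the earlier sketch; a real compositor might equally well use the atomic GAMMA_LUT property or a shader) of how such per-channel curves could be programmed into a CRTC:

```c
/* Minimal sketch, assuming libdrm: push a 256-entry per-channel calibration
 * ramp into the legacy DRM/KMS gamma LUT of one CRTC. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int apply_calibration(int drm_fd, uint32_t crtc_id, uint16_t ramp[3][256])
{
    drmModeCrtcPtr crtc = drmModeGetCrtc(drm_fd, crtc_id);
    if (!crtc)
        return -1;

    int ret = -1;
    if (crtc->gamma_size == 256)   /* this sketch only handles 256-entry LUTs */
        ret = drmModeCrtcSetGamma(drm_fd, crtc_id, 256,
                                  ramp[0], ramp[1], ramp[2]);

    drmModeFreeCrtc(crtc);
    return ret;
}
```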


It’s not a “profiling” mode though. If something is applying a redshift like post processing in a way that operates independently of the calibration curves, then the profile measures the effect of that processing. In relative colorimetric mode it wouldn’t change the white point of the result from what redshift is changing it to, but it would counteract any other effect such as a relative hue shift, change in contrast etc.

After all, that is the purpose of profiling.

But the reality of color accuracy is that you don’t turn “gimmicks” such as redshift on when you want the most accurate and consistent color.

[ If something like redshift has a colorimetric definition, then it seems possible to add an API that would let it apply a colorimetrically consistent effect such as changing the white point of the display in a way that doesn’t interfere with relative colorimetric reproduction. ]

Hello Sebastian,
One of the reasons why the compositor needs to detect the HDR curve name / identification is to set the HDMI AVI infoframes which are passed to the monitor, which will help it decode the curve.
The kernel KMS module expects the UI manager to set DRM properties like output color space and HDMI HDR metadata (which includes the EOTF parameter). This HDR metadata, along with the EOTF curve identification, will be sent to the monitor in the form of an AVI infoframe, which is prepared by the kernel. Without this information, it’s impossible for a monitor to interpret the data and display the buffer as it was meant to.

@dutch_wolf @swick, the kernel ultimately needs to send the information about the HDR curve (HLG/PQ) to the HDR monitor in AVI infoframes. You are right, in the case of HDR and SDR, or even two HDR clients playing different content, we need to tone map + color convert. The idea is to have a target HDR curve, which the compositor can send to the kernel, and that can finally be sent to the monitor.
I think if we get this curve information explicitly from the client, the compositor can take the call on which curve is finally fed to the kernel for a given display monitor.
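For readers unfamiliar with this part of KMS, the following is a hedged sketch (assuming a recent kernel and libdrm that expose the HDR_OUTPUT_METADATA connector property; property-id lookup and atomic-commit plumbing are omitted) of how a compositor could tell the kernel to signal PQ to the monitor:

```c
/* Minimal sketch, assuming a recent libdrm/kernel: fill a struct
 * hdr_output_metadata for PQ output and attach it to the connector's
 * "HDR_OUTPUT_METADATA" property, from which the kernel builds the
 * infoframe sent to the monitor. */
#include <string.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_mode.h>   /* struct hdr_output_metadata / hdr_metadata_infoframe */

static int signal_pq_output(int fd, uint32_t connector_id, uint32_t prop_id)
{
    struct hdr_output_metadata meta;
    memset(&meta, 0, sizeof(meta));

    meta.metadata_type = 0;                    /* HDMI static metadata type 1 */
    meta.hdmi_metadata_type1.eotf = 2;         /* 2 = SMPTE ST 2084 (PQ), per CTA-861-G */
    meta.hdmi_metadata_type1.metadata_type = 0;
    meta.hdmi_metadata_type1.max_cll  = 1000;  /* example content light levels */
    meta.hdmi_metadata_type1.max_fall = 400;

    uint32_t blob_id = 0;
    if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
        return -1;

    /* prop_id must be the id of the connector's "HDR_OUTPUT_METADATA"
     * property, found beforehand via drmModeObjectGetProperties(). */
    return drmModeObjectSetProperty(fd, connector_id, DRM_MODE_OBJECT_CONNECTOR,
                                    prop_id, blob_id);
}
```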

So how would that work if you have 2 sources playing at the same time, one with PQ and the other with HLG, which the compositor needs to blend? Or when needing to blend in SDR content? Or applications like Blender, Natron, Krita, Nuke which deal with unmastered HDR content (so no curve has been determined yet)? Yes, I understand the compositor needs to tell the monitor whether it is using PQ or HLG, but I don’t see the need for an application to name the curve. If you really want something like this it should be fine to add it as a hint, but there are many cases where the compositor will first need to linearize the data anyway, and then it doesn’t matter whether the curve is named or not, so long as we have the curve.

So how would that work if you have 2 sources playing at the same time, one with PQ and the other with HLG, which the compositor needs to blend? Or when needing to blend in SDR content? Or applications like Blender, Natron, Krita, Nuke which deal with unmastered HDR content (so no curve has been determined yet)?

Actually, the compositor can pick one of the HDR curves as output (say PQ) and will tone map the other one (HLG) H2H to match the PQ curve before blending. Also, if it needs to blend SDR, one option is to use the HDMI AVI infoframe to indicate SDR instead, and the compositor will tone map both buffers using H2S tone mapping.

Yes, I understand the compositor needs to tell the monitor whether it is using PQ or HLG, but I don’t see the need for an application to name the curve

Honestly, if it’s not done in the app, it would be very difficult for a compositor to accurately detect a curve from the curve’s samples. Whereas an app can easily do it (for example, it can use the FFmpeg API it is already using for decoding the buffer, which can return the type of encoding from the container).

You probably don’t want to do it like that; blending in a non-linear color space will lead to artifacts, so you will need to convert both to linear and then blend. This is also the only correct way (from a color management perspective) to blend in SDR content. Then afterwards a new curve needs to be applied to the blended content, and that can be sent to the screen.

Since you don’t need to detect which curve it is, you just use the supplied curve to linearize and then reapply a (potentially different) curve, since that is the only way blending will work without any artifacts. Also, not all applications have a nice source; think for example of Blender, which can create HDR content, but it will be unmastered and probably stored in EXR files, which don’t support this kind of metadata.
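To spell out that order of operations, here is a hedged sketch (the 203 cd/m² SDR reference white and the gamma 2.2 approximation are assumptions for illustration; a compositor would do this per pixel in a shader and also handle HLG and tone mapping):

```c
/* Minimal sketch: decode a PQ (SMPTE ST 2084) pixel and an SDR pixel to
 * linear light, blend there, and re-encode the result as PQ. */
#include <math.h>

static double pq_eotf(double e)        /* PQ code value 0..1 -> cd/m^2 */
{
    const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
    double p = pow(e, 1.0 / m2);
    return 10000.0 * pow(fmax(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
}

static double pq_inv_eotf(double nits) /* cd/m^2 -> PQ code value 0..1 */
{
    const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
    double y = pow(nits / 10000.0, m1);
    return pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

/* Blend an SDR pixel (assumed gamma 2.2, mapped to a 203 cd/m^2 reference
 * white) over a PQ pixel with coverage `alpha`, in linear light. */
static double blend_sdr_over_pq(double pq_code, double sdr_code, double alpha)
{
    double hdr_nits = pq_eotf(pq_code);
    double sdr_nits = 203.0 * pow(sdr_code, 2.2);
    return pq_inv_eotf(alpha * sdr_nits + (1.0 - alpha) * hdr_nits);
}
```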

I’m out of ways to tell you that the compositor guarantees that the pixel values coming out of the CMM are the ones arriving at the display. It doesn’t make the system more brittle. You wouldn’t double-check every calculation a CPU is doing either, for example. Guarantees are there so you can depend on them.

Sure. There is no way to do any of those things from software (except for the vcgt, but that is always in a “good state”; the compositor guarantees that), so what magical protocol do you want? Set the vcgt to identity if you want to set that tag. The compositor guarantees that it’s always in that state.

That’s just horribly wrong. The user obviously turned on the redshift effect for a reason. If the profile measures not only the display but also the redshift effect, you counteract the user’s redshift configuration. The correct thing to do is profile the display (and the display only, not the redshift effect), and when you want to do color sensitive work, turn off redshift. Not to mention that the profile would only be valid for one specific time of the day, because redshift changes over time.

The user turned on redshift because the user wants the colors to shift. You’re saying that the user turned on redshift and doesn’t actually want redshift. I’m not sure how you would arrive at that conclusion.

Agreed. So why would you profile those “gimmicks” and not the display? When you turn off the “gimmicks” what’s left is the display (and again: that’s a guarantee).

Thanks for that. Let’s just hope we can arrive at something that seems “proper” to you.

Given the authority with which you keep claiming that I’m completely wrong on all these things, I’m afraid I’m perplexed at being so unaware of the display color calibration and profiling software that you have written, and that is so widely used across different operating systems and display hardware. Can you enlighten me? Tell me also, how often do you calibrate and profile your own displays - what measurement instrument do you use, and what profiling software are you using to do it?

I’m only claiming that you’re wrong on one specific aspect. Instead of trying to argue otherwise you’re telling me that you know how things work and I should just trust whatever you claim.

So can we stay on topic and you tell me why it makes sense to measure “gimmicks” like redshift instead of the display only when the compositor guarantees that what comes out of the CMS ends up on the display?

So the only ‘vcgt’ that would be supported is the identity one? As already pointed out, this would break a number of things (for no good reason I might add):

  • Interoperability with existing calibration software and the profiles created by it; this includes some major vendors’ software (e.g. X-Rite, DataColor) as well as my own (DisplayCAL, making use of ArgyllCMS). It also means no profiling under a different OS (i.e. Windows/macOS) and having the resulting profile work under Linux with Wayland (something that previously worked fine)
  • No way to set a different (and colorimetrically accurate) whitepoint than native on e.g. laptop displays (they lack display controls)
  • No way to ensure R=G=B results in a neutral color with respect to the whitepoint before profiling, so more profiling patches (and time) are needed to arrive at a comparably accurate profile on most, especially ‘cheap’, displays (they usually have a blue cast in the midtones and shadows with respect to their native whitepoint)
  • No way to linearize the signal to light output response (tone curve, ‘gamma’) before profiling, again necessitating the use of more profiling patches (and time) to accurately characterize the display response if it’s not very linear

Adding to that, when there’s a de-facto standard (like the ‘vcgt’ tag) that is widely and almost exclusively used throughout the industry that produces ICC-compatible display calibration and profiling software, it seems like a bad idea to just ignore it, because that will lead to a proliferation of workarounds and hacks to provide the same functionality, i.e. the wheel being re-invented with nothing really gained in return (other than a lot of headaches, and personally I’m really not looking forward to having to explain to people why the profile they created doesn’t work).
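To illustrate the white point and linearization points above, here is a hedged sketch of what a per-channel calibration curve does (real calibration fits measured data rather than a single gamma per channel; the single-gamma model here is purely for illustration):

```c
/* Minimal sketch: build a per-channel LUT that maps a channel's (crudely
 * approximated) measured gamma onto a common target gamma, so that R=G=B
 * is neutral and the tone response is predictable before profiling. */
#include <math.h>
#include <stdint.h>

static void build_channel_lut(double measured_gamma, double target_gamma,
                              uint16_t lut[256])
{
    for (int i = 0; i < 256; i++) {
        double in    = i / 255.0;
        double light = pow(in, target_gamma);            /* desired light output    */
        double code  = pow(light, 1.0 / measured_gamma); /* code value producing it */
        lut[i] = (uint16_t)(code * 65535.0 + 0.5);
    }
}

/* Example: a panel whose blue channel runs at roughly gamma 2.6 while red and
 * green sit near 2.2 would get three different LUTs, all targeting gamma 2.2. */
```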

Not necessarily. The main use of redshift is to reduce the blue component of white light for people who work late, making it easier to go to sleep (the blue component has an ‘activating’ effect). It’s also not entirely true that redshift ‘changes over time’ - the timeframe is usually limited to a relatively short period between the beginning of sundown and darkness. There’s nothing wrong if someone deliberately wanted to characterize the effect of redshift after it’s applied fully, and the resulting display profile (when used) would not affect the redshifted whitepoint (but saturated colors to some extent).
That said, probably not something that many people would do, although it could be interesting for color research reasons.

Short answer to your question is: 1) It doesn’t make any sense, it’s just a consequence of the sort of hack that “redshift” is. 2) That’s what happens to anything inside the profiling “loop”. Anyone who understands color management will get this. The corollary is that you achieve such effects as “redshift” by working with color management rather than against it, by either changing the input profile or inserting a PCS transform (i.e. say an abstract profile) into the CMM link. An effect of this is that such transforms have the possibility of being much more (display) device independent.
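As a hedged illustration of the “work with color management rather than against it” point (assuming lcms2, with a placeholder profile file name and arbitrary temperature values), a redshift-style warming could be inserted into the CMM chain as an abstract profile rather than applied behind the CMM’s back:

```c
/* Minimal sketch, assuming lcms2: chain an abstract white-point-shifting
 * profile (6500K -> 3400K) between the source space and the display profile,
 * so the warm rendering rides on top of the display characterization
 * instead of invalidating it. */
#include <lcms2.h>

cmsHTRANSFORM make_warm_transform(void)
{
    cmsHPROFILE src  = cmsCreate_sRGBProfile();
    cmsHPROFILE dst  = cmsOpenProfileFromFile("display.icc", "r"); /* placeholder */
    /* Brightness/hue/saturation offsets left at 0, contrast at 1; only the
     * white point is moved from 6500K towards 3400K. */
    cmsHPROFILE warm = cmsCreateBCHSWabstractProfile(17, 0.0, 1.0, 0.0, 0.0,
                                                     6500, 3400);
    if (!src || !dst || !warm)
        return NULL;

    cmsHPROFILE chain[3] = { src, warm, dst };
    cmsHTRANSFORM t = cmsCreateMultiprofileTransform(chain, 3,
                                                     TYPE_RGB_8, TYPE_RGB_8,
                                                     INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsCloseProfile(src);
    cmsCloseProfile(warm);
    cmsCloseProfile(dst);
    return t;
}
```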

[ And I’m not going to continue trying to explain in detail to someone that the world is round when they insist it is flat - it just feels like I’m being trolled and there are so many other things I could expend the time on. ]

They will need changes for wayland anyway and removing a variable should not be a problem.

Sure. The other way around should work though. I honestly don’t care about that. Adding gamma controls to X was a huge mistake, I’m not going to repeat that just to be compatible with other operating systems.

That’s exactly why you profile the display and assign a profile to the output. Maybe I’m missing something here?

The next two points are essentially the same. Where is the advantage of manually adjusting the vcgt when it can be automated by profiling?

I don’t see why anything would change. You simply should not assign profiles you created on other systems. That seems like a terrible idea either way.

Yes there is. If you do that and assign the profile to the display the whole display will get corrected, negating the redshift effect on the whole display. When the effect is undone, you suddenly have a blueshift.

What would be the point of that? It’s literally the same as turning off redshift, with the added “benefit” that the colors are wrong when it isn’t night-time.

This is not X, where there are only a few color managed applications and everything else is sRGB. Everything will be corrected.