Wayland color management

When you remove core functionality, either that functionality is not important and no one will care (haha, good one!), or things will break and users will notice and start complaining. Honestly, which do you think it will be? Maybe I should forward the complaints I will be getting to the wayland-devel mailing list when the time comes, to drive the point home?

Unfortunately, Linux doesn’t exist in a vacuum. Sounds like a really bad case of “not invented here” syndrome to me. What is the point of re-inventing (or rather, scrapping) the wheel? Heck, even Microsoft decided to support ‘vcgt’ (years ago)! Maybe I should just suggest to the ICC that they add ‘vcgt’ to the official standard; this could be done in an amendment, and then you would no longer have an excuse to ignore it (I’m seriously considering this approach, and if it were a joint effort between me, Graeme and some other experts it would probably get approved easily - it is a de-facto standard already, after all - although it would take time for an updated standard to be released).
Also, no one is asking for “gamma controls”, but for a well-defined way to set calibration, and for the system to apply existing calibration (by reading the ‘vcgt’ tag).

You are missing a lot, it seems.

  1. Profiling a display does nothing to change its whitepoint, nor does using the profile.
  2. Iterative gray balance calibration is not a manual process! And no, making R=G=B neutral, and changing the EOTF to a well-defined curve, is not the same thing, although calibration software usually does both.

Leaving aside that “it just works” (verifiably so! Also see the respective comment made by Chris Murphy on the wayland-devel mailing list), do you honestly think you or I have control over what other people do?

Again, this is a prime example showing that you seem to have no clue how ICC color management works. ICC display color management is whitepoint relative! If you profile the display while it happens to have a very reddish whitepoint, for example, then the profile will do nothing to counteract that. What it will do (when used) is what Graeme tried to explain above (chromatic adaptation of colors with regard to the whitepoint, but this will do nothing to change the gray balance). Should I attach an example picture for illustration purposes?
[ Btw, I was not even arguing that you should profile the redshift, or that it was a good idea, just that technically there is nothing wrong with it, if it’s truly what someone would desire. I personally don’t care for it, at all. ]
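To illustrate what “whitepoint relative” means in practice, here is a minimal sketch of the Bradford-type chromatic adaptation a CMM performs when it maps colors between whitepoints (the Bradford matrices and the white XYZ values are the standard published ones; everything else is simplified for the example and is not how any particular CMM implements it):

```c
#include <stdio.h>

/* Minimal sketch of Bradford chromatic adaptation: re-express an XYZ
 * colour measured relative to one white as the corresponding colour
 * relative to another white. Matrices per the published Bradford
 * transform; whites are the usual ICC D50 / D65 XYZ values. */
static const double bradford[3][3] = {
    { 0.8951,  0.2664, -0.1614},
    {-0.7502,  1.7135,  0.0367},
    { 0.0389, -0.0685,  1.0296},
};
static const double bradford_inv[3][3] = {
    { 0.9869929, -0.1470543,  0.1599627},
    { 0.4323053,  0.5183603,  0.0492912},
    {-0.0085287,  0.0400428,  0.9684867},
};

static void mat_vec(const double m[3][3], const double v[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
}

static void bradford_adapt(const double xyz[3], const double src_white[3],
                           const double dst_white[3], double out[3])
{
    double lms[3], lms_src[3], lms_dst[3], scaled[3];

    /* Into "cone" space, scale by the ratio of the whites, and back. */
    mat_vec(bradford, xyz, lms);
    mat_vec(bradford, src_white, lms_src);
    mat_vec(bradford, dst_white, lms_dst);
    for (int i = 0; i < 3; i++)
        scaled[i] = lms[i] * lms_dst[i] / lms_src[i];
    mat_vec(bradford_inv, scaled, out);
}

int main(void)
{
    const double d65[3] = {0.9505, 1.0000, 1.0890}; /* source white */
    const double d50[3] = {0.9642, 1.0000, 0.8249}; /* ICC PCS white */
    double grey[3] = {0.5, 0.5, 0.5}, out[3];

    bradford_adapt(grey, d65, d50, out);
    printf("adapted XYZ: %.4f %.4f %.4f\n", out[0], out[1], out[2]);
    return 0;
}
```

The important part: colors get expressed relative to whatever white the display actually has; nothing in this step pushes the display’s white towards some other white.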

That cannot be right. Profiles can be per channel LUTs and that’s exactly what the vcgt is. I also know for a fact that some redshift implementations use ICC profiles to achieve the effect.

Manual was a poor choice of words. Still, I really struggle to see how just another per channel lookup table (the vcgt) could not be represented in an ICC profile.

That was really helpful and I think we can resolve the disagreement.

Some profiling software lets you choose a target white point, and setting the profile does change the white point of the display. That was the scenario I had in mind with my explanation. If the profile uses the white point of the display and redshift is applied then yes, you and @gwgill are right.

edit: is there an actual requirement in the ICC spec that says the profile cannot change the white point of the output device, and is everyone using the vcgt tag to get around that?

I think you are mixing up calibration and profiling, since most calibration/profiling SW does both, often at the same time[1]; calibration can change the whitepoint using the calibration curves[2], profiling can’t, since the resulting profile only describes what the screen is capable of[3]. Strictly speaking you can profile without calibrating, but the screen’s response is often much better after calibration, making it more useful.

Does this make sense?


[1] Strictly speaking calibration will happen before profiling, but a user won’t necessarily notice that. Note that since this often isn’t properly explained, it gets confusing fast; there is a lot of wrong information out there.
[2] These are often stored in the optional VCGT tag of an ICC profile, but they don’t have to be, and normal software must ignore it (only LUT loaders and calibrators are interested in the VCGT tag).
[3] Of course it is always possible to lie here and achieve all kinds of weird effects like red shift, but doing it that way is a pretty bad idea (for one, not everything supports profiles to begin with).
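To make footnote [2] a bit more concrete, here is a minimal sketch of what a LUT loader does with those calibration curves on a KMS system (the 256-entry ramp size and the use of the legacy gamma interface are assumptions for the example; error handling is omitted):

```c
#include <stdint.h>
#include <xf86drmMode.h>

/* Push per-channel calibration ramps (as decoded from a profile's vcgt
 * tag) into the CRTC gamma LUT via the legacy KMS interface. A real
 * loader would query the LUT size with drmModeGetCrtc() instead of
 * assuming 256 entries. */
int load_calibration(int drm_fd, uint32_t crtc_id,
                     uint16_t red[256], uint16_t green[256], uint16_t blue[256])
{
    return drmModeCrtcSetGamma(drm_fd, crtc_id, 256, red, green, blue);
}
```

On X11 the equivalent is done through the XVidMode/XRandR gamma ramps; the point is that calibration is just a per-channel LUT applied after everything else.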

Seriously? Have you looked?

I know that f.lux, for example, uses the ‘vcgt’ to achieve its effect.

In theory you could apply the calibration to the profile directly, but if the whitepoint does not lie at R=G=B 255 this is not well supported (littleCMS in particular forces white to R=G=B 255), and I would consider this a hack, because it is non-standard (the ICC specification mandates for the TRC tags, for example, that “[…] the last element represents 100 % colorant”, so it would certainly be unexpected if that mapped to a colorant value less than 255 for display profiles).
I hope you’re beginning to see what I meant by “re-inventing the wheel”.

I know, because guess what, I’m the author of one of said profiling applications (yes, Graeme is the author of the tools that do the actual calibration/profiling work, and I’m “just” providing a nice GUI with quite a bit of added functionality and comfort, but I still understand intimately what each part does and how it does it) :slight_smile: They do so by letting the user adjust the display via its OSD controls, and/or by employing the ‘vcgt’ for calibration. @dutch_wolf has addressed it quite succinctly in the post above, I think.

The question is: To what other whitepoint should “the (display) profile change the whitepoint” (the profile alone can’t do anything, btw; only a transform can, and that involves at least two profiles)? Where should that come from? A source profile? What if you have several sources, e.g. one window with an image or UI elements in sRGB colorspace, D65 whitepoint, and another window with an image in ProPhoto, D50 whitepoint? Should it match each one? Then you have the problem of mixed adaptation, which will throw off your color perception, and it still doesn’t solve the problem that you might want to target a different whitepoint altogether (e.g. D58 equivalent).

I agree, and that’s another reason why we need to know the curve, so that we can apply the right degamma/EOTF to make it linear.

As the standard color spaces have their standard curves, knowing the curve lets the compositor apply the standard degamma/EOTF to make the frames linear before blending them, and then apply the standard gamma/OETF curve at the end.

If the app/content wants to apply a custom (non-standard) curve, it should provide its degamma/EOTF and gamma/OETF curves as a pair, so that the compositor can blend it the way it was intended to be.
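Roughly, what I mean is something like this for the sRGB case (a minimal sketch; the piecewise curves are the standard sRGB ones, and the blending itself is simplified to a single alpha blend):

```c
#include <math.h>

/* Standard sRGB EOTF: encoded signal [0,1] -> linear light [0,1]. */
static float srgb_eotf(float e)
{
    return e <= 0.04045f ? e / 12.92f : powf((e + 0.055f) / 1.055f, 2.4f);
}

/* Standard sRGB inverse EOTF: linear light [0,1] -> encoded signal [0,1]. */
static float srgb_inv_eotf(float l)
{
    return l <= 0.0031308f ? l * 12.92f : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

/* Blend two sRGB-encoded values: decode to linear light, blend there
 * (as the compositor should), then re-encode the result for scanout. */
static float blend_srgb(float dst, float src, float alpha)
{
    float lin = alpha * srgb_eotf(src) + (1.0f - alpha) * srgb_eotf(dst);
    return srgb_inv_eotf(lin);
}
```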

Not sure about this. I have received replies on that comment, so unless these folks are too damn good at extracting the meaning of a blank comment, I guess it was not blank (no pun intended either :smiley:).

Most of those only have one non-standard curve, so in practice that first curve will be linear; I still don’t see any reason for named curves on the application side.

Again, let me explain this from a complete-stack point of view:

  1. An HDMI monitor/display needs to know which curve has been applied to the frames coming out of the source/gfx card.
  2. The CEA and HDMI bodies have named and number-coded the standard HDR/SDR curves, and the spec says that while driving HDR, the source must set the curve’s number in the designated field of the AVI infoframes. This is the only way an HDMI monitor can know which curve was applied to the data coming out of the source (CEA-861-G spec for UHD displays).
  3. The kernel/driver prepares the AVI infoframes, and in order to set the curve identification number properly it depends on a KMS property, which needs to be set by the compositor.
  4. In order to set the KMS property for the HDR curve, the compositor needs to know the name of the curve, so that it can set the corresponding curve identification number.
  5. The compositor will only know about the curve when the client app sets the curve name/identification properly.
  6. Most of the content available from Netflix, Amazon, 4k-vidoe.com etc. is ST 2084 video, which has a standard curve. I have tested this content on many Samsung/LG/Vu monitors, and they only show HDR content at its best when the curve id is set properly in the AVI infoframes.

Hope this helps to understand the real need for the curve id.
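To illustrate steps 3 and 4, here is a rough sketch of how a compositor could hand the curve id to the kernel through the HDR output metadata connector property (this is only my sketch, assuming a recent kernel/libdrm that exposes HDR_OUTPUT_METADATA and struct hdr_output_metadata; the property-id lookup and error handling are omitted):

```c
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>   /* pulls in the DRM uapi headers (struct hdr_output_metadata) */

/* Hand the coded curve id to the kernel so it can put it into the
 * infoframes sent to the sink. The HDR_OUTPUT_METADATA property id is
 * assumed to have been looked up by name beforehand; mastering-display
 * and luminance fields are left at zero in this sketch. */
int set_output_curve_st2084(int drm_fd, uint32_t connector_id,
                            uint32_t hdr_metadata_prop_id)
{
    struct hdr_output_metadata meta;
    uint32_t blob_id;
    int ret;

    memset(&meta, 0, sizeof(meta));
    meta.metadata_type = 0;                  /* HDMI static metadata type 1 */
    meta.hdmi_metadata_type1.eotf = 2;       /* SMPTE ST 2084 (PQ), CTA-861-G coding */

    ret = drmModeCreatePropertyBlob(drm_fd, &meta, sizeof(meta), &blob_id);
    if (ret)
        return ret;

    return drmModeObjectSetProperty(drm_fd, connector_id,
                                    DRM_MODE_OBJECT_CONNECTOR,
                                    hdr_metadata_prop_id, blob_id);
}
```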

I have argued that it might be a better idea to simply set the display into HDR mode, choose a curve, and stick to that for the whole lifespan of the output. I think that’s a better idea than changing the curve depending on what’s shown on the output, because that would effectively change the color space at seemingly arbitrary points in time, and the colors and/or brightness would suddenly be a bit different.

Obviously, changing the curve depending on what’s being shown could improve accuracy, but I’m not sure how bad sticking with one curve would actually be. The display also has to do some kind of conversion if it supports multiple curves, so why should the display be a better place to do it than the compositor?

Thank you and @dutch_wolf. That was really insightful. It also settles that the compositor must honor vcgt tags in the ICC profile. I’ll update the protocol to reflect that when I send in the next RFC.

With all that cleared up: what is it that we actually need for profiling and calibration then that the color management protocol doesn’t do yet? Let’s just assume that compositors use colord to assign profiles to outputs.

You can set the vcgt tag to do calibration. You can do a “null transform”. Anything particular missing?

As discussed internally, I still think this is not a good idea, as it means that if you plug in an HDR-capable monitor, you will always be driving HDR output regardless of whether the content is HDR or not. Even when you set an SDR desktop wallpaper, you will always be doing SDR->HDR tone mapping and wide-gamut color conversion. That means heavy GPU load per frame, heavy power consumption, a loaded system, etc. Is this worth it just to maintain the validity of a color profile? I don’t think so.

Switching to the HDR output color profile on a need basis, i.e. while playing an HDR video, makes better sense to me at least. I have given this example before; quoting it again here: “It’s like always driving the car in 5th gear, just because it has 5 gears”.

We’ll have to do color space conversions either way because pretty much every client is sRGB and no monitor really is. Having to do tone mapping won’t change much there. Hopefully we’ll be able to offload most of that to the display engine on the GPU.
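To be concrete about what such a conversion amounts to, a minimal sketch of the linear-light matrix step (the matrix is the published BT.709-to-BT.2020 one from ITU-R BT.2087; it has to be applied after the EOTF, i.e. on linear values, and a real pipeline would of course also handle the transfer curves and tone mapping):

```c
/* Convert a linear-light BT.709/sRGB triple to BT.2020 primaries.
 * Matrix per ITU-R BT.2087; input and output are linear, not encoded,
 * so this sits between the EOTF and the re-encoding step. */
void rgb709_to_rgb2020(const float in[3], float out[3])
{
    static const float m[3][3] = {
        {0.6274f, 0.3293f, 0.0433f},
        {0.0691f, 0.9195f, 0.0114f},
        {0.0164f, 0.0880f, 0.8956f},
    };

    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
}
```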

Colors and brightness changing slightly depending on what content is being shown? That sounds horrible tbh.

Currently, if you look at the DRM KMS layer, no colorspace conversion is happening there at all. The KMS colorspace property was only added a few months ago, so I don’t think that’s the case.

Colors and brightness changing slightly depending on what content is being shown? That sounds horrible tbh.

And why is that so? When you play back SDR content you see SDR output, and when you play back HDR content you see HDR output; why is that wrong?

You’re mixing HDR with SDR content most of the time. It’s not as easy as “everything is HDR or everything is SDR”. Suddenly having to do another CSC and tone mapping because a small portion of the screen has HDR content will produce slightly different results on the whole SDR area.

Well yeah, as I said, I hope that we can use the display engine; I didn’t say that it currently does. Currently we’re not doing any CSC or tone mapping in weston at all.

Ok, sounds good I think. As said, the main thing that I think needs to be included in the CM protocol is a way to set calibration (temporarily) on-the-fly during the actual calibration procedure (it should probably be made clear in the docs that it is for use by calibration software only). I’m hoping @gwgill will chime in on this.

Just my 5 cents on mixing HDR and SDR:

I would tend to agree. You would need different color profiles for a display’s HDR and SDR modes anyway, and the nature of current HDR implementations in displays makes accurate profiling in HDR mode challenging (I can elaborate on that if needed, also see below).

Hmm, not really. HDR content is currently limited to HDR videos, movies and games, typical “fullscreen” use cases.

It’s not as easy as “everything is HDR” either. HDR-capable displays in HDR mode (i.e. when appropriate metadata is received) do their own processing, quite extensive compared to SDR, including tone mapping, which is often not of the highest quality (e.g. simple per-channel roll-off in RGB, maybe even just clipping, and not necessarily very linear, especially if dynamic) and depends on the current picture content, meaning that at the very least the light output is usually going to be adjusted on-the-fly by the display. On top of that, in HDR mode those displays will also draw (probably considerably) more power.

Just an idea:
Only switch a display to HDR mode if HDR content (video, games etc.) is played fullscreen. Otherwise, leave the display in SDR mode, and apply HDR to SDR tone mapping if an HDR source is played.
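As a rough illustration of what such HDR-to-SDR tone mapping could look like (a deliberately naive sketch using an extended Reinhard curve on luminance; the curve choice is just an assumption for the example, not a recommendation, and the reference-white/peak levels would be parameters chosen by the compositor):

```c
/* Deliberately naive HDR->SDR tone mapping (extended Reinhard curve on
 * luminance). Input is absolute luminance in nits, output is a [0,1]
 * SDR signal where 1.0 means the HDR peak has been rolled off to SDR
 * white. A real compositor would want a better curve and would also
 * have to deal with colour, not just luminance. */
float tonemap_hdr_to_sdr(float nits, float sdr_white_nits, float hdr_peak_nits)
{
    float l = nits / sdr_white_nits;             /* 1.0 == SDR reference white */
    float lmax = hdr_peak_nits / sdr_white_nits; /* HDR peak on the same scale */

    /* Extended Reinhard: ~linear near 0, maps lmax exactly to 1.0. */
    return l * (1.0f + l / (lmax * lmax)) / (1.0f + l);
}
```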

Can ICC profiles even be used for HDR displays? Is that in the spec?

HDR in fullscreen only definitely gets rid of the problem, but it seems like a big cop-out to me.

I get that current HDR displays can’t really be profiled, and my concern isn’t absolute color accuracy but apparent changes where there shouldn’t be any. If I can see the color and/or brightness of parts of the display change without an obvious reason, I’d be under the impression that something is wrong. Just like flickering or glitches when moving or resizing windows under some X configurations, it really doesn’t make a good impression, and Wayland is trying to avoid all of those graphical glitches.

Either way, I really don’t have a strong opinion, and it’s one of those things where actually testing both ideas might be required. It might be that something you thought was a problem turns out to be no problem at all, and the other way around.

It might also be a good idea to figure out how others have solved the problem. Unfortunately I don’t think any open-source software has done so yet.

AFAIK HDR displays are theoretically no different from SDR displays from a color management perspective, so it would surprise me if there were anything in the ICC spec specifically about HDR displays, but in practice their current behavior makes them unsuitable for color-accurate work.

Hmm, not really. HDR content is currently limited to HDR videos, movies and games, typical “fullscreen” use cases.

The latest version of Krita is able to edit images in HDR.

One use case of this is creating content for HDR TVs. Here is an interesting blog post from Netflix about their problems creating HDR images:
https://medium.com/netflix-techblog/enhancing-the-netflix-ui-experience-with-hdr-1e7506ad3e8

I don’t know if ICC can even work for an HDR screen. Check out the official spec to make sure. But remember, the whole point of this is colour-accurate work, up to a point.