Wayland color management

The Linux Kernel 5.3 will have HDR support for some Intel chips:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=417f2544f48c19f5958790658c4aa30b0986647f

ToDo:

  • Wayland support for HDR
  • Toolkit support for HDR
  • Application support for HDR

Is there any documentation on how the DRM HDR bits work? (If I ever get myself an HDR screen it might be interesting to play with the HDR modes directly from the console, to try things out regarding measuring/profiling.)

Anyway, this reminds me that some time back I found some documentation on how Apple is planning to do HDR. It can be found here: https://developer.apple.com/documentation/metal/presentation_objects/displaying_hdr_content_in_a_metal_layer

Some observations:

  • There are three options:
    1. Mastered content (already in rec2020-{pq,hlg})
    2. Non-mastered content where the application wants to use the system's tone mapping
    3. Non-mastered content where the application wants to do its own tone mapping (e.g. when speed is important or when using a reference display)
  • It's heavily tied to the built-in color manager of macOS
  • Options 2 and 3 are only available when using Metal; option 1 can also be done with AVFoundation (I think)

I think that a Wayland protocol should at least provide options 1 and 3, with the caveat that option 3 should only be available when the hardware is capable of it (e.g. when it provides the FREESYNC2 HDR display modes). Option 2 would be nice to have, but it can also be provided by toolkits and/or a Vulkan extension, in which case it can be mapped to either option 1 or option 3 depending on hardware capabilities (and maybe user choice). Above all else it seems clear this needs to be tied into any color management protocol (either as a version 2 or as a protocol that builds on top of the core color management protocol).

I think you need to contact Uma Shankar <uma.shankar-AT-intel.com> from Intel to get more information.

Glancing over the above commit, it’s clear that what is meant by “HDR support”, somewhat vaguely, is the ability to pass HDR10 metadata (infoframe) to a connected HDR10-capable display, so that it may switch to its HDR10 mode.
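
Concretely, from userspace that boils down to the connector gaining an HDR_OUTPUT_METADATA blob property. A minimal sketch of using it, assuming libdrm headers new enough to have struct hdr_output_metadata and an atomic-capable driver; the connector and property IDs here are placeholders that would normally be looked up via drmModeObjectGetProperties():

```c
/* Sketch: signal HDR10 (SMPTE ST 2084 / PQ) on a connector via the
 * HDR_OUTPUT_METADATA property. conn_id/prop_id are placeholders. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_mode.h>      /* struct hdr_output_metadata */

static int set_hdr10_metadata(int fd, uint32_t conn_id, uint32_t prop_id)
{
    struct hdr_output_metadata meta;
    memset(&meta, 0, sizeof(meta));

    meta.metadata_type = 0;                    /* static metadata type 1 */
    meta.hdmi_metadata_type1.eotf = 2;         /* SMPTE ST 2084 (PQ) */
    meta.hdmi_metadata_type1.metadata_type = 0;
    /* Mastering luminance: max in cd/m2, min in 0.0001 cd/m2 units */
    meta.hdmi_metadata_type1.max_display_mastering_luminance = 1000;
    meta.hdmi_metadata_type1.min_display_mastering_luminance = 50;  /* 0.005 cd/m2 */
    meta.hdmi_metadata_type1.max_cll  = 1000;  /* max content light level */
    meta.hdmi_metadata_type1.max_fall = 400;   /* max frame-average light level */

    uint32_t blob_id;
    if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
        return -1;

    /* Attach the blob to the connector in an atomic commit. */
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, conn_id, prop_id, blob_id);
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);
    return ret;
}
```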

So pretty basic then. Well, it is a start; hopefully they implemented it in a future-proof way, since although most screens that currently support HDR are consumer focused, I do expect productivity-focused screens to begin dropping in price as well.[1] And although those screens are not totally useless with only HDR10 support, it would be much better if they could be driven directly.

Now of course this is with my rather limited understanding of HDR10 infoframes; maybe those are already flexible enough to do the above?


[1] For example, Apple’s new productivity-focused HDR screen seems expensive at 5k, but as a screen that can be used as a reference display it is competing with screens that go for 10k to 20k. Non-reference displays for productivity work should become cheaper as well.

Note that I was only commenting on the kernel commit; the Apple implementation looks to go beyond that (mixing of “SDR” and HDR on supported displays). It will be interesting to see how sophisticated their approach really is.

If I am right, the mixing happens on the macOS side of things (inside the built-in color manager) and the display will “always” be in HDR mode. I suspect this because of how the EDR value behaves, as described here for normal tone mapping and here for some notes on reference displays, and because it is required to set (some) of the color management bits. Now of course this is only the software side of things (and even then only the parts visible to application developers); how exactly it is implemented is currently unknown.

Yep, that much seems to be clear. It will be interesting to see how they handle an HDR10 display’s own (potentially, and likely undefeatable) tone mapping in HDR10 mode. Maybe they’ll rely on the display being intelligent enough not to do any of its own tone mapping if the HDR metadata doesn’t exceed the display’s peak luminance capabilities, but who knows.

Probably has to, although they may (have to?) limit it to those that can have HDR (or rather, high luminance) on/off on a per-pixel basis?

My suspicion is a special HDR mode that isn’t HDR10, similar to what AMD seems to be doing with Freesync2HDR; it might in fact be the same technology! Apple is known to officially support only AMD, so this wouldn’t be a surprising thing to be honest, especially since the documentation mentions that one reason for doing your own tone mapping is latency, which is the same reason AMD has for introducing Freesync2HDR. Of course your option would also be valid, although probably only implemented for screens that don’t support the special display modes (since those screens still have the latency issue).

Quite likely; that is why I put ‘always’ in quotes, since there is probably some smart thing going on in the background.


Just FYI, I have a working prototype of color management in weston, and the plan is to evaluate DRM leasing for calibration/profiling, but that also requires implementing a WIP Wayland leasing protocol and some more protocol design around how to handle input for leased-out desktop outputs.
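
For anyone following along: the kernel/libdrm side of leasing is essentially a single call; the missing piece is the Wayland protocol for handing the resulting fd to the client. A rough sketch of the compositor side, assuming libdrm's drmModeCreateLease() and placeholder object IDs:

```c
/* Sketch: lease a CRTC + connector (+ plane) to another process.
 * The return value is a new, restricted DRM fd the lessee can use for
 * modesetting/flipping on just these objects. IDs are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int lease_output(int drm_fd, uint32_t crtc_id, uint32_t conn_id,
                        uint32_t plane_id, uint32_t *lessee_id)
{
    uint32_t objects[] = { crtc_id, conn_id, plane_id };

    /* O_CLOEXEC for the new fd; the compositor can later take the output
     * back with drmModeRevokeLease(drm_fd, *lessee_id). */
    return drmModeCreateLease(drm_fd, objects, 3, O_CLOEXEC, lessee_id);
}
```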


This one is interesting. I thought about making the user’s brightness setting just change the tone curve instead of dimming the backlight. It has the drawback that the power draw stays the same. Their solution seems to be a hybrid where, below a certain threshold, the backlight is dimmed.
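
A tiny sketch of what such a hybrid could look like; the names and the threshold value are entirely hypothetical, it only illustrates the split between digital dimming and backlight dimming:

```c
/* Hypothetical hybrid brightness control:
 * user_brightness in [0,1]; below `threshold` the backlight itself dims. */
struct brightness_split {
    double backlight;   /* physical backlight level, [0,1] */
    double tone_scale;  /* linear-light scale applied via the tone curve, [0,1] */
};

static struct brightness_split split_brightness(double user_brightness)
{
    const double threshold = 0.25;   /* made-up value */
    struct brightness_split s;

    if (user_brightness >= threshold) {
        s.backlight  = 1.0;                          /* keep headroom for HDR */
        s.tone_scale = user_brightness;              /* dim SDR content digitally */
    } else {
        s.backlight  = user_brightness / threshold;  /* actually dim the panel */
        s.tone_scale = threshold;
    }
    /* Effective brightness backlight * tone_scale equals user_brightness
     * in both branches, so the transition is continuous. */
    return s;
}
```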

That is good news. I do have some questions regarding using DRM leasing for calibration/profiling:

  1. We need to be able to lease the primary screen even on a single-screen system, and on a multi-screen system it would be preferable not to treat it as disconnecting the monitor (as current experiments in compositors do), at least in this case; in other cases it might be preferable. I think this is possible, but I don’t think the current lease protocol allows for this possibility.
  2. Are all the DRM/DRI/KMS interfaces the same with regard to setting the calibration/gamma curve (we don’t want to have to implement a different way of doing things for every graphics card), and can we take over the screen without needing to redo modesetting? (See the gamma sketch below.)
  3. This would eliminate the compositor from the drawing path completely, and although I don’t think it would be a huge problem, I do worry that if a compositor decides not to use the gamma/calibration curves but, for example, a shader, this might change the output, which would invalidate the profile. I think this is unlikely, but until further testing it can’t be completely ruled out.

Either method would work, I think; to a certain extent this could be implementation specific.
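
For reference on question 2, the generic KMS interface here is the per-CRTC gamma ramp: either the legacy drmModeCrtcSetGamma() call or, on atomic drivers, the GAMMA_LUT/GAMMA_LUT_SIZE (and DEGAMMA_LUT/CTM) CRTC properties taking a blob of struct drm_color_lut entries. A minimal sketch using the legacy call, assuming libdrm; the crtc_id is a placeholder and the LUT size would come from drmModeGetCrtc()->gamma_size:

```c
/* Sketch: load a simple 1D calibration curve into the CRTC gamma LUT. */
#include <math.h>
#include <stdint.h>
#include <stdlib.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int load_gamma(int fd, uint32_t crtc_id, uint32_t size, double gamma)
{
    uint16_t *r = calloc(size, sizeof(*r));
    uint16_t *g = calloc(size, sizeof(*g));
    uint16_t *b = calloc(size, sizeof(*b));
    if (!r || !g || !b)
        return -1;

    for (uint32_t i = 0; i < size; i++) {
        double in = (double)i / (size - 1);
        uint16_t out = (uint16_t)(pow(in, 1.0 / gamma) * 0xffff + 0.5);
        r[i] = g[i] = b[i] = out;   /* same curve on all channels here */
    }

    int ret = drmModeCrtcSetGamma(fd, crtc_id, size, r, g, b);
    free(r); free(g); free(b);
    return ret;
}
```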

Not sure if I understand you. The video doesn’t show desktop outputs getting leased. Sway doesn’t support that at all right now.

What the compositor does when the DRM resources are leased out is up to the compositor. It could act like the output got disconnected, or pretend that it’s still there but you temporarily can’t see it.

The interfaces are common between different hardware, yes.

I think so, yes. You can specify in the atomic commit if you want to allow modesetting to happen or not (but I also don’t see why this would be a problem anyway).

If it applies the calibration curve incorrectly then it’s broken. If it applies it with too little precision, it’s broken. If it applies it with a higher precision nothing should change.

Obviously you’re right, we have to verify that everything works as we expect it to work and that’s why I want to do this.

Second paragraph under the “wlroots & sway implementation” header.

So what would normally happen to a compositor if we disconnect the last screen? In the above scenario that could happen on single-screen setups that want to calibrate/profile, and currently I suspect that many a compositor would crash or exit. So it might be acceptable for setups with two or more displays (not sure what the best user experience would be; disconnecting will move all applications to the other screen(s), but on the other hand it probably won’t restore everything when the screen comes back), but it probably won’t be acceptable for single-display setups. So in the case of single-display setups the protocol will need to specify what needs to happen (even if you disconnect, you want everything else to keep running, so you need some backbuffer/virtual output in that case anyway).

Does that make more sense?

Okay, that is good! I had heard that not everything is the same with regard to different HW, although I think that mostly has to do with the 3D/compute engines.

It is a bit more of a nice-to-have; that way a calibrator/profiler can just focus on the calibration/profiling without also needing to modeset.

You are right, we need to keep an eye on this but if we can make it work it would be acceptable (I think).

Mh, for our use case that’s not the best behavior at all.

The resource is only gone temporarily, you can just stop presenting to it but otherwise pretend it’s still there. It would seem very much like fullscreen.

Maybe. Not sure yet.

Yeah, I got it.

Well, the hardware itself can and will be different, but the interface to the hardware will be the same. Hopefully all hardware uses enough precision, but it’s one more reason why we should have access to all the hardware: to verify that the per-plane and the pipe color pipelines don’t screw up.

Btw, thanks for writing the mail to the wayland-devel list. It reminded me that I have to get back on that.


I reached this forum after weeks of trying to get all my graphics applications to use the same color profile. “colord”, “dispwin”, “oyranos”: simply, a madness. I was betting on a new “color management” era coming with Wayland, but after reading the Wayland planning here, I’m certainly frustrated. I basically work on photos for large prints, using AdobeRGB, with Darktable, Digikam, Krita and GIMP, but it’s a headache to get them all working with the same ICC profile on my AdobeRGB-calibrated monitor. We are in 2019, and as someone said among these posts, if Wayland doesn’t consider color management a priority, I’ll definitely abandon Linux, despite having fought for Linux for years.


What’s the TL;DR on that now?


That’s a bad idea at 2 different levels:

  1. brightness/contrast adjustments (compensating for the Stevens and Bartleson–Breneman effects) need to know the surround/display luminance ratio, and as such need to be fully separated from the white luminance scaling,
  2. artificially limiting the peak luminance through the OETF means you will lose at least half of your encoding bit bandwidth. That might be hidden using a clever encoding such as the logic behind the PQ tone curve (https://www.smpte.org/sites/default/files/2014-05-06-EOTF-Miller-1-2-handout.pdf), but that assumes the end display can decode it properly, and as such disqualifies regular desktop monitors. With no such encoding, beware of quantization issues.

You want your backlighting to stay an analog thing.
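
For reference, the PQ encoding referred to in point 2 is the SMPTE ST 2084 EOTF. A small sketch of the decode side, with the constants taken from the spec; the input is the normalized signal value E′ in [0,1] and the output is absolute luminance in cd/m² (peak 10000):

```c
/* SMPTE ST 2084 (PQ) EOTF: normalized code value -> absolute luminance. */
#include <math.h>

static double pq_eotf(double ep)
{
    const double m1 = 2610.0 / 16384.0;          /* 0.1593017578125 */
    const double m2 = 2523.0 / 4096.0 * 128.0;   /* 78.84375 */
    const double c1 = 3424.0 / 4096.0;           /* 0.8359375 */
    const double c2 = 2413.0 / 4096.0 * 32.0;    /* 18.8515625 */
    const double c3 = 2392.0 / 4096.0 * 32.0;    /* 18.6875 */

    double e   = pow(ep, 1.0 / m2);
    double num = fmax(e - c1, 0.0);
    double den = c2 - c3 * e;
    return 10000.0 * pow(num / den, 1.0 / m1);   /* cd/m^2 */
}
```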

@swick is trying to interpret the macOS documentation here; I think the documentation is speaking about the brightness level of non-color-managed/non-HDR[1] sources there (note that it says that even if the user-set brightness level is lower than max, the max might still be available for HDR applications). I am not sure what happens with color-managed applications that are non-HDR, since as you point out it can play havoc on all kinds of things. Currently Apple is rolling out their first HDR screen, so only time will tell how it all works.

About your earlier question: it is really slow going. Chromium seems to want it, but instead of implementing one of the actual proposals (so that we can push the compositors to adopt it) they just expose the internal Chromium color manager over a Wayland protocol (which means having primaries and matrix as 2 separate entities; I don’t know who thought that was a good idea, and due to the way errors work in Wayland, anyone implementing that is in for a bad time).


[1] According to the docs I found (see links earlier), all HDR content will be color managed to some extent on macOS.

Usually people who don’t make a living pushing pixels for demanding clients.