Any documentation on how the DRM HDR bits work? (If I ever get myself an HDR screen it might be interesting to play with the HDR modes from the console directly, to try stuff out regarding measuring/profiling.)
Non mastered content but want to use systems tonemapping
Non mastered content but want to do own tonemapping (e.g. when speed is important or when using a reference display)
it's heavily tied to the built-in color manager of MacOS
options 2 and 3 are only available when using Metal; option 1 can also be done with AVFoundation (I think)
I think that a wayland protocol should at least provide option 1 and option 3, with the caveat that option 3 should only be available when the HW is capable of doing so (e.g. when it provides the FREESYNC2 HDR display modes). Option 2 would be nice to have but can also be provided by toolkits and/or a Vulkan extension, in which case it can be mapped to either option 1 or 3 depending on HW capabilities (and maybe user choice). Above all else it seems clear it needs to be tied into any color management protocol (either as a version 2 or as a protocol that builds on top of the core color management protocol).
Glancing over the above commit, it's clear that what is meant by "HDR support", somewhat vaguely, is the ability to pass HDR10 metadata (infoframe) to a connected HDR10-capable display, so that it may switch to its HDR10 mode.
So pretty basic then; well, it is a start. Hopefully they implemented it in a future-proof way, since although currently most screens that support HDR are consumer focused, I do expect productivity-focused screens to begin dropping in price as well.[1] And although those screens are not totally useless with only HDR10 support, it would be much better if they could be driven directly.
Now of course this is with my rather limited understanding of HDR10 infoframes; maybe those are already flexible enough to do the above?
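For reference on how the kernel side exposes this: a minimal sketch, assuming the HDR_OUTPUT_METADATA connector property and struct hdr_output_metadata from the KMS uAPI (drm_mode.h). The luminance values are made-up placeholders, property-ID lookup and error handling are elided, and this is untested.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>   /* pulls in drm_mode.h: struct hdr_output_metadata */

/* Attach HDR10 static metadata to a connector via the atomic API. */
static int set_hdr10_metadata(int drm_fd, uint32_t connector_id,
                              uint32_t prop_id /* id of "HDR_OUTPUT_METADATA" */)
{
    struct hdr_output_metadata meta = {
        .metadata_type = 0,            /* HDMI_STATIC_METADATA_TYPE1 */
        .hdmi_metadata_type1 = {
            .eotf = 2,                 /* SMPTE ST 2084 (PQ) */
            .metadata_type = 0,        /* static metadata descriptor type 1 */
            /* CTA-861-G units: max in cd/m^2, min in 0.0001 cd/m^2 */
            .max_display_mastering_luminance = 1000,
            .min_display_mastering_luminance = 50,   /* 0.005 cd/m^2 */
            .max_cll  = 1000,
            .max_fall = 400,
        },
    };

    uint32_t blob_id;
    drmModeCreatePropertyBlob(drm_fd, &meta, sizeof(meta), &blob_id);

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, connector_id, prop_id, blob_id);
    int ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);
    return ret;
}
```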
[1] For example Apple's new productivity-focused HDR screen seems expensive at 5k, but as a screen that can be used as a reference display it is competing with screens that go for 10k to 20k. Non-reference displays for productivity work should become cheaper as well.
Note that I was only commenting on the kernel commit; the Apple implementation looks to go beyond that (mixing of "SDR" and HDR on supported displays). It will be interesting to see how sophisticated their approach really is.
If I am right, the mixing happens on the MacOS side of things (inside the built-in color manager) and the display will "always" be in HDR mode. I suspect this due to how the EDR value behaves, as described here for normal tonemapping and here for some notes on reference displays, and because it is required to set (some) of the color management bits. Now of course this is only the software side of things (and even then only the parts visible to application developers); how exactly it is implemented is currently unknown.
Yep, that much seems to be clear. It will be interesting to see how they handle an HDR10 display's own (potentially and likely undefeatable) tone mapping in HDR10 mode; maybe they'll rely on the display to be intelligent enough to not do any of its own tone mapping if the HDR metadata doesn't exceed the display's peak luminance capabilities, but who knows.
Probably has to, although they may (have to?) limit it to those that can have HDR (or rather, high luminance) on/off on a per-pixel basis?
My suspicion is a special HDR mode that isn't HDR10, similar to what AMD seems to be doing with Freesync2HDR; it might in fact be the same technology! Apple is known to only support AMD officially, so this wouldn't be a surprising thing to be honest, especially since the documentation mentions that one reason for doing your own tone-mapping is latency, which is the same reason AMD has for introducing Freesync2HDR. Of course your option would also be valid, although probably only implemented for screens that don't support the special display modes (since those screens still have the latency issue).
Quite likely; that is why I put "always" in quotes, since there probably is some smart thing going on in the background.
Just FYI, I have a working prototype of the color management in weston, and the plan is to evaluate DRM leasing for calibration/profiling, but that also requires implementing a WIP wayland leasing protocol and some more protocol design around how to handle input for leased-out desktop outputs.
This one is interesting. I thought about making the user's brightness setting just change the tone curve instead of dimming the backlight. It has the drawback that the power draw will be the same. Their solution seems to be a hybrid where under a certain threshold the backlight is dimmed.
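A rough sketch of how such a hybrid could look (this is just my reading of it, not Apple's actual algorithm; the threshold value and field names are made up for illustration):

```c
/* Illustrative only: split a user brightness setting (0.0 .. 1.0) into a
 * backlight level and a tone-curve scale. Above the threshold only the SDR
 * tone curve is scaled (keeping headroom for HDR); below it the backlight
 * itself is dimmed to actually save power. The 0.3 threshold is arbitrary. */
struct brightness_split {
    double backlight;   /* 0.0 .. 1.0, what the panel backlight is set to */
    double sdr_scale;   /* multiplier applied to the SDR tone curve */
};

static struct brightness_split split_brightness(double user_brightness)
{
    const double threshold = 0.3;
    struct brightness_split out;

    if (user_brightness >= threshold) {
        out.backlight = 1.0;                  /* keep full HDR headroom */
        out.sdr_scale = user_brightness;      /* dim SDR content only */
    } else {
        out.backlight = user_brightness / threshold;  /* really dim the panel */
        out.sdr_scale = threshold;
    }
    return out;
}
```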
That is good news. I do have some questions regarding using DRM leasing for calibration/profiling:
We need to be able to lease the primary screen even on a single-screen system, and on a multi-screen system it would be preferable not to treat it as disconnecting the monitor (as current experiments in compositors do), at least in this case; in other cases that might be preferable. I think this is possible, but I don't think the current lease protocol allows for this possibility.
Are all the DRM/DRI/KMS interfaces the same with regards to setting the calibration/gamma curve (we don't want to have to implement a different way of doing things for every graphics card), and can we take over the screen without needing to redo modesetting?
This would eliminate the compositor from the drawing path completely, and although I don't think it would be a huge problem, I do worry that if a compositor decides not to use the gamma/calibration curves but, for example, a shader, this might change the output, which would invalidate the profile. I do think this is unlikely, but until further testing it can't be completely ruled out.
Either method would work; I think to a certain extent this could be implementation specific.
Not sure if I understand you. The video doesn't show desktop outputs getting leased. Sway doesn't support that at all right now.
What the compositor does when the DRM resource is leased out is up to the compositor. It could act like the output got disconnected, or pretend that it's still there but you can't see it temporarily.
The interfaces are common between different hardware, yes.
I think so, yes. You can specify in the atomic commit if you want to allow modesetting to happen or not (but I also don't see why this would be a problem anyway).
If it applies the calibration curve incorrectly then it's broken. If it applies it with too little precision, it's broken. If it applies it with a higher precision, nothing should change.
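To make the "common interface" part concrete: a minimal sketch (untested; GAMMA_LUT and struct drm_color_lut as defined in the KMS uAPI, property-ID and GAMMA_LUT_SIZE lookup elided) of loading a calibration curve into a CRTC through the atomic API without allowing a modeset. For brevity a single curve is applied to all three channels.

```c
#include <stdint.h>
#include <stdlib.h>
#include <xf86drm.h>
#include <xf86drmMode.h>   /* pulls in drm_mode.h: struct drm_color_lut */

/* Load a 1D calibration curve into the CRTC GAMMA_LUT property.
 * lut_size should come from the read-only GAMMA_LUT_SIZE property. */
static int set_gamma_lut(int drm_fd, uint32_t crtc_id,
                         uint32_t gamma_lut_prop_id, uint32_t lut_size,
                         const uint16_t *curve /* lut_size entries, 0..65535 */)
{
    struct drm_color_lut *lut = calloc(lut_size, sizeof(*lut));
    for (uint32_t i = 0; i < lut_size; i++)
        lut[i].red = lut[i].green = lut[i].blue = curve[i];

    uint32_t blob_id;
    drmModeCreatePropertyBlob(drm_fd, lut, lut_size * sizeof(*lut), &blob_id);

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, crtc_id, gamma_lut_prop_id, blob_id);
    /* No DRM_MODE_ATOMIC_ALLOW_MODESET flag: the commit fails rather than
     * triggering a full modeset, which is what you want when only updating LUTs. */
    int ret = drmModeAtomicCommit(drm_fd, req, 0, NULL);
    drmModeAtomicFree(req);
    free(lut);
    return ret;
}
```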
Obviously you're right, we have to verify that everything works as we expect it to work, and that's why I want to do this.
Second paragraph under "wlroots & sway implementation" header
So what would normally happen to a compositor if we disconnect the last screen? In the above scenario this could happen on single-screen setups that want to calibrate/profile; currently I suspect that many a compositor would crash or exit. So it might be acceptable for setups with two or more displays (not sure what the best user experience would be: disconnecting will move all applications to the other screen(s), but on the other hand it probably won't restore everything when the screen comes back), but it probably won't be acceptable for single-display setups. So in the case of single-display setups the protocol will need to specify what happens (even if you disconnect, you want everything else to keep running, so you need some backbuffer/virtual output in that case anyway).
Does that make more sense?
Okay, that is good! I had heard that not everything is the same with regard to different HW, although I think that mostly has to do with the 3D/compute engines.
It is a bit more of a nice-to-have; that way a calibrator/profiler can just focus on the calibration/profiling without also needing to modeset.
You are right, we need to keep an eye on this but if we can make it work it would be acceptable (I think).
Mh, for our use case that's not the best behavior at all.
The resource is only gone temporarily; you can just stop presenting to it but otherwise pretend it's still there. It would seem very much like fullscreen.
Maybe. Not sure yet.
Yeah, I got it.
Well, the hardware itself can and will be different; the interface to the hardware will be the same. Hopefully all hardware uses enough precision, but it's one more reason why we should have access to all the hardware: to verify that the per-plane and the pipe color pipelines don't screw up.
Btw, thanks for writing the mail to the wayland-devel list. It reminded me that I have to get back on that.
I reached this forum after weeks of trying to get all my graphics applications to use the same color profile. "colord", "dispwin", "oyranos"... simply, a madness... I was betting on a new "color management" era coming with Wayland, but after reading here the planning of Wayland, I'm certainly frustrated. I work mainly on photos for large prints, using AdobeRGB, with Darktable, Digikam, Krita and GIMP, basically, but it's a headache to get them all working with the same ICC profile on my AdobeRGB-calibrated monitor. We are in 2019, and as someone said among these posts, if Wayland doesn't consider color management a priority, I'll definitively abandon Linux, despite having fought for Linux for years.
brightness/contrast adjustments (compensating for Stevens & Bartleson–Breneman effects) need to know the surround / display luminance ratio, and as such need to be fully separated from the white luminance scaling,
artificially limiting the peak luminance through the OETF means you will lose at least half of your encoding bits bandwidth. That might be hidden using a clever encoding such as the logic behind the PQ tone curve (https://www.smpte.org/sites/default/files/2014-05-06-EOTF-Miller-1-2-handout.pdf), but that assumes the end display can decode it properly, and as such disqualifies regular desktop monitors. With no such encoding, beware the quantization issues (see the rough numbers after this list).
You want your backlighting to stay an analog thing.
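To put some rough numbers on the quantization point: this is my own back-of-the-envelope illustration (not the poster's exact math), assuming a simple power-law OETF with exponent 2.2 and full-range 8/10-bit encodings.

```c
/* Back-of-the-envelope: how many code values remain usable when peak
 * luminance is limited to a fraction of the encoding's nominal peak,
 * comparing a gamma-2.2 OETF against a plain linear encoding. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double gamma = 2.2;                    /* assumed OETF exponent */
    const double limits[] = { 0.5, 0.25, 0.1 };  /* fraction of nominal peak */
    const int bits[] = { 8, 10 };

    for (int b = 0; b < 2; b++) {
        int codes = (1 << bits[b]) - 1;
        for (int i = 0; i < 3; i++) {
            double frac = limits[i];
            int gamma_codes  = (int)(codes * pow(frac, 1.0 / gamma));
            int linear_codes = (int)(codes * frac);
            printf("%2d-bit, peak limited to %4.0f%%: "
                   "gamma-coded %4d/%d codes, linear %4d/%d codes\n",
                   bits[b], frac * 100.0,
                   gamma_codes, codes, linear_codes, codes);
        }
    }
    return 0;
}
```

(Compile with -lm.) The gamma curve hides part of the loss, while a linear encoding loses code values in direct proportion to the luminance limit, which is where the banding risk comes from.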
@swick is trying to interpret the Mac OSX documentation here; I think the documentation is speaking about the brightness level of non-color-managed/non-HDR[1] sources there (note that it says that even if the user-set brightness level is lower than max, the max might still be available for HDR applications). I am not sure what happens with color-managed applications that are non-HDR, since as you point out it can play havoc on all kinds of things. Currently Apple is rolling out their first HDR screen, so only time will tell how it all works.
About your earlier question: it is really slow going. Chromium seems to want it, but instead of implementing one of the actual proposals (so that we can push the compositors to adopt it) they just expose the internal Chromium color manager over a wayland protocol (which means having primaries and matrix as 2 separate entities; I don't know who ever thought that was a good idea, and due to the way errors work in wayland anyone implementing that is in for a bad time).
[1] According to the docs I found (see links earlier) all HDR content will be color managed to some extent on Mac OS.