Wayland color management

Considering the level of experience and industry involvement that many contributors to this discussion have (myself not included), I think the earlier comment @fhoech made, regarding vcgt being added to the ICC spec, may be a way forward.
If there is a need to go even mildly out of spec, or to push the limits of the ICC spec, then maybe it should be added to the official ICC specification.

Netflix does have their own way of doing things; I am sure they believe they are the specification.

There is already such a tag that currently nobody uses, and both the Netflix blog and @gwgill referenced it, so I think the solutions will end up looking pretty similar.

True, it was just an example that ICC profiles can be made to work for HDR.

However, "made to work" may not be the best idea. What if the ICC decides to add to their specification, or change it, for the HDR use case? No one wants a situation where the early bird gets the poison.
Regardless, as the ICC is such a large industry body, what exactly are they doing about HDR? Is there even one press release?

Considering how important this subject is, should not at least one representative from the ICC be able to comment on the direction of ICC in regard to HDR?

However, if Netflix has decided to use ICC and push the limits of the spec for HDR, then the ICC will most probably follow. I am under no illusion about the power Netflix has to shape the industry.


This is also a good way of doing it: we can give the user an option to always be in HDR mode, or to pick it on an as-needed basis. If the user chooses to do it only when required, we can create an output profile for on-demand HDR conversion; otherwise, if the user chooses to always be in HDR mode, we can create an output profile for that, and the compositor will obey the output profile. Does that sound acceptable to you, @swick?

and probably should be under display power settings.

Or maybe another tab (HDR mode) under the display settings? We can discuss it further.

I don’t know how often I have to say it until you start to listen: weston does not allow glitches.

Sure, a setting to enable or disable HDR, just like resolution, frequency etc., is possible and a very good idea, but you cannot do dynamic switching depending on the content, period.

Well, in that case a fixed HDR color profile is also not going anywhere, because we don't want to punish the end user for buying an HDR monitor, so we can probably pray for a third solution!

Or you know, use your brain and think about it. If all the content on an HDR monitor is SDR you should be able to compose to SDR and do tone mapping with the planes’ color pipelines.

You should also simply fix the commonly used toolkits and applications so they provide their content in HDR when they're being shown on HDR monitors.

Yes, it's hard and requires lots of changes to lots of different code bases, but that's the reality of HDR if you want to be backwards compatible.

Ah, right, let me use my brain 

If everything on an HDR monitor is SDR:

  • It can support SDR straightforwardly by doing nothing, as the monitor is backward compatible; monitors have been doing this for ages.
  • I could go and change the rest of the world (several apps and toolkits, half of the ecosystem) and also produce a tone-mapped result, which would still look different on an SDR monitor, forcing an end user to buy a separate SDR monitor if he wants a non-tone-mapped output.

Anyone sane will know which option to pick, so, pal, it looks like I am not the one who should start using his brain; it's you!

Again, that's not the reality of HDR; it's your interpretation, which is anyway complicated and probably wrong.

I’m done discussing this. You want to do something that results in glitches. It won’t get merged into weston.

Sure, but so does the always-HDR profile!

That might also work; the best place will probably depend on the compositor/desktop in use though, we can only give suggestions.

Apart from tone mapping: anything that dynamically regulates luminance (ASBL / adaptive static brightness limiting, local dimming, auto-dimming, etc.), and white sub-pixels (WOLED, some LCDs), which are added to gain more peak luminance but distort the gamut at the top end. In some cases you may be able to disable dynamic luminance adjustment in a monitor's or TV's service OSD.

Black level is purely related to the panel technology (e.g. [W]OLED); HDR displays thus do not necessarily have better (lower) blacks than non-HDR ones.

Other way 'round. SDR is relative luminance, HDR (at least the SMPTE 2084 “PQ” variant that’s used in HDR10 and Dolby Vision) has a direct relationship between nonlinear signal and absolute luminance.

It's no problem to encode SMPTE 2084 (PQ) in an ICC profile, but for the highest precision it should probably use the v4 multiProcessElementsType (which supports floating-point encoding).
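
For reference, here's a minimal sketch of the ST 2084 (PQ) EOTF that makes the absolute-luminance relationship explicit; the constants are the published PQ ones, but the function itself is just an illustration, not tied to any particular ICC encoding:

```python
def pq_eotf(signal):
    """SMPTE ST 2084 (PQ) EOTF: nonlinear signal in [0, 1] -> absolute luminance in cd/m2."""
    m1 = 2610 / 16384          # ST 2084 constants
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    e = signal ** (1 / m2)
    y = max(e - c1, 0.0) / (c2 - c3 * e)
    return 10000.0 * y ** (1 / m1)

# The same code value means the same absolute luminance on every conforming display:
print(pq_eotf(0.508))  # ~100 cd/m2
print(pq_eotf(1.0))    # 10000 cd/m2
```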


The ICC just specifies a file format, and the file format already seems to contain all the elements that are needed to properly support HDR, even in a mixed HDR/SDR context. What's not specified is how a CMM (or a system making use of such a CMM) should support these things, but that typically hasn't been the focus of the ICC, and the way that HDR is specified already limits your options (which in this case is a good thing) of how things would have to be implemented (or used) to arrive at a workable solution. That is, in a mixed SDR/HDR environment, SDR max relative luminance needs to be mapped to an absolute value, which can be done via a scaling factor. That factor should probably be user configurable, because it depends on the viewing environment; say, in a dim environment, map SDR peak white relative luminance to 100 cd/m2. How this absolute luminance then maps back to the display ICC profile code values, which are also encoded as relative luminance, is decided by another scaling factor, namely the 'lumi' tag in the ICC display profile, which encodes the absolute luminance of peak white.
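
A minimal sketch of that two-scaling-factor idea, assuming an SDR white of 100 cd/m2 and a display peak of 600 cd/m2 purely as example numbers (the function and parameter names are mine, not from any actual CMM):

```python
def sdr_to_display_relative(sdr_relative,
                            sdr_white_cd_m2=100.0,      # user-configurable, depends on viewing environment
                            display_peak_cd_m2=600.0):  # absolute peak white, e.g. from the profile's 'lumi' tag
    """Map SDR relative luminance (1.0 = SDR peak white) into the display
    profile's relative-luminance scale in a mixed SDR/HDR setup."""
    absolute_cd_m2 = sdr_relative * sdr_white_cd_m2     # first scaling factor: SDR white -> absolute
    return absolute_cd_m2 / display_peak_cd_m2          # second scaling factor: absolute -> display-relative

print(sdr_to_display_relative(1.0))   # SDR peak white lands at ~0.167 of the display's range
print(sdr_to_display_relative(0.18))  # SDR mid-grey at ~0.03
```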

That Netflix has decided to use ICC profiles for HDR is big. I wouldn't worry too much about dilution of the standard currently (just IMHO); the problems with HDR when it comes to color accuracy lie elsewhere.


Of course it wasn't a full technical description, but if I read the Netflix blog correctly, they seem to be doing exactly what you describe here. And I think it should work with the new Freesync 2 HDR display modes that (should) give quite a bit more direct display access.

I've not expressed myself very clearly. My point is that, in SDR mode, scene-linear R=G=B=1.0 ("diffuse white") is mapped to the maximum display brightness, and R=G=B=0.18 ("mid-gray") is mapped to 18% of the maximum brightness. That is, there is a direct (or linear) relationship between display brightness and scene-linear values. On the other hand, as you point out, the absolute value of the maximum brightness is device dependent.

In HDR mode, “diffuse white” does not map to maximum display brightness, because there is additional headroom for specular and/or direct highlights. Moreover, this additional headroom is mapped non-linearly, somewhat similar to a film curve.

I'm actually unsure about the SDR case, because I've read in some article that even a standard sRGB monitor applies a power-like OOTF. That is, the input data is encoded with the sRGB OETF, but the display applies an EOTF which is different from OETF^-1. Hence, a non-identity OOTF with a power-like shape is applied, with an exponent around 1.12. As far as I understand, this is to account for the dim reference viewing condition assumed in the sRGB standard.

Is that correct?
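
To make the question concrete, here is a toy sketch of what I mean: compose the sRGB OETF with an assumed pure power-law display EOTF (the 2.4 exponent is only an assumption for illustration) and look at the effective end-to-end exponent:

```python
import math

def srgb_oetf(x):
    """Piecewise sRGB OETF (linear light -> nonlinear signal)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def display_eotf(v, gamma=2.4):
    """Assumed display EOTF: a pure power law (the 2.4 here is just an example)."""
    return v ** gamma

# Effective per-point exponent of the end-to-end OOTF = display_eotf(srgb_oetf(x))
for x in (0.1, 0.18, 0.5):
    y = display_eotf(srgb_oetf(x))
    print(f"{x:.2f} -> {y:.3f}  (effective exponent ~{math.log(y) / math.log(x):.2f})")
```

With those assumptions I get an effective exponent of roughly 1.06 to 1.10 depending on the input level, which is at least in the same ballpark as the 1.12 figure from the article.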

Anyways, according to this article, if a consumer HDR display accepts PQ-encoded data but is not capable of delivering the expected peak luminance (1000 cd/m2, 4000 cd/m2 or 10000 cd/m2, depending on how the content was encoded), then it applies some tone mapping in hardware. That is, starting from some threshold luminance, the display output is compressed non-linearly, so that at least part of the detail in the luminance range exceeding the display's capabilities can be recovered. Again, this is somewhat similar to the effect of the shoulder in the characteristic curve of good old film.
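
To illustrate the shoulder idea, here is a made-up roll-off curve (purely illustrative; real displays use their own, undisclosed tone-mapping curves):

```python
def rolloff(luminance_cd_m2, display_peak=700.0, knee=0.75):
    """Toy shoulder: pass luminance through unchanged below a knee point,
    then compress everything above it asymptotically towards the display peak."""
    knee_nits = knee * display_peak
    if luminance_cd_m2 <= knee_nits:
        return luminance_cd_m2
    headroom = display_peak - knee_nits          # output range left above the knee
    excess = luminance_cd_m2 - knee_nits         # input above the knee
    return knee_nits + headroom * excess / (excess + headroom)  # never exceeds display_peak

for nits in (100, 500, 700, 1000, 4000):
    print(nits, "->", round(rolloff(nits), 1))
```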


There are however some details that I was not able to clarify:

  • Suppose I have an HDR-capable display with a peak luminance of 700 cd/m2. What would be the corresponding peak luminance when setting the same display to SDR mode?
  • Is the display luminance corresponding to scene-linear mid-gray the same in HDR and SDR modes?
  • The peak luminance of HDR displays seems to be guaranteed only in small areas, or during short “flashes”, or both (like bright specular reflections or explosions in a movie). How does this relate to the static content of a desktop environment?

This is not correct. Firstly, scene-linear should never, ever be displayed directly on a screen[1], regardless of whether the screen is HDR or SDR; what you mean here is display-linear, which is not the same as scene-linear. Secondly, most screens, for legacy reasons, emulate the sRGB transfer curve, which means that R=G=B=0.5 maps roughly to middle grey (0.18 of max luminance).
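
A quick back-of-the-envelope check of that last point, using the published sRGB EOTF (how close 0.21 is to "roughly" 0.18 is up to the reader):

```python
def srgb_eotf(v):
    """Piecewise sRGB EOTF (nonlinear signal -> relative luminance)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

print(srgb_eotf(0.5))  # ~0.214
print(0.5 ** 2.2)      # ~0.218 with a pure 2.2 power law
```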

This is correct

AFAIK it isn't because of the dim surround (although that probably plays a part) but because that was the rough behavior of CRT displays, so that in a non-calibrated case you could just push out the pixels and be OK-ish.

True; besides a tone map it will also generally do a color transform, which is pretty bad if you want color accuracy (since, generally speaking, you can't adjust these internal processes, especially on consumer monitors). Luckily these things are also bad for latency, so AMD introduced some new monitor modes with Freesync 2 that give more direct access, ostensibly for games, but they are also pretty usable for us.

Probably like now with normal/SDR monitors: since there is no standard, every manufacturer will do their own thing, although in general it should be lower than 700, probably.

Unknown; see the previous answer as well.

Also currently unknown, since we are really the first trying to do something like this[2], although generally speaking you don't want to look at 600+ cd/m2 continuously anyway, so in practice it probably won't be an issue.

Hope this answers your questions!


[1] Remember the quote “Friends don’t let friends view scene-linear imagery without an ‘S-shaped’ view transform.” (the quote is on page 22)
[2] IIRC MacOS currently doesn't support HDR, and the Windows support is pretty basic (turn it on when displaying something HDR like a game or video, not caring too much how the desktop looks).

The Cinematic Color VES.04 document, linked above, states that:

Unlike other color management solutions such as ICC, OpenColorIO is natively designed to
handle both scene-referred and display-referred imagery.

Forgive me if this suggestion is nonsensical, but why not simply use OCIO, as opposed to ICC?

OCIO v2 is also just around the corner (release late 2019); it boasts, amongst other achievements, CPU/GPU output parity.

FWIW, what I myself do is that I have two sets of calibration+profile: one at my monitor’s native white point (~6350K), and one with a white point of 2700K, to which I switch once it gets dark enough outside that I turn on my (also 2700K) lights. (I use a script for each, which calls colormgr device-make-profile-default <the profile> as well as ddcutil to apply the appropriate monitor settings.)

In that way, I effectively achieve a Redshift-like effect without Redshift and, hopefully, without sacrificing too much color accuracy. Of course, for things like photo editing, I much favor the profile with the native white point (even if it means waiting for the proper time of the day), but e.g. for browsing the Web and looking at friends’ pictures on Facebook, it’s nice to get more or less accurate colors (relative to the white point, as already mentioned) in the evening too.
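
For what it's worth, the two scripts boil down to something like the following (a hypothetical sketch: the colormgr command line just mirrors what I described above, the profile IDs are placeholders, and the ddcutil part is left out because the exact settings depend on the monitor):

```python
#!/usr/bin/env python3
"""Switch between the native-white-point and 2700K calibration+profile sets."""
import subprocess
import sys

PROFILES = {
    "day":   "<id of the ~6350K profile>",  # native white point
    "night": "<id of the 2700K profile>",   # warm white point for the evening
}

def switch(mode):
    profile = PROFILES[mode]
    # Make the chosen profile the default for the display, as described above.
    subprocess.run(["colormgr", "device-make-profile-default", profile], check=True)
    # My real scripts also call ddcutil here to apply matching monitor settings;
    # omitted because the codes/values are monitor-specific.

if __name__ == "__main__":
    switch(sys.argv[1] if len(sys.argv) > 1 else "day")
```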

If I try the same thing on my Windows laptop (where I cannot use monitor settings to get closer to my desired warm white point), for some reason Windows does not like the calibration curves and straight out refuses to apply them, so what I do instead is exactly what Florian described: I enable Windows’ night mode (from what I have read, possibly from Florian, doing so truncates the LUT precision to 8 bits until the next reboot, but I don’t think my integrated Intel GPU has more than that anyway), and then I profile that. (Actually, calibrate+profile, since it seems that Windows then merges the curves applied by the DisplayCAL loader and the night mode curves, just like X.Org 1.19+. In fact, maybe that’s where the 8-bit truncation originates from.)

I hope the amount of parentheses in my comment did not make it too hard to follow. :smiley:

OCIO has no display profiling AFAIK. Like most other solutions mainly geared towards motion picture production, it leaves display color management to proprietary 3D LUT formats, either stored in the monitor itself (if it has the capability) or in external LUT boxes, or the monitor is adjusted to meet one specific standard (e.g. sRGB).
ICC also has the ability to support scene-referred encoding; in 99% of cases on the desktop, though, you're dealing with display-referred RGB (R'G'B') anyway (my estimate, not some hard number).


In an email exchange with Phil Green of the ICC, it was suggested that iccMAX be considered. This is predominantly due to the difference in how white point is handled in SDR vs HDR. Any thoughts?