Wayland color management

And? As I said, the display will get its infoframes, it just might not be the infoframes the content started with, since we remapped/blended/etc.

How? How would the display get its infoframes unless the compositor sets a value in the KMS property?

And my point here is that the compositor doesn’t need to know the name of the curve here, only that it is HDR and that there is a curve: apply the curve, potentially apply another curve for the monitor, and push the pixels with that AVI infoframe. Remember that for a desktop there will be situations where we are dealing with mixed content, so we need to be able to do something like this anyway!

In this case, what do you suggest the compositor/kernel should set in the AVI infoframes, and how? Wrong AVI infoframes will spoil the output anyway, as the monitor will not know what to do with the non-linear/encoded data coming out of the graphics source.

Set it to the curve the compositor uses for its final composited output, of course. Do you think movies are made directly in HLG/PQ/whatever? I will tell you the answer, and it is no: movies are made in scene-linear (so no curves at all). The curve (a well-known one at that) is only applied after everything is composited together (from HDR sources, like digital renders, and SDR sources, like matte paintings), after any color grading (at least in a modern workflow), and after any tone mapping.

So since we have scene-linear (we converted everything into that for easy compositing), it should be trivial to convert the output to any curve we want, and so set the right infoframes. No need to know what the content was at all! (Also, I will bet that some screens will work better with HLG while others will work better with PQ, and others will work best with something else; being able to stay agnostic here can make things a lot easier in that regard.)
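
To make the “how” concrete, here is a rough sketch (not tested, and not from the patch set being discussed) of what pushing the compositor’s chosen output curve to the sink could look like on the KMS side, using the upstream HDR_OUTPUT_METADATA connector property. The find_connector_prop_id() helper and the concrete values are made up for illustration:

```c
/* Hedged sketch: advertise the transfer function of the compositor's own
 * composited output to the sink, independent of what any client sent.
 * Build with the libdrm headers (-I/usr/include/libdrm). */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_mode.h>          /* struct hdr_output_metadata */

#define EOTF_SMPTE_ST2084 2    /* CTA-861-G EOTF code for PQ */

/* Hypothetical helper: look the property id up via
 * drmModeObjectGetProperties()/drmModeGetProperty(). */
extern uint32_t find_connector_prop_id(int fd, uint32_t conn_id,
                                       const char *name);

static int set_output_eotf_pq(int fd, drmModeAtomicReq *req, uint32_t conn_id)
{
    struct hdr_output_metadata meta;
    uint32_t blob_id, prop_id;

    memset(&meta, 0, sizeof(meta));
    meta.metadata_type = 0;                        /* Static Metadata Type 1 */
    meta.hdmi_metadata_type1.eotf = EOTF_SMPTE_ST2084;
    meta.hdmi_metadata_type1.max_cll = 1000;       /* whatever the compositor
                                                      actually produces */

    if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
        return -1;

    prop_id = find_connector_prop_id(fd, conn_id, "HDR_OUTPUT_METADATA");
    return drmModeAtomicAddProperty(req, conn_id, prop_id, blob_id) < 0 ? -1 : 0;
}
```

The point being: what goes in here is the compositor’s own choice for its blended output, not anything taken from the content.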

Ah, I am getting your point now: you mean linearize and blend using the client’s curve, then let the compositor choose and apply the best curve as per the display/HW/SW capabilities, and set that as the output curve in the AVI infoframes. I agree, if the client sends both the degamma and gamma curves it is using, there might not be a need to know the name of the curve. This is almost what we are doing in the patch set we published to the weston/wayland community, but with the curve name.

Sure, right now there is little HDR content, but it’s a huge assumption that most of the time only SDR content will be shown. What if the shell starts displaying the background image in HDR? What if games start doing HDR properly? What if GUI toolkits start doing HDR properly?

Those kinds of assumptions are almost always bad.

Right now yes. With the color management extension there always will be CSC.

That’s again a huge assumption. You have your use case of “playing an HDR video” stuck in your head and it’s not helping.

No, the idea is to do CSC and tone mapping to whatever color space and HDR properties you chose for the display.

To expand on that: do you think all monitors actually have native support for HLG/PQ/whatever? They also do tone mapping and CSC internally. Ideally we would get rid of that by using the native EOTF and native color space, but the standard doesn’t seem to allow for that (the FreeSync 2 HDR one apparently does, though).

The ICC profile encodes the gamma/degamma curves implicitly afaik. No need to send any other information there.

Sure, right now there is little HDR content, but it’s a huge assumption that most of the time only SDR content will be shown. What if the shell starts displaying the background image in HDR? What if games start doing HDR properly? What if GUI toolkits start doing HDR properly?
Those kinds of assumptions are almost always bad.

No, that’s your interpretation. Our design is pretty clear: do what is required, on a need basis. If there is an HDR wallpaper and there is HDR content, we will change the color profile to HDR on the fly, and will start the color conversion when it’s required. But instead of doing it when it’s really required, always doing it because at some point in the future something like that might come up, and forcing the rest of the world to comply, is a bad design.

Right now yes. With the color management extension there always will be CSC.

shashanksharma:

Why? Again, this should be on a need basis. If I am playing sRGB content on an sRGB monitor, why would I want to do CSC and unnecessarily waste power and load the CPU/GPU? The compositor will do it only when it’s required.

That’s again a huge assumption. You have your use case of “playing an HDR video” stuck in your head and it’s not helping.

Well, the same applies to your assumption when you commented: “because a small portion of the screen has HDR content will result in slightly different results on the whole SDR area.” How was that helpful? At least I have data to back my assumption, namely that this setup is available right now. I am not sure which setup you are talking about, with a small portion of probable HDR content, for which you want to force an HDR color profile and always enable color correction. So I guess we are both getting stuck with some ideas in our heads, and neither of us finds the other’s assumption useful.

Changing the whole display/graphics pipeline and spending huge amounts of power and workload just to keep some color profile valid isn’t sane at all. A color profile’s job is to provide information from the client to the compositor, and the compositor can then decide what to do with it, as it has the best view of all the clients, the output, and the HW and SW ecosystem. If it’s so important to keep some color profile valid, the compositor will create a runtime profile which reflects the current output.

That’s not an assumption about the default mode of operation but an explanation of a visual glitch! We don’t allow visual glitches in Weston, and the Wayland design must be able to ensure that compositors can be glitch-free.

You still have to show how you would be able to change from SDR to HDR mode without glitching. I don’t think that’s possible and that makes all other points moot.

Again, try this: put up an SDR window, then open an HDR window. The SDR window will slightly shift in color and/or brightness. That’s a glitch.

When the display is in SDR mode it does tone mapping to its native HDR. When you turn on HDR mode, the compositor does SDR-to-HDR mapping for the SDR surface, and the display then does HDR-to-native-HDR tone mapping. You would have to know the internals of the display’s tone mappings and counteract them in the compositor with the SDR-to-HDR tone mapping for the pixel values to be the same in both modes.
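
To spell that reasoning out in symbols (my own notation, nothing more):

```latex
% T_SDR = display's internal SDR-mode mapping to its native response
% T_HDR = display's internal HDR-mode mapping
% M     = compositor's SDR-to-HDR mapping
%
% SDR mode:  pixel v  ->  T_SDR(v)
% HDR mode:  pixel v  ->  T_HDR(M(v))
%
% A glitch-free switch requires, for every v:
\[
  T_{\mathrm{HDR}}\bigl(M(v)\bigr) = T_{\mathrm{SDR}}(v)
  \quad\Longleftrightarrow\quad
  M = T_{\mathrm{HDR}}^{-1} \circ T_{\mathrm{SDR}},
\]
% which is only possible if the compositor knows (and can invert) the
% display's internal tone mappings.
```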

If you can prove to me that this doesn’t happen, I’m open to considering changing from SDR to HDR on the fly.

How high is the cost of using HDR internally all the time and converting to SDR if you end up on an SDR display? Reasoning: HDR might become more common in the future than SDR, and given that we are discussing a standard for the future, we could use HDR as the default.

I don’t think it would be HDR internally, rather a linear scene-referred working space that can be converted to anything. That also means that when/if HDR moves to super HDR, or whatever it will be called, there will be little issue. (Talking about HDR here, not color space.)

Of course I may be 100% wrong. :slight_smile:

That sounds even better. I just wanted to avoid designing something that is SDR by default :slight_smile:

Exactly, the gamma/degamma curves will be needed anyway for color correction (with the exception of certain creative applications that need to be in display space directly; but since no conversion is needed for those, we don’t need to know their curves either, just that the output is in display space).


Regarding the HDR discussion, I don’t think HDR processing on the GPU will be that much more intensive than processing SDR color conversions, since from a color management perspective there is not that much of a difference. So from that perspective we could drive displays in HDR all the time without needing more power. On the other hand, most HDR displays do require more power in HDR mode, so in that regard this might become a problem, especially on laptops. I see a couple of different scenarios.

Examples:

  • Creative work on a desktop/workstation - probably needs to be in an always-HDR mode, to avoid a potential difference in look between SDR and HDR content
  • Desktop/workstation for other work - can probably get away with only being in HDR mode when there is HDR content shown
  • Laptop for creative work - needs to be in HDR mode when docked or on external power, needs to switch when on battery
  • Laptop for other work - switch when docked/on external power, force to always SDR on battery?

So in some cases I think we would want to be in HDR always, while in others we probably want to switch to HDR only when needed (or even not at all and tone map the HDR content!)

If we do indeed need switching, we will need two profiles: one for SDR mode and one for HDR mode. Even then, I think that screens that don’t support the new FreeSync 2 display modes (or something similar) will need to pretend their display color space is Rec. 2020 with PQ or HLG curves, since the displays will do internal tone mapping and color conversion that can’t be disabled.

You’ll likely get the biggest gains by simply changing the clients so they provide their surfaces in the correct color space and with the right dynamic range; those kinds of changes should be made for GTK, Qt, Firefox, Chrome, LibreOffice, and the other big applications.

The hardware part is kind of interesting. Do HDR monitors actually draw more power just because you put them into HDR mode or do they only draw more power when they show brighter content?

Either way those kinds of optimizations can wait until things actually work correctly. We might be able to be the first desktop system to support HDR and color management properly.

I’d assume that one of those problems is the tone mapping the monitor is doing itself. FreeSync 2 HDR gets rid of that. What other problems with profiling HDR monitors are there, then?

Theoretically yes, but in practice that will be quite a long, uphill battle, and even then we will still have some legacy stuff to deal with. Luckily most of that legacy content will be in sRGB, so only a single transform will be needed to cover 99.9% of it.

Good question. Most information I could find says that HDR mode does draw more, but there are no hard numbers. If this is not an issue, we can indeed just as well drive the monitor in HDR all the time.

That is also a good point, although I think that will only be truly possible with the FreeSync 2 display modes (or something similar).

Dear all, I have been silently following this topic with great interest!
Regarding the idea of keeping the display always in HDR mode, I have a doubt… suppose I am a graphic designer who is preparing an image for printing: I am pretty sure I would not want an HDR output, because my destination medium is not HDR. So all the extra highlight detail that HDR can show will be lost in the final product… what do you think?

I would also propose to stop talking about HDR vs SDR, because I have the impression that those are mostly marketing terms. The main difference, if I understand correctly, is that SDR has a direct relationship between linear RGB values and the light emitted by the display, while “HDR” involves some nonlinear tone mapping to compress the highlights… is that correct?
And regarding the image buffer for compositing, if it is truly linear then there is no SDR/HDR distinction, just pixel values. There the only meaningful anchor is IMHO mid-gray, and the only thing that matters is which display light output level it should correspond to. Does this make sense?

Also, there is AFAIK no way to use ICC profiles to linearize an “HDR” image stream, because the corresponding OETF cannot be described by the ICC standard. So if you want to linearly compose an HDR stream with some other desktop component, you either need to go beyond the ICC standard or require the source application to provide linear RGB values…

To my simple self, an HDR display means it isn’t SDR. :slight_smile:

Diffuse white is mentioned a lot.

No, honestly I can’t guarantee a glitch-free switch at runtime, no matter how good the tone mapping algorithm is. So I am in agreement about the problem; I just don’t agree with the solution that we should always be driving the monitor with an HDR profile, regardless of the content.

What if someone wants to see SDR content deliberately, for accuracy with the actual image? In our current solution, we will always tone map it to HDR, and will never be able to show something in SDR mode, just because the monitor is HDR capable. This means the user will need two different monitors (one for accurate SDR content, the other for premium HDR content).

Please note that the monitors are made backward compatible, i.e., if you read the EDID and parse the capabilities quoted by HDR monitors (in their CEA-861-G HDR metadata block), they support both SDR traditional gamma and the PQ ST 2084 EOTF. Similarly, they support the sRGB SDR colorspace as well as the BT2020/BT2100/DCI-P3 HDR colorspaces. This means the monitors are made backward compatible in order to maintain and preserve accuracy, and that’s why I support a need-basis profile switch, even if it causes a slight glitch in the color reproduction of SDR content when you mix it with HDR content.
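
For reference, reading those capability bits is roughly this. A sketch based on my reading of the CEA-861-G HDR Static Metadata Data Block layout, so treat the offsets and names as assumptions; locating the block inside the EDID’s CTA extension is left out:

```c
/* Hedged sketch: report the EOTF support bits from the CEA-861-G
 * HDR Static Metadata Data Block.  `db` is assumed to already point at
 * the block payload (the byte after the extended tag code). */
#include <stdint.h>
#include <stdio.h>

#define HDR_EOTF_TRADITIONAL_SDR (1u << 0)  /* traditional gamma, SDR range */
#define HDR_EOTF_TRADITIONAL_HDR (1u << 1)  /* traditional gamma, HDR range */
#define HDR_EOTF_ST2084          (1u << 2)  /* PQ */
#define HDR_EOTF_HLG             (1u << 3)

static void dump_sink_eotfs(const uint8_t *db)
{
    uint8_t eotfs = db[0];  /* byte 3 of the data block: supported EOTFs */

    printf("SDR gamma: %s, PQ (ST 2084): %s, HLG: %s\n",
           (eotfs & HDR_EOTF_TRADITIONAL_SDR) ? "yes" : "no",
           (eotfs & HDR_EOTF_ST2084)          ? "yes" : "no",
           (eotfs & HDR_EOTF_HLG)             ? "yes" : "no");
}
```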

This is the exact requirement I am talking about, and that’s why I think we should switch the color profile on a need basis, instead of always driving a fixed HDR output.

If you consider only the tone mapping, the cost might not be very high. But with a fixed HDR profile, if you combine the complete pipeline (degamma + CSC + tone mapping + gamma), applied to each buffer from each client, per frame, I believe it’s considerable. Also, this adds avoidable load on the GPU, which might already be under load while processing a heavy game/movie/streaming scenario. Isn’t this too much in order to avoid one case, i.e., a slight glitch in SDR color reproduction when we mix SDR/HDR content? I know it’s not accurate, but is it worth all this trouble?

So in some cases I think we would want to be in HDR always, while in others we probably want to switch to HDR only when needed (or even not at all and tone map the HDR content!)

Is there any reliable way to differentiate between a creative-task workstation and a general PC? AFAIK modern systems are capable of mixing and matching everything, and the gaming and video playback scenarios make it even more complicated. We might end up with a Mac vs. non-Mac kind of comparison, I am afraid.

I see your point, but in practice it might not be so simple: most true HDR screens also have much better blacks, and it wouldn’t surprise me if, with a good setup, they might even be better at print simulation, even when not using the highlights. But this is something that has to be experimented with in practice; it is currently a bit of an unknown.

No, not really. In practice both have some non-linear “tone mapping” (the sRGB TRC is effectively also a form of this, and it is not really tone mapping, just a curve to get more dynamic range out of a limited bit budget), and in the end an HDR image does store significantly more dynamic range than an SDR one. Now I concede that HDR is a confusing term, since it is also used to describe scene-linear, which has effectively infinite dynamic range, which the HDR we are talking about doesn’t have (from that perspective the HDR we are talking about is still display-referred). All in all, there is a big difference between HDR-capable monitors and those that aren’t, so having terms to distinguish them is useful.
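
To make the “both have a curve, but one covers far more range” point concrete, here are the two EOTFs as I recall them from the published specs (paraphrased from IEC 61966-2-1 and SMPTE ST 2084, not from this thread):

```latex
% sRGB EOTF (relative, display-referred, ~0..100 cd/m^2 in practice):
\[
  L_{\mathrm{sRGB}}(V) =
  \begin{cases}
    V / 12.92, & V \le 0.04045 \\[4pt]
    \left(\dfrac{V + 0.055}{1.055}\right)^{2.4}, & V > 0.04045
  \end{cases}
\]
% PQ / SMPTE ST 2084 EOTF (absolute, up to 10000 cd/m^2):
\[
  L_{\mathrm{PQ}}(V) = 10000 \cdot
  \left(
    \frac{\max\!\bigl(V^{1/m_2} - c_1,\, 0\bigr)}{c_2 - c_3\, V^{1/m_2}}
  \right)^{1/m_1} \ \mathrm{cd/m^2},
\]
% with m_1 = 0.1593017578125, m_2 = 78.84375,
%      c_1 = 0.8359375, c_2 = 18.8515625, c_3 = 18.6875.
```

Both are “just curves”, but one is a relative encoding of roughly two orders of magnitude of luminance while the other is an absolute encoding of up to 10,000 cd/m².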

Display-linear is not the same as scene-linear, and there might need to be operations to map one to the other (display to scene is probably simpler). In this case diffuse white is a more important anchor than mid-gray, although if the mappings are done correctly it shouldn’t matter too much.

@gwgill believes otherwise, and above someone pointed to a blog post by Netflix where they say they are using ICC profiles for exactly this kind of work! So it seems to be possible.

Except that is the exact same pipeline needed to, for example, put sRGB on the screen, with the exception of the tone mapping, since the important color transforms need to be done in linear space (so de-TRC the sRGB data → color transform to display → apply the display curve). So since tone mapping doesn’t add too much (it should be possible to pre-calculate the curve), I don’t think it will be a problem; also, AFAICT, from a GPU perspective these are some of the simplest shaders possible (most if not all of it could be a simple LUT).
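
To illustrate how simple that per-pixel work is, a toy sketch (not Weston code, all names made up): two 1D LUTs plus one 3x3 matrix per pixel, with any tone mapping folded into the output LUT:

```c
/* Hedged sketch of the per-pixel pipeline being discussed: undo the source
 * TRC, apply a 3x3 color-space transform, re-encode for the display.
 * In a real compositor this would be a fragment shader with texture LUTs. */
struct pixel { float r, g, b; };

#define LUT_SIZE 1024

static float lut_lookup(const float lut[LUT_SIZE], float v)
{
    /* nearest-neighbour for brevity; a shader would interpolate */
    int i = (int)(v * (LUT_SIZE - 1) + 0.5f);
    if (i < 0) i = 0;
    if (i > LUT_SIZE - 1) i = LUT_SIZE - 1;
    return lut[i];
}

static struct pixel convert(struct pixel in,
                            const float detrc[LUT_SIZE], /* source curve -> linear */
                            const float m[3][3],         /* source gamut -> display gamut */
                            const float otrc[LUT_SIZE])  /* linear (+tone map) -> display curve */
{
    struct pixel lin = {
        lut_lookup(detrc, in.r),
        lut_lookup(detrc, in.g),
        lut_lookup(detrc, in.b),
    };
    struct pixel out = {
        m[0][0] * lin.r + m[0][1] * lin.g + m[0][2] * lin.b,
        m[1][0] * lin.r + m[1][1] * lin.g + m[1][2] * lin.b,
        m[2][0] * lin.r + m[2][1] * lin.g + m[2][2] * lin.b,
    };
    out.r = lut_lookup(otrc, out.r);
    out.g = lut_lookup(otrc, out.g);
    out.b = lut_lookup(otrc, out.b);
    return out;
}
```

Whether the target is an SDR curve or an HDR one only changes the contents of the LUTs, not the amount of work.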

The only way to differentiate would be a user setting, since there is no way to do it automatically. It would only require a setting to force HDR always on, I think, and it should probably live under the display power settings.