Wayland color management

Because the client doesn’t know the LUT? With the sole exception of the calibrator (which creates the calibration curve), there is no need for any other client to know the calibration curve, and yes, that does include the profiler.

And also this:

Since only the compositor can know that this is possible, it is the compositor’s responsibility, and once it is the compositor’s responsibility it should always be in the hands of the compositor, so no exceptions for profiling (or even calibration, IMHO).

Sigh. That was only an example. Just replace “software” with “method A” and “hardware” with “method A, B, C or D”, and it should be obvious there is no consistency if the only situation where “method A” is always chosen is during profiling. I don’t even care that much; it just seems like an obvious, awkward inconsistency.


The whole point is that consistency is very likely unachievable for a performant implementation, which is why you should use whatever calibration and profiling method gives you the best accuracy, to give the compositor a chance to do the best it can. Even if the compositor decides to put the vcgt exclusively into the hardware gamma LUT at all times, you have consistency, albeit at a lower accuracy.

We are talking about calibration and profiling though, aren’t we?

Just because the profiler as a concept doesn’t have to know the LUT doesn’t mean that it’s not a valid implementation to have the calibration LUT applied in the same process.

You can create a higher quality LUT by putting your values in the frame buffer with dithering and all LUTs turned off. Since the profile should work across compositors, which might apply the vcgt differently (even ignoring that they might apply it differently depending on the scene), having the highest quality to work with seems like a good idea.
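
To make concrete what “putting your values in the frame buffer with dithering” buys you, here is a minimal sketch (the 64×64 patch size and the random pixel ordering are just illustrative assumptions): a target value that falls between two 8-bit code values is approximated by the spatial average of the patch, which is exactly the extra precision a calibration/profiling run can exploit when all LUTs are bypassed.

```python
import numpy as np

def dithered_patch(target, size=64):
    """Approximate a fractional 8-bit channel value by spatial dithering.

    target: desired channel value, e.g. 127.3 (between two 8-bit steps).
    Returns a size x size uint8 patch whose mean is close to `target`.
    """
    lo = int(np.floor(target))
    hi = min(lo + 1, 255)
    frac = target - lo                   # fraction of pixels that get `hi`
    n = size * size
    n_hi = int(round(frac * n))
    patch = np.full(n, lo, dtype=np.uint8)
    patch[:n_hi] = hi
    np.random.shuffle(patch)             # crude spatial distribution of the error
    return patch.reshape(size, size)

print(dithered_patch(127.3).mean())      # ~127.3, i.e. sub-code-value precision
```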

I get what you’re trying to say: if all compositors applied the vcgt exactly the same all the time, doing calibration the same way would make sense. That just won’t be reality, which is why I’m arguing that you should try to get the highest quality calibration LUT possible.

And I would say that is a flawed implementation; yes, it works, but it is just stupid.

Sure, you might get the highest quality calibration curve, but I’m not sure you get the highest quality profile (and if the compositor applies it differently depending on the scene, I would say there is something wrong with the compositor; dynamic is not always a good thing).

And my argument is that to get the highest quality profile, we should profile in the same way the profile gets applied. Yes, that might sometimes mean you need to re-calibrate/profile when you switch compositor, but that is just reality (also, I thought we just determined it didn’t really matter where we apply it). And IMHO that doesn’t happen very often.

Now that Apple has implemented an HDR solution, does their implementation shed any light on HDR in Wayland? I think that “not invented here” needs to be buried, and the best solution to the problem needs to be found.

(Obviously failing is an option, but I am trying to be positive)

Depends on what they are actually doing. If it is the same as or similar to Windows, then no, since that is effectively a quick hack to get games/video working[1]. If they are doing something different (which is quite possible, otherwise why come out this late), we might indeed be able to learn something. So do you have any links to documentation?

Of course a 1-on-1 copy wouldn’t work (the display tech of macOS is quite different), but some of the stuff in the CM protocols is somewhat inspired by what macOS is already doing.


[1] IMHO, although you can get it to work (as shown by Krita), it isn’t the best thing for creative / content creation applications. (And even for games it is a bit questionable, since those would also have to guess the info frames, which assume fully mastered content.)

I don’t know how Apple is doing HDR, but I am very interested. I am very thankful that the Windows HDR solution is accepted to be a hack; hopefully that is known throughout the GNU/Linux dev circles.

As there are more and more HDR, wide gamut displays coming out every day (MSI have a new HDR600, DCI-P3 display, for example), I can see HDR becoming standard.

This is a watershed moment for Wayland. Apple supporting HDR (with a remarkable entry) only cements the pressing need for HDR to be supported seamlessly, and correctly.

Unfortunately, without such support GNU/Linux will fall further behind in the creative fields, which would mean there would be no creative professional users.

Creative users are immensely important to computing. Without HDR support and color management, it would be impossible for creative users to use an already under-performing (in the creative fields) operating system.

As there don’t seem to be any patents stopping proper HDR support, there is little explanation as to why a solution can’t be reached.

Checked the docs, and I don’t think Apple has updated those yet.

Agreed

I wouldn’t bet on that; in the creative markets where Linux is currently used professionally, they bypass all of this using expensive LUT boxes/screens/etc. The problem here is that this equipment is currently targeted at movie studios and such, with prices to match (the recently revealed Apple HDR capable screen that costs >$5000 is cheap in comparison).

Having said that, I do agree that we will need something, preferably soon, to also attract prosumers and smaller professionals. Although I prefer to get this right rather than be too soon (if we get this wrong, it might be harder to change).

As I said, Linux is actually quite well used in the creative space; it is just in the creative spaces that aren’t often visible, often with software and hardware prices to match (think multiple applications by The Foundry, Autodesk Maya, etc.). Now there are spaces where Linux is underserved, but that is mostly photography, and prosumer or small scale professional video (although with DaVinci Resolve that is also changing).

Patents aren’t the only issue, and designing a future proof protocol for HDR on Wayland is not an easy task. Especially since we need to consider multi-context support (not only SDR + HDR, but also HDR in one format and HDR in another format at the same time, potentially also including SDR), and it needs to properly integrate into color management.

Note also that HDR screens often have multiple brightness levels that need to be considered (with local dimming, without, what is possible in bursts and what is possible continuously), and with current consumer technology (except FreeSync2/HDR capable displays) you can’t bypass the tone-mapping/color-management of the screen itself.


One view would be that Linux development is focused on servers, since that is where the majority of companies that employ Linux developers see it supporting their real products. “Desktop” is owned by MSWindows, Android, iOS and web browsers.

My own view is that it seems basically impossible to create a cohesive operating system (i.e. a set of cohesive application programming interfaces) using ad-hoc groups of often under-resourced developers that both cooperate and fiercely compete. (GNOME vs. KDE etc. etc.)
Initiatives to move in such a direction seem to fail (Linux Standard Base and other API standardization efforts). With no well thought out, consistent APIs and ABIs, application support is limited mainly to free applications that are distributed as source. So there is no real application ecosystem (i.e. no healthy mix of commercial and non-commercial applications).

Commercial operating system vendors don’t have these obstacles - if they see that wide gamut or HDR is coming along, they know that there are commercial users and applications that will need to be able to work with the new tech, and they have the money to recruit suitably qualified color scientists, programmers and other specialists to architect, coordinate with commercial partners (i.e. vendors of the new technology), implement and field test what they need to add to their APIs. In contrast, trying to do all this on a shoe-string with no support and little common vision is a Herculean task. (Herding cats would be easier.)

While commercial involvement has downsides, money brings both discipline and resources to system and application development. It focuses minds, and is able to bring expert resources and experience to bear.


Before this discussion starts to revolve too much about (perceived or actual) problems with HDR support, let’s quickly examine the status quo and what already works in color managed applications today:

  • Any color managed application can support HDR, today, without requiring any changes whatsoever to application code.
  • Any display, whether natively HDR capable or not, can support HDR content, using color managed applications.

Let that sink in for a moment. All of the above works already (all that is needed is an appropriate source profile, which can be created using open source tools since 2015). Even mixing HDR and non-HDR content is already possible, but requires manual intervention (applying a scaling factor - e.g. using a typical imaging application’s “curves” tool - to SDR content when mixed with HDR content).
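
As a rough illustration of that manual intervention, here is a minimal sketch of scaling SDR content onto an HDR-referred scale before compositing; the 203 cd/m² reference white and the 1000 cd/m² encoding peak are assumptions for the example, not values from this thread (in practice you would pick whatever matches your source profile).

```python
import numpy as np

def scale_sdr_into_hdr(sdr_linear, sdr_white_nits=203.0, hdr_peak_nits=1000.0):
    """Place linear-light SDR pixels (1.0 = SDR white) on an HDR-referred
    scale where 1.0 = hdr_peak_nits, so they can be composited with HDR
    content before the shared output transform is applied."""
    return np.clip(sdr_linear * (sdr_white_nits / hdr_peak_nits), 0.0, 1.0)

print(scale_sdr_into_hdr(np.array([0.0, 0.5, 1.0])))  # SDR white lands at ~0.2
```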

What is currently missing (from an open source desktop perspective) is appropriate handling of (application) user interface elements, which are typically not color managed (unless an application explicitly does so), because they might be far too bright on an HDR display (you can work around that to some extent by using a dark theme).

Most of the other (perceived) problems actually go away when a fully color managed desktop paradigm is embraced.


@fhoech thanks for the reminder that we should focus first on color management before starting on HDR, I agree that most of the HDR problems will be solved with a good color management foundation.

The only problem that remains (as far as I can tell) is that an application that wants to render HDR might want to know what the display is capable of (max, max sustained, and min brightness levels at least), but that would be trivial to add to any color management protocol, either as extensions to the profile (ICCv5 anyone? this would be the most flexible, to be honest) or as extra events (that, or only available in v2 of the protocol, so v1 would be non-HDR and v2 would be HDR, or something silly like that).
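
Purely as a sketch of what such capability data could look like (these field names are invented for the example; they don’t come from any existing Wayland protocol or ICC tag):

```python
from dataclasses import dataclass

@dataclass
class HdrDisplayCaps:
    max_nits: float            # peak luminance for small, short-lived highlights
    max_sustained_nits: float  # luminance the panel can hold over a large area/time
    min_nits: float            # black level

# hypothetical values an HDR600-class monitor might report
caps = HdrDisplayCaps(max_nits=1000.0, max_sustained_nits=600.0, min_nits=0.05)
```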

We also might (though with the above in place, not strictly necessary) provide a protocol for premastered HDR content (think Netflix or other video sources) which supports HLG/PQ + info frames directly and lets the compositor manage any needed conversions.
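
For PQ-coded content that conversion path is at least well defined: whichever component ends up doing it would apply the ST 2084 EOTF to get back to absolute luminance before compositing/tone-mapping. A minimal sketch (the constants come straight from the ST 2084 definition):

```python
import numpy as np

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    """Map a normalized PQ code value (0..1) to absolute luminance in cd/m2."""
    p = np.power(np.clip(signal, 0.0, 1.0), 1.0 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1)

print(pq_eotf(np.array([0.0, 0.5, 1.0])))  # ~0, ~92 and 10000 cd/m2
```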

Having said that, first let’s get some proper CM in Wayland (and the toolkits!)


Max is the only one required; this can be had from the display profile (‘lumi’ tag) but should ideally be (optionally) user-overridable. I don’t think max sustained would be useful (even if it could be easily determined in the first place, which I highly doubt), and min, while optional, can again be had from the display profile (forward lookup).
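
For reference, the ‘lumi’ tag is easy to get at even without a full CMS; a minimal sketch of pulling it out of a display profile by walking the ICC tag table directly (“display.icc” is a placeholder path, and error handling is omitted):

```python
import struct

def profile_luminance(path):
    """Return the luminanceTag Y value (cd/m2) from an ICC profile, or None."""
    with open(path, "rb") as f:
        data = f.read()
    (tag_count,) = struct.unpack_from(">I", data, 128)  # tag table follows the 128-byte header
    for i in range(tag_count):
        sig, offset, _size = struct.unpack_from(">4sII", data, 132 + 12 * i)
        if sig == b"lumi":
            # XYZType: 4-byte type signature, 4 reserved bytes, then X, Y, Z
            # as s15Fixed16 numbers; Y carries the absolute luminance.
            _x, y, _z = struct.unpack_from(">iii", data, offset + 8)
            return y / 65536.0
    return None

print(profile_luminance("display.icc"))
```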

I would consider all HDR10/HLG HDR content to be pre-mastered, metadata or not. Over the past several years it has become obvious to me that the HDR10 info frames are a crutch providing two distinct functionalities, only one of which is really mandatory (signaling a display device to switch from non-HDR into HDR mode), while the other is to enable “dumb” dynamic tone mapping for emerging (and technically still quite limited) HDR display devices (and it really is only suitable for that, as higher quality dynamic tone mapping basically requires analyzing the actual video stream, in which case the remaining HDR10 metadata doesn’t have much value and can be mostly ignored).

I might be missing something here, but thinking about content creation, I know people will be looking at the “same” image for quite a long time (think photography), in which case max sustained becomes important. (Also see this: Using AMD Freesync™ Premium Pro HDR: Tone Mapping - AMD GPUOpen, especially the section about local dimming.)

So, all in all, an application needs to know what the capabilities of the display are in order to decide how to best tone-map. And yes, some of the basics are already in the current ICC profiles, but I think they will need to be extended.
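
To give an idea of what an application would do with those capabilities, here is one of many possible curves (a simple extended-Reinhard rolloff, picked only for illustration, with made-up peak values): content luminance is compressed so the mastering peak lands on whatever the display can actually hold, e.g. the sustained level for a static photograph.

```python
import numpy as np

def tonemap(nits, content_peak=4000.0, display_peak=600.0):
    """Compress absolute luminance (cd/m2) so content_peak maps to display_peak,
    staying close to the identity for dark pixels (extended Reinhard)."""
    l = nits / display_peak
    l_white = content_peak / display_peak
    l_out = l * (1.0 + l / (l_white * l_white)) / (1.0 + l)
    return l_out * display_peak

print(tonemap(np.array([100.0, 600.0, 4000.0])))  # dark values roughly kept, peak -> 600
```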

EDIT: Not necessarily saying you are wrong, I might be missing something; just that, from where I am standing, max sustained does seem useful to have.

As I said, it would be mostly optional, and I imagine mostly used for fullscreen pass-through to a supported HDR screen (e.g. Linux based set top boxes). A desktop would need to re-tonemap anyway to properly composite with SDR and other HDR content, in which case it would either need to re-determine the info frames anyway (if used) or use something like the freesync2_gamma22 display mode (in that case no info frames needed), so in the desktop use case this protocol would be superfluous.

Depends. One of the main problems I would suspect is actually making sensible use of that info, even if it would potentially be available via a freesync2 API. I also am doubtful how useful such info would actually be unless it is very specific, i.e. “10% square full white window will be able to sustain XX cd/m2 over XX period of time, but only if background does not exceed XX% average picture level”, and so on (and I doubt this is even specific enough to be able to adjust the tone mapping to a monitor’s local dimming). Ideally the monitor’s local dimming shouldn’t suck, so that only really bright parts of a scene get dimmed when the bright area increases, or it should use a technology that doesn’t require local dimming (e.g. OLED, although that also usually has a power limiter in HDR mode - I’ve got high hopes for mLED, while it probably will also employ power limiting, it should solve any burn-in concerns which is important for “normal” desktop use, and probably the reason we’re not really seeing OLED desktop monitors). Just my 5 cents.

I think the best bet to figure this all out would be to have an HDR screen on hand (preferably a FreeSync2 one) to see what we can do with it. Although since it will be tech dependent, we probably need more than one HDR screen to try out all the possibilities.

All in all, I think it would indeed be best to focus for now on proper full compositor color management, since about 80~90% of the infrastructure needed for HDR support will come with it.


Hi,

so, can someone explain in one or two sentences why Wayland does not support color management? Or why it is difficult to implement color management in Wayland?

Thanks in advance

b

My understanding: because they don’t want to provide an API for the profiling tool to reliably show a colored rectangle in the middle of the screen, an API to apply calibration curves globally, or an API to know on which monitor your application window is being displayed.


The difficulty mostly lies in the fact that the original core design doesn’t take any color management into account, so it has to be an add-on protocol. For the core application protocol we have something that most people agree on, but that is only half of what is needed for proper color management, since a profiling/calibration protocol is also required. The problem here is that in the Wayland world (at least in the core parts; compositors are free to do their own thing to a certain extent) there is a big no on giving applications any kind of direct HW access, and that includes the calibration curves. There is currently a big disagreement on how to handle this, and to be honest I have been a bit burned out by the discussions (since those don’t seem to go anywhere).


OK, thanks.
So, I am trying to sum up: at least one application protocol is missing (whatever that is).