I was thinking: would it be acceptable if the calibration curve and display profile could be set via a dbus protocol instead of via wayland directly? This would also give some flexibility in implementation, and I think dbus already has some notion of privileged vs. non-privileged operations. It would mean, though, that we have to standardize a dbus protocol alongside the wayland protocol (we don’t want every compositor re-inventing the wheel for this).
I think this would give a better separation between the configuration of CM and CM itself.
‘simple’ compositors (like those based on wlroots, e.g. sway) could implement it inside the compositor, while more complex setups (where there is also a full DE running) could put it in a configuration daemon or something similar.
I agree that having a dbus protocol for display calibration and/or rendering intent makes a lot of sense. However, I don’t think it’s necessary to standardize that protocol alongside the wayland protocol. The reference implementation in weston should probably just use the existing static file configuration machinery.
The right time to get involved there is when Gnome Shell and KDE Plasma want to implement color management.
@gwgill One thing that might prove difficult when measuring a display is the mapping between the physical display and the advertised color space of a wayland output, which would be required for the null transformation.
I still think that bypassing the compositor altogether is a good idea especially since the verification step should make sure that everything works right in the end.
In any case, I don’t think measuring should be a blocker for a wayland color management protocol.
I have two issues with that. The first is that for calibration and profiling, the curves and the profile need to be set in real time; this can of course be done with static files (re-read on change), but that is not ideal. The second is that KDE, GNOME and others would each develop their own protocol, which would be a problem for tools like argyllcms, displaycal, etc.
Anyway, now that I have your attention: I would like to try to put some of this into an actual wayland protocol, but I’m having a hard time finding information on the xml format (what types can be sent over the wire, and so on). Is there any documentation for that?
AFAIK calibration always happens before profiling; profiling needs the calibration curves loaded, and after profiling we want to load the new profile.
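On the question about the xml format: Wayland protocol extensions are described in an XML file that wayland-scanner turns into C glue, and the wire types are limited to int, uint, fixed, string, object, new_id, array and fd. A minimal sketch of the shape (all interface, request and event names below are invented purely for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<protocol name="color_management_example">
  <!-- Hypothetical interface, purely illustrative -->
  <interface name="zwp_example_color_v1" version="1">
    <request name="set_icc_profile">
      <description summary="set an ICC profile for a surface"/>
      <arg name="surface" type="object" interface="wl_surface"/>
      <!-- ICC profiles are binary blobs; passing an fd avoids
           endianness and message-size concerns -->
      <arg name="icc_fd" type="fd"/>
    </request>
    <event name="preferred_colorspace">
      <arg name="output" type="object" interface="wl_output"/>
      <arg name="name" type="string" summary="well-known colorspace name"/>
    </event>
  </interface>
</protocol>
```

Running wayland-scanner over such a file generates the client- and server-side C headers.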
Well, OCIO is not the best choice for automated color conversion. OCIO doesn’t define any machine-readable, interchangeable color spaces. It is good for its main purpose: doing color conversion for the current display under total control of the user. OCIO is nice, but it is surely not usable for predefined “profiles”.
That is the main problem. We should either extend the ICC standard (+ LCMS?) to support HDR, or invent yet another format for profiles.
That’s exactly what I’m pushing for: by default the compositor should assume all apps just render in sRGB. 99% of developers will never know or care about color management. We must not expect app developers to care about that; it will never happen.
I have just checked: it looks like Windows’ ICM system has no connection to the DirectX surfaces API. I called SetICMMode() for an HDC, then created an sRGB surface with DXGI (and, later, scRGB), and DirectX didn’t do any conversion to the display profile. I don’t really know how it is expected to work. It just passes the data through directly to the display without any color conversions, even though I explicitly tell it the data is sRGB.
I’ll try to expand on it. Graeme’s point was that one cannot use an intermediate color space representation if one wants to get good rendering quality.
The point is, when a color management system converts data into the display’s color space, it can use different “rendering intents”. These intents define how colors outside the display’s gamut are fit into the destination space. The simplest way to fit two color spaces is to clip non-displayable colors, basically dropping them. More sophisticated approaches, like “perceptual”, compress the source color range (basically offsetting absolute color values) to fit all the source colors into the destination space. The user will not notice the offset (due to the eye’s adaptation processes), but the image will not have dull, flat-filled areas of non-representable colors.
So, if you use “intermediate color space” approach, then the only intent you can use is “clipping”, which creates bad results most of the time. If you want to use “perceptual” intent, you should convert directly from the source image color space to the display color space.
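A toy illustration of the difference, using a hypothetical one-dimensional “gamut” where the display tops out at 1.0 (nothing like a real CMM, just the shape of the two behaviors):

```python
def clip_intent(values, display_max=1.0):
    """Colorimetric clipping: out-of-gamut values are dropped to the
    gamut boundary; in-gamut values are left untouched."""
    return [min(v, display_max) for v in values]

def perceptual_intent(values, display_max=1.0):
    """Perceptual-style mapping: compress the whole source range so
    that every source value stays distinct inside the display gamut."""
    source_max = max(values)
    if source_max <= display_max:
        return list(values)
    scale = display_max / source_max
    return [v * scale for v in values]

# A gradient that exceeds the display's range:
src = [0.2, 0.8, 1.2, 1.5]
print(clip_intent(src))        # the two brightest steps collapse into 1.0
print(perceptual_intent(src))  # all four steps remain distinct
```

With clipping, the 1.2 and 1.5 steps become indistinguishable (the “dull flat-filled areas”); with the perceptual-style compression, gradation survives at the cost of shifting every absolute value.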
Yes, “sRGB-like” was just a short name for “a color space with primaries and EOTF something like in sRGB”
The point I wanted to make is unrelated, though: as far as I know, it is impossible to describe an HDR color space with an ICC profile. Though it needs some investigation.
Yes, exactly. GUI elements, like menus and buttons, are expected to be painted on sRGB surface, but the canvas with actual image data is expected to be painted on a separate surface with a different color tag.
I mean the app should have two surfaces: one with sRGB tag for GUI elements (for which the compositor does color management) and the other one for actual image data (compositor passes it directly to the display).
There is no color management for DXGI surfaces in Windows, even though MS claims there is. You create an sRGB surface, and DirectX passes the data directly to the GPU without any conversions. Calling SetICMMode for the corresponding HDC doesn’t do anything.
For the most part, I have absolutely no idea whatsoever what you Techie-Types are talking about when you’re down in the detail ditches. I do, however, “feel the love” now emanating from this ‘discourse’.
It’s very reassuring that, seemingly, we’re well on our way to avoiding the Colour Chaos which may have occurred had this collaborative discussion not taken place. Bravo and thank-you!
Agreed, it was just an example showing that ICC is not the only way to do it.
Agreed, if we can get ICC to support HDR that would simplify things quite a lot. It is actually quite interesting that ICC is in one way far too powerful (it supports color spaces with up to 16 channels, while we are only interested in 3-channel RGB) but on the other hand not powerful enough (no HDR support).
I’m not sure what you mean. AFAIK no CMM has explicit provision for mixing SDR and HDR profiles. In the case of the scRGB profile, I experimented with baking in a tone mapping curve so I could use it as an input profile with a standard CMM. For full flexibility some additions are needed when specifying link options to a CMM.
(Or are you talking about the luminanceTag ? It’s a standard ICC tag.)
If SDR to HDR brightness is specified in (say) cd/m^2, then the HDR luminanceTag would be used to compute the scaling factor.
Simplest is to scale white to a given brightness. HDR to SDR needs a tone curve.
TV HDR is currently pretty messed up, because to handle it intuitively it needs a known “diffuse white” reference value, but the standards the TV industry rushed out are based on a mastering absolute brightness specification. In a mastering suite you can specify a standard ambient light level and display absolute brightness, but in the real world people adjust their TVs to suit the ambient conditions. If there were a defined diffuse white, then the tone mapping could know to preserve linearity from that level down, while being free to compress specular highlights and light source levels much more aggressively. Mapping SDR to HDR is pretty simple then - map SDR white to HDR diffuse white. The way it seems to be shaping up in practice is that implementors are simply assuming something like 100 cd/m^2 is the diffuse white, while nothing in the standards actually specifies this.
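To make the “map SDR white to HDR diffuse white” idea concrete, a sketch assuming the 100 cd/m^2 diffuse white mentioned above (an assumption, since nothing in the standards specifies it):

```python
def sdr_to_hdr_luminance(sdr_linear, diffuse_white_nits=100.0):
    """Map a linear SDR value (1.0 = SDR reference white) to an absolute
    luminance in cd/m^2 by anchoring SDR white at the assumed HDR diffuse
    white.  Everything up to diffuse white stays linear; a real
    implementation would additionally tone-map highlights above it."""
    return sdr_linear * diffuse_white_nits

print(sdr_to_hdr_luminance(1.0))  # SDR white -> 100.0 cd/m^2
print(sdr_to_hdr_luminance(0.5))  # -> 50.0 cd/m^2
```

The interesting part is exactly what this sketch leaves out: everything above diffuse white (specular highlights, light sources) needs a compressive tone curve rather than the linear scale.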
I’ve taken some time to write a rough protocol of how I think things should work. It does ignore a few problems (they are described in the protocol description as FIXMEs).
The way the protocol works for the client is this: you listen to wl_surface.enter/leave and the preferred colorspace output event. You decide which colorspace to render to and tell that to the compositor.
The compositor has a bunch of surfaces with their colorspaces and if necessary does gamut mapping to convert them to the output colorspace.
From what I’ve gathered the general idea seems to be acceptable. Does anyone disagree?
Further, here are the issues that still have to be solved:
FIXME should the zwp_colorspace_v1::information event contain the
well-known name of a colorspace?
Right now the client has to infer from the ICC profile that
an output is e.g. sRGB.
FIXME should we accept all ICC profiles? Probably not.
FIXME how should the ICC profile get transmitted? fd passing or
as "array" (involves endianness issues)?
FIXME should we let the client attach a rendering intent hint to
the surface?
FIXME what about tone mapping?
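On the fd-vs-array FIXME: since ICC profiles are opaque binary blobs, fd passing sidesteps both endianness and wire message-size limits. A hypothetical shape for the information event (everything beyond the zwp_colorspace_v1 name from the draft is invented for illustration):

```xml
<interface name="zwp_colorspace_v1" version="1">
  <event name="information">
    <!-- optional well-known name, so clients need not parse the ICC
         profile just to learn that an output is e.g. sRGB -->
    <arg name="well_known_name" type="string" allow-null="true"/>
    <!-- the profile itself as an fd: mmap-able, no endianness issues,
         and not subject to the wire message size limit -->
    <arg name="icc_fd" type="fd"/>
    <arg name="icc_size" type="uint"/>
  </event>
</interface>
```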
Just took a quick look, but I think it is a bad idea for clients to set arbitrary ICC profiles as a color space. Firstly, ICC profiles support more than just RGB - also CMYK and color spaces described in up to 16 channels. Secondly, most programs that care about accurate rendering want to render for the display directly anyway (so those just want to know the display profile, and afterwards the compositor shouldn’t touch the buffers anymore). The programs that want to render wide gamut but don’t care about “perfect” accuracy are much better served by a way to ask what the compositor is capable of and then tag the surface.
Another thing I am worried about (though this might just be my understanding being incomplete) is that I can’t find any guarantee that a wl_output will map to one and only one display. (So I didn’t use it in the above.)
The question I have is what a suitable subset would look like. Maybe something like RGB Device Connection Space? I don’t know enough about ICC profiles.
They can just assign the colorspace they queried from the wl_output to the wl_surface.
I’m not sure if I understand you here. They can just create an ICC profile that describes their wide gamut colorspace and assign that to the wl_surface.
The compositor already has to be able to do conversions between two arbitrary color spaces (two displays with ICC profiles loaded; a surface tagged with the first display’s color space has to be shown on the second display). I don’t see why we should limit the color spaces to some arbitrary set of more or less popular ones.
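For what it’s worth, that conversion between two arbitrary matrix-style RGB spaces is just source-to-XYZ followed by XYZ-to-destination. A sketch using the standard linear-sRGB-to-XYZ (D65) matrix, with the return matrix obtained by numeric inversion (linear light only; real profiles also carry transfer curves, and a real CMM would apply a rendering intent rather than plain matrix math):

```python
# Standard linear-sRGB -> CIE XYZ (D65) matrix.
SRGB_TO_XYZ = [
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def invert3(m):
    """Invert a 3x3 matrix via the adjugate formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

XYZ_TO_SRGB = invert3(SRGB_TO_XYZ)

def convert(rgb, src_to_xyz, xyz_to_dst):
    """Linear RGB in the source space -> linear RGB in the destination
    space, via the XYZ connection space (the role a PCS plays in ICC)."""
    return mat_vec(xyz_to_dst, mat_vec(src_to_xyz, rgb))
```

Swapping in a second display’s primaries matrix for `XYZ_TO_SRGB` gives display-one-to-display-two conversion; nothing in the pipeline cares which two spaces they are.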
That’s actually a really good point. There are protocols which seem to make that assumption, but I’ll have to take another look.
This sounds technically feasible, but I’m not clear enough about the different related Wayland protocols (i.e. xdg_surface, xdg_shell, etc.) to have a feeling for what approach is appropriate. It would have to be a Wayland-like protocol over dbus, since there will be a lot of common elements (references to outputs, color profiles, surfaces, etc.).
Sure, but they are closely related. A profiler will make use of much of the color management protocol in its operation.
I don’t like the sound of using a daemon. Installing calibration curves needs a mechanism to know when they are installed, to facilitate reliable verification or high res calibration, where the calibration curves are changed dynamically with each patch measurement so as to be able to exploit the VideoLUT output bit depth.
The profiler needs to be able to dynamically load profiles & calibration curves to do its job, and there’s no point in creating a CM protocol if it can’t be configured and tested. A CM protocol without the APIs to calibrate, profile and install the profiles is only a half implementation, and simply isn’t worth doing.
This is a bad idea from many perspectives, but I won’t repeat my explanations from the Wayland list here.
That’s rather like writing a compositor, but never looking at the output on a real display - i.e. it’s an academic exercise.
Another way of putting this is that there is no point implementing a protocol extension if it is never tested, and the application that most fully exercises a color management protocol is the profiling application.
I’ve put a brain dump on a Wayland Color Management protocol here. It’s a rough set of ideas at this point, rather than something very formal. It will need a bit more research into the “Wayland way of doing things” to turn it into an .xml.