Wayland color management

color_management

#121

Okay, I have been working on this for a bit; if this works out I might start on a D-Bus protocol proposal.

@swick does this look sane? (If it does I might also post this to the Wayland mailing list.)


(Graeme W. Gill) #122

No, it was a request from a customer.

I’m not sure what you mean. AFAIK no CMM has explicit provision for mixing SDR and HDR profiles. In the case of the scRGB profile, I experimented with baking in a tone mapping curve so I could use it as an input profile with a standard CMM. For full flexibility some additions are needed when specifying link options to a CMM.
(Or are you talking about the luminanceTag ? It’s a standard ICC tag.)

If SDR to HDR brightness is specified in (say) cd/m^2, then the HDR luminanceTag would be used to compute the scaling factor.

Simplest is to scale white to a given brightness. HDR to SDR needs a tone curve.
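The scaling case above can be sketched in a few lines. This is a hypothetical illustration (the function and variable names are made up, not from any real CMM API), assuming the HDR profile's luminanceTag gives its peak white in cd/m^2:

```python
# Hypothetical sketch: compute the linear-light factor that places SDR
# reference white at a chosen diffuse-white level within an HDR
# profile's range, using the profile's luminanceTag (peak white).

def sdr_to_hdr_scale(diffuse_white_cd_m2, hdr_peak_white_cd_m2):
    """Factor that maps SDR white (1.0) to the requested diffuse-white
    luminance, relative to the HDR profile's peak white."""
    return diffuse_white_cd_m2 / hdr_peak_white_cd_m2

# Place SDR white at 200 cd/m^2 on a display whose HDR luminanceTag
# reports a 1000 cd/m^2 peak:
scale = sdr_to_hdr_scale(200.0, 1000.0)
rgb_hdr = tuple(c * scale for c in (1.0, 0.5, 0.25))  # linear-light SDR pixel
```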

TV HDR is currently pretty messed up, because handling it intuitively needs a known “diffuse white” reference value, but the standards the TV industry rushed out are based on a mastering absolute brightness specification. In a mastering suite you can specify a standard ambient light level and display absolute brightness, but in the real world people adjust their TVs to suit the ambient conditions.

If there was a defined diffuse white, then the tone mapping would know to preserve linearity from that level down, while being free to compress specular highlights and light-source levels much more aggressively. Mapping SDR to HDR is pretty simple then - map SDR white to HDR diffuse white. The way it seems to be shaping up in practice is that implementors are simply assuming something like 100 cd/m^2 is the diffuse white, while nothing in the standards actually specifies this.


(Sebastian Wick) #123

Sorry for taking so long to answer.

I’ve taken some time to write a rough protocol of how I think things should work. It does ignore a few problems (they are described in the protocol description as FIXMEs).

The way the protocol works for the client is this: you listen to wl_surface.enter/leave and the preferred colorspace output event. You decide which colorspace to render to and tell that to the compositor.

The compositor has a bunch of surfaces with their colorspaces and if necessary does gamut mapping to convert them to the output colorspace.
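The client/compositor flow described above can be simulated as a toy model. All class, event, and colorspace names here are purely illustrative stand-ins for the protocol objects, not actual Wayland symbols:

```python
# Toy simulation of the flow: the client tracks which outputs its
# surface is on (wl_surface.enter/leave), picks the preferred
# colorspace, and the compositor gamut-maps only when needed.

class Output:
    def __init__(self, name, preferred_colorspace):
        self.name = name
        self.preferred_colorspace = preferred_colorspace

class Surface:
    def __init__(self):
        self.outputs = set()        # tracked via wl_surface.enter/leave
        self.colorspace = "sRGB"    # what the client tells the compositor

    def enter(self, output):
        self.outputs.add(output)
        # Client policy: render to the preferred colorspace of the
        # output the surface landed on.
        self.colorspace = output.preferred_colorspace

    def leave(self, output):
        self.outputs.discard(output)

def composite(surface, output):
    # Compositor side: convert only when colorspaces differ.
    if surface.colorspace == output.preferred_colorspace:
        return "noop"
    return f"gamut-map {surface.colorspace} -> {output.preferred_colorspace}"

wide = Output("DP-1", "Display P3")
surf = Surface()
surf.enter(wide)
print(composite(surf, wide))                      # no conversion needed
print(composite(surf, Output("HDMI-1", "sRGB")))  # conversion on second output
```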

From what I’ve gathered the general idea seems to be acceptable. Does anyone disagree?

Further, here are the issues that still have to be solved:

FIXME should the zwp_colorspace_v1::information event contain the
      well-known name of a colorspace?

      Right now the client has to infer from the ICC profile that
      an output is e.g. sRGB.

FIXME should we accept all ICC profiles? Probably not.
      (colorspace_from_icc)

FIXME how should the ICC profile get transmitted? fd passing or
      as "array" (involves endianness)?

FIXME should we let the client attach a rendering intent hint to
      a surface?

FIXME what about tone mapping?

Any input would be appreciated!

@gwgill @dutch_wolf @Dmitry_Kazakov


#124

Just took a quick look, but I think it is a bad idea for clients to set arbitrary ICC profiles as the color space. Firstly, ICC profiles support more than just RGB - also CMYK, and color spaces described in up to 16 channels. Secondly, most programs that care about accurate rendering want to render for the display directly anyway (those just want to know the display profile, and afterwards the compositor shouldn’t touch the buffers anymore). The programs that want to potentially render wide gamut but don’t care about “perfect” accuracy are much better served by a way to tell them what the compositor is capable of, and then tag the surface.

I think something like this makes more sense: https://github.com/eburema/wayland-colormanager/blob/master/cm_wayland_protocol.xml

Another thing I am worried about (but this might just be my incomplete understanding) is that I can’t find any guarantee that a wl_output will map to one and only one display. (So I didn’t use it in the above.)


(Sebastian Wick) #125

I agree. That’s the second FIXME.

The question I have is what a suitable subset would look like. Maybe something like RGB Device Connection Space? I don’t know enough about ICC profiles.

They can just assign the colorspace they queried from the wl_output to the wl_surface.

I’m not sure if I understand you here. They can just create an ICC profile that describes their wide gamut colorspace and assign that to the wl_surface.

The compositor already has to be able to convert between two arbitrary color spaces (two displays with ICC profiles loaded; a surface with the first color space has to be displayed on the second display). I don’t see why we should limit the color spaces to some arbitrary, more or less popular ones.

That’s actually a really good point. There are protocols which seem to make that assumption, but I’ll have to take another look.


(Graeme W. Gill) #126

This sounds technically feasible, but I’m not clear enough about the different related Wayland protocols (i.e. xdg_surface, xdg_shell etc.) to have a feeling as to what approach is appropriate. It would have to be a Wayland-like protocol over dbus, since there will be a lot of common elements (references to outputs, color profiles, surfaces etc.)

Sure, but they are closely related. A profiler will make use of much of the color management protocol in its operation.

I don’t like the sound of using a daemon. Installing calibration curves needs a mechanism to know when they are installed, to facilitate reliable verification or high res calibration, where the calibration curves are changed dynamically with each patch measurement so as to be able to exploit the VideoLUT output bit depth.

The profiler needs to be able to dynamically load profiles & calibration curves to do its job, and there’s no point in creating a CM protocol if it can’t be configured and tested. A CM protocol without the APIs to calibrate, profile and install the profiles is only a half implementation, and simply isn’t worth doing.

This is a bad idea from many perspectives, but I won’t repeat my explanations from the Wayland list here.

That’s rather like writing a compositor, but never looking at the output on a real display - i.e. it’s an academic exercise.

Another way of putting this is that there is no point implementing a protocol extension if it is never tested, and the application that most fully exercises a color management protocol is the profiling application.


(Graeme W. Gill) #127

I’ve put a brain dump on a Wayland Color Management protocol here. It’s a rough set of ideas at this point, rather than something very formal. It will need a bit more research into the “Wayland way of doing things” to turn it into an .xml.


(Graeme W. Gill) #128

I don’t see any problem with clients uploading their own 3-channel profiles, and in fact this is expected in an environment where the compositor does default color management. It’s not reasonable to expect the compositor to come pre-loaded with every possible 3-channel source colorspace an application may want to default to. Say an application was written for the old Apple Mac gamma 1.8 display profiles, and all its graphic elements, icons, images etc. are in this space. Without being able to upload this profile, it has to do color correction itself - something it may not be required to do for any other reason (i.e. if it’s not a color managed application in any other way).


(jo) #129

Just to be sure: are you proposing to upload ICC profiles (including the encoding, idiosyncrasies about bit depth, D50 white point etc.), or an abstract transform in a form that has fewer dependencies (i.e. float* to a matrix or tone curve if really need be, or some form of LUT for extreme cases)? I know sometimes you don’t have an ICC but just the numbers (e.g. in an OCIO setting), and how painful it can be to construct an ICC from that.


(Graeme W. Gill) #130

ICC profiles. There is nothing else that is widely standardized and has tools available for creating display profiles. (i.e. anything else is making a lot of work for little gain.)

OCIO is severely lacking if it can’t convert an ICC display profile into OCIO format, so that it is able to do application color management.


#131

Good point

There are, I think, two issues here. One is that currently ICC transforms aren’t HW accelerated and can’t be applied to OpenGL buffers (no possibility for shaders)[1]; providing a select few popular spaces means those can be hard-coded as shaders, making HW-accelerated rendering possible - without this, OpenGL buffers will be slow. The second is that not everything is ICC: for example, video players that want to render HDR[2], another being anything that uses OCIO - although most of those should do the wl_output-queried-to-wl_surface dance without doing anything with the queried color space themselves (for now at least).

OCIO shows its origin in the movie industry here, since it assumes everybody runs expensive monitors that can be calibrated (with internal monitor LUTs) to be perfect sRGB/DCI-P3. It is possible to use display profiles, but you will need something like collink to render a device link to a LUT (for example sRGB to display). I think they are trying to improve this for the next major version, but I couldn’t find any documentation as of yet.


[1] There is no theoretical limit that prevents this but none of the current libraries/systems provide this.
[2] As far as I know no video format supports embedded ICC profiles.


(Sebastian Wick) #132

The way I see it we have two possibilities:

  1. use a set of well-known colorspaces (tags)
  2. allow the client to define its own colorspaces

The first possibility has the problem that clients can’t just pass in arbitrary colorspaces and might have to do color conversion on their own (e.g. a display with wide gamut and a measured ICC: the compositor only advertises sRGB, so the client has to convert its image to the ICC itself if it wants wide gamut).

The second has the problem that the colorspace has to be communicated somehow. Creating our own format can result in really stupid mistakes and requires new tools and libraries, reusing an existing standard also has drawbacks (which standard to choose, which subset of the standard is valid for the use-case).

No solution seems particularly good.

I don’t think that’s an issue.

There are three levels:

  1. No conversion required (noop)
  2. The color conversion pipeline can be described by a bunch of matrices (easy for shaders and usable for the hardware)
  3. All other pipelines can be applied to some values on the CPU that are later used to form a 3D LUT on the GPU (can’t be offloaded to the display hardware)

(ignoring EOTF/OETF, but that’s not a problem either)
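The matrix case (level 2) is worth making concrete: two matrix/shaper profiles pre-compose into a single 3x3 matrix, which is trivial to apply per pixel in a shader. The matrix values below are made-up placeholders, not real profile data:

```python
# Level 2 sketch: pre-compose source->XYZ and XYZ->display matrices
# into one matrix, so the per-pixel work is a single 3x3 multiply.

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply3(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

src_to_xyz = [[0.4, 0.3, 0.2],
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]]          # source RGB -> XYZ (placeholder)
xyz_to_dst = [[ 2.0, -0.5, -0.3],
              [-0.9,  1.8,  0.0],
              [ 0.1, -0.2,  1.2]]       # XYZ -> display RGB (placeholder)

# Pre-compose once; per pixel, only one matrix multiply remains.
src_to_dst = matmul3(xyz_to_dst, src_to_xyz)

pixel = [0.5, 0.25, 0.75]
one_step = apply3(src_to_dst, pixel)
two_step = apply3(xyz_to_dst, apply3(src_to_xyz, pixel))
# one_step and two_step agree up to floating-point rounding
```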


(Sebastian Wick) #133

Ignoring the whole measurement part (I completely agree with Pekka on this one https://lists.freedesktop.org/archives/wayland-devel/2019-January/039916.html) I do believe we’re generally on the same page.

I do however have a question: you’re talking about source and destination ICC intents and I have only read about a single intent for a color space transformation. Can you expand on that?

Another point that needs an answer is how to handle HDR/tone mapping.


#134

The more I think about it, the more I realize we might need to do both: a set of known color spaces that a compositor should/could support (I don’t think mandating anything is the right idea), plus support for uploading ICC profiles for anything “weird”. If we go that route, HDR would only be supported if rendering directly to the display (provided the display supports HDR) or using one of the known spaces (provided the compositor supports it; in this case I would allow tone mapping to be used for LDR outputs). When ICC profiles support HDR (I suspect it is more a case of when than if here) we can change this requirement, of course.

Point taken - that is why I said currently. Thinking a bit more about it, it should be possible to implement this on top of LCMS (build a LUT for the image transform with LCMS, then upload the LUT to the GPU for use in a shader). Might make for an interesting project, tbh.
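The "bake a LUT on the CPU, sample it on the GPU" idea can be sketched like this. The transform function is a dummy placeholder standing in for whatever conversion LCMS would produce; in a real implementation the baked grid would be uploaded as a 3D texture and sampled with trilinear filtering in a fragment shader:

```python
# Sketch: sample an arbitrary CPU-side color transform on a regular
# grid to produce a 3D LUT suitable for GPU sampling.

def transform(r, g, b):
    # Placeholder for an arbitrary (possibly non-linear) conversion,
    # e.g. one evaluated via an LCMS-built link.
    return (r ** 2.2, g ** 2.2, b ** 2.2)

def bake_3d_lut(transform, size=17):
    """Sample the transform on a size^3 grid over the unit RGB cube."""
    step = 1.0 / (size - 1)
    return [[[transform(r * step, g * step, b * step)
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

lut = bake_3d_lut(transform, size=17)
# lut[i][j][k] holds the converted color for input (i, j, k) / 16
```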


(Graeme W. Gill) #135

That’s up to the compositor implementation. There’s certainly hardware-accelerated color transform code around (OpenEXR has some examples, “GPU Gems 2” has an example, and Kai-Uwe Behrmann implemented HW acceleration in his LibXcms and the compiz plugin), so it’s just a matter of setting it up and figuring out how to implement the result of the lcms ICC link using the GPU. I’d imagine a couple of different implementations: one for where the source and destination are both matrix/shaper, and one for when the link can only be described by a cLUT.

ICC is the accepted way of describing display color response. It’s not that hard to create ICC profiles for standard display spaces (I provide a bunch with ArgyllCMS). I’m happy to provide more if there happens to be some standard space that doesn’t have one available, and lcms has code designed to make this easy as well.

The people using Academy Color Encoding System (ACES), are certainly able to import ICC profiles into their system (they were playing with some ArgyllCMS links at one stage), and if OCIO doesn’t have support, it really shouldn’t be too hard to add, and should be high up the list of priorities.

Right, but there’s nothing unusual in that, and it’s not directly an issue, because it’s up to the color managed application to wrangle the source colorspace.

Color-managed hardware video decoding under the Wayland compositor? Well, that’s a different kettle of fish, and it’s very hardware dependent. At least with a display profile for each output, the information is there to make it possible, assuming that the hardware has some provision for it.


(Graeme W. Gill) #136

I disagree. All operating systems that have color management use ICC profiles for this purpose. So all applications that are written for a color managed operating environment make use of ICC profiles to communicate the display device response. There is zero reason to create something new here.


(Graeme W. Gill) #137

We’re not on the same page then.

In an ideal world there would be a single intent for a transform (see the ArgyllCMS collink Gamut Mapping Mode for an example of this). The ICC attempted to make it practical to pre-compute intent transformations, to allow fast mixing and matching of these pre-computed intents, so device <-> PCS intent transformations are baked into each device ICC profile. In the real world this is only a partial success, because it has fundamental limitations (you can’t divide some intents up into two pieces where one doesn’t know about the other).

Putting that aside, it means that there are two intents involved: one for the source profile and one for the destination. 99% of the time you would set these to be the same. 1% of the time there is a rendering result you would like that you can achieve by mixing the intents. Classic example: you would like to do a soft-proof rendering of a print onto a display that represents the print paper color, but you don’t actually want to render the literal paper color - you want to render it relative to the display white. So you would select absolute intent on the source, and relative on the destination.
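The soft-proof example can be made concrete with a toy calculation. The XYZ values are made up, and the white mapping is a crude per-channel scale (a real CMM adapts in a cone space), but it shows the difference between the two source intents:

```python
# Toy sketch of absolute vs. relative source intent for soft proofing.

pcs_white = (0.9642, 1.0000, 0.8249)   # ICC PCS white (D50, XYZ)
paper_white = (0.82, 0.85, 0.71)       # measured print paper white (XYZ, made up)

def source_to_pcs(xyz, intent):
    if intent == "absolute":
        # Keep the measured colorimetry: paper white stays off-white in
        # the PCS, so the proof shows the paper tint on the display.
        return xyz
    # Relative: scale so that paper white lands exactly on the PCS white,
    # i.e. the paper color is discarded.
    return tuple(c * w / p for c, w, p in zip(xyz, pcs_white, paper_white))

tinted = source_to_pcs(paper_white, "absolute")   # keeps the paper tint
neutral = source_to_pcs(paper_white, "relative")  # maps onto pcs_white
```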

I don’t see that a great deal more explanation is needed for a general approach: if the source is SDR and the destination is HDR, scale the SDR white to be the declared diffuse white in HDR space. If the source is HDR and the destination is SDR, apply a tone mapping curve (see for instance High Dynamic Range Imaging). Since it’s a fallback transform, it doesn’t have to be anything special - just a reasonable choice, and perhaps it could be automatically adjusted for the declared diffuse white level. (An elaboration would be to allow upload of tone mapping curves.) API required: declaring which profiles are to be treated as HDR, and declaring a diffuse white for each of them.
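As one purely illustrative shape for such a fallback HDR-to-SDR curve: linear from the declared diffuse white down, with Reinhard-style compression of everything above it. The curve and the 0.9 knee position are arbitrary choices for the sketch, not taken from any standard:

```python
# Fallback HDR -> SDR tone curve sketch: preserve linearity below the
# declared diffuse white, compress highlights asymptotically above it.

def tone_map(luminance, diffuse_white=100.0, knee=0.9):
    """Map HDR luminance (cd/m^2) to relative SDR output in [0, 1]."""
    x = luminance / diffuse_white      # 1.0 == declared diffuse white
    if x <= 1.0:
        return knee * x                # linear region below diffuse white
    # Reinhard-style compression of highlights into the remaining headroom.
    return knee + (1.0 - knee) * (x - 1.0) / x

mid = tone_map(50.0)        # linear region
top = tone_map(10000.0)     # heavily compressed highlight, stays below 1.0
```

The curve is continuous at the knee (diffuse white maps exactly to 0.9) and approaches but never reaches 1.0, so even extreme light-source levels survive without clipping.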


#138

Yes, I agree now - I was overthinking this without fully understanding it (see above as well).

Agreed. In hindsight this should have been more of an argument to also support some standard (named) color spaces for applications that don’t care as much (certain video players, some basic image viewers); the rest should either try to render for the display or be able to provide an ICC profile.

I only have information for Blender (https://en.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling), where it is still a rather manual process. But as I said, current master supports loading ICC display profiles (see here: https://github.com/imageworks/OpenColorIO/blob/master/src/OpenColorIO/fileformats/FileFormatICC.cpp).

The reason it is so hard to add is that in OCIO everything is described in reference to a scene-linear color space (in contrast to a PCS as in ICC), and this scene-linear space can be anything (for example sRGB primaries with linear gamma for Blender, or ACES2065-1 for ACES). On one hand this makes it quite powerful in describing the exact color workflow a studio needs; on the other, it means that any transform not described in the config is not really possible.


(Dmitry Kazakov) #139

That was exactly what I was wondering about. When I tested, LCMS2 failed to do the conversions correctly from p2020-pq to p2020-linear. So I’m not sure how many changes are needed there.

There is a small problem with such a mapping: in SDR mode Windows maps diffuse white to 80 cd/m^2, not to 100 cd/m^2. And in HDR mode this value is configurable, but is not available to normal desktop applications’ API (yet); it is present only in the UWP environment.


(Dmitry Kazakov) #140

Another point that needs an answer is how to handle HDR/tone mapping.

I don’t see that a great deal more explanation is needed for a general approach: If the source is SDR and the destination is HDR, scale the SDR white to be the declared diffuse white in HDR space.

I would only add two requirements here:

  • Diffuse white level in HDR mode must be configurable by the user
  • Configured diffuse white level should be available to the applications via API.

The second point is needed in case an application renders GUI and an HDR image on the same surface. Windows implements only the first requirement, and the second one is not available in the normal API (yet).
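The second requirement can be illustrated with a toy compositing step: when SDR GUI and HDR content share a surface, the application needs the configured diffuse white to place GUI white correctly. The function name and the 203 cd/m^2 default are illustrative only (203 is just a plausible reference-white choice, not a value from any API mentioned here):

```python
# Sketch: place SDR GUI pixels at the user-configured diffuse white
# before mixing them with HDR content on the same surface.

def compose_pixel(rgb, is_sdr_gui, diffuse_white=203.0):
    """Return linear-light output in cd/m^2 for one pixel."""
    if is_sdr_gui:
        # SDR GUI pixel in [0, 1]: 1.0 maps to the configured diffuse white.
        return tuple(c * diffuse_white for c in rgb)
    # HDR pixel already carries absolute luminance in cd/m^2.
    return rgb

gui_white = compose_pixel((1.0, 1.0, 1.0), is_sdr_gui=True)
highlight = compose_pixel((800.0, 790.0, 810.0), is_sdr_gui=False)
```

Without access to the configured diffuse white, the application would have to guess this scale factor, and its GUI white would not match other SDR content on screen.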