Wayland color management

That’s my tentative conclusion as well. The thing is that implementing the analogous thing in CM is more complex: rather than the compositor simply scaling pixels (something it has been doing for a long time for window “chrome”), it needs to do colorspace conversions. An added complexity is that current CMMs (like lcms) don’t have the tweaks needed for mixing HDR and normal displays (I’m guessing some extra ‘intent’ parameters will be needed). Successful heuristics for HiDPI mightn’t have analogs for CM. (If the surface is being displayed on more than one output, the client can simply render for the output with the highest DPI and the compositor will scale the pixels down for the other outputs. For CM, which is your ‘high quality’ output and which ones are not, e.g. in a mirrored display setup?) And it’s one thing to have an extension protocol spec; it’s quite another to get implementations to follow it, especially when they are possibly using a lot of different technologies for compositing, none of which currently supports CM color conversions out of the box.

I suspect this is the main technical issue. I gather that the whole technical premise of the Wayland (and possibly other) compositors is that you can copy application RGB to any output with gay abandon, when we know that in reality this is not possible without wrecking color. Adding in the necessary color conversions will really cramp the style of how the compositor currently operates.

Yep - that’s the open source way. You can’t get “other” people to do stuff, you have to do it if you want it done. Either that or they have to suddenly be inspired to help :slight_smile:

I suppose that we can’t all be optimists! Yet if we were all pessimists, our ancestors would never have embraced the world beyond the maw of their caves. Balance is good.

Regardless of how daunting it may seem, this is not a futile adventure, predestined to fail. Rather, it’s pretty basic stuff which happens all of the time. For example, think of the folks out in the streets tasked with winning contracts for the products or services offered by your organisation. If they shied away from similar obstacles whenever anybody vehemently said “no!”, you and your colleagues (and families) would soon go hungry. But you’re not hungry, are you? That’s because these competent professionals, acting on your behalf, are constantly overcoming objections and converting Luddites by employing proven basic techniques in a properly managed manner.

It’s easy. Anybody can do it. You just have to put aside fear brought on by baseless preconceptions. And then get on with it…

Can’t do it by wishing. We need to help them find that motivation :tada:

Sorry, those examples will be regarded as of little relevance, since they primarily involve printing, and we’re talking about displays. A better start is something similar to the Android O Color Management talk referred to previously.

Morning!

Er… I still believe that my suggestions are valid :slight_smile:
I did not mean to bring in “printed products” – but in the workflows of today’s ad agencies, design shops &c there are lots of displays. And if it isn’t right there…

Sorry - I misunderstood you. Yes, soft proofing, design etc. are good examples.


I just created an account here to join the discussion. I’m personally coming more from the wayland/hardware side of things and while I think I have a reasonable understanding of color I do not know about the practical side of color management at all.

Bringing good color management to wayland is a great goal to have and I’m sure every party is interested in getting there.

Having said that, I also think that especially @gwgill is pushing really hard to maintain the X11 model for color management and isn’t open to working within the constraints that wayland has. A more open mind for new ideas and new workflows could really help.

Another gripe I have with this discussion is the focus on solutions and implementation rather than use-cases and concepts. In particular I’m never sure what part of color management would not work (well) with a particular concept.

I think it would be useful if further discussion ignored performance, what kind of hardware is used to perform a calculation, and other technical details, and instead focused on what kind of pipeline (what data is required and which calculations have to be performed at what stage) would be required.

So, having said all of that, I have a few questions:

  1. What exactly is required for a color to be shown correctly? My current understanding is that the output image has to be in the same color space as the display (which includes primaries, white point and EOTF) and the (measured) color correction function (or LUT or whatever; let’s simplify and call everything a function) has to be applied to every pixel. Is this correct? (See the sketch after these questions.)

  2. In particular which workflows/use-cases do not work with the current proposal in the wayland-devel mailing list?
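
To pin down what I mean in question 1, here is a minimal numpy sketch of that pipeline, under the assumption that the display profile reduces to a 3x3 matrix plus per-channel calibration curves (all names here are hypothetical, not a real protocol):

```python
import numpy as np

# Hypothetical 3x3 matrix taking the buffer's primaries to the display's
# native primaries; a real one would come from the display profile.
BUF_TO_DISPLAY = np.eye(3)

def srgb_eotf(v):
    # sRGB-encoded -> linear light.
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_inverse_eotf(v):
    # Linear light -> sRGB-like encoding (assuming an sRGB-ish display EOTF).
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

def to_display(pixels, calibration_lut):
    """pixels: (..., 3) sRGB-encoded; calibration_lut: (3, 256) per-channel
    correction measured for this display (e.g. from its 'vcgt' tag)."""
    linear = srgb_eotf(pixels)                    # 1. linearize the source
    native = linear @ BUF_TO_DISPLAY.T            # 2. rotate into display primaries
    encoded = srgb_inverse_eotf(np.clip(native, 0, 1))  # 3. re-encode for display
    idx = np.round(encoded * 255).astype(int)     # 4. apply calibration curves
    return np.stack([calibration_lut[c][idx[..., c]] for c in range(3)], axis=-1)
```

Whether the compositor runs steps 1–4 in a shader or hands parts of it to the display hardware is exactly the kind of implementation detail I’d like to ignore here.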

Furthermore, a few opinions:

  1. You should ignore color measurement for wayland. Realistically it requires full control over the display you want to measure. Wayland (and Xorg) can lease out full access to a display temporarily (this is mainly used by VR) and a color measurement application would then talk to the display using KMS. This problem doesn’t have to involve wayland at all.

  2. A color management system will be required in the compositor, period. Arguing that it is “too complex” or “too big” is not helping and not really true, given that we already have huge GUI toolkits, OpenGL drivers which include a JIT compiler, and often even JavaScript runtimes in those processes.

  3. Compromising on security is a no-go. Someone mentioned that wayland has had a security/permission/privilege protocol proposed but that never gained traction. Most of the time it made sense to not use wayland at all and have a dbus interface (like for screen recording, monitor configuration, etc) instead. I suspect that users will configure color management like that, too (iirc colord already does it that way).

I hope this doesn’t come across as too combative. I’m just trying to learn more about color management and at the same time trying to convey the viewpoint and constraints of the wayland side.


Hi, all!

I’m a Krita developer who is actually implementing HDR color management support for Krita on Windows at the moment [0]. Just wanted to add my points:

  1. The idea of offloading color management from the compositor to the apps is surely a failing approach. Windows does it, and that is the reason why it is hated by most graphics people. The pain starts as soon as you buy a wide-gamut display: the system GUI and all non-managed apps start looking awful. Just google “windows wide gamut display colors” and see tons of questions without answers: Windows just doesn’t support color management. Only a few clever graphical apps will show you correct color. All other apps will have acid-looking colors. And that is the reason why Apple has become a de-facto standard for people in graphics (I’m not a fan of Apple myself, though).

  2. HDR is another reason why the compositor must do color management. On a single screen there might be different types of surfaces, e.g. a legacy SDR window might be painted next to an HDR video surface. If your display is HDR capable, then the SDR surface should be converted from sRGB to Rec2020-PQ (which is the color space of the display); if the display is not HDR capable, then, in reverse, the HDR surface should be converted to sRGB. This conversion cannot be done by the app itself, just because one app cannot influence the buffers of other apps. It can only be managed by the compositor. (See the sketch after this list.)

  3. And the trickiest thing: applying the display’s (ICC) profiling curves should happen after(!) this conversion! When an app writes to a Rec2020-PQ buffer, it has no idea that this buffer will be converted for a non-HDR sRGB-like (or AdobeRGB-like) display, so it just physically cannot prepare the data accordingly.
    The last point is exactly the thing that Windows’ DXGI “color management” API fails to do. To do proper color management on Windows, the developer should basically disable this API by marking a buffer as “sRGB” (even though the display is wide gamut) and prepare all the data themselves.

  4. [technical comment on the original requirements proposal from @dutch_wolf] The compositing should not always happen in Rec2020-linear. Most of the time it is useless (and eats resources). Composition should happen in the actual color space of the display in use. If one uses an sRGB 8-bit display, there is no reason to do the composition in Rec2020. Just compose in sRGB 8-bit or (if really needed for some effect reasons) in Rec709-linear 16-bit.
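
To make point 2 concrete, here is a rough numpy sketch of the SDR-on-HDR conversion, assuming BT.2100 PQ and mapping SDR reference white to 203 cd/m² (the ITU-R BT.2408 suggestion; a real compositor may choose differently):

```python
import numpy as np

# BT.709/sRGB primaries -> BT.2020 primaries in linear light (per ITU-R BT.2087).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

# SMPTE ST 2084 (PQ) constants.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(nits):
    """Absolute luminance in cd/m^2 -> PQ-encoded signal in [0, 1]."""
    y = np.clip(nits / 10000.0, 0.0, 1.0)
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

def sdr_to_rec2020_pq(srgb, sdr_white_nits=203.0):
    """(..., 3) sRGB-encoded SDR pixels -> PQ-encoded Rec.2020 pixels."""
    linear = np.where(srgb <= 0.04045, srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)    # sRGB EOTF
    rec2020 = linear @ M_709_TO_2020.T                    # change of primaries
    return pq_inverse_eotf(rec2020 * sdr_white_nits)      # anchor SDR white
```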

[0] - dmitryK's blog: Krita Fall 2018 Sprint Results: HDR support for Krita and Qt!


hi sebastian,

thanks for taking the time to show up here. as a user space program developer, let me add my two cents which might partly answer your questions.

i’m very happy to ignore colour management implementation details, hardware or implementation constraints. in fact if that “just worked” so i wouldn’t have to worry about it that would be great. some of the more annoying details have been mentioned above:

  • multi screen environments and managing profiles per screen (i don’t want to do it if the compositor can handle it)
  • loading icc profiles, supporting other sources of colour space conversions, etc

about your questions:

re: 1. we can provide profiled buffers in tristimulus. let’s say that is a floating point buffer with unbounded CIE XYZ (illuminant E whitepoint) data. to convert that into display RGB you don’t need a lut. going from tristimulus to display RGB can be expressed in a matrix, this side of things is linear (camera RGB to XYZ not so much, because the original input is spectral and not three dimensional). you may need a shaper curve after the matrix, but yeah, this is by no means complex or difficult, or “too big”. it does become a lut if you do perceptual gamut mapping to compress wide gamut colours to sRGB output for instance. for my part i’m not a huge fan of this, i like correct colours.

all you need to start managing colour is any buffer in any precisely defined tristimulus space. the linear ones are all a matrix multiply away from each other, given that the input is not clamped (one of the reasons i’d prefer a way to hand over a floating point buffer). if the colour space is a tag on the input buffer, or everything has to be rec2020 or XYZ… i’ll leave that up to you. even linear rec709 (sRGB without tone curve) is fine if you don’t clamp at [0,1].
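
to illustrate, a tiny numpy sketch of the matrix-plus-shaper idea (the XYZ-to-sRGB matrix here is just a stand-in; a real compositor would derive its matrix from the display profile, including adaptation from the buffer’s illuminant E white to the display white):

```python
import numpy as np

# stand-in matrix: CIE XYZ (D65) -> linear sRGB.
XYZ_TO_DISPLAY = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_display(xyz, shaper=lambda v: np.clip(v, 0.0, None) ** (1 / 2.2)):
    """unclamped float XYZ in, display-encoded RGB out: one matrix multiply
    plus an optional per-channel shaper curve."""
    return shaper(xyz @ XYZ_TO_DISPLAY.T)
```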

re 2. can’t speak to that, have to read the thread first.


Hi, Sebastian!

Replying to your call for usecases, I can tell what Krita would actually expect from a compositor:

Ideal case:

  1. The display color space can be sRGB-like, AdobeRGB-like, or p2020-pq. The first two can be defined by an ICC profile; the last one needs some special treatment.
  2. The entire window is painted on an sRGB-8bit surface that is converted by the compositor into the color space of the display.
  3. In the middle of the window there is an overlay pass-through surface for the canvas. This canvas is explicitly color-corrected into the display color space by Krita itself. The compositor just passes it through to the display.
  4. We don’t need any opacity support for this pass-through surface. So there is no need to delinearize the colors for compositing. It can be blit directly in 8/10 bits.
  5. The compositor should provide Krita with information about the actual color space of the display (an ICC profile, or a tag for PQ-style color spaces not representable in ICC terms), so we can prepare colors for such pass-through surfaces.

What we do right now on Windows for HDR:

  1. We create a p2020-pq surface for the entire window.
  2. We patch Qt to render its sRGB GUI elements correctly on this surface.
  3. We draw the canvas directly on the window surface.
  4. Since Windows does not support any color management, this approach is not properly color managed for wide-gamut displays. If the user has a wide-gamut but non-HDR display, our p2020-pq surface cannot account for it. Windows’ compositor will automatically convert it into sRGB-8bit and pass this data directly to the display, basically ruining the colors.
  5. For the HDR case the topic of “color management” is a bit fuzzy. Theoretically, display manufacturers should treat it somehow in hardware, but I don’t know any real details about that.

What we do right now on Windows for non-HDR:

  1. We create an 8-bit sRGB surface for the entire window. Basically, it disables any so-called “color management” from Windows’ side.
  2. We have a profile for the display, so we just color-correct canvas data before passing it to the surface.
  3. Technically, it means that our canvas is color managed, but the GUI is not. GUI will still have acid-looking colors on wide-gamut displays.

In general I think what you’d want is possible. The only problem is that in a multi-monitor setup you would not know “the display color space”. As some people have mentioned before, it is possible to have an enumeration of the different display color spaces and ideally a “primary” hint from the compositor. You would then have to choose which color space to use, and your surface could end up on all displays.

I think that’s the real problem here but I also think that simply choosing the (subjectively) best colorspace in the client and letting the compositor convert it to the other display color spaces and do tone mapping would work out just fine.

Is there any reason why that would not work? Ignoring the overhead, is there any drawback?


I think we would prefer two options:

  1. Lazy one: just let the app select p2020-linear or p2020-pq and automatically convert it into the proper color space of the display in the compositor.

  2. Full one: let the app track display movements and manage the color space accordingly.

I guess we could use the second approach as a fall-back for the users who have bugs in the compositor or something like that. Or for efficiency reasons.

Normal apps with lighter demands for rendering speed/quality will always use the first approach. Professional apps will use the second one.


I don’t follow here. Can you explain what you mean in more detail?

I mean that in some cases the app may decide to do full color management pipeline for a buffer. Like we do it right now in Krita. We track movements of the window and, when the window changes the display it is painted on, change the assigned profile manually. That is a lot of work and the process is error-prone. So most of the apps will not use it. But we have to, because of speed and quality reasons.

For example, imagine we have an 8-bit image, which is tagged with some custom srgb-like profile (e.g. p709-gamma2.2, which is not, strictly speaking, sRGB). Our display is a calibrated and profiled srgb-like device. So to render that on screen we have two options:

a) Lazy one (your suggestion):

  1. Create an 8-bit surface, tag it with sRGB color space.
  2. Convert our “custom srgb” into standard sRGB
  3. Upload the data into the surface
  4. Compositor will convert standard sRGB colors into the color space of the display
  5. Result: basically, we need 2 color space conversions on 8-bit data. It will cause both huge rounding errors and performance drops. (See the sketch below.)

b) Full one (for professional applications):

  1. Create an 8-bit surface, tag it as pass-through
  2. Convert the image data to the display color space. All the calculations are done in the color engine, which is 32-bits at least.
  3. Upload the data into the surface
  4. Compositor will just render the data directly

c) Add a color engine to the compositor (just an idea I got right now)

  1. Create an 8-bit surface, tag it with custom profile (drawback: HDR color spaces have no profiling support at the current moment)
  2. Upload the data
  3. Let the compositor do the conversion, right in a shader during rendering.

The latter approach sounds nice, but I’m not sure it’ll be a “solution that fits everybody”. I guess the pass-through mode should still be kept.
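
To illustrate the rounding errors from option a), here is a small self-contained numpy experiment (transfer curves simplified to make the point; the exact numbers are not important):

```python
import numpy as np

src = np.arange(256) / 255.0  # an 8-bit gradient in a "custom gamma 2.2" space

def quant8(v):
    return np.round(np.clip(v, 0, 1) * 255) / 255

def srgb_encode(lin):
    return np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)

def srgb_decode(enc):
    return np.where(enc <= 0.04045, enc / 12.92, ((enc + 0.055) / 1.055) ** 2.4)

# a) lazy: re-encode into a standard sRGB 8-bit buffer, then let the
#    compositor convert that to a gamma-2.4 display: two 8-bit quantizations.
intermediate = quant8(srgb_encode(src ** 2.2))
lazy = quant8(srgb_decode(intermediate) ** (1 / 2.4))

# b) full: one float-precision transform, quantized once at the very end.
full = quant8((src ** 2.2) ** (1 / 2.4))

diff = np.abs(lazy - full) * 255
print(f"pixels that differ: {np.count_nonzero(diff)}, worst error: {diff.max():.0f}/255")
```

The damage is worst in the shadows, where the 8-bit intermediate step destroys the distinction between nearby dark values.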


Thank you, that was helpful.

I just now understood that the accuracy of the color conversions is a problem when you’re dealing with 8 bits per pixel.

b) is simply impossible on wayland. Wayland clients do not know their position. They only know which monitors (yes, possibly multiple) they’re drawn on.

c) is pretty much what I wanted to suggest as a solution. Though I don’t understand what you mean by profiling support.


In c), if the profile passed in is a ‘null transform’ profile, wouldn’t that be the same as ‘pass-through’? And, if the compositor so desires, a null transform can be readily identified and simply not performed.
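
A minimal sketch of the identity test I have in mind, assuming the transform boils down to a matrix plus per-channel curves (a simplification, of course):

```python
import numpy as np

def is_null_transform(matrix, curves, eps=1e-6):
    """True if a (matrix + per-channel curve) transform is an identity within
    tolerance. A compositor could run this once when a profile is attached
    and skip the per-pixel work, making a null profile behave as pass-through."""
    if not np.allclose(matrix, np.eye(3), atol=eps):
        return False
    ramp = np.linspace(0.0, 1.0, len(curves[0]))
    return all(np.allclose(curve, ramp, atol=eps) for curve in curves)
```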

I’m an aerospace and controls person, so this color stuff is new to me; pardon my ignorance. But my career is full of re-factoring endeavors, and with the possibility of images being split over multiple displays, it is apparent even to me as an app developer that the compositor is the only place where per-display color and tone transforms can be effectively accomplished.

@swick, @Dmitry_Kazakov thanks for your input!

I am not going to reply directly to any comments since this post will probably be long enough as it is :wink:

First things first: compositors should include some form of color management; how they do it (LCMS, OCIO, hand-coded shaders, or whatever) is something that should be left to the compositor creators (of course we can give advice on that, but implementation is not what this discussion should be about). Secondly, a full CMS includes the capability to calibrate and profile. This includes telling the compositor what the calibration curves are and which profile to use, so even in the case that we bypass the compositor to measure, it would still be necessary to have a protocol to tell the compositor this information. And even if this protocol isn’t in wayland (but dbus), it should be developed in tandem with the wayland protocol, or else the wayland protocol will be a lot less useful.

Also, for various reasons it is preferable not to do any compositing more complex than bit blitting in a display color space (it is not guaranteed to be linear or well behaved). But as both @gwgill and @Dmitry_Kazakov have pointed out to me, it is not the best idea to mandate a certain color space for compositing either (since maybe the compositor is only bit blitting, and thus can work directly in display space).

For certain applications we will need ‘direct’ access to the display color space. An example is soft proofing: proper soft proofing can only be done if the output space is known and is the space rendered to. The problem is that we currently don’t know which screen we are running on, and it might actually be more than a single screen. I think the best solution here is to declare one of the screens as ‘primary’ and either give apps the capability to provide a secondary buffer that will use the lazy/basic path, or let the compositor do a “best effort” display-to-display transform.

So let’s get to my new proposal:

  • Legacy/sRGB applications, buffers that are in sRGB either due to not needing more or being legacy
  • Basic (effectively what @Dmitry_Kazakov calls lazy): the application tags its buffer with a color space. This probably needs a way for the compositor to tell which color spaces are supported (and we probably need to mandate a certain minimum set, but that is more of an implementation detail)
  • Advanced app (this might be implemented as some sort of ‘pass-through’ tag on the above basic buffer): something that needs direct access to the display space. This main buffer will be blitted into the ‘primary screen’ buffer; the app can optionally provide a ‘basic/lazy’ secondary buffer for advanced compositing (shadows, alpha-transparent overlays, etc.) and for other screens. If this buffer is not provided, the compositor should do a best-effort display-to-display transform for any non-primary displays (and probably render a black rectangle underneath for advanced compositing purposes)
  • Profiler/Calibrator: needs a way to put color data directly in display space for measuring, and needs to be able to update the compositor on any changes made to the calibration curve and/or display profile
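
Purely for illustration, the four categories above might look something like this inside a compositor (names are made up, not a protocol proposal):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class BufferKind(Enum):
    SRGB = auto()          # legacy / untagged, assumed sRGB
    TAGGED = auto()        # "basic": carries a color space tag
    PASS_THROUGH = auto()  # "advanced": already in the primary display's space
    MEASUREMENT = auto()   # profiler/calibrator: values go straight to the wire

@dataclass
class ClientBuffer:
    kind: BufferKind
    color_space: Optional[str] = None          # e.g. "Rec2020-PQ" for TAGGED
    fallback: Optional["ClientBuffer"] = None  # basic buffer for other outputs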

This is actually pretty close to the wayland-devel proposal by Nielsen, with the exception that it includes profiling/calibration (to some extent) and doesn’t use device link profiles. The main reason no device link profiles are used is that not all applications can use them, for example Blender and Natron (since they are built on OCIO instead of ICC), and (if I understand that proposal correctly) any advanced stuff the device link does (like soft proofing) will only be visible on the ‘primary display’, which will show up as a difference between displays (which we all agree we want to minimize). This is besides the fact that ICC profiles are kinda complex binary files, and I think most libraries/apps able to read them haven’t been fuzz tested or investigated for security issues.

If I understand it correctly it should be “trivial” to add HDR to the above (please correct me if I am wrong)

The problem is that you cannot properly describe an HDR color space with a normal ICC profile. From a technical perspective, the problem is that ICC cannot describe values higher than 1.0, which are the norm in scene-referred color spaces. From a theoretical point of view (I’m not a pro in ICC details, so I might be wrong), the problem is that ICC does not operate with “nits” values. Instead, it expects diffuse white to be the maximum available color brightness.

For example, the PNG-HDR standard provides an ICC profile for the p2020-pq color space (for backward compatibility), but if you pass this profile to a color management system and try to convert colors to e.g. p2020-linear, all the color values will be clipped at 1.0. So the implementer should recognize this profile by a special tag and shape the colors accordingly.
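
To make the clipping problem concrete, here is the PQ EOTF from SMPTE ST 2084: a 1000-nit highlight expressed relative to a 203-nit diffuse white lands at about 4.8, far beyond the 1.0 ceiling an ICC pipeline clips to:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    """PQ-encoded signal in [0, 1] -> absolute luminance in cd/m^2."""
    p = np.asarray(signal, dtype=float) ** (1 / m2)
    return 10000.0 * (np.maximum(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

nits = pq_eotf(0.75)           # a PQ code value of 0.75 is ~984 cd/m^2
print(nits / 203.0)            # ~4.8 relative to a 203-nit diffuse white
```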

So, if you want to implement a “color management engine” for a compositor, you should define the term “profile” a bit more broadly than just “ICC profile”.

EDIT:
There is also the problem of “feature completeness” of this engine. The ICC pipeline has quite a lot of options and features, and I am 99% sure there will be people who report wish-bugs for some nice things like color proofing, blackpoint compensation and so on. I think the main requirement here is that the compositor at the very least must not prevent people from implementing these features inside the apps. Pass-through mode might help with that, at least partially.

EDIT2:

Yes, as long as ICC is not the only means of describing color, HDR can be implemented quite easily. Though the devil hides in the details: there is no common standard for describing HDR color :slight_smile: Microsoft used enum tags for that, which is extremely bad: one cannot choose a custom color space for the buffer. There is not even a tag for pass-through mode (DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709 is technically a pass-through, because DXGI does no color management after compositing).


Yes, passing a null profile might work. Though the term “profile” should be defined quite carefully (see my other post).