I just want to make something clear: nobody is suggesting that the compositor takes over all color management tasks. The client is free to do whatever it wants as long as the surfaces in the end are in a specific color space and the compositor is told about it.
In your PDF example you could still render everything into a floating point frame buffer with e.g. CIE LAB color space. The compositor would only convert it into the display color spaces in the end.
Thanks, that's helpful. I'll take a look at collink. However, this just sounds like we need policy decisions for the compositor. That's something we can handle.
I guess that's what I was proposing earlier. I'm not entirely sure why the fidelity would be compromised and you don't expand on that.
Well, that's the point we disagree on. It would help if you expanded on why you think that's the case.
This is just weird. Are you now arguing that the compositor should somehow interfere with the color profiling step? The color profiling is best done when you have full control over all the hardware so you can do the best possible measurements. If the compositor then doesn't make correct use of that measurement it's a bug in the compositor.
I completely disagree here.
Secondly, calibration/profiling is just another App. It will have a full on GUI, and will want to access the display in both special and normal ways (calibrate, profile, verify).
That would all work just fine. You would press the "calibrate" button, the compositor asks the user to temporarily give control over the display to the application, you draw whatever you need, and in the end the compositor takes back control and everything looks like before.
I want to see actual arguments here, not "I've done it this way for 20 years".
I honestly don't care if you laugh or not. This is a fundamental design decision and so far we have found solutions to all problems without breaking it.
That's simply wrong. You just have to take a look at existing compositors to know that what you're saying is not the case.
In general displays aren't "like" any particular colorspace - they are what they measure to be, i.e. if they are color managed, they are always defined by a profile.
That would mean an application could never take advantage of a wide gamut display, and could never do a good perceptual mapping from the source space to the display.
Perhaps you meant that you would like a default conversion that assumes an sRGB source space for non-color-aware applications or the GUI elements that you render?
Why should it be special? Shouldn't there be any number of color managed applications and windows displayed on a screen at once?
But the GUI may want to use transparency on color managed applications for various transition effects etc. Yes, the window should be opaque for correct color.
Agreed that a color aware application needs to know the display profile to create a high quality conversion.
I'm not clear on what you mean by that. You are using a color managed API and then using the null profile trick to disable color management for MSWindows?
Right, but Krita is an application - you aren't responsible for the system GUI rendering.
That is the implication of constraints 1) that the application doesn't know which display it is rendering for, and 2) that high quality color output is desired.
Please re-read what I wrote. No, this isn't workable, because how will the conversion from L*a*b* to the display space know how to execute the different intents for different pixels in the raster? How will it know the source gamut for perceptual intent conversions?
I don't think you grasp the complexity of this. You really don't want something like collink in the compositor if there is any possible way of avoiding it, and even if you did, it wouldn't satisfy other application requirements.
But in any case this, or the more practical support of general device link conversions, is not needed if the client can be given a hint as to which display profile it should prefer to render to, similarly to the HiDPI case.
I expanded on it at length. See the explanation of intents and how you need both the source and destination gamuts.
I've already explained this. If you don't understand the explanation, then please indicate what you don't follow.
It's about the color workflow - how the color values get transferred and transformed on their way to the viewer's eyeballs. What I'm worried about is the whole processing pipeline, and making sure that when a particular buffer of pixels is declared as being in the display's colorspace as defined by the given profile, the buffer is in fact processed in exactly the same way for display as it was when it was profiled.
So saying that application colors are processed through the compositor but that the calibration and profiling tools should take control of the raw display interface, and take it on trust that the compositor won't differ in how the pixels are processed, is inviting unnecessary disaster. For instance: let's say that my calibration and profiling tools stay as they are, and set up calibration using the hardware VideoLUT and then profile the display. Then the user switches to Wayland, but Wayland doesn't load the VideoLUT from the 'vcgt' tag in the display profile (or Wayland has some other rendering tweak/feature). The profile is then not valid. How can I verify it's not valid? I need to run the color profiling tools in verification mode through the Wayland compositor to do this.
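To make the 'vcgt' point concrete, here is a minimal sketch of what loading the calibration curves from the display profile could look like on the compositor side, using lcms2. The profile path and the LUT size are placeholders, and handing the table to the hardware is only hinted at in a comment; this is not how any existing compositor does it, just an illustration of the tag being read.

```c
/* Minimal sketch: read the 'vcgt' calibration curves from a display
 * profile with lcms2 and sample them into a table suitable for a
 * hardware VideoLUT. Profile path and LUT size are placeholders. */
#include <lcms2.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE profile = cmsOpenProfileFromFile("/path/to/display.icc", "r");
    if (!profile)
        return 1;

    /* The 'vcgt' tag yields an array of three tone curves (R, G, B). */
    cmsToneCurve **vcgt = cmsReadTag(profile, cmsSigVcgtTag);
    if (!vcgt) {
        fprintf(stderr, "profile has no 'vcgt' tag, assuming identity\n");
        cmsCloseProfile(profile);
        return 0;
    }

    enum { LUT_SIZE = 256 };            /* depends on the hardware */
    uint16_t lut[3][LUT_SIZE];

    for (int c = 0; c < 3; c++) {
        for (int i = 0; i < LUT_SIZE; i++) {
            float in  = (float)i / (LUT_SIZE - 1);
            float out = cmsEvalToneCurveFloat(vcgt[c], in);
            lut[c][i] = (uint16_t)(out * 65535.0f + 0.5f);
        }
    }

    /* ...hand 'lut' to the DRM/KMS gamma LUT (or whatever mechanism the
     * compositor uses) and keep it loaded for all rendering. */

    cmsCloseProfile(profile);
    return 0;
}
```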
To put it another way - any valid color calibration & profiling tool has to have the capability of changing the system calibration and profile, because that's its final task for the user. This capability means that it can set the color workflow up correctly for calibration and profiling, and gives it the extremely valuable assurance that the color processing used for profiling is the same one that will be used for rendering, meaning that the profile will be valid, and that it is trivial to verify that the profile is correct.
Summary :- from a Color Management point of view, suggesting that the calibration and profiling tools should access the display using a completely different mechanism than the workflow the profile will be used in, is the exact opposite of the best way of doing it.
You should really take a look at one of the commercial profiling applications. No, they don't issue instructions and then switch to a blank screen - they have instructions on the screen, graphics of exactly where to place the instrument, etc. No, I don't think it is normal for color profiling tools to have to provide their own GUI rendering library to display their GUI, just because they want the color management set in a particular way.
I've expended a lot of time with detailed explanations, but you don't seem to want to spend the time researching or understanding.
[ Strangely enough, I've not spent 20 years doing the same thing over and over again, I've spent it in a constant search for better ways of doing things. ]
Which is perfectly fine - here is another problem requiring a solution :- find a way to let color calibration and profiling applications install calibrations and profiles, so that they can perform the function users expect of them.
I see it as fundamental. Please explain why you think otherwise.
I'm confident HDR spaces can be characterized successfully with ICC profiles (as successfully as is possible, given that many HDR displays do too much processing and therefore make any static characterization a compromise). A display profile usually records the absolute brightness in the 'lumi' tag.
Yes. An ICC CMM will need some tweaking to cope with linking HDR profiles. For SDR → HDR there needs to be a brightness intent parameter (where SDR 1.0 maps to). For HDR → SDR there needs to be a suitable tone reproduction operator.
[ I experimented with this a little, some time ago when playing with creating an scRGB profile. ]
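As a small illustration of the 'lumi' tag and the "brightness intent parameter" idea, here is a sketch that reads the peak luminance from a (hypothetically HDR-capable) display profile with lcms2 and derives where SDR white would land. The profile path is a placeholder and the 203 cd/m² figure is just an illustrative choice, not something anyone in this thread has agreed on.

```c
/* Sketch: read the peak luminance from a display profile's 'lumi' tag
 * and derive a scale factor that places SDR reference white at a chosen
 * level. Path and the 203 cd/m^2 target are illustrative placeholders. */
#include <lcms2.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE profile = cmsOpenProfileFromFile("/path/to/hdr-display.icc", "r");
    if (!profile)
        return 1;

    /* The 'lumi' tag is a cmsCIEXYZ; Y holds the luminance in cd/m^2. */
    const cmsCIEXYZ *lumi = cmsReadTag(profile, cmsSigLuminanceTag);
    double peak_nits = lumi ? lumi->Y : 80.0;   /* fall back to the sRGB assumption */

    double sdr_white_nits = 203.0;              /* the "brightness intent" parameter */
    double sdr_scale = sdr_white_nits / peak_nits;

    printf("peak %.1f cd/m^2, SDR 1.0 maps to %.3f of peak\n",
           peak_nits, sdr_scale);

    cmsCloseProfile(profile);
    return 0;
}
```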
Please stop getting personal (and I mean everyone participating)! We are all here with the best intentions. This is a complex topic and a lot of people want to learn from this thread. Getting personal drives people away and this is counterproductive!
Please keep explaining things to each other, don't point fingers. If something is unclear, explain it again!
That's not how it is coming across from some people.
Sorry, I've spent my whole working day doing nothing but composing emails about Wayland & Color Management. I have explained the same points about half a dozen times in half a dozen different ways, and I have reached my limits for a while, unless we are able to move on.
A better approach I think is the hybrid one we were talking about: give the client enough information to decide which display it should optimize color rendering for. When the compositor needs to display the surface on some other display, it can use a simpler bulk color conversion to do so. Optimal color rendering can at least be achieved on one display (hopefully enough to satisfy the demanding color user), while still allowing the compositor to handle window transitions, mirroring etc. without requiring huge re-writes of applications. This is the analogy to current HiDPI handling.
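To show what "a simpler bulk color conversion" could mean in practice, here is a rough compositor-side sketch using lcms2, assuming the client rendered its buffer for display A's profile and the surface is now also shown on display B. The function name and the way the profiles and buffers arrive are made up for the example.

```c
/* Rough sketch of the compositor-side fallback: the client rendered for
 * display A's profile, the surface is shown on display B, so the
 * compositor applies a simple colorimetric bulk conversion. Profile
 * handles and pixel buffers are assumed to come from elsewhere. */
#include <lcms2.h>
#include <stddef.h>
#include <stdint.h>

void blit_to_other_display(cmsHPROFILE display_a, cmsHPROFILE display_b,
                           const uint8_t *src_rgba, uint8_t *dst_rgba,
                           size_t n_pixels)
{
    /* No per-surface source gamut information here, so a relative
     * colorimetric (clipping) conversion is the best that can be done -
     * which is exactly why this is only the fallback path. */
    cmsHTRANSFORM xform = cmsCreateTransform(display_a, TYPE_RGBA_8,
                                             display_b, TYPE_RGBA_8,
                                             INTENT_RELATIVE_COLORIMETRIC,
                                             cmsFLAGS_COPY_ALPHA);
    if (!xform)
        return;

    cmsDoTransform(xform, src_rgba, dst_rgba, (cmsUInt32Number)n_pixels);
    cmsDeleteTransform(xform);
}
```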
I was thinking: would it be acceptable if the calibration curve and display profile could be set via a dbus protocol instead of via wayland directly? This should also give some flexibility in implementation[1], and I think dbus already has some idea of privileged vs non-privileged operations. Although this would mean that we have to standardize a dbus protocol alongside the wayland protocol (we don't want every compositor re-inventing the wheel for this).
I think this should give a better separation between configuration of CM and CM itself.
[1] "simple" compositors (like those based on wlroots, e.g. sway) could implement it inside the compositor, but more complex setups (where there is also a full DE running) could put it in a configuration daemon or something.
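Just to illustrate what such a dbus call could look like from the tool's side, here is a sketch using GDBus. The bus name, object path, interface and method names are entirely made up for the example; nothing like this is standardized anywhere.

```c
/* Illustrative only: setting a per-output profile over D-Bus from a
 * calibration tool, using GDBus. org.example.ColorManager and
 * SetOutputProfile are hypothetical names, not an existing interface. */
#include <gio/gio.h>

gboolean set_output_profile(const char *output_name, const char *icc_path)
{
    GError *error = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
    if (!bus) {
        g_clear_error(&error);
        return FALSE;
    }

    GVariant *reply = g_dbus_connection_call_sync(
        bus,
        "org.example.ColorManager",      /* hypothetical bus name */
        "/org/example/ColorManager",     /* hypothetical object path */
        "org.example.ColorManager1",     /* hypothetical interface */
        "SetOutputProfile",              /* hypothetical method */
        g_variant_new("(ss)", output_name, icc_path),
        NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

    if (reply)
        g_variant_unref(reply);
    g_object_unref(bus);

    if (error) {
        g_clear_error(&error);
        return FALSE;
    }
    return TRUE;
}
```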
I agree that having a dbus protocol for display calibration and/or rendering intent makes a lot of sense. However, I don't think that it's necessary to standardize that protocol alongside the wayland protocol. The reference implementation in weston should probably just use the existing static file configuration machinery.
The right time to get involved there is when Gnome Shell and KDE Plasma want to implement color management.
@gwgill One thing that might prove to be difficult when measuring a display is the mapping between the physical display and the advertised color space of the wayland output, which would be required for the null transformation.
I still think that bypassing the compositor altogether is a good idea especially since the verification step should make sure that everything works right in the end.
In any case, I don't think measuring should be a blocker for a wayland color management protocol.
I have two issues with that. The first is that for calibration and profiling those need to be set in real time[1]; this can of course be done with static files (read on change) but is not ideal after calibration. The second issue is that KDE, GNOME and others will each develop their own protocol, which will be a problem for tools like argyllcms, displaycal, etc.
Anyway, now that I have your attention: I would like to try to put some of this into an actual wayland protocol, but I have a hard time finding information on the xml format (what types can be sent over the wire, stuff like that). Is there any information for that?
[1] AFAIK calibration always happens before profiling, the profiling needs the calibration curves loaded, and after profiling we want to load the new profile.
I think I have something. I'm not entirely satisfied with how to communicate the supported color spaces, but the surface part is, I think, mostly how it should be.
Well, OCIO is not the best choice for automated color conversion. OCIO doesn't define any machine-readable, interchangeable color spaces. It is good for its main purpose: doing color conversion for the current display under total control of the user. OCIO is nice, but it is surely not usable for predefined "profiles".
That is the main problem. We should either modify the ICC standard (+ LCMS?) to support HDR, or invent yet another format for profiles.
That's exactly what I'm pushing for: by default the compositor should expect all the apps to just render in sRGB. 99% of developers will never know/care about color management. We must not expect app developers to care about that. It will never happen.
I have just checked: it looks like Windows' ICM system has no connection to the DirectX surfaces API. I called SetICMMode() for the HDC, then created an sRGB surface with DXGI (and, later, scRGB), and DirectX didn't do any conversion to the display profile. I don't really know how it is expected to work. It just passes the data through directly to the display without any color conversions, even though I explicitly tell it the data is sRGB.
I'll try to expand on it. Graeme's point was that one cannot use an intermediate color space representation if one wants to get good rendering quality.
The point is, when the color management system converts data into the display's color space, it can use different "rendering intents". These intents define how colors outside the display's color range will be fit into the destination display space. The simplest way to fit two color spaces is to clip non-displayable colors, basically dropping them. More sophisticated approaches, like "perceptual", compress the source color range (basically offsetting absolute color values) to fit all the source colors into the destination space. The user will not notice the offset (due to the eye's adaptation processes), but the image will not have dull flat-filled areas of non-representable colors.
So, if you use the "intermediate color space" approach, then the only intent you can use is "clipping", which creates bad results most of the time. If you want to use the "perceptual" intent, you should convert directly from the source image color space to the display color space.
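The difference is visible in how one would set up the transforms with, say, lcms2: a perceptual conversion is built from the actual source profile to the display profile, whereas an image already flattened into Lab leaves nothing but a colorimetric (clipping) mapping. This is just a sketch with placeholder profile paths, not anyone's proposed implementation.

```c
/* Sketch of the difference with lcms2. A perceptual transform needs the
 * real source profile, because the intent is defined in terms of the
 * source gamut; once the image has been flattened into Lab (or any other
 * intermediate space), that gamut information is gone. Paths are placeholders. */
#include <lcms2.h>

int main(void)
{
    cmsHPROFILE src     = cmsOpenProfileFromFile("/path/to/source-image.icc", "r");
    cmsHPROFILE display = cmsOpenProfileFromFile("/path/to/display.icc", "r");
    cmsHPROFILE lab     = cmsCreateLab4Profile(NULL);   /* device-independent Lab */

    if (!src || !display || !lab)
        return 1;

    /* What the application itself can do: a gamut-mapped, source-aware
     * conversion straight from the image's color space to the display. */
    cmsHTRANSFORM good = cmsCreateTransform(src, TYPE_RGB_FLT,
                                            display, TYPE_RGB_FLT,
                                            INTENT_PERCEPTUAL, 0);

    /* What a compositor handed an intermediate Lab buffer can do: the
     * source gamut is unknown, so only a colorimetric mapping that clips
     * out-of-gamut colors into the display space is left. */
    cmsHTRANSFORM fallback = cmsCreateTransform(lab, TYPE_Lab_FLT,
                                                display, TYPE_RGB_FLT,
                                                INTENT_RELATIVE_COLORIMETRIC, 0);

    if (good)     cmsDeleteTransform(good);
    if (fallback) cmsDeleteTransform(fallback);
    cmsCloseProfile(src);
    cmsCloseProfile(display);
    cmsCloseProfile(lab);
    return 0;
}
```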
Yes, "sRGB-like" was just a short name for "a color space with primaries and an EOTF something like sRGB's".
The point I wanted to make was unrelated though: as far as I know, it is impossible to describe an HDR color space with an ICC profile. Though it needs some investigation.
Yes, exactly. GUI elements, like menus and buttons, are expected to be painted on an sRGB surface, but the canvas with the actual image data is expected to be painted on a separate surface with a different color tag.
I mean the app should have two surfaces: one with an sRGB tag for the GUI elements (for which the compositor does color management) and the other one for the actual image data (which the compositor passes directly to the display).
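Roughly, the client side of that two-surface idea could look like the sketch below, using a subsurface for the canvas. wl_compositor and wl_subcompositor are real Wayland interfaces; the color-tagging requests are hypothetical placeholders, since no such protocol exists yet.

```c
/* Rough client-side sketch of the two-surface idea: the main surface
 * carries the sRGB-tagged GUI, a subsurface carries the canvas that the
 * compositor should pass through untouched. The color_manager_v1 calls
 * in the comment are hypothetical placeholders. */
#include <wayland-client.h>

struct app {
    struct wl_compositor    *compositor;
    struct wl_subcompositor *subcompositor;
    /* struct color_manager_v1 *color_manager;   hypothetical global */
};

void create_surfaces(struct app *app)
{
    struct wl_surface *gui    = wl_compositor_create_surface(app->compositor);
    struct wl_surface *canvas = wl_compositor_create_surface(app->compositor);

    /* Stack the canvas inside the GUI window. */
    struct wl_subsurface *sub =
        wl_subcompositor_get_subsurface(app->subcompositor, canvas, gui);
    wl_subsurface_set_position(sub, 64, 64);
    wl_subsurface_set_desync(sub);

    /* Hypothetical tagging calls:
     *   color_manager_v1_set_surface_color_space(app->color_manager, gui, SRGB);
     *   color_manager_v1_set_surface_passthrough(app->color_manager, canvas);
     * The GUI surface gets converted by the compositor; the canvas is
     * expected to already be in the display's color space. */
}
```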
There is no color management for DXGI surfaces in Windows, even though MS claims there is. You create an sRGB surface, and DirectX passes the data directly to the GPU without any conversions. Calling SetICMMode for the corresponding HDC doesn't do anything.
For the most part, I have absolutely no idea whatsoever what you Techie-Types are talking about when you're down in the detail ditches. I do, however, "feel the love" now emanating from this "discourse".
It's very reassuring that, seemingly, we're well on our way to avoiding the Colour Chaos which may have occurred had this collaborative discussion not taken place. Bravo and thank you!
Agreed, it was just an example that ICC is not the only way to do it.
Agreed, if we can get ICC to support HDR that would simplify quite a lot. It is actually quite interesting that ICC is in one way far too powerful (it supports color spaces with up to 16 channels, while we are only interested in 3-channel RGB) but on the other hand not powerful enough (no HDR support).
Even if it had, it wouldn't really be useful for this, since this isn't about any particular compositor's implementation of CM but about how applications can communicate with those compositors.