true 10-bit color depth

I have a GPU that supports true 10-bit color depth, and I’m looking to buy a monitor that supports it as well.

Does darktable support true 10-bit color depth?

I want to make sure darktable supports true 10-bit color depth before buying a new monitor, since the monitor is expensive and I won’t get the benefit if darktable does not support it.

Thanks for any help I can get. I tried to find the answer myself but could not so far.

many years ago i wrote a gtk3 prototype to output 10-bit colour through cairo and x11 (that’s what dt is based on). it was very hard indeed to teach cairo 10 bits. it was either secretly doing 8-bit (you could tell by the banding in gradients) or it was very slow, like 30s for one refresh. i don’t know whether this has changed in the meantime; my guess is that efforts would rather have gone into gtk4+friends.

i’m now using vulkan for my image processing, and that does support true 10-bit output. with dithering enabled in 8-bit, i can’t say 10 bits makes much of a difference to my old eyes.
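for the curious, on the vulkan side this mostly comes down to asking the surface for a 10-bit format when creating the swapchain. a minimal sketch (not vkdt’s actual code):

```cpp
// minimal sketch: pick a 10-bit swapchain format, assuming the
// instance, physical device and surface have already been created.
#include <vulkan/vulkan.h>
#include <vector>

VkSurfaceFormatKHR pick_10bit_format(VkPhysicalDevice pdev, VkSurfaceKHR surf)
{
  uint32_t count = 0;
  vkGetPhysicalDeviceSurfaceFormatsKHR(pdev, surf, &count, nullptr);
  std::vector<VkSurfaceFormatKHR> formats(count); // surfaces report >= 1 format
  vkGetPhysicalDeviceSurfaceFormatsKHR(pdev, surf, &count, formats.data());
  for (const VkSurfaceFormatKHR &f : formats)
    if (f.format == VK_FORMAT_A2B10G10R10_UNORM_PACK32)
      return f;      // 10 bits per colour channel, 2-bit alpha
  return formats[0]; // fall back to whatever the surface offers
}
```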

7 Likes

Also, 10-bit support is only part of it: these days one also has to take care of compositing for HDR (e.g. UI elements at a particular brightness level while the image surface remains unlimited…), and I don’t think anyone is quite there yet.

And yes, 8-bit vs 10-bit doesn’t really make a huge difference at 100-200 nits (SDR), but it really makes a difference on >1000-nit HDR displays.

1 Like

I have 10-bit support from an Nvidia Quadro GPU and an EIZO CG 279X wide-gamut monitor with 10-bit support per channel.

I can get X11 to support 30 bits, but not the Linux applications: GIMP, RawTherapee and darktable.

30-bit support on Linux GNOME is lost somewhere between X11 and the final render in the program. I suspect, but do not know, that this is due to GNOME/GTK/Cairo. @hanatos suspects Cairo.
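For what it’s worth, the X11 side can be probed directly; a minimal sketch (my own test snippet, not from any of those apps), keeping in mind that the application still has to request such a visual for its window:

```cpp
// sketch: does the X server offer a 30-bit TrueColor visual at all?
// build with: g++ check30.cpp -lX11
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstdio>

int main()
{
  Display *dpy = XOpenDisplay(nullptr);
  if (!dpy) return 1;
  XVisualInfo vinfo;
  if (XMatchVisualInfo(dpy, DefaultScreen(dpy), 30, TrueColor, &vinfo))
    std::printf("30-bit visual available: id 0x%lx\n", vinfo.visualid);
  else
    std::printf("no 30-bit visual found\n");
  XCloseDisplay(dpy);
  return 0;
}
```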

I am on Debian 11 with GNOME and GTK3. GTK4 is unknown to me.

Gnome and GIMP and 10 bit

GIMP developer mailing list:
https://www.mail-archive.com/gimp-developer-list@gnome.org/msg09813.html

I have not tested recently, so my feedback is a bit dated.

It seems one needs Vulkan, or Windows/macOS/Photoshop, to get 10 bits per channel to work.

fwiw vkdt renders 10 bits per channel, 30 per pixel, just fine. have not tested on bright displays… but it seems to me a sufficiently fine resolution + dithering with 8 bits would still make it indistinguishable from true 10 bit. might test one of these days.
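roughly what i mean by dithering, as a minimal sketch (not vkdt’s actual kernel):

```cpp
// quantise a 10-bit value to 8 bits, adding noise so the rounding
// error becomes fine grain instead of visible banding.
#include <cstdint>
#include <random>

uint8_t dither_10_to_8(uint16_t v10, std::mt19937 &rng)
{
  std::uniform_real_distribution<float> noise(0.0f, 1.0f);
  float v = v10 / 4.0f + noise(rng); // rescale 0..1023 to 0..255.75, add [0,1)
  return (uint8_t)(v > 255.0f ? 255.0f : v);
}
```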

I just ran some experiments on my system. X11 reports a 30-bit pipeline, xdpyinfo shows both 8- and 10-bit depths, and xwininfo gives 30- or 32-bit depth. However, the real output is still truncated to 8 bits somewhere, because in dt, GIMP,… I can see ugly banding on smooth TIFF gradients.
However, a vkdt-generated 10-bit ramp looks smooth, so the vkdt→X11→GPU→display pipe is indeed 30-bit. I suppose one needs Vulkan rendering to get real 30-bit output. Now I’m puzzled how to make X11 render stuff via zink instead of direct radeonsi rendering.
I also think that GLES2/EGL is preferable to the old GLX pipeline in this regard. So far no success, and I could not find any relevant info on how it should work, either. Any ideas?
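For reference, the checks above are just the stock X utilities:

```
xdpyinfo | grep -i depth   # depths and visuals the server offers
xwininfo                   # click a window, then read its "Depth:" line
```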

1 Like

A little update on this.

  1. After a massive internet search I managed to get zink working by switching from xf86-video-amdgpu to the built-in modesetting driver in xorg.conf (sketched below).
  2. amdgpu turns out to have a deep_color parameter, defaulting to… right, 0. So one needs to add a line to modprobe.d (in case amdgpu is loaded as a module) or add amdgpu.deep_color=1 to the kernel command line (if it is built in); also sketched below.
    At the moment I can’t investigate this further due to lack of time, but it seems that 10-bit output is an achievable target.
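For reference, a sketch of the two changes (the file names are my choice; adjust for your distro):

```
# /etc/X11/xorg.conf.d/20-gpu.conf: use the built-in modesetting driver
Section "Device"
    Identifier "gpu0"
    Driver     "modesetting"
EndSection

# /etc/modprobe.d/amdgpu.conf: when amdgpu is loaded as a module
options amdgpu deep_color=1

# when amdgpu is built into the kernel, add to the kernel command line instead:
#   amdgpu.deep_color=1
```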
2 Likes

Okay, I did some more experiments (in fact driven by acquiring a new monitor :wink: ). So: 1) I went back from the built-in modesetting driver to amdgpu; now it works just fine. 2) All the windows are 30- or 32-bit according to xwininfo/xdpyinfo. Firefox, vkdt and mpv are clearly showing 10-bit output. No other graphics-oriented apps, though: DT, RT, ART, and all the viewers I have installed strip the output to 8 bits! What a shame! Besides vkdt (which is hardly usable for me because of its very low-level interface), no app can effectively use 10 bits.
Bottom line:
– X11 does work with 10 bpc, at least on AMD hardware.
– You are still unable to benefit from it, apart from games and video players.
– Facepalm…

2 Likes

at least for the gaming part, Valve is working on it.

The KDE team is, too. And GNOME (which I don’t care about, TBH).

BTW I found out that Krita can output 10 bpc. To make it do so, you have to import the image with the “High Dynamic Range UHDTV Wide Color Gamut Display (Rec. 2020) SMPTE ST 2084 PQ EOTF” profile. Not so obvious, is it? Interestingly, its Display section still shows 8-bit sRGB output, grayed out. No comment…

rawproc, my hack raw processor, uses the wxWidgets wxImage/wxBitmap classes to display renders, and they are quite obviously 8-bit. I think most of the GUI toolkits (wxWidgets, GTK?, Qt?) provide an 8-bit image rendering widget by default; if you want better, they expect you to use the OpenGL/Direct3D interface.
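To illustrate (a sketch, not rawproc’s actual code; the helper name is made up): wxImage hands you a packed unsigned char buffer, so a float pipeline has to truncate on its way to the screen.

```cpp
// sketch: why a wxImage display path tops out at 8 bits per channel.
#include <wx/image.h>

void to_display(const float *linear_rgb, int w, int h) // hypothetical helper
{
  wxImage img(w, h);
  unsigned char *dst = img.GetData(); // packed RGB, one byte per channel
  for (long i = 0; i < 3L * w * h; i++)
    dst[i] = (unsigned char)(linear_rgb[i] * 255.0f + 0.5f); // extra bits lost here
}
```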

More of an issue now with high-bit-depth/wide-gamut displays, but my “customer base” hasn’t complained yet.

Oh, I’ve used vkdt; it seems easy enough to me. @hanatos even includes a thumbnail screen to select images for editing, something I’d scrap in a minute, but it’s his baby… rawproc’s ‘light table’ is the OS file manager – suck it up, customers!! :crazy_face:

2 Likes

That’s a common design flaw, though. Most projects were started a decade (or decades) ago, especially the GUI/widget libs. At that time nobody predicted that HDR/deep-colour screens would become common. And in a few years SDR monitors will gradually fade away, as TV and smartphone screens already are.
As for vkdt, I cannot master its workflow to the extent that I could use it on a regular basis. I mean, it takes me 10× more time to process an image in comparison with DT/ART. Maybe it’s a matter of habit and of working out my own “library” of methods/presets/whatever that would allow me to speed up. I like both the idea behind vkdt and the results, but… So my main tool is still DT, which is feature-rich and lets me achieve decent results very quickly; sometimes I use ART (mainly when Pixel Shift is involved). None of those will provide deep colour/HDR features in the observable future. However, I hope to see HDR working on Wayland within a few months.

1 Like

“that’s like, your opinion, man” :slight_smile: https://i0.wp.com/comicsandmemes.com/wp-content/uploads/comment-reply-033-The-Big-Lebowski-Meme-yea-well-like-thats-your-opinion-man.jpg

but yeah, there’s a bit of rough docs as an entry point here:
workflow: vkdt website and documentation

gui basics: vkdt website and documentation

presets (i should expand this): vkdt website and documentation

or you can just ask if you’re stuck. not sure though whether you actually mean it.

2 Likes

@kanyck : if you want to discuss vkdt’s usability or get advice regarding it, please do it in another topic (and maybe say you have trouble using it, not that it is unusable – quite a difference). Let’s keep this one focused on 10-bit output.

No problem)) Yesterday I brought up a Plasma Wayland session that supports 10-bit and HDR on my monitor. All this just to find out that no apps apart from games and mpv can make use of it. That’s all you need to know about the current state of 10-bit output: it’s basically there, but without apps making use of it, it’s useless. Nothing to discuss.

How useful is a 10 bits/channel display when you are using files encoded with 8 bits/channel (like jpg…)?

Who says I use only 8-bit? There are a handful of formats that support more – up to 32-bit float.
My camera supports 12-14 stops, my monitor 10 bits. Why should I restrict myself to 8 in between?

2 Likes

Right, I edited the question…

On the display, the 10 bits are probably linear resolution (DAC), or so I would think. With the JPG, the transfer function (‘gamma’) means that such an image can have more than 8 EV of dynamic range. The lowest linear value of such a file is 1/255/12.92 = 1/3294.6 → −11.68 EV, if I read Wikipedia correctly.
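Spelling out that arithmetic (the sRGB linear segment maps an encoded value $V$ to $V/12.92$):

$$C_\text{lin} = \frac{1/255}{12.92} = \frac{1}{3294.6} \approx 2^{-11.68},$$

i.e. the darkest non-zero code sits about 11.68 EV below white.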

Not necessarily. Bit depth is one thing; the TRC is another beast. More bits → technically better gradients, as a rule. Also, 10 bits usually aren’t enough when used in linear fashion. IIRC, HDR10 is technically limited to a maximum of 10,000 nits peak brightness (though common HDR10 content is mastered with a peak brightness of 1,000 to 4,000 nits). The Dolby Vision standard allows 12 bpc.
But with low bit depth, besides brightness issues, we get poor colour smoothness. JPEG’s TRC is there because 8 bits are far too few for quality image reproduction. So the developers used a feature of human vision (roughly logarithmic sensitivity to light) to push the code-value density into the darker zones, leaving the lighter ones sparse. As a result, on JPEG images you may often see banding in the sky, because it falls into the lighter, sparser zone. And please don’t forget that sRGB and JPEG are standards from the CRT era. Viewing hardware has made a huge leap since then, but on the software side we are still holding onto those ancient formats…
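To put a number on that sparse bright end (using the sRGB EOTF, $C_\text{lin} = ((V + 0.055)/1.055)^{2.4}$ for codes above the knee), the last 8-bit code step spans

$$1 - \left(\frac{254/255 + 0.055}{1.055}\right)^{2.4} \approx 0.009,$$

i.e. roughly 0.9% of full-scale luminance per code value near white, more than twice the 1/255 ≈ 0.4% an evenly spaced encoding would give – which is exactly where smooth skies start to band.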

1 Like