Oklab Colorspace

No, JzAzBz is awesome for hue linearity, but it’s a non-linear space, so I use the Filmlight Yrg for good-enough hue linearity in a scene-referred space (also, the Yuv part is merely GUI; the actual algo runs in RGB). If you want to color-grade in a way that can mimic a red or a yellow filter put in front of your lens, aka filtering light, you need linear scene-referred. That’s why I’m really not convinced by perceptual color adaptation models, besides surround adaptation and perhaps gamut mapping, in the actual image editing pipe. We have been able to produce paintings for the past two millennia in a physical space just fine.

Something else to consider is that adaptation may not matter if the surround has the same brightness as the screen, since all the adaptation models I have seen use a ratio between surround luminance and display luminance (peak or average/middle-grey). Given that HDR screens have more backlighting power, they can compete with bright surrounds too, so for HDR devices adaptation may be mostly achieved by simply adjusting the backlighting.


That’s an argument I can get behind.

which is something that every smartphone is capable of (well, it tries to do that), but PC displays… not so much. Colorists like to absolutely control their viewing conditions: constant light, only achromatic colors in the room (spectrometer-checked).

It may not fit your ethics, but alpha compositing, interpolations, and any convolution filter rely on physical spaces to work properly. Also, changing the color of some light, in a physical space, is done by filtering it, which is represented by a simple multiplication of the light emission by the density of the filter (and it works the same in spectral or in RGB). It’s fast to compute, simple, and the result is predictable and looks organic.
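
A minimal sketch of that filtering-as-multiplication idea, assuming linear scene-referred RGB; the emission values and the “warming filter” transmittance below are made-up numbers, just for illustration:

```python
import numpy as np

# Linear, scene-referred RGB emission (hypothetical pixel values).
emission = np.array([0.18, 0.22, 0.35])

# Per-channel transmittance of a hypothetical warming filter, 0..1.
filter_density = np.array([0.95, 0.80, 0.55])

# Filtering light is just a per-channel multiplication in a linear space.
filtered = emission * filter_density
print(filtered)  # [0.171  0.176  0.1925]
```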

The problem is that color adaptation models are built on top of experimental data, and by “experimental” I mean: put 15 undergrad students in a room with controlled lighting, ask them to mix-and-match color patches, then derive least-squares fits of what they sorted out. Every study is hardly reproducible and lacks a large sample, so it’s a big pile of ifs and maybes.

Meanwhile, physics works.

Available in the next G’MIC update (in a few minutes), with new commands srgb2oklab and oklab2srgb:
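
Not the actual G’MIC code, just a sketch of the forward math from Björn Ottosson’s Oklab post, which these commands are presumably based on. It assumes linear sRGB input in the 0–1 range (which is exactly the question raised below):

```python
import numpy as np

def linear_srgb_to_oklab(rgb):
    """Convert linear sRGB (values in 0..1) to Oklab (L, a, b)."""
    r, g, b = rgb

    # Linear sRGB -> cone-like LMS response (Ottosson's first matrix).
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b

    # Non-linearity: cube root.
    l_, m_, s_ = np.cbrt([l, m, s])

    # LMS' -> Oklab (Ottosson's second matrix).
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b_ = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b_

# White (1, 1, 1) should land very close to L=1, a=0, b=0.
print(linear_srgb_to_oklab((1.0, 1.0, 1.0)))
```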


Is that starting from a “linear srgb” input with value range 0 to 1?

That’s a good question. I don’t know. What is the intended input range?

I think the input has to be linear, with range 0 to 1, because maximum sRGB is supposed to map to Lab = 1, 0, 0. Some numeric instability there too perhaps (whether that’s in the algorithm itself I haven’t checked though)…

I must admit I’ve just converted the code from the webpage into G’MIC commands. I’ll add some multiplication/division by 255 to match the sRGB range in G’MIC.
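
Purely as an illustration of that scaling, not the actual G’MIC implementation: G’MIC stores sRGB in 0–255, so values would be divided by 255 on the way in (and multiplied back on the way out), with the sRGB transfer curve also removed if the conversion expects linear input, as discussed above. A sketch reusing the linear_srgb_to_oklab() function from the earlier example (the helper name is mine):

```python
import numpy as np

def srgb255_to_oklab(rgb255):
    """Encoded sRGB in 0..255 -> Oklab, via normalisation and linearisation."""
    # Normalise the 0..255 range to 0..1.
    v = np.asarray(rgb255, dtype=float) / 255.0

    # Undo the sRGB transfer function to get linear light.
    linear = np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

    return linear_srgb_to_oklab(linear)

print(srgb255_to_oklab([255, 255, 255]))  # ~ (1, 0, 0)
```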

I am well aware of that, I rely on it, and I want its application in many, many places! In that vein, I can only highly recommend to everyone The Hitchhiker’s Guide to Digital Colour, https://hg2dc.com/ by Troy Sobotka; some people might know his name.

Yet I see applications for ‘things’ to be done in a perceptual way. Probably more suited to the end of whatever pipeline, okay sure, but tailoring stuff to the human visual system is not outright a bad idea. Dolby, in their whitepapers for the Perceptual Quantizer EOTF and its ICtCp color model (Dolby Whitepapers), make it quite clear that at the same precision/bit depth, ΔE00 gets better in comparison to YUV or a souped-up HDR YUV version. How so? By tailoring PQ for just-noticeable differences in luma perception, and for an equal distribution and size of MacAdam ellipses with regard to the Ct and Cp parameters. I’m sure that JzAzBz is trying something very similar (as are ITP and all the others), in order to avoid licensing fees.
So doing this for delivery purposes makes sense. It saves some bandwidth when it’s tailored for perception.
Funny enough, the creation of Oklab follows the same/similar principles as what Dolby or Huawei did for ICtCp/JzAzBz, and the end result is respectable and open-sourced, albeit non-HDR.
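
As a rough illustration of the “tailored for just-noticeable differences” point, here is the SMPTE ST 2084 PQ encoding curve (standard constants; the function name is mine). The mapping of 0–10000 cd/m² absolute luminance to a 0–1 signal is what then gets quantized to 10 or 12 bits, with far more codes spent on the dark end than a simple gamma would spend:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(luminance_nits):
    """Map absolute luminance (0..10000 cd/m^2) to a 0..1 PQ signal."""
    y = np.asarray(luminance_nits, dtype=float) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

# Roughly equal signal steps correspond to roughly equal perceptual steps;
# 100 nits already lands around code value ~0.51.
print(pq_encode([0.1, 100.0, 1000.0, 10000.0]))
```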

Maybe there are more places where even non-HDR perceptual tailoring might be useful. Again, if JzAzBz covers the perceptual stuff for many applications, great! If Yrg controls on top of linear RGB are useful, that’s great too. I am generally interested in the reasoning behind things.

Also, I want to reiterate that while Oklab might have only limited applicability, I find the presentation of what has been done, and why it was done, exceptional in its clarity. So even if one thinks that this is not that useful, the presentation for me definitely is.


There is a researcher at my institute who was very interested in putting some solid experimental data into this problem, using modern equipment, modern theories about human vision, and modern analysis tools. We even got involved in the early stages of the experimental design. Sadly, her PhD student went on to do other things and the project never left the concept phase, but everything is there for the right student to pick it up.


Would be interesting to see old ideas undergo modern rigour.