Oklab Colorspace

Hello People!

Just wanted to throw this https://bottosson.github.io/posts/oklab/ into the discussion of yet another perceptual colorspace that someone out there (not me!) created.

I specifically find the differences to CAM16-UCS with respect to the full gamut plot and very saturated blues interesting. Maybe this could be useful in the context of SSF-generated Profiles @ggbutcher or when it comes to white balancing/color calibration calculations such as gamut compression @anon41087856 ?

It’s not designed to be an HDR colorspace like JzAzBz (bummer), but it consequently spares the user from the ‘assumed vs. actual viewing conditions’ complication that, AFAIK, has to be taken into account for JzAzBz to work as intended. But for calibration and/or profile generation this might be sufficient?

Denoising in the right colorspace is another application that jumps to mind.

And it’s claimed to be ‘fast’, whatever that means regarding colorspaces.

On top of that, the key benefits and the ideas behind it are neatly laid out.

Even if it’s not useful for anyone here, it is a very nice presentation of ideas and of the context in which it came to be.

Cheers and merry Christmas!
Bob :smiley:

5 Likes

Did this come out of nowhere or is there context to it? His bio gives us some clues, at least about himself:

For about a bit more than a decade, until 2020, I worked at DICE and the Frostbite game engine with things like software engineering, architecture, technical product management and team leadership.

There are quite a few colour ideas floating around in the research sphere but I don’t recall coming across this one, so thanks for sharing.

1 Like

Interesting.

One of the vestiges of developer ignorance in rawproc is my use of HSL for the saturation operator. I may try this as an alternative, after I release 1.0…

2 Likes

HSL is arithmetically simple but perceptually horrible. Personally, I prefer HCL, which is slightly more complex arithmetically and slightly better perceptually.
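For anyone unfamiliar, HCL is just CIELAB re-expressed in cylindrical coordinates, so it inherits CIELAB’s perceptual behavior (a minimal sketch; the function name is my own):

```python
import math

def lab_to_hcl(L, a, b):
    # HCL (a.k.a. LCh) is CIELAB in cylindrical coordinates:
    # chroma is the radius in the (a*, b*) plane, hue the angle.
    C = math.hypot(a, b)
    H = math.degrees(math.atan2(b, a)) % 360.0
    return H, C, L
```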

Oklab looks interesting, and seems not to have the “weird” values we get from Jzazbz, so could be more useful.

1 Like

For me it’s what I call a Twitter drive-by: the algorithm shoved it into my face.

I guess it’s just one of many. It probably has downsides too that aren’t yet talked about much. But I found the approach and the documentation worth sharing. You’re welcome!

Another application that I haven’t thought about! :blush:

Glad to hear. I am very curious how much ‘better’ this can be and what the unexplored downsides might be.

If your handle is @PhotoPhysicsGuy, then it makes sense.

Is Oklab a play on OK boomer? My first thought when I saw the thread. :joy_cat:

HSL is poor for maintaining uniformity of lightness as you change hue, but for a saturation change operation I don’t imagine there being any issue with using it instead of something more powerful like Oklab.

For other operations, though, I can definitely see the advantages of Oklab.

1 Like

Recently, I’ve been trying a touch of saturation in otherwise “color-dull” images in an attempt to get another kind of contrast. When I find myself “amping-up” the gain aggressively, I usually abandon the image, or I go into “abstract mode” and abandon all thought of colorimetric consistency, in pursuit of a “look”… :stuck_out_tongue:

Also, HSL produces negative saturation if L > 1.
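A quick sketch of why, assuming the textbook HSL formulas and unclamped input:

```python
def hsl_saturation(r, g, b):
    # Standard HSL saturation; inputs deliberately left unclamped.
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    L = (mx + mn) / 2.0
    # For L > 0.5 the denominator is 2 - (max + min) = 2 - 2L,
    # which goes negative as soon as L > 1.
    return (mx - mn) / (mx + mn) if L <= 0.5 else (mx - mn) / (2.0 - mx - mn)

print(hsl_saturation(1.4, 1.2, 1.1))  # L = 1.25 -> S = -0.6
```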

Increasing the saturation of colors, while maintaining perceived hue and lightness

So it’s increasing chroma, because saturation is a mix of chroma and brightness, and you can’t maintain lightness when changing saturation.
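In an opponent space like Lab or Oklab, a chroma change is just a scaling of the (a, b) components, which is exactly why L can stay fixed (a minimal sketch):

```python
def scale_chroma(L, a, b, k):
    # Chroma lives entirely in the (a, b) plane, so scaling it cannot
    # touch L. A "saturation" change, by contrast, trades chroma against
    # brightness, so lightness cannot stay fixed.
    return L, k * a, k * b
```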

Turning an image grayscale, while keeping the perceived lightness the same

XYZ does that already.
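Relative luminance Y alone determines CIE lightness, so a neutral with the same Y does the job (a minimal sketch, assuming a D65 white point):

```python
def xyz_to_gray(X, Y, Z, white=(0.95047, 1.0, 1.08883)):  # D65 white point
    # CIE L* is a function of Y alone, so replacing the chromaticity
    # with the white point's while keeping Y preserves perceived lightness.
    return (white[0] * Y, Y, white[2] * Y)
```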

So it’s useless.

We have everything we need for profiling and calibration: XYZ.

And that would be a physically-defined space, not a perceptual one, because non-linear scaling will mess up the noise variance.

1 Like

Creating a ‘look’ is certainly not an easy thing. The colorist in cinematography is probably hard to replace with a simple saturation slider. Look creation can probably also be done in Oklab, but I am not sure if this is the intended use. It’s a nice side effect that color constancy looks quite okayish relative to Lab, and a bit better than Lab for colors outside of sRGB.

Oof maybe? You’re asking the wrong one here! :smiley:

So it’s not useful for programs that have a XYZ pipeline?

Would you advise against its use in general, or for raw editing specifically? I am well aware of your work towards a pipeline that can accommodate HDR, large-gamut inputs, with all the benefits and future-proofing that come with it. But is a tool (Oklab) that confidently surpasses sRGB (easy), comes close to CAM16-UCS (not so easy), and tackles a specific problem of JzAzBz (its non-trivial implementation regarding actual viewing conditions) really worth being called useless? I am fine if this is unmarked intentional hyperbole, or if it has a foundation in practical aspects of what darktable wants to achieve.

Finding a good estimator/predictor for the noise variance isn’t easy anyway. It’s not purely Poissonian photon statistics, because the sensor ADC is not perfect AND the photodiodes have, by design, different sensitivities at different wavelengths (the CFA is not a box function vs. wavelength, and quantum efficiency is also wavelength-dependent). A counted electron in, e.g., the blue channel can correspond to very different incident fluxes (and with that, significantly different noise variances). Yes, doing noise estimation in a perceptual space compounds the problem somewhat, but a physically-defined space does not suddenly fix messed-up noise variances; it may improve them, but noise variances will stay quite complex anyway.
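For context, the usual first-order model is affine Poissonian-Gaussian, and even that hides the wavelength dependence in per-channel parameters (a sketch; `gain` and `read_noise_var` are illustrative names):

```python
def noise_variance(signal, gain, read_noise_var):
    # Shot noise: variance grows linearly with the signal (in electrons).
    # Read noise: a constant floor from the electronics/ADC.
    # CFA transmission and quantum efficiency fold into a per-channel gain,
    # which is why the same digital count can mean different variances.
    return gain * signal + read_noise_var
```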
Denoising in a simple perceptual space like this one could be fast to implement and, with decent localized noise estimation, be a tool for someone. Just like wavelet denoising still exists in parallel to NL-means.
In short, I see your argument, but I don’t think it’s as clear cut as you make it sound.
I could always be missing something though, and I will always stay curious.

Again: I do not need to defend that specific colorspace. But it touches on some arguments that I find at least worth discussing.

I am also curious if and how this specific community would discuss the benefits of certain colorspaces. What would be on the list of requirements? HDR, gamut size, hue linearity, etc.? What would be the holy grail of colorspaces, which ones come close, and why? Which one is lacking in which aspects? How is uselessness defined in the context of photo editors?

Cheerio!

3 Likes

It seems to me that denoising is most easily done in a linear colorspace, before demosaicing. But that shouldn’t stop people from experimenting. Try it in a perceptual colorspace. After all, noise reduction is (for me) an aesthetic judgement. It might work fine.

A perceptual color space for image processing says Oklab is designed to be a better perceptual colorspace than CIELAB. If it is, as claimed, then it could be used anywhere that CIELAB is currently used.

I’m not keen on the channel names: L, a and b. This invites confusion with CIELAB, which has channels that should be called L*, a* and b*, but everyone abbreviates them to L, a and b.

And the “Comparison with other color spaces” gives RMSE scores, but doesn’t give units. Are the numbers on a scale of 0-100 or 0-255 or 0-65535 or what? Grrrr. I know the table is for comparing between colorspaces, but imagine if a road atlas gave distances between cities but didn’t say if the numbers were miles or kilometers or parsecs or whatever. Grrrr.

2 Likes

Hi everyone! Author here. I’ve been reading some of the discussions here, but never posted. Lots of interesting threads :slight_smile:.

The RMS values are CIEDE2000 distances. It is explained a bit further up in the article, but not where the table is, sorry about that.

I agree naming is a bit problematic, but inventing more letters to describe opponent color spaces is also problematic. I went with Lab, since that is the most common convention with RLab, HunterLab, LLab, CIELAB, CAM-UCS (the ab part at least) etc.

The problem I see is that, while a lot of research goes into models like CAM16 and JzAzBz, in practice they are hard to use, and sRGB, HSV and CIELAB get used instead (or, if the advanced models do get used, it is often done incorrectly, since accounting for viewing conditions is hard).

I wanted there to be a simple alternative that does an OK job at predicting hue, lightness and chroma and that is easy to adopt.
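For reference, the conversion is small enough to sketch here in Python; the constants are the ones published in the post (double-check there before reusing them):

```python
def linear_srgb_to_oklab(r, g, b):
    # Inputs are linear sRGB in [0, 1].
    # Linear sRGB -> approximate cone responses (LMS).
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b

    # Non-linearity: a simple cube root.
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)

    # Linear map to the Lab-like opponent representation.
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b
```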

8 Likes

@bottosson Welcome to the forum!

3 Likes

@bottosson welcome to the forum!

Do you know when simpler models like yours actually start to show significant errors when you force-feed them an HDR workflow? Does it ‘fail’ at higher contrast ratios (not even talking about viewing conditions here) than sRGB, HSV, CIELAB? Or does it simply fail a tad more gracefully? Might Oklab also be OK at extended dynamic range?

I would think of it like this: Oklab will behave as if the background and surround are an even grey of similar luminance to the colors you are modeling. This means it treats colors of all luminance levels the same and in this way makes no assumption about the viewing conditions. For certain types of operations, I would say this behavior can be an advantage, since it will be consistent and quite predictable. For perceptual tone mapping, for example, it definitely isn’t the right model.

Models aimed at modeling HDR perception will always make assumptions about viewing conditions (and behave incorrectly if those are not met); otherwise they couldn’t perform better than an Oklab-like model (although you could probably make a better model with better experimental data and a more advanced mathematical model under the same assumptions).

I haven’t looked into this that much, but I think Oklab and a model including HDR lightness would be quite similar within about an order of magnitude around the background luminance, and both results could be meaningful depending on the use case. This is based on reading papers like this one:

“Brightness, Lightness, and Specifying Color in High-Dynamic-Range Scenes and Images”
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.298.8538&rep=rep1&type=pdf

1 Like

Color profiling and calibration is mostly a 3D vector base { rotation + homothety + transvection }. When you calibrate a medium, you find the 3×3 matrix of a linear map that minimizes the error between reference and measurement. You can do it in virtually any linear space, but we usually do it in XYZ (as a connection space) to have profiles that are interchangeable.
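A minimal sketch of that fit, with simulated patch data (hypothetical numpy code):

```python
import numpy as np

# Rows are XYZ triplets: the reference patches and the measured responses
# of the medium being calibrated (stand-ins for e.g. a 24-patch chart).
reference = np.random.rand(24, 3)
true_map = np.array([[0.90, 0.05, 0.00],
                     [0.02, 1.10, 0.03],
                     [0.00, 0.04, 0.95]])
measured = reference @ true_map.T  # simulated medium response

# Least-squares 3x3 linear map taking measurements back to the references:
# minimizes ||measured @ M - reference||, so the profile matrix is M.T.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
```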

No color model surpasses sRGB because sRGB is not a color model, it’s a medium space. The point is to work in the right color space/model at any step of the pipeline depending on what you try to do.

For viewing/surround adaptation, we have CIECAM16. It’s LDR, but I’m not sure it’s possible to adapt for surround properly in HDR anyway. For color constancy in HDR, we have JzAzBz. But then there are many other spaces that aim at ‘sort-of’ hue linearity while preserving the emission-wise linearity of the signal. For the color balance reboot, I use https://doi.org/10.2352/issn.2169-2629.2019.27.38. It’s all about choosing the data representation that has the properties you want for the operation you have to perform.

But finding the bestest non-HDR CAM is useless, since we already have CIECAM16, IPT, IgPgTg, ICtCp and so on.

Tackle the problem where it arises. Sure, you might be able to hide chroma noise in a perceptual model in a quick and dirty way, but then you can’t use variance profiles as a prior because your variance is now non-uniform.

Also, variance depends linearly on signal intensity, so linearity allowed us to define an exposure-invariant guided filter by simply normalizing the patch-wise variance by the average of the patch. You lose that kind of property when going perceptual.
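Roughly, in linear data dominated by shot noise (a minimal sketch):

```python
import numpy as np

def normalized_patch_variance(patch):
    # With shot noise in linear data, var ~ gain * mean, so var / mean is
    # (roughly) exposure-invariant. A non-linear perceptual encoding
    # rescales the signal unevenly and breaks this proportionality.
    return np.var(patch) / max(np.mean(patch), 1e-9)
```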

1 Like

That is the assumption. :wink:
I.e. the CAMs allow for other cases.

Could you give us a little more context on how you developed Oklab?

1 Like

I probably used the word “look” a bit at odds with the convention. “Abstraction” is probably a better term; what I’m looking to do is depart from realism in some manner. Here’s an example:

[image: DSC_7696c-small]

My wife’s uncle Gary. The original image looks almost nothing at all like this one; aggressive crop, then extreme manipulation with GIMP G’MIC tools whose names I don’t recall. In the original, he’s not even looking at the camera…

1 Like

Okay, so within an order of magnitude it might be dipping its toes into what Apple calls EDR (extended dynamic range). But I agree that viewing conditions could mess with this a fair bit.

Thanks for that paper!

Whoops, I guess I meant either YUV or CIELab.

So use the right tool for the job, whatever ‘right’ means in this context. Got it.

So, that’s interesting. Actually, two interesting things! Surround adaptation sounds a bit like white balance, and for that an LDR perceptual space is used? Probably not the one with the -UCS extension, in which an attempt was made to normalize the MacAdam ellipses? Because CAM16-UCS has problems with spectral blues, as we saw in the presentation by @bottosson. The second interesting thing: proper adaptation for surround in HDR… not possible? Hard?

Sure. But apparently it’s not good enough for hue linearity, because for that you use a Filmlight-developed thing:

Unfortunately I can’t find a version that I don’t have to pay for. I’m sure it’s cool; the Filmlight people know what they do and why they do it.

Could all this be summed up like this? You want to use HDR CAMs as much as possible; for certain things, LDR tools like CIECAM16 are used; for special cases that JzAzBz cannot cover, other HDR tools are used. And the HDR tools, at the moment, do not take viewing conditions into account.