OKLAB, CIELAB, linear CIELAB tonemapping

Oklab, CIELAB and linear CIELAB: are these color spaces capable of handling HDR?
Yes, they are perfectly fine for HDR image processing; the only thing they can't do is encode an HDR image into a 10-bit compressed file, because HDR values go above 1.0.

How do you correctly apply a tone mapping operator like Reinhard in these color spaces? Just linearize the L channel, apply the tone mapping, re-apply the transfer curve, then do the saturation correction.
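A minimal sketch of that order of operations, assuming a CIELAB-encoded L channel and plain Reinhard as the operator (Oklab or linear Lab would just swap the transfer functions; all names here are illustrative):

#include <math.h>

// CIELAB companding and its inverse (Y is luminance relative to white).
static float lab_f(float t)
{
    return t > 216.0f / 24389.0f ? cbrtf(t)
                                 : (24389.0f / 27.0f * t + 16.0f) / 116.0f;
}

static float lab_f_inv(float t)
{
    return t > 6.0f / 29.0f ? t * t * t
                            : 3.0f * (6.0f / 29.0f) * (6.0f / 29.0f) * (t - 16.0f / 116.0f);
}

// Tone map the L channel: linearize, apply the operator, re-encode.
// The a/b chroma channels are handled by the saturation correction afterwards.
float tonemap_L(float L_star)
{
    float Y   = lab_f_inv((L_star + 16.0f) / 116.0f);  // back to linear luminance
    float Y_t = Y / (1.0f + Y);                        // simple Reinhard operator
    return 116.0f * lab_f(Y_t) - 16.0f;                // re-apply the transfer curve
}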

Saturation correction: the input/tonemapped norm ratio has never really worked for me; it gives too much or too little saturation and always results in a fake HDR effect.
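Roughly, the correction I mean is something like this (the function and variable names are just illustrative, not PhotoFlow's):

// Scale the chroma (a/b) channels by the ratio of some norm of the pixel
// before and after tone mapping; the small epsilon avoids division by zero.
static void norm_ratio_saturation(float norm_in, float norm_out, float *a, float *b)
{
    float sat = norm_in / (norm_out + 1e-6f);
    *a *= sat;
    *b *= sat;
}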

I would point again to this new strategy
https://discuss.pixls.us/t/3-4-scene-referred-workflow-bad-bokeh-highlights-reconstruction/22135/60

This is really great because it mimics the saturation behaviour of RGB per-channel curves: shadows get a nice saturation boost and highlights are smoothly desaturated.

Linear CIE Lab: this is just CIELAB but without the cube-root companding applied to XYZ. It's fast, its hue linearity is very similar to YCbCr or RGB ratios, and overall it's better.
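Concretely, taking "linear CIE Lab" to mean the usual L/a/b opponent construction with the companding replaced by the identity (the exact axis scaling is a free choice in this sketch):

// Regular CIELAB from white-point-normalized XYZ uses f(t) = cbrt(t) (plus a
// linear toe); "linear Lab" here simply uses f(t) = t, so L stays linear in Y.
void xyz_to_linear_lab(float X, float Y, float Z, float *L, float *a, float *b)
{
    *L = 100.0f * Y;          // lightness proportional to luminance
    *a = 500.0f * (X - Y);    // CIELAB-style opponent axes, without companding
    *b = 200.0f * (Y - Z);
}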

Here are some presets based on sigmoid tone mapping, so they can be used like a "base curve":
photoflow preset.zip (7.7 KB)

The tone mapping is applied in the Oklab, CIELAB, linear CIELAB and RGB color spaces.
I've included the crosstalk tone mapping too, with linear CIELAB desaturation and re-saturation.
Load the presets in PhotoFlow using a linear working space.

P1000320.RW2 (18.7 MB)
This file is licensed Creative Commons, By-Attribution, Share-Alike.

[Images: Original, RGB tonemapping, OKLab tonemapping, Linear CIE Lab tonemapping, CIE Lab tonemapping, Crosstalk tonemapping]


Can you explain why all of these are virtually identical, except the "CIE Lab tonemapping", which has clearly different tone compression? The only other difference I see between the samples is a minor apparent color cast in the blacks.


That's my fault: I had not perfectly linearized from the Lab TRC.

The rest look virtually identical because the saturation factor is calculated in a way that keeps the result close to the RGB image, and most of the time the RGB hue shift is really hard to see.

For example, this is my attempt with darktable's filmic, with midtone saturation set to 0.
P1000320.RW2.xmp (8.1 KB)

The biggest difference is only in the saturation.

A better example:

CRW_0446.DNG (23.6 MB)
This file is licensed Creative Commons, By-Attribution, Share-Alike.

[Images: RGB tonemapping, OKLab tonemapping]


You would also expect a hue difference when you scale saturation while changing colour space, say from BT.2020 or AdobeRGB to sRGB/BT.709, with the hue difference depending on the scaling space chosen.

But showing that on a web browser expecting sRGB input is difficult.


"mimics the saturation behaviour of RGB per-channel curves" - Yep, that's exactly why I came up with that method. But I think that's the wrong way to look at it.

When you reduce luminance, apparent saturation also reduces; this is the Hunt effect. So what you want is to compensate for that, and per-channel curves (and my method) both do that, crudely. This even happens automatically if you do the curves in a perceptual space, but you can also do it manually.

So this "boost" isn't actually a boost; it's just preserving the saturation (as a sensation) as you reduce the exposure of a pixel.

I use the cube root as the brightness scale (that's what Lab/Luv/Oklab use); it's a good enough approximation of the Hunt effect within an SDR range.

Pseudocode for this method:

// Apply the tone curve.
new_Y = curve(pixel.Y);

// Hunt saturation factor to *preserve* saturation, not even boost it.
hunt_fac = cbrt(pixel.Y) / cbrt(new_Y);

// Seems sensible to limit the value in some way.
if (hunt_fac > 1.5) hunt_fac = 1.5;

// Apply the saturation; use a perceptual space like Oklab, IPT or Jzazbz
// for slightly better hue linearity (not CIELAB though, that is much
// worse than simple linear saturation).
pixel = saturate(pixel, hunt_fac);

This code will need some kind of zero check, or you can add a tiny offset like 0.0000001 to prevent division by zero.

Edit: looks like you are already going to a perceptual space and then linearising? Then you might as well just do curves in the perceptual space without touching the saturation! It will do what I described here automatically. The only reason I see to keep using my old ratio method is for some kind of backward compatibility with per-channel settings.
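A minimal sketch of doing the curve directly in the perceptual space, assuming hypothetical linear_srgb_to_oklab / oklab_to_linear_srgb helpers (e.g. lifted from Björn Ottosson's Oklab reference code) and some tone curve:

// Hypothetical conversion helpers (see the Oklab reference implementation).
typedef struct { float L, a, b; } Lab;
Lab  linear_srgb_to_oklab(float r, float g, float b);
void oklab_to_linear_srgb(Lab c, float *r, float *g, float *b);

// Curve Oklab L only; a and b are left untouched, so the Hunt-style
// compensation described above happens implicitly.
void tonemap_in_oklab(float *r, float *g, float *b, float (*curve)(float))
{
    Lab c = linear_srgb_to_oklab(*r, *g, *b);
    c.L = curve(c.L);
    oklab_to_linear_srgb(c, r, g, b);
}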
