A member messaged me to ask about how I am doing curves; here is the explanation I wrote in response:
The saturation method works like this: I essentially use the gradient of the curve for saturation, except with a large step size ‘d’ to simulate the way channels sit far apart from each other in value. So I generate two points above and below the Y value to emulate a couple of channels, then apply the curve to those two points. The difference between the two curved points, divided by the difference between the two original points, gives the saturation factor. Exactly how these points are generated doesn’t matter much; if you want, you can just use the curve’s gradient at Y as the saturation factor (which is logically equivalent to the two points being extremely close to Y), but I believe having the points further apart mimics the saturation behaviour of per-channel curves more closely than the plain gradient does.
Here’s pseudocode:
// calculate luminance factor
new_Y = curve(pixel.Y)
lum_fac = new_Y/pixel.Y // guard against pixel.Y == 0 in real code
// here I generate 2 points assuming the range is 0-1; if you are
// not working in a limited 0-1 range, use a different way to
// generate p_above, like Y*1.5 instead
p_below = 0.5 * pixel.Y
p_above = (pixel.Y - 1)*0.5 + 1
sat_fac = (curve(p_above)-curve(p_below)) / (p_above-p_below)
// apply luminance factor like linear exposure
pixel *= lum_fac
// apply saturation; use a perceptual space like Oklab, IPT, or
// Jzazbz for slightly better hue linearity (not CIELAB though,
// that is much worse than simple linear saturation)
pixel = saturate(pixel, sat_fac)
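To make that concrete, here is a minimal runnable sketch of the same steps in Python/NumPy. It assumes linear RGB in the 0-1 range, Rec.709 luminance weights, and the simple linear saturate (a perceptual-space saturate would slot in where saturate() is); the function names and the choice of weights are just illustrative, and curve is whatever monotonic tone curve you want to apply:

import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def saturate(rgb, factor):
    # simple linear saturation: scale the chroma about the pixel's
    # own luminance
    Y = rgb @ REC709_LUMA
    return Y + factor * (rgb - Y)

def apply_curve(rgb, curve):
    Y = rgb @ REC709_LUMA
    if Y <= 0.0:
        return rgb  # nothing to do on a black pixel
    # luminance factor, applied like linear exposure
    lum_fac = curve(Y) / Y
    # two probe points straddling Y, assuming the 0-1 range
    p_below = 0.5 * Y
    p_above = (Y - 1.0) * 0.5 + 1.0
    sat_fac = (curve(p_above) - curve(p_below)) / (p_above - p_below)
    return saturate(rgb * lum_fac, sat_fac)

# example: apply a simple power curve to one pixel
out = apply_curve(np.array([0.18, 0.30, 0.12]), lambda x: x ** 0.8)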
This sums up most of it. I am replicating existing processing techniques, using mostly linear exposure and perceptual colour spaces as building blocks (this example demonstrates both), plus other things that can be justified in terms of colour science, like von Kries chromatic adaptation, which is per-channel gains inside a CAT space like CAT02 (the adaptation transform from CIECAM02). Just avoid CIELAB; it screws things up more than it fixes, at least with hue.
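For completeness, here is what that von Kries step looks like as a sketch, assuming the standard CAT02 matrix and source/destination white points given in XYZ (the function name and structure are illustrative, not from any particular library):

import numpy as np

# standard CAT02 matrix (XYZ -> LMS)
M_CAT02 = np.array([
    [ 0.7328, 0.4286, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

def von_kries_adapt(xyz, white_src, white_dst):
    # per-channel gains in LMS: each cone response is scaled by the
    # ratio of the destination white to the source white
    gains = (M_CAT02 @ white_dst) / (M_CAT02 @ white_src)
    return np.linalg.inv(M_CAT02) @ (gains * (M_CAT02 @ xyz))

# example: adapt a pixel from a D65 white point to D50
d65 = np.array([0.95047, 1.0, 1.08883])
d50 = np.array([0.96422, 1.0, 0.82521])
adapted = von_kries_adapt(np.array([0.4, 0.5, 0.6]), d65, d50)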