3.4 Scene-referred workflow: bad bokeh, highlights reconstruction

I think the bokeh balls between the young man’s and the lady’s shoulders on the left of the picture got a red tint, something I did not see in the other versions:
Of course I was not there, so I cannot tell whether you managed to reveal more of reality or whether it’s an artefact (colour propagation from the red sweater, the jacket and the red light right between their shoulders).

No, it’s the red sweater and jacket that start leaking in the reconstruction. One has to input just enough iterations, and no more, to avoid that. Anyway, these blown highlights should be turned into white by the log tonemapping; that example is exaggerated for illustrative purposes.


Sorry to necro this thread but…

Thank you for the great play raw, really brings out the worst in all raw converters!

I’m working on completely new image processing software, since I have learnt how awful, broken and wrong almost all software is from a colour perspective (thanks to @troy_s). Processing using RGB channels (for curves, highlight roll-off etc.) is 100% wrong and should be avoided. RGB curves produce very recognisable ‘digital’ hue shifts that we have been used to seeing since the 90s and don’t even notice anymore; however, they take images much further away from ever looking like film! All curve effects can be replicated using linear exposure and perceptual saturation as building blocks, without the ugly hue problems!

I have processed this image with my new software (no, it’s not in any usable or releasable state yet). No work at all, just exposure, white balance/tint, a touch of saturation and a gentle base curve. It handles just about every aspect of the image perfectly as far as I can see …


Can you share, in a big-picture way, how you are moving away from RGB in your software?

A member messaged me to ask about how I am doing curves, here is the explanation I wrote in response:

The saturation method works like this: I essentially use the curve’s gradient as the saturation factor, except with a large ‘d’ to simulate the way channels sit far apart from each other in value. So I generate two points above and below the Y value to emulate a couple of channels, then apply the curve to those two points. The difference between the two curved points, divided by the difference between the two original points, gives the saturation factor. I find the exact way these points are generated doesn’t matter much; if you want, you can just use the gradient of the curve as the saturation factor (which is logically equivalent to the two points being extremely close to Y), but I believe having the points further apart simulates the saturation behaviour of per-channel curves more closely than the gradient alone.

Here’s pseudocode:

// calculate luminance factor
new_Y = curve(pixel.Y)
lum_fac = new_Y/pixel.Y

// here I generate 2 points, assuming the range is 0-1; if you are
// not working in a limited 0-1 range, use a different way to generate
// p_above, like Y*1.5 instead
p_below = 0.5 * pixel.Y
p_above = (pixel.Y - 1)*0.5 + 1
sat_fac = (curve(p_above)-curve(p_below)) / (p_above-p_below)

// apply luminance factor like linear exposure
pixel *= lum_fac

// apply saturation, using a perceptual space like Oklab, IPT or Jzazbz
// for slightly better hue linearity (not CIELAB though, that is much
// worse than simple linear saturation)
pixel = saturate(pixel, sat_fac)
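If it helps to make this concrete, here is a runnable sketch of the pseudocode above in Python. The helper names are mine, and I assume a plain linear saturate() about the new luminance (rather than a perceptual space), with the saturation factor normalised by the exposure factor so that a purely linear curve leaves saturation unchanged:

```python
def apply_curve_as_exposure_and_saturation(rgb, curve, d=0.5):
    """Apply `curve` as a luminance (exposure) factor plus a saturation
    factor derived from the slope of the curve's chord around Y.

    rgb: linear RGB triple in the 0-1 range; d sets how far apart the two
    probe points sit around Y (larger d mimics per-channel curves more).
    """
    # Rec. 709 luminance weights (an assumption; any Y estimate works)
    y = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    if y <= 0.0 or y >= 1.0:
        return list(rgb)

    # luminance factor, applied like a linear exposure change
    new_y = curve(y)
    lum_fac = new_y / y

    # two probe points below and above Y (assumes a 0-1 working range)
    p_below = d * y
    p_above = (y - 1.0) * d + 1.0
    sat_fac = (curve(p_above) - curve(p_below)) / (p_above - p_below)
    # normalise so a purely linear curve leaves saturation untouched
    sat_fac /= lum_fac

    scaled = [c * lum_fac for c in rgb]
    # plain linear saturation about the new luminance; a perceptual
    # space such as Oklab could be substituted here, as the post suggests
    return [new_y + (c - new_y) * sat_fac for c in scaled]
```

By construction the output luminance equals curve(Y) exactly, and an identity curve returns the pixel unchanged, which makes the behaviour easy to sanity-check.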

This sums up most of it. I am replicating existing processing techniques using mostly linear exposure and perceptual colour spaces as building blocks (this example demonstrates both), plus other things that can be justified in terms of colour science, like von Kries chromatic adaptation, which is per-channel gains inside a CAT space like CAT02 from CIECAM02. Just avoid CIELAB; that screws things up more than it fixes, at least where hue is concerned.
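To make the von Kries point concrete, here is a minimal sketch of “per-channel gains inside a CAT space”. The matrix values are the published CAT02 forward matrix and its (rounded) inverse from CIECAM02; the function names are my own:

```python
# CAT02 maps XYZ into a sharpened LMS-like space where von Kries
# adaptation (three independent gains) behaves well.
CAT02 = [
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
]
CAT02_INV = [
    [ 1.096124, -0.278869, 0.182745],
    [ 0.454369,  0.473533, 0.072098],
    [-0.009628, -0.005698, 1.015326],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def von_kries_adapt(xyz, white_src, white_dst):
    """Adapt a colour from one illuminant to another (whites as XYZ)."""
    lms = mat_vec(CAT02, xyz)
    lms_src = mat_vec(CAT02, white_src)
    lms_dst = mat_vec(CAT02, white_dst)
    # the whole adaptation is just three per-channel gains in LMS
    adapted = [lms[i] * lms_dst[i] / lms_src[i] for i in range(3)]
    return mat_vec(CAT02_INV, adapted)
```

A quick sanity check: adapting the source white itself must land exactly on the destination white, up to the rounding of the inverse matrix.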


Thanks so much for sharing all those details. This sounds interesting and the results look good. I hope you continue to share so we can follow your progress. Likely this work deserves its own thread.

There was a recent play raw with lots of noise…might be a good test image for your approach…

I don’t think a lot of software still uses those RGB manipulations you mention. Just saying :slight_smile:

Why are all the sunset pictures on the internet so yellow then? :grinning:


Noisy raw… scary. I have stayed away from those so far, not sure if my processing is equipped for that kind of noise, but I’ll give it a try.

Software can’t solve people’s taste in things.


I don’t think a lot of software uses those rgb manipulations anymore you mention. Just saying :slight_smile:

“Tone curves” in all raw converters are literally an example, though. Pulling the “whites” slider upwards in Lightroom creates RGB skewing, as does pulling the “blacks” slider down. RawTherapee uses an S-curve for the base look. It is definitely everywhere.
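For anyone who wants to see the skewing directly, a small sketch (the curve and colour values are purely illustrative, not taken from any particular converter): apply the same S-curve per channel to a warm mid-tone and both the hue and the saturation drift:

```python
import colorsys

def s_curve(x):
    # a typical smoothstep-style S-curve, applied per channel
    return x * x * (3.0 - 2.0 * x)

skin = [0.80, 0.55, 0.40]                 # warm mid-tone, hue 22.5 deg
curved = [s_curve(c) for c in skin]       # per-channel "tone curve"

hue_before = colorsys.rgb_to_hsv(*skin)[0] * 360.0
hue_after = colorsys.rgb_to_hsv(*curved)[0] * 360.0
sat_before = colorsys.rgb_to_hsv(*skin)[1]
sat_after = colorsys.rgb_to_hsv(*curved)[1]
print(f"hue {hue_before:.1f} -> {hue_after:.1f}, "
      f"saturation {sat_before:.2f} -> {sat_after:.2f}")
# hue 22.5 -> 24.6, saturation 0.50 -> 0.61
```

No channel even clips here; the mere difference in slope across the three channel values is enough to rotate the hue and pump the saturation.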


That’s because they create most of the problem. The default behaviour of RGB processing is that high-chroma, high-luminance colours clip to yellow, and people just got used to it.

The reason is that the only high-chroma colour in the sRGB gamut at high luminance is yellow. Example here with a chroma/hue slice at constant lightness:
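The effect is also easy to reproduce numerically. A small sketch (colour values chosen purely for illustration): push a saturated red up in exposure with naive per-channel clipping at the display ceiling and watch the hue march towards yellow:

```python
import colorsys

def naive_exposure(rgb, stops):
    # per-channel exposure gain followed by a hard clip at the display
    # ceiling: the default behaviour of much RGB processing
    gain = 2.0 ** stops
    return [min(1.0, c * gain) for c in rgb]

red = [1.0, 0.2, 0.05]  # a saturated red already touching the ceiling
for stops in (0, 1, 2, 3):
    out = naive_exposure(red, stops)
    hue = colorsys.rgb_to_hsv(*out)[0] * 360.0
    print(f"{stops} stops: {[round(c, 2) for c in out]} hue {hue:.0f}")
# hue marches 9 -> 20 -> 45 -> 60 degrees (pure yellow)
```

As red clips, green keeps rising towards the ceiling while blue lags behind, so everything bright and colourful converges on (1, 1, b), i.e. yellow.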


You’re not the only one aware of the problem, and working on a solution. Just search this forum for “rat-piss”. :smiley:


Oh, for sure there is software out there doing it wrong. But ‘using better models’ has appeared in changelogs ever since the end of the nineties, so for sure there is also software trying to do it differently.

Most of the curves in RawTherapee, for instance, have (the option for) some sort of ‘preserve chromaticity’, and to be honest I haven’t touched the curves in RawTherapee in a long time.

I believe the last few process versions in Lightroom also no longer use the same models for the basic white/black/exposure adjustments (the curves dialog in ACR still does, for compatibility’s sake, I think). ColorPerfect has claimed for years that it does a better job of preserving true colours (but the plugin is a turd to use for most people :))

And I mean nothing against your attempt to do it differently. Nothing but respect for people finding different ways of doing things. Like you say, people have to start getting used to more proper ways somehow :slight_smile: .

But, with all the background info Aurelien gave on color calibration and filmic in darktable, there is a clear difference between ‘attempting’ and ‘doing it right / achieving it’. The whole phase in which a lot of programs used Lab space to do adjustments is a good example. His opinions on CIECAM are another good one :).

There have been attempts to model color behavior, but whether we have actually managed to do it, especially now in the HDR era, I’m not knowledgeable enough to say.

See it more as a sign of “You’re not alone out there trying to do it right”, so keep it up :).

(PS: yellow skies are also a problem of people using lots of saturation and pulling the highlights down ‘because clipping is bad’… thinking the yellow color is supposed to be there all the time. Yes, what you like is subjective… but people also get used to certain looks that aren’t always wanted :slight_smile: )


A bit awkward (and non-intuitive) to have two modules that try to do the same thing but are used for different cases.

Do you have an idea how many pliers a goldsmith uses? And why?
Highlight reconstruction in the iop early in the pipe and highlight reconstruction in the filmic iop at the end of the pipe operate on different input. So: two tools for different use cases.


For the last time (I hope): filmic’s highlights “reconstruction” is first and foremost aimed at ensuring a smooth transition between areas that will be clipped at filmic’s output and non-clipped areas.

If you set filmic’s white clipping bound to the same clipping bound as the sensor, then both features may be equivalent (although they certainly don’t work the same way).

But nobody says that filmic’s white exposure should always match the clipping threshold of the sensor “white” (first and foremost because sensors are RGB and know no white). So this highlight handling is entirely contextual to what filmic does, regardless of the sensor’s dynamic range.

The usual highlight reconstruction happens much earlier in the pipe, on non-demosaiced data, and cares only about sensor bounds. Working on non-demosaiced data means it doesn’t see colour, only arbitrary RGB planes with holes in them.

Thus these are not the same feature. They become equivalent, feature-wise, only if you willingly set filmic’s white exposure to the sensor’s scene-referred clipping value, which is a special case.

This I finally realised the other day! Might “highlight smoothing” be a less misleading term for what it actually does? :slightly_smiling_face:

Still longing for better true reconstruction in darktable. When “reconstruct color” in “highlight reconstruction” works (not often) I’m happy, but it mostly leads to crazy colour shifts. Surely it can’t be as easy as borrowing some reconstruction code from RawTherapee or from here? :grinning:


Sometimes when reconstructing highlights (highlight reconstruction plus the reconstruction in filmic) I get strange horizontal lines in the sky, probably in the blown-out parts. Has anybody experienced something similar? I can add some screenshots later if needed.

The color mode in highlight reconstruction may generate that kind of artifact.