filmic rgb highlight reconstruction

Following up on:

I’ve created an artificial test image with a grey gradient and some coloured spots, in a completely unscientific manner, hoping to gain a better understanding of highlight recovery (not blown raw pixels, simply the behaviour of the algorithm around filmic’s white point, chosen by the user). For these experiments, I set ‘add noise in highlights’ to 0.
gradient-with-colours.tif (5.4 MB)
gradient-with-colours.tif.xmp (5.3 KB)

There are some interesting behaviours. Take this screenshot, and note that reconstruction threshold is + 0.49 EV (above user selected filmic white point), and check how the colours in the yellow square (and those above it) look:

Decreasing the threshold decreases brightness and increases the vividness of the coloured spots:

However, decreasing the threshold further washes them out again:

My guess is that by decreasing the threshold further, these areas now count as ‘brighter’ internally in the recovery logic, and the ‘structure’ (as opposed to ‘texture’) portion of the algorithm gets more weight when processing them. To test this, let’s drag the structure/texture slider way up:
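To make the guess above concrete, here is a toy sketch of how a reconstruction mask could respond to the threshold slider. This is only my reading of the behaviour, not darktable’s actual code: pixels whose luminance exceeds (white point + threshold) get full reconstruction weight, with a soft falloff below that, so lowering the threshold pulls more pixels into the mask and pushes the already-bright spots deeper into it. The function name and the transition width are assumptions.

```python
import numpy as np

def reconstruction_mask(luminance_ev, white_point_ev, threshold_ev):
    """Toy guess at the highlight reconstruction mask (not darktable's code).

    Pixels above (white point + threshold) get weight 1, with a soft
    linear transition just below it, so lowering threshold_ev widens
    the mask and increases the weight of already-bright areas.
    """
    t = white_point_ev + threshold_ev   # mask edge, in EV above mid-grey
    transition = 0.5                    # EV width of the soft edge (assumed)
    return np.clip((luminance_ev - (t - transition)) / transition, 0.0, 1.0)
```

With a white point at +4 EV and a +0.5 EV threshold, a pixel at +4 EV sits at the bottom of the transition (weight 0) and a pixel at +5 EV is fully inside the mask (weight 1); drop the threshold and both move up the ramp.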

It’s too late for me to continue today, but if you find this interesting, let’s continue tomorrow or later.


It would be good to look at the mask too. I think that initially, with the threshold high as in the default, nothing is done; then, if you hit a sweet spot, you get some reconstruction; but as you drop it further and include more pixels in the highlights, the surrounding data/algorithm must be getting affected. I do like these sorts of demonstrations.

Synthetic charts like that are only mildly relevant. You will never encounter hard edges between primary colours like that in real images.

What we try to do here is to diffuse colour through edges and take advantage of inter-channel correlation to save as much detail as possible. Unfortunately, inter-channel correlation implies we don’t have pure R, G or B, but at least a mix of all three.
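To illustrate what exploiting inter-channel correlation means in principle, here is a deliberately simplified sketch: where one channel clips, estimate it from the unclipped channels using the ratio observed in valid pixels. This is an illustration of the general idea only, with made-up function names; darktable’s actual reconstruction diffuses colour spatially through edges, which this global ratio does not attempt.

```python
import numpy as np

def rebuild_clipped_channel(rgb, clip=1.0):
    """Toy inter-channel-correlation recovery (illustrative, not darktable's).

    Where a channel clips, estimate it from the mean of the other two
    channels, scaled by the median ratio seen in unclipped pixels.
    Works only when channels are correlated, i.e. colours are a mix of
    all three, not pure R, G or B.
    """
    out = rgb.astype(float).copy()
    for c in range(3):
        clipped = rgb[..., c] >= clip
        valid = ~clipped
        if not clipped.any() or not valid.any():
            continue
        others = [k for k in range(3) if k != c]
        ref = rgb[..., others].mean(axis=-1)           # proxy from other channels
        ratio = np.median(rgb[valid, c] / np.maximum(ref[valid], 1e-6))
        out[clipped, c] = ref[clipped] * ratio         # correlated estimate
    return out
```

On a pure-primary patch (R at maximum, G and B near zero) the proxy carries no information, which is exactly why hard edges between primaries defeat this kind of recovery.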


I’m a bit confused, I thought the Reconstruct functionality is only needed when the raw data is blown?


Yes, I’ve noticed some artefacts and figured they would be irrelevant, never occurring in real photos. So:

You may get blown pixels because your filmic rgb input contains values over the white point you set. That’s why the highlight reconstruction threshold is relative to your white point.
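Since the threshold is expressed in EV relative to the user-chosen white point, the corresponding linear value can be sketched as below. This is my reading of the UI, not verified against darktable’s source; the 18.45% mid-grey is darktable’s documented default, and the function name is made up.

```python
def reconstruction_threshold_linear(white_point_ev, threshold_ev, grey=0.1845):
    """Convert the EV-relative reconstruction threshold to a linear value.

    Assumption: filmic's white point is given in EV above mid-grey, so
    linear white = grey * 2**white_point_ev, and the threshold slider
    shifts that edge by threshold_ev EV.
    """
    return grey * 2.0 ** (white_point_ev + threshold_ev)
```

With a white point at +4 EV and the threshold at 0 EV, the mask edge sits exactly at the linear white point (0.1845 × 2⁴ ≈ 2.95); a +0.49 EV threshold pushes it above it, which is why nothing is reconstructed at the defaults.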

Thanks, I see.
I’ve just played with this using a previous pic which is not over-exposed. I’m finding it hard to make any significant difference using Reconstruct, but possibly I don’t fully understand it. If you switch the history between the latest step and the dummy vignette step, that is with and without Reconstruct AFAIK, the effect is tiny.

_CTW1680-Jonathan-Critchley-look-WithReconstruct-S-sRGB.xmp (39.2 KB)

As Aurélien mentioned (see the opening post), the aim is not so much recovery, but rather making the transition smooth.

filmic highlights “reconstruction” is first and foremost aimed at ensuring a smooth transition between areas that will be clipped at filmic’s output and non-clipped areas

No reconstruction (filmic mapping only) :

Filmic reconstruction :

Highlights reconstruction using “reconstruct in Lch” :

Highlights reconstruction using “reconstruct color”:

Highlights reconstruction using “clip highlights”:

Pic by @houz.


@anon41087856 , I’m guessing the flames were burnt out (no pun intended…) in the raw data, so no doubt, the Reconstruction does its job there. Whereas I was trying this new (to me) idea of seeing what happens with ordinary highlights when the white point is lowered to make them clip. That could happen with a portrait against a bright cloudy sky I suppose.


Yes, there are highlights that you simply can’t avoid burning (especially with older sensors), or else the whole image is just noise everywhere. So the filmic reconstruction is done specifically to blend bulbs, sun disc, and fires into the rest of the picture.