New Sigmoid Scene to Display mapping

Here is the thing: I don’t care how it looks and neither should you. “Looks good” belongs to amateur empiricism, unless you are Kodak and can conduct extensive aesthetic studies with N > 100 in controlled conditions. The fact that it looks good on some pictures doesn’t guarantee it will look good on all pictures. And also, looks good to whom? Let’s ditch that thinking.

A tone curve is a mapping. The only relevant question is: how flexible can it be made? The user will decide what looks good, so starting with his intent (target look), can we provide him with a mapping whose reasonable set of parameters reasonably matches his target? That’s the only thing that matters, design-wise.

That’s also the primary reason for node-based tone curves. Draw points, interpolate, done. No intermediate assumption, just honor the user’s input. Unfortunately, the tone curve GUI is not suited to scene-referred editing, where DR \in ]0; +\infty[, and the relevance of a 2D graph to represent a 1D mapping is disputable. Also, spline interpolations do overshoot.
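
To make the overshoot point concrete, here is a minimal sketch (my toy node values, nothing from darktable): a classic C2 cubic spline through monotonic nodes versus a monotone Hermite interpolator (PCHIP).

```python
# Minimal sketch: why plain spline interpolation between user nodes can overshoot.
# Node values are made up for illustration; any sharp knee will do.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Hypothetical tone-curve nodes with a sharp knee before the highlights
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([0.0, 0.05, 0.10, 0.90, 1.00])

xs = np.linspace(0.0, 1.0, 1001)

cubic = CubicSpline(x, y)          # classic C2 cubic spline: no monotonicity guarantee
pchip = PchipInterpolator(x, y)    # monotone Hermite spline: preserves node ordering

for name, f in (("cubic", cubic), ("pchip", pchip)):
    ys = f(xs)
    print(f"{name}: min={ys.min():.3f} max={ys.max():.3f} "
          f"monotone={bool(np.all(np.diff(ys) >= 0))}")
```

The plain cubic gives no monotonicity guarantee and will typically ring around such a knee, while PCHIP is monotone by construction; that ringing is exactly the kind of surprise a node-based GUI hands back to the user.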

But mapping in HDR poses an extra problem compared to SDR tone curves, which boils down to a trade-off: how much do we want to leave local contrast unchanged in the midtones (aka the safe range commonly shared between SDR and HDR) while compressing global contrast?

Because if you simply adjust contrast for the midtones and harshly trim the output to the display DR without further mapping, the picture looks believable and correct. We have done that for decades. But we, photo geeks, will mourn the lost details in deep shadows and bright highlights, and the wasted camera possibilities. I guarantee that replacing tone mapping altogether with just a soft clipping (like the highlights compression you will find in the “negadoctor” module) looks a lot better than all the shit filmic does. Unfortunately, you won’t be able to get skies back, so you need to love white skies with flat clouds to consider going this way. Also, no color handcuffs, so saturation and hues will go somewhere unexpected, depending on the setting’s strength and the RGB color space in use.
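
For reference, here is roughly what I mean by soft clipping, as a toy sketch (my own formula, not the negadoctor code): identity below a threshold, then a smooth exponential shoulder that compresses everything above it toward the display white.

```python
# Toy soft clip: leaves everything below `threshold` untouched and compresses
# [threshold, +inf[ into [threshold, white[. C1 at the junction (slope 1 on both sides).
import numpy as np

def soft_clip(x, threshold=0.8, white=1.0):
    x = np.asarray(x, dtype=float)
    span = white - threshold
    shoulder = threshold + span * (1.0 - np.exp(-(x - threshold) / span))
    return np.where(x <= threshold, x, shoulder)   # identity below the knee, smooth shoulder above

print(soft_clip([0.2, 0.8, 1.5, 4.0, 16.0]))       # highlights converge toward `white`, never cross it
```

Midtones and everything below the knee are left alone, which is why it looks believable; everything above the knee is crushed toward white, which is why the skies are gone.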

At the other end, if we do a simple log scaling, we can manage to keep middle grey where it is while bringing both ends of the scene DR back to the display DR, but at what cost? That picture will look washed out and ugly, not believable. Yet, if you go into the “unbreak input profile” module, use it in logarithmic mode, and if your picture doesn’t need too much compression, it does the trick decently.
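
Again as a toy sketch (a plain log2 shaper with assumed bounds, not the actual module code): the whole scene DR fits into the display range, but every EV gets the same small display increment, which is exactly why the result looks flat.

```python
# Toy log shaper: map scene-linear values to display [0, 1] in log2 space.
# black_ev / white_ev are assumed scene DR bounds below / above middle grey.
import numpy as np

def log_shaper(x, grey=0.18, black_ev=8.0, white_ev=4.0):
    ev = np.log2(np.maximum(x, 1e-9) / grey)                   # exposure relative to middle grey
    return np.clip((ev + black_ev) / (black_ev + white_ev), 0.0, 1.0)

# Every EV gets the same display increment (1/12 here), so the global DR fits,
# but midtone contrast is flattened: that is the washed-out look.
for v in (0.18 * 2.0 ** -8, 0.18, 0.18 * 2.0 ** 4):
    print(f"scene {v:10.5f} -> display {log_shaper(v):.3f}")
```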

So we need to account for both ends of that trade-off: protecting midtones while squeezing the DR bounds, in a way that lets the user define the weighting between the two strategies. Now, the mathematical challenge is to devise a smooth, continuous, strictly monotonic (aka bijective, aka invertible) function that allows all that.
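
Written down as a spec, in my own notation (not the exact filmic math), that gives something like:

```latex
f : [x_{\min},\, x_{\max}] \to [y_{\min},\, y_{\max}],
    \qquad f \in C^1, \qquad f' > 0 \;\;\text{(strictly monotonic, hence invertible)}

f(g_{\text{scene}}) = g_{\text{display}},
    \qquad f'(x) \approx c_{\text{user}} \;\;\text{over a latitude around } g_{\text{scene}}

f(x_{\min}) = y_{\min}, \qquad f(x_{\max}) = y_{\max}
```

where x is the scene exposure (in log2 relative to middle grey, if you prefer), [x_{\min}, x_{\max}] the scene DR, [y_{\min}, y_{\max}] the display DR, and the latitude width together with c_{\text{user}} carry the weighting between midtone protection and DR compression.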

Starting from these specifications, with almost a decade of experience as a photographer and too many lost battles as a retoucher, and building on Troy Sobotka’s work (another 15-20 years of experience in pixel nonsense), I came up with the filmic spline. Not all filmic parameters lead to a sane curve, just as not all tone curve nodes lead to a sane curve, but it does what it is supposed to do.

I’m afraid you went the other way around: you started with a solution you found cool, sigmoids, and tried to retro-fit them onto a mapping problem you overlooked. Well, they fit the continuity, monotonicity and smoothness bill, OK, but what about midtone protection? I’m sorry, but we don’t start a design from the solution, and we certainly don’t settle for a solution that just “looks good”.
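
To make the midtone question concrete, here is a toy sketch (a plain logistic in log2 exposure with a single steepness parameter; I’m not claiming this is what your module does): if the steepness is fitted so that the curve reaches the display bounds at the edges of the scene DR, the slope at middle grey falls out of that fit instead of being a user choice.

```python
# Toy example: with a one-knob logistic, midtone contrast is dictated by the
# DR you need to squeeze, not chosen by the user.
import numpy as np

def fitted_logistic_midtone_slope(half_dr_ev, shoulder=0.99):
    """Steepness k chosen so 1/(1+e^{-k*ev}) reaches `shoulder` at +half_dr_ev;
    returns the resulting slope at middle grey, in display units per EV."""
    k = np.log(shoulder / (1.0 - shoulder)) / half_dr_ev
    return k / 4.0                        # derivative of the logistic at ev = 0 is k/4

for half_dr in (4.0, 6.0, 8.0):           # assumed symmetric scene DR around middle grey
    print(f"±{half_dr:.0f} EV -> midtone slope {fitted_logistic_midtone_slope(half_dr):.3f} per EV")
```

The wider the scene DR, the flatter the midtones; decoupling the two requires extra degrees of freedom, typically a latitude around grey with its own slope, and that is exactly the constraint I’m asking about.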

Empiricism works until it doesn’t, a stopped clock gives the exact time twice a day, and pictures that “look good” may well be a stopped clock that you just happened to check at the right time.

So if you want to contribute something useful, either start again from the problem at hand, including all the constraints you have skipped, review the possible solutions, and then pick the best-suited one, or do yet another arbitrary tone curve that could probably fit in the “base curve” module as a “parametric” mode; but in the latter case, don’t bother calling it HDR-whatnot.

And, for the last time, all the ITU BT-something and other ACES stuff usually aims at being usable on embedded TV chips at 60 fps, so they are willing to sacrifice a lot of accuracy to that purpose. We don’t do 60 fps and we do GPU processing, so forget about sacrifices. The only reason for the lack of decent gamut mapping everywhere is that proper color adaptation models are simply too expensive for a TV chip, so at best they do nearest-neighbor mapping on pre-computed Y’uv LUTs. Standards and recommendations have a scope in which they apply, and we are lucky to be out of the scope of HDR TV, so let us not get dragged down by limitations we are not subjected to.

I have quickly played with it, but I’m already swamped with work and I’m afraid this is low in my stack. What I take away from it is that you can tweak the parameters to approximate any curve with any other curve to within ± \epsilon, which is expected.
