New Sigmoid Scene to Display mapping

Desaturating before the tone curve, with a saturation function that is neither perceptual nor physical, is a nasty trick to hide the fact that decoupled RGB curves change colors in uncontrolled ways.

Fix the problem, don’t hide it. The cyan skew in the blue picture is simply unacceptable. Chroma preservation is not perfect, but at least it adds some reliability.
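For readers following along, here is a minimal sketch of the difference being argued about: per-channel curves versus ratio-preserving (chroma-preserving) tone mapping. The Reinhard-style curve `x / (1 + x)` is my own toy choice for illustration, not the actual darktable code:

```python
import numpy as np

def tonemap_per_channel(rgb, curve):
    """Apply the curve to each channel independently.
    Hue and saturation drift, because each channel is
    compressed by a different amount."""
    return curve(rgb)

def tonemap_ratio_preserving(rgb, curve, norm=np.max):
    """Apply the curve to a single norm of the pixel, then
    rescale all channels by the same ratio, so the RGB
    ratios (chromaticity) are preserved."""
    n = norm(rgb, axis=-1, keepdims=True)
    ratio = np.where(n > 0, curve(n) / np.maximum(n, 1e-9), 0.0)
    return rgb * ratio

# Toy sigmoid curve (illustrative assumption)
curve = lambda x: x / (1.0 + x)

pixel = np.array([[4.0, 1.0, 0.25]])  # a saturated scene-referred pixel
per_channel = tonemap_per_channel(pixel, curve)
ratio_kept = tonemap_ratio_preserving(pixel, curve)
print(per_channel)  # R/G ratio compressed from 4:1 towards 1:1
print(ratio_kept)   # R/G ratio still exactly 4:1
```

The ratio-preserving version keeps the original 4:1:0.25 channel ratios exactly; the per-channel version flattens them, which is the hue/saturation shift discussed above.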

Also, ACES is not gospel. The logic of what it does is nice; the actual implementation is a monster that changes with every new version.

2 Likes

I’m aware of your feedback on how to handle colors in the scene-to-display transform. Adding ratio preserving as a processing option is easy, but I have focused on the shape of the curve so far; adding skew, as well as the other curves to compare with, has taken all my time.

So I would love some feedback on the curve rather than the color at this point!
Some examples of helpful feedback:

  • How do the black and white pictures look to you? Good? Bad? Why good? Why bad? Etc.
  • Are the images bad examples? Do you have any better test images to evaluate the processing on?
  • Have you tried the tone-curve explorer? If so, what are your takeaways?
  • Does the skewed log-logistic curve fulfill the mathematical requirements for a scene-to-display transform? I have added support for display white and display black; custom grey (scene + display) is also supported but not exposed in the explorer. Skew is still not orthogonal, but its effect should be clear, and I’m working on a solution for the orthogonalization.
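For those who haven’t opened the explorer, here is roughly what a skewed log-logistic scene-to-display curve looks like. This is my own illustrative sketch; the parameter names and where exactly the skew term enters are assumptions, not the PR’s code:

```python
import numpy as np

def skewed_log_logistic(x, grey=0.1845, contrast=1.6, skew=1.0,
                        display_black=0.0, display_white=1.0):
    """Sketch of a log-logistic sigmoid in scene-linear space.
    grey: scene value at the curve's pivot; contrast: steepness;
    skew: a power term bending toe/shoulder asymmetrically
    (its placement here is my assumption, not the PR's math)."""
    x = np.maximum(x, 1e-9)
    base = 1.0 / (1.0 + (x / grey) ** (-contrast))  # log-logistic CDF
    return display_black + (display_white - display_black) * base ** skew

xs = np.linspace(1e-4, 100.0, 10000)
ys = skewed_log_logistic(xs)
# smooth, strictly increasing, converging to display black/white:
print(np.all(np.diff(ys) > 0), ys[0], ys[-1])
```

Note how the display black/white targets are the curve’s asymptotes: the function converges to them as the scene value goes to zero or infinity, which is the unbounded-domain property discussed later in the thread.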
2 Likes

You might want to consider test image sets from image processing research. The more types you throw at your curves, the better: clean, noisy, blurry, hazy, rainy, blown, low-light, underwater, detailed, SDR, HDR, compressed, with artifacts, multi-spectral and other types of imaging, artificial, cartoons, etc.

1 Like

Absolutely agree!
Do you have a link to a repository or similar?

Well, it is a long list I gave above. Search Google Scholar or GitHub, etc., for listings of papers that present “innovative”, “novel”, “awesome” :joy_cat: data sets. They will have the links and info on the sets.

Here is the thing: I don’t care how it looks, and neither should you. “Looks good” belongs to amateur empiricism, unless you are Kodak and can conduct extensive aesthetic studies with N > 100 in controlled conditions. The fact that it looks good on some pictures doesn’t guarantee it will look good on all pictures. And besides, looks good to whom? Let’s ditch that thinking.

A tone curve is a mapping. The only relevant question is: how flexible can it be made? The user will decide what looks good, so, starting with his intent (target look), can we provide him with a mapping whose reasonable set of parameters reasonably matches his target? That’s the only thing that matters, design-wise.

That’s also the primary reason for node-based tone curves. Draw points, interpolate, done. No intermediate assumption, just honor user input. Unfortunately, a tone curve GUI is not suited to scene-referred data, where DR \in ]0; +\infty[, and the relevance of a 2D graph to represent a 1D mapping is disputable. Also, spline interpolations do overshoot.
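The overshoot claim is easy to demonstrate. Below is a small sketch (the control points are my own toy choice): the smooth cubic through a set of monotone nodes overshoots badly around a steep step, while piecewise-linear interpolation stays within the node range but is not smooth:

```python
import numpy as np

# Monotone control points with a steep step, as a user might draw them
x = np.array([0.0, 0.3, 0.4, 1.0])
y = np.array([0.0, 0.05, 0.9, 1.0])

# The exact cubic through these 4 points (what a smooth spline does here)
coeffs = np.polyfit(x, y, 3)
xs = np.linspace(0.0, 1.0, 500)
smooth = np.polyval(coeffs, xs)

# Piecewise-linear interpolation never overshoots, but is not smooth
linear = np.interp(xs, x, y)

print(smooth.max())  # well above 1.0: the smooth fit overshoots
print(smooth.min())  # well below 0.0: it undershoots too
print(linear.max())  # exactly 1.0
```

Shape-preserving interpolants (monotone Hermite/PCHIP-style) are the usual fix, at the cost of reduced smoothness at the nodes.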

But mapping in HDR poses an extra problem compared to SDR tone curves, which ends up in a trade-off: how much do we want to leave local contrast unchanged in the midtones (aka the safe range commonly shared between SDR and HDR) while compressing global contrast?

Because if you simply adjust contrast for the midtones and harshly trim the output to the display DR without further mapping, the picture looks believable and correct. We have done that for decades. But we photo geeks will mourn the details lost in deep shadows and bright highlights, and the wasted camera possibilities. I guarantee that replacing tone mapping altogether with just a soft clipping (like the highlights compression you will find in the “negadoctor” module) looks a lot better than all the shit filmic does. Unfortunately, you won’t be able to get skies back, so you need to love white skies with flat clouds to consider going this way. Also, no color handcuffs, so saturation and hues will go somewhere unexpected, depending on the setting’s strength and the RGB color space in use.

On the other end, if we do a simple log scaling, we can manage to keep middle grey where it is while bringing both ends of the scene DR back into the display DR, but at what cost? That picture will look washed-out and ugly, not believable. Yet if you go into the “unbreak color profile” module, use it in log mode, and your picture doesn’t need too much compression, it does the trick decently.
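As a rough illustration of what such a log scaling does (the grey value and EV bounds below are my own assumptions, not the module’s actual defaults):

```python
import numpy as np

def log_tonemap(x, grey=0.1845, black_ev=-8.0, white_ev=4.0):
    """Toy log scaling: place scene values on a log2 (EV) axis anchored
    on middle grey, then normalise the chosen EV range to [0, 1].
    grey and the EV bounds are illustrative assumptions."""
    ev = np.log2(np.maximum(x, 1e-9) / grey)  # exposure relative to grey
    # grey (ev = 0) always lands at -black_ev / (white_ev - black_ev)
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)

x = np.array([0.1845 * 2.0 ** -8, 0.1845, 0.1845 * 2.0 ** 4])
print(log_tonemap(x))  # scene black -> 0, grey -> 2/3, scene white -> 1
```

Both ends of the scene range land inside the display range, as described, but every f-stop gets the same display-space width, which is exactly why the result looks flat and washed-out.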

So we need to account for both ends of that trade-off: protecting midtones while squeezing the DR bounds, in a way that allows defining the weighting of each strategy. Now, the mathematical challenge is to devise a smooth, continuous, monotonic (aka bijective, aka invertible) function that allows all that.

Starting from these specifications, with almost a decade of experience as a photographer and too many lost battles as a retoucher, and building on Troy Sobotka’s work (another 15-20 years of experience in pixel nonsense), I came up with the filmic spline. Not all filmic parameters lead to a sane curve, just as not all tone-curve nodes lead to a sane curve, but it does what it is supposed to do.

I’m afraid you went the other way around: you started with a solution you found cool, sigmoids, and tried to retrofit it into a mapping problem you had overlooked. Well, they fit the continuity, monotonicity and smoothness bill, OK, but what about midtone protection? I’m sorry, but we don’t start a design from the solution, and we certainly don’t settle for a solution that just “looks good”.

Empiricism works until it doesn’t. A stopped clock gives the exact time twice a day, and pictures that “look good” may well be a stopped clock that you just checked at the right time.

So if you want to contribute something useful, you either start again from the problem at hand, including all the constraints you have skipped, review the possible solutions, and then find the best suited; or you do yet another arbitrary tone curve that could probably fit in the “base curve” module as a “parametric” mode, but in the latter case, don’t bother calling it HDR-whatnot.

And, for the last time, all the ITU BT-something and other ACES stuff usually aims at being usable on embedded TV chips at 60 fps, so they are willing to sacrifice a lot of accuracy for that purpose. We don’t do 60 fps and we do GPU processing, so forget about such sacrifices. The only reason for the lack of decent gamut mapping everywhere is that proper color adaptation models are simply too expensive for a TV chip, so at best they do nearest-neighbor mapping on pre-computed Y’uv LUTs. Standards and recommendations have a scope in which they apply, and we are lucky to be out of the scope of HDR TV, so let us not get dragged down by limitations we are not subject to.

I have played with it quickly, but I’m already swamped with work, and I’m afraid it is low on my stack. What I take away from it is that you can tweak parameters to approximate any curve with any other curve ± \epsilon, which is expected.

4 Likes

You are starting to fool yourself. The RGB ratio-preserving technique has been known since 1994, and the only reason nobody in the image and video industry uses it as the primary way to manipulate contrast is that it doesn’t work.

You can easily see that a lot of people on this forum don’t like the hue shifts introduced by the v1, v2, v3 and v4 “color science”.
I have to say that it isn’t color science at all; it’s just desaturation and midtone resaturation the way YOU like it, and it doesn’t look good 90% of the time.
Too much desaturation in the shadows; total desaturation of the highlights; greyish hair; unnatural skin tones, sunsets and flames.
Actually, RGB ratio preserving works well only in blue-sky photography.
That’s not enough to consider it the new standard; it should be treated just as an alternative to per-channel RGB.

I don’t understand what your problem is with a new tone-mapping module; with filmic it is hard, and sometimes impossible, to nail brightness and contrast.
I’ve tried to match the base curve, and I can tell that filmic has some serious limitations.

Open your mind.

4 Likes

Let’s stay civil, please.

5 Likes

It is meant to preserve chromaticity, and it does just that. So it works at what it is supposed to do. And it was part of ACES 0.1 or something, so the “nobody uses it” argument is void. The main reason for not using it is that it’s not possible to put it into a 3D LUT, which, again, is a big deal for the 60 fps guys.

And they have an option to bypass it. I also see a lot of people who are happy with it, and even some analog photographers who switched to digital because of filmic. So, some people like it, some don’t; that leaves us with zero actionable information.

Grow some visual and art culture.

Until you provide a portfolio of photographs that look even remotely publishable, you are a nobody with coding skills and opinions backed by no practical field experience. I’m fed up with IT guys and physicists who have opinions on stuff they don’t master just because they understand code and equations, yet can’t see a gamut escape when it’s right in their face.

3 Likes

To close on the “hue shift” issue, let’s look at 2 pictures from hdrihaven:

They both have colored highlights, one in blue, the other in orange-yellow, so they sit on opposite sides of the color wheel.


Linear correction of -4.5 EV for reference color of highlights (linear correction is supposed to preserve color perfectly):

Linear correction of +2 EV for reference color of shadows:

Filmic v3 on individual RGB, no norm, no desaturation:

Filmic v4 on power norm, desaturation to 0 outside of latitude:


Linear correction of -3 EV for reference color of highlights:

Linear correction of 0 EV for reference color of shadows:

Filmic v3 on individual RGB, no norm, no desaturation:

Filmic v4 on power norm, desaturation to 0 outside of latitude:


Whoever finds the norm versions more color-shifting than the individual RGB versions should either snort better shit or get their eyes checked. Notice that I don’t care which one looks best: where filmic sits in the pipeline, color grading is supposed to have been done already, and we are to honor any color decision taken there, not to enhance or beautify anything. Meaning we are to retain the original colors as much as possible, with no additional color opinion.

1 Like

Thanks for pointing out that resource; there is a lot to experiment with in true HDR images!
Having said that, I am a seasoned amateur photographer (I started with Canon film cameras 40 years ago) with some understanding of code. What I personally don’t like about filmic v4 is the midtone resaturation: I find it harsh and prefer to resaturate using color balance. Apart from the fact that in filmic saturation is applied to midtones only, are they two different algorithms, and if so, why?
Maybe, also as a side effect of your recent work on color balance rgb, saturation in filmic can be improved?

Also one (maybe stupid) curiosity of mine: we say that in a scene-referred workflow values are unbounded, [0, +inf), but RAW files normally have 16 bits and sensor DR is 14-15 bits. This is not going to improve substantially any time soon, so in reality shouldn’t values be considered bounded to [0, 65535]?

Cheers,
Marco

Can we please stop with the condescending tone, it isn’t good for any of us.

24 Likes

It’s all coming down to a difference in values.

@age values a tool that directly makes a good “look”.

@anon41087856 values a tool that is utterly neutral and objectively accurate and allows you to use other tools to achieve a look independent of dynamic range management.

I have my own preferences on the matter but the key thing is that both opinions are valid and there’s no point in insulting each other over it.

15 Likes

Raw formats can have 12, 14 or 16 bits, depending on the readout precision of the sensor. DR depends on what you mean: per-pixel or normalized to an area. Per-pixel DR can be horrible, but with enough pixels your area-normalized DR can be very high. Think of a 1-bit-per-pixel ADC, but with a million pixels in the area you’re interested in.

This is a hypothetical upper bound.
With 16-bit half-float in OpenEXR, for example, you can cover 30 f-stops of dynamic range with 1024 steps per f-stop. If that’s not enough, there is 32-bit float. So for all intents and purposes, for all sensor technology and capture techniques (stacking, etc.), calculation precision (and, if you take care, storage precision) is, while not infinite, more than good enough.
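The 30-stops / 1024-steps figure follows directly from the half-float format parameters (10-bit mantissa, normal exponents down to 2^-14), and can be checked numerically:

```python
import numpy as np

info = np.finfo(np.float16)  # IEEE 754 half-float, as used by OpenEXR
# f-stops between the smallest normal number and the largest finite one
stops = np.log2(float(info.max)) - np.log2(float(info.tiny))
steps_per_stop = 2 ** info.nmant  # one mantissa's worth of values per octave

print(round(float(stops)))  # 30 f-stops of normal range
print(steps_per_stop)       # 1024
```

Denormals extend the floor further still, down to about 2^-24, at the cost of precision.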

3 Likes

I’m more of the opinion that image processing algorithms are always imperfect.
This is the reason why there are multiple demosaicing, denoising and chroma-preservation options, and so on.

Coming back to this topic: we don’t have to include every filmic tone mapper ever invented in darktable, there are too many, but this one is really promising.
I’d like to see it in a future darktable release.

I’d like to see the crosstalk desaturation and resaturation too; we have so many options in the chroma-preservation drop-down menu, we could add an alternative to the simple RGB curve.
It’s a hack, a compromise, exactly like RGB ratio preserving and every other module.
I’m not some kind of RGB purist; sometimes the RGB ratio-preserving option is the best choice, and if it weren’t implemented I would have requested it from the developers.

I have to point out that I don’t like it when something is hidden from the user and undocumented.
For example, if we look at the GIMP color balance:
https://docs.gimp.org/en/images/menus/colors/color-balance-dialog.png
There is a “preserve luminance” option; it’s up to the user to decide when to check it. This doesn’t happen in darktable:

https://github.com/darktable-org/darktable/issues/6209

I have the same issue with the gamut compression inside filmic; I’m fine with it, but sometimes it’s better not to use it.
Please don’t hide important options from the GUI.

1 Like

I agree that comments should stay courteous in all circumstances. There is no need to send harsh comments to others.

For my part, I have not yet tested this new module. Given the amount of exchange between Aurélien and Jandren, and the corresponding adjustments made, I’m pretty sure that this module will finally find its place in dt’s set of modules.

Again, I have not tested it, but if this module gives good final results with far fewer controls, I suppose it will be a nice alternative to the more advanced filmic.

12 Likes

Once a person actually takes the time to learn filmic and its rules/flow, the results become much more predictable across various images and conditions.

I haven’t tested sigmoid yet, but the premise seems to be a trade of simplicity versus robustness in every condition.

And if I’m not much mistaken, the sigmoid stuff could be part of filmic, since filmic does more than just tone map.

1 Like

I think in the scene it’s infinite, and I guess some day the number will be greater than 65000 for capture, although never infinite, so I suppose there is a degree of future-proofing??? You may be raising a good point though: is that “future-proofing” in any way impacting the way data in a 0-65000 window is processed? I always thought the idea was to make it work no matter the physical bounds, theoretically up to infinity??

@priort and @MarcoNex the zero-to-infinity part does not introduce any extra complexity; it’s simply that the function supports this range. The curve converges towards the user-defined display white and black targets, with contrast defining the rate of convergence. The kind of free lunch you can sometimes get.

@johnny-bit I have taken the time to learn filmic (I know its source code as well now :sweat_smile:), so I’m trying to take some learnings from that with me while exploring this approach. Please try what I have done. Begin with the Python-based web page I made so that others can see how the tone curve behaves: https://share.streamlit.io/jandren/tone-curve-explorer Then compile the PR and run some of your own pictures through it.
On the topic of integrating it into the filmic module, let’s wait and see what the consensus is later; it’s easier for me to continue this development in a separate module for now. That reduces the risk of merge conflicts, and I can destroy things however I like :wink:

@Pascal_Obry Thanks for standing up for a positive environment! I only want this to be merged if it meets and exceeds all darktable requirements for what a scene-to-display mapping should do. If it doesn’t, well, that’s that. What I do expect is some good feedback, testing, and constructive critique; there’s really no use in trying otherwise :slight_smile:

9 Likes

The data coming from the camera may be bounded to whatever the bit-depth of the camera can encode (and the sensor can provide), but while processing an image in Darktable, that range can change, through use of any number of operations. Having the range unbounded ensures that you can’t accidentally “cut off” the top range of your data by e.g. increasing exposure in one module, then reducing contrast in another. If you were operating with limited numbers, you’d be clipping highlights all the time, by accident.
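A toy numerical version of exactly that accident (the gains and the clamp point are made up for illustration, not taken from any real module):

```python
import numpy as np

scene = np.array([0.18, 0.9, 2.5])  # scene-linear values, one above 1.0

def exposure(x, ev):
    """Gain expressed in EV (stops)."""
    return x * 2.0 ** ev

def pull_down(x, gain=0.25):
    """A later module reducing brightness/contrast."""
    return x * gain

# Unbounded pipeline: intermediate values may exceed 1.0, nothing lost
unbounded = pull_down(exposure(scene, 2.0))

# Bounded pipeline: clamping between modules destroys highlights for good
bounded = pull_down(np.clip(exposure(scene, 2.0), 0.0, 1.0))

print(unbounded)  # highlight detail preserved: [0.18 0.9  2.5 ]
print(bounded)    # everything that hit 1.0 is now a flat 0.25
```

In the unbounded case the pull-down exactly undoes the exposure boost; in the bounded case, every value that touched the clamp collapses to the same flat grey and the distinction between them is gone.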

…and of course there are also “true” HDR images with dynamic ranges way above what 16-bit integers can encode.

2 Likes