Proposal for updated Filmic curve parameterisation in Darktable

Ahhh, so that’s what all those HDR mapping programs are using! Thanks for clearing that up!

I can’t claim credit for that, unfortunately. It’s a free online thing by some company that offers it to schools and such. Way quicker to test some functions there than to write an interactive pyplot GUI or something.
I’m pretty much at home with matplotlib and numpy, not so much web server-based stuff. So actually, your tone curve explorer should be really useful to test some things which aren’t automatically done by Desmos, like scaling laws for some sliders, or more thorough or complicated constraints on inputs etc., and then get feedback on those without having to put them in DT, compile and distribute that.

1 Like

Ahhh, okay, sorry about that.
Monotone cubic interpolation is thus pretty close to Akima splines (which I’m just independently doing something with…), I really should have noticed that.
Actually, the reason I even started thinking about curve types for DT is because I was working on extending Akima splines to give them some new properties and thought they might be a good fit for filmic. But in the current case, we have only two points for each curve segment, and neither the monotone cubic interpolator nor Akima splines can deal with externally prescribed tangents without losing their properties, since the thing that makes them work is their way of finding the tangents needed to avoid overshoot – which means that you could not specify them a priori.
A cubic spline segment has 4 degrees of freedom, but the hyperbola I’ve found has only three, and one of them is just for tuning.
I think something based on \tanh(x) or \arctan(x) could be even better, but I haven’t found a closed-form solution for those (plus a good strategy for where and how to add extra parameters), and the rational function is easier to handle, so to me that’s a clear winner.
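
To make the tangent point concrete, here’s a quick sketch (just an illustration using scipy’s stock interpolators, nothing DT-specific): PCHIP picks its own tangents to guarantee monotonicity, so the moment you prescribe the tangents yourself you are back to a plain Hermite spline, which is free to overshoot.

```python
# Sketch (not darktable code): why monotone interpolators cannot accept
# externally prescribed tangents without losing their guarantees.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicHermiteSpline

x = np.array([0.0, 0.5, 1.0])
y = np.array([0.0, 0.9, 1.0])

# PCHIP chooses its own tangents to stay monotone / avoid overshoot ...
pchip = PchipInterpolator(x, y)

# ... whereas prescribing steep tangents ourselves (as filmic would need at
# the latitude boundaries) forces a plain cubic Hermite spline, which can
# overshoot the [0, 1] range.
hermite = CubicHermiteSpline(x, y, dydx=[4.0, 4.0, 0.0])

xs = np.linspace(0.0, 1.0, 201)
print("PCHIP max:  ", pchip(xs).max())    # stays <= 1.0
print("Hermite max:", hermite(xs).max())  # overshoots above 1.0
```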

I agree that replacing the dumb splines by something that matches the problem better (one could even call it a model) is more elegant. As is often the case, elegant rhymes with simple/intuitive here, too.

It would be great, of course, if the new curves could be made sufficiently close to the old ones that practical backwards compatibility could be provided.

A suggestion for your online calculator: you could try fixing input and output gray at 18.45%. In current filmic, output middle gray is fixed anyway. And input middle gray can be adjusted, but for the reasons @anon41087856 mentioned, it should not be.

Otherwise, am I the only one who finds the contrast parameter in current filmic (and therefore, I guess, also in your proposal) not very intuitive? In current filmic, contrast is simply the steepness of the linear section of the spline as shown in filmic’s plot. The problem is that the axes of the plot change meaning based on other parameters.

In my experience a more useful parameter would be the true midtone contrast, i.e. the proportionality constant between (linear) input and output intensities around middle gray.

In current filmic, I believe that this midtone contrast is equal to

\frac{cW}{b} / \log\left(\frac{w}{b}+1\right),

where c is filmic’s contrast value, w and b are the absolute values (in EV) of the white- and black-point, and W is 2.44 EV, the output white point.

So I propose to replace this by the midtone contrast c_\mathrm{m} and compute c such that

c = c_\mathrm{m}\frac{b}{W} \log\left(\frac{w}{b}+1\right)
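
In code, the proposal would just be a small conversion layer between the two parameterisations (a sketch based on the formulas above; W is the 2.44 EV output white point, w and b the absolute white and black points in EV):

```python
# Sketch of the proposed re-parameterisation (my reading of the formulas
# above, not darktable code). All values are in EV.
from math import log

W_OUT = 2.44  # output white point in EV, as stated above

def midtone_contrast(c, w, b, W=W_OUT):
    """Midtone contrast implied by filmic's current contrast value c."""
    return c * W / (b * log(w / b + 1.0))

def filmic_contrast(c_m, w, b, W=W_OUT):
    """Inverse: the contrast c to feed filmic for a desired midtone contrast c_m."""
    return c_m * b / W * log(w / b + 1.0)
```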

The dumb splines ensure the first- and second-order continuity of the piece-wise functions while converging to the desired values.

Obviously not possible.

Contrast is a mid-tones preservation strategy across the log tonemapping, linked to film gamma; there is no such thing as true or false contrast. That said, I don’t really understand your scaling.

@hanatos What I don’t like with Hermite splines is that we have only 5 control nodes here, and that’s really not enough to drive a parametric shape (namely: how close the curve sticks to the asymptotes). At the very least we would need additional control nodes, which doesn’t simplify the problem user-wise. Vanilla Hermite is just too slow to converge toward its bounds.

Agree about the importance of continuity. Did not look at @Mister_Teatime’s curves under this aspect.

My claim is that the following works (derivation here):

  • Set up filmic parameters in an arbitrary way.
  • Set the contrast parameter to the following value: b / 2.44 * log(w/b + 1), where w and b are the absolute values of the white and black points as set in filmic. For example, for filmic’s default values (w=4.40, b=7.75) we have to set contrast to 1.43.
  • Observe in filmic’s “Ansel Adams zones plot” that the two vertical lines around the gray point are now parallel.

We have just set what I call “midtone contrast” to one. Setting it to, say, 1.3 would require multiplying the result of the above expression by this value.
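
For the record, a quick numeric check of that example (a sketch; the natural log is assumed, since that is what reproduces the 1.43):

```python
# Verifying the default-value example quoted above (natural log assumed).
from math import log

w, b, W = 4.40, 7.75, 2.44           # filmic defaults quoted above, in EV
contrast = b / W * log(w / b + 1.0)  # contrast value for a midtone contrast of 1
print(round(contrast, 2))            # -> 1.43
```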

My point is that I feel that this “midtone contrast” is a more natural parameter than what filmic calls contrast currently. When “midtone contrast” is one, said lines are parallel. When it’s greater than one, the lines open up V-like, and vice versa.

In contrast ;-), currently when the contrast parameter is fixed, the two lines around middle gray in the “Ansel Adams zones plot” can be converging, or diverging, or parallel, depending on other parameters.

1 Like

On top of that, I don’t even think that photographic paper follows a Michaelis-Menten curve. Which is why I emphasized the very broad terms. I’m not trying to exactly model chemistry here (if DT was trying to do that, it could offer nice film simulation profiles, but the inputs would be useless to anyone who isn’t used to designing their own film emulsions. Might give cool results, but that’s not the way I imagine myself processing my digital photos on a regular basis) – But if we can use functions which provide similar characteristics to the actual physics, that’s always a bonus.

So … I completely understand why the highlight range would need to increase, but not sure why the bottom would need to do that, too. Assuming the darkest grey I want to resolve (let’s pick the darkest grey in the scene) is 4 EV below middle grey before the adjustment, and the lightest value is 4 EV above middle grey. If I adjust middle grey down by 1 EV (i.e. from 18% to 9%), then relative black is auto-adjusted to 5 EV below middle grey and white to 5 EV above. Which means the total range has just increased by 2 EV (not 1), and my understanding of what happens to the mapping in the meantime is that the whole input is lifted by 1 EV (not 100% sure but that’s what it looks like to me). This means that (assuming relative black exposure was initially set to the darkest value in the input) there is now no data in the bottom 2 EV of my dynamic range (or is that not what it does?).
So … what is the middle grey shift actually trying to accomplish? Is there an analog scenario to which this would correspond, and what is the use case for it?
It’s clear to me now that what I’m trying to achieve when shifting middle grey is not what the slider in DT was made for, but I don’t understand what it actually tries to do. I hope you can spare some time to explain this.

What I am looking for is a way to change exposure and have the relative white and black exposure update accordingly, without having to make manual adjustments. The information is present in DT, so there is no need for the user to make those adjustments by hand.

Display-referred? Weren’t you recommending working in unbounded scene-referred space? So after reading the raw data and applying the sensor curve and white balance, should it not all be linear, scene-referred and unbounded until filmic translates back to display space? Mathematically, middle grey should not matter in linear unbounded space, but practically, I can imagine that some tools might assume certain levels need to be treated differently from others.

“I didn’t have it easy, therefore I need to make it hard for everyone else, too”
I’d rather you didn’t keep saying that.

Intuition is the most subjective thing ever. I don’t know what you find intuitive or not, but refusing to consider improvements is … not nice.

I challenge you to show me a screenshot of the graph where you produce that.
(actually, you can, but only if you manage to find out how to circumvent the constraint that keeps the lower hand-over point above zero and the upper one below one.)
Otherwise (i.e. as long as the linear segment stays in the “box”) there is no divzero. That’s because all terms are positive in the ranges within which the function is evaluated.
Keeping that linear segment in the box is not a problem if you can write actual code and aren’t just using an online graphing tool.

6 Likes

The long and annoying history of “this is not intuitive” across foss photography software, condensed down to a TL;DR, is “intuitive = loosely resembles something I already know and understand.” I think it’s a tainted word around here.

6 Likes

Thanks for confirming. It’s always a bit tricky trying to replicate other people’s work … my nightmare was getting it all wrong and looking very stupid indeed.

1 Like

Hmm… okay, there is intuitive, and there is “intuitive”. Depending on which one you mean, I actually understand the sentiment.

If someone was requesting a single “improve image” slider because that’s intuitive to use … yeah. The same goes if someone demands a 1:1 application of a concept that doesn’t apply here. Why is there no “beautify” in DT?

That’s not at all what I mean by intuitive, and I did not think that’s what I sounded like.
What I do mean is that it should be possible for a reasonably versed user to work out what the tools do and to use them correctly, without spending hours buried in documentation and nights at the computer, only to find out that “you can’t activate option x if also using option y, duh” was hidden in some forum thread – to provide an equally extreme example as the one above.

To find a way to make it more intuitive, we need to find out where users’ intuition is leading them, and how we can guide them from wherever that is to where they need to be. One way to do that is to make the tool generally more robust. I think avoiding overshoot will help with that, simply because changes to an image should then more closely resemble the intended effect of a slider, in most circumstances. That makes it easier for someone to directly observe what they’re doing.

Another way to do this, for myself, is to find out where I go wrong and then consider what the interface could look like to keep me from making any false assumptions along the way. That’s specific to me, but I suppose if I misunderstand something in a particular way, there may well be others. And then there’s this whole thread here, which I hope collects some more data points from other people in the same way.

Most seasoned users are unaware of the knowledge they apply regularly, which makes it harder to understand why someone else might not get it. But good software helps you understand it. There’s no need for filmic to be the Dark Souls of photo adjustment tools :smiling_imp:

7 Likes

Well, you’re not operating in a vacuum, and the term has been used with fervor for some time.

A lot of the “this isn’t intuitive” crowd comes from lightroom users whose aunt’s husband’s second cousin’s son told them “darktable is a replacement for lightroom!”, and they took that to mean that we are replicating lightroom feature for feature, then get insanely upset when they figure out we haven’t done that.

It isn’t, and using it is pretty simple. You just have to understand that you don’t have to fiddle every knob every single time.

For almost all my photos I adjust the white and black sliders and that’s it.

2 Likes

If you play with the graph a little, you can find settings where they are kind of close – but I suspect the differences would still be visible in a photo.
Of course, if you go to contrast and latitude settings where the polynomials overshoot, the only way to get backwards compatibility is by reducing the range for the new curve such that the endpoint for the new curve ends up where the polynomial was dropping below 0 or increasing above 1. I don’t think that would be very desirable behaviour …

Generally, if the new curves have advantages over the old ones, then constraining them to look just like the old ones would erase most of those advantages.

In other words: I think for cases where the polynomials work well it should be possible to tweak the hyperbolae to be fairly close, though not indistinguishable. In other cases, I don’t think it’d be useful to try making them similar.

This seems harsh, by the way. Understand that you’re talking about 2+ years of largely unpaid/way underfunded work.

Aurelien is no slouch, either.

3 Likes

Acknowledged.
So, what phrase should I then use to refer to “less difficult to wrap your head around”, “make it easier to acquire a feeling for how some setting affects the picture”, and “reduce trial and error”?
I’m sure those are possible. No idea how far things can be improved, but I think most people here would agree that improvements on that front are possible, even if we don’t agree (or are genuinely uncertain) what those improvements would need to look like.

I’ve yet to encounter a picture where that gave me what I wanted. I have encountered pictures where I got what I wanted, but usually only after many a painful iteration. And there’s almost no slider which I can move, observe the picture, conclude that it is where I need it, and leave it there. Almost every time I change something to get some brightness range to look the way I want, I find that some other part of the setup is now out of whack and needs to be re-adjusted.

This is a longer discussion, and I’m starting to think it should maybe be had separate from the discussion around curve parameterisation, because it tends to eat whatever thread it happens in. Which itself is telling us something…

6 Likes

I think your implementation shows what ‘intuitive’ could mean, mostly because inputs have a very orthogonal feel. You want to change one thing, you tweak one slider and that is represented in the graph. Done.

Coincidentally this is what imho the frowned upon lightroom/ACR does as well.

1 Like

Because if grey decreases, we slide the DR to the left, so the black point should slide along too. It’s a design choice here, but usually you need to keep your EV range constant between black and grey when you change that.

That has nothing to do with bounds and linearity. Screens display linear light; “display-referred” means “in regard to screen luminance”, so bounded between [0;100]% of display peak luminance, but it doesn’t necessarily imply a non-linear signal, since gammas/EOTFs are removed in the screen anyway, before the signal is converted into a light emission…

The point is your raw data have middle-grey = something > 0. With the exposure module, you remap that something to 0.18-0.20 in a linear fashion and in an unbounded space, 0.18 being our display-referred middle-grey. So you simply pin scene grey to display grey. And we could honestly just stop at that if we didn’t care about clipping.

Then filmic deals with the bounds mapping to fit whatever into the display DR, but doesn’t change the pinning, so it ensures that the midtones pass through mostly unchanged, which is what you want in order to protect them, since the midtones are the luminance range common to all DRs.
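
If it helps to see that pinning written out, here is a minimal sketch (not the actual pipeline code; names and values are only illustrative):

```python
# Minimal sketch of the "pinning" described above: exposure is a plain
# multiplication in unbounded linear space that puts the scene's grey at the
# display-referred grey (~0.18); filmic then only compresses the extremes
# around that anchor.
import numpy as np

DISPLAY_GREY = 0.18

def apply_exposure(rgb_linear, scene_grey):
    """Remap scene grey to display grey with a single linear gain (unbounded)."""
    return rgb_linear * (DISPLAY_GREY / scene_grey)

# Example: a scene whose metered grey sits at 0.09 gets a +1 EV gain, after
# which 0.09 -> 0.18 while the rest of the range simply scales along with it.
scene = np.array([0.01, 0.09, 0.5, 4.0])
print(apply_exposure(scene, scene_grey=0.09))
```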

I simply refuse to chase butterflies. “Intuitive” is not a design goal as long as we don’t know for whom we design and in which scope. Intuitive for a lab tech with 20+ years of experience in the darkroom is not the same intuitive as a digital kid with button pushing experience. So, let’s ditch that.

The point is beyond empiricism. Just because the algo holds in the conditions tested doesn’t prove the test is exhaustive or that it will hold all the time. The math caveat is real; we can’t simply ignore it and hope the case doesn’t arise.

2 Likes

I hoped that the tone of that post was making reasonably clear how I meant that. In case it did not:

I can’t teach most of the people here much about colour science stuff. I’m reasonably good with engineering maths, parametric geometry, understanding coupled problems, and putting complicated stuff into software for end users. I’ve delivered more than one functionality that started out as an exercise in wagging the dog for users, until I worked out where they were, where they were going, and therefore which of the screws I needed to expose to keep fixed the bits that they needed fixed and let them change the bits that needed to be changed. I’ve also taught at uni for a few years. Making weird shit logical has been part of my job for about 5 years, as has finding ways to explain stuff to people for whom the regular explanation did not work[*]. I would like to apply these things.

So I fully expect (actually hope!) to learn a lecture or three about the colour process and DT’s pipeline and inner logistics here. I cannot pretend that I know a terrible lot in that regard, but I do think I can contribute some of the other stuff I do know – but that is only going to work if we can have sufficient respect for each other’s abilities and awareness of our own weaknesses.

I know you guys have put a lot of effort into DT, and I won’t pretend that I could have done that, or even known to think about doing that. But if you look around here, there are two kinds of people: Those who “get” filmic and those who don’t. The path from one to the other seems very narrow, and I feel that myself. I’m trying to do what the classical recommended solution to difficulties with open source software is: help improve it rather than complain.

I’m aware that I’m not always the perfect diplomat, but neither is Aurelien (which he knows), and that should not be a requirement (although it’s sure nice to try sometimes). I’d suggest we all give each other the benefit of the doubt. It’s not possible to improve something without also critiquing it, but I wouldn’t be trying if I thought that it was crap.

[*]»If you can’t explain something to a first-year student, you haven’t really understood« (Richard Feynman). I don’t actually completely agree with that, but I’ve experienced first-hand, often enough, that actively trying to explain something in better and different ways does wonders.

13 Likes

Nobody is asking you to be perfect or to be a diplomat. I respect that you’re jumping in and trying to see if there are improvements that can be made.

Nobody is asking Aurelien to do that either.

But, if you want the benefit of the doubt, as you say you do, then you must not say things like “filmic is the Dark Souls of image processing”, because that is incendiary and insulting, and that is how you end up not getting the benefit of the doubt.

You also have to understand that it is more or less Aurelien that you have to interface with if you want changes to the filmic module. There aren’t more than a few people who understand the code and could make improvements quickly.

Soo…

  • if I move middle grey down, the highlight range increases by the amount by which I move it
  • “you need to keep your EV range constant between black and grey” – so relative brightness between middle grey and the bottom of the range is not affected?
  • but then why does DT expand the shadow range, by whichever amount the middle grey is reduced? That would seem to directly contradict the previous point.

If I was looking at the histogram in the input to filmic, in log scaling: Is the middle grey doing anything else than shifting the input up or down by some fixed amount (and then adjusting highlight/shadow range)?

Does this mean that the scene-referred colourspace is using a “scale” factor which assumes that 0.18 maps to screen middle grey, and 1.0 to screen white? Is that necessary? Since there are integers coming out of the RAW file anyway, would there be an issue if they were mapped to any arbitrary range for the scene-referred workflow, as long as they are later mapped back to whatever the displayable range is?
Mapping to some arbitrary range might of course make it harder to later find the range that needs to be mapped back, so I’m not suggesting to actually do that, but rather to help me understand why it’s necessary to keep any scene-referred value pinned to any display-referred level.

Okay, let’s use another word if you don’t like this one. Suggest one. I don’t think that you believe there is no value in trying to change filmic so that people have an easier time learning to use it, and so that it can be used more efficiently. If you did, you wouldn’t be replying. Yet, that is what your words above appear to mean.

Seriously: please, suggest a word. I cannot talk to you if I need to spend three sentences every time to circumnavigate a word which I know you interpret differently than I mean it.

I try to avoid absolutes as much as I can, and I find the idea of calling anything “perfect” delusional. There seem to be people who have some built-in standard of what is/is not intuitive, but to me, this is not a boolean property of something. It’s a multidimensional vector, and it’s a different one for any person you care to ask. So when I use the term, maybe you think I want to get it to a state where a 5-year-old will get it right the first time. I would find that idea absurd.

So, yeah, gradual improvements is what I mean. I see things that I believe can be improved → I try to improve them.

hoookay, there you go:

  • b is constrained to be >0
  • x is substituted such that it is only evaluated for positive values – that’s why there’s (x-x_{hi}) for the highlight roll-off and (x_{low}-x) in the shadows. The curves are only defined where these terms are positive
  • c is defined: c=\frac{y_1}{g}\frac{bx_1^{2}+x_1}{bx_1^{2}+x_1-\frac{y_1}{g}}
    It’s always computed using a positive y_1 and positive x_1, positive g (and b is already positive, as mentioned above). This means the only way it could become negative is if x_1<\frac{y_1}{g}, i.e. g < \frac{y_1}{x_1}. This means that the slope of the linear segment would have to be smaller than the mean gradient between the two points which define it. But g is constrained not to do that. Geometrically, that would mean trying to curve the wrong way.

concluding:
b, x, c > 0 \Rightarrow bx^2 + x + c > 0
q.e.d.

…and double-checking that: whoa, I just noticed that the constraint on g is not sufficient, for Desmos at least. If g = \frac{y_1}{x_1}, it stops plotting because the denominator goes to zero. This is doubly weird because I grabbed that curve and pulled it all over the place yesterday, including into configurations where it produces straight lines, and it worked just fine.

However, it can quickly be solved in one of two ways (sketched in code below the list):

  1. constrain g \geq \frac{y_1}{x_1} + \epsilon
  2. check if g == \frac{y_1}{x_1}, and draw a straight line in that case.
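
Spelled out, the guard could look like this (a sketch of my reading of the formulas above, not actual DT code; EPS is an arbitrary small margin):

```python
# Sketch of the two fixes listed above. x1, y1 is the hand-over point, g the
# slope of the linear segment, b > 0 the tuning parameter; EPS is a
# hypothetical safety margin.
EPS = 1e-6

def curvature_constant(x1, y1, g, b):
    """c as defined above; positive whenever b, x1, y1, g > 0 and g > y1/x1."""
    t = b * x1 * x1 + x1
    return (y1 / g) * t / (t - y1 / g)

def safe_c(x1, y1, g, b):
    # Option 1: clamp g away from the degenerate slope y1/x1, so the
    # problematic configuration can never be reached.
    g = max(g, y1 / x1 + EPS)
    return curvature_constant(x1, y1, g, b)

def is_straight_segment(x1, y1, g):
    # Option 2: detect g == y1/x1 (within EPS) and let the caller draw the
    # straight line y = g*x instead of evaluating the rational segment.
    return abs(g - y1 / x1) < EPS
```
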
1 Like

There are lots of things involved here that could easily be explained to a first-year student… over a whole semester. The problem is not the difficulty per se, but the amount of background required to start connecting the dots, because we sit between chemistry, physics, computer science, fundamental maths, psychophysics, psychology, color theory, GUI design, workflow design, darkroom legacy, colorists’ legacy and the burden of an output-agnostic/multimedia pipeline exit.

After all, this is a job. The biggest frustration for me is to have to loop-repeat to people who usually master only one of these aspects (if they master anything at all) how their seemingly-bright, simple and clean idea ignores all the other aspects of the problem.

People with master’s degrees and more tend to look down on photography as something that is and should be easy, almost a menial job, compared to their grown-up complicated job, and they expect to be automatically good at this lesser task. That’s a mistake. Many issues would be addressed by solving this hubris problem: no matter your IQ, it’s gonna take work to get proficient at image editing/retouching, and it’s gonna be painful. Bright guys, you are going to feel stupid, and you need to be okay with that concept up front.

Anyway, I’m starting to empty my brain on the topic:

6 Likes