Proposal for updated Filmic curve parameterisation in Darktable

So, there’s been quite a bit of discussion around filmic. Some find it hard to use, some want to change the curve around, and I… a little bit of both. I think it’s a very useful tool, as evidenced by quite a few examples around these forums. But I also find it less intuitive, which is of course always a subjective quality. Having a background in engineering maths and in end-user software that implements mathematical models, though, I’ve also learned a bit about “user-proofing” inputs: what a difference it makes how you decide to parameterise a model, and how those parameters are presented and/or limited. So I tried to apply some of those things to filmic, and below is where I got.

First, I need to thank @anon41087856, who was patient enough to explain a few of the concepts behind filmic in his blog and on this forum. I’ve got a decent enough idea about colour spaces, but that’s mostly from a safe distance, so it’s been very helpful that he put some of his notes and testing code up online, and answered a few questions, too. Also credit to him for even starting the whole filmic business in Darktable, of course. It’s what got me to fire up DT again after a long time.

A quick mock-up of the proposed curve

Right, so here’s an interactive graph of what I came up with. You can drag some of the bits around the graph, zoom in and out, or manipulate the inputs on the left-hand side, to see how the curve behaves.
Blue is the new suggestion, black is (my interpretation of) the 4th-order polynomials used by current Darktable (v3.4). The x axis is the logarithm of the (scene-referred, linear) input with theoretically unlimited dynamic range, and the y axis (labelled O for Output) is the mapped output for display on a screen or for printing (which has to fit between 0 and 1):

The controls in more detail:

  • x_0, x_1 define the lower and upper bounds of the input range. This determines where the two ends of the curve land on the graph. You can just drag the endpoints left or right, too. As long as c=0, this is equivalent to the current shadow and highlight range sliders.
  • c determines the center of the range, in terms of input values. This defaults to 0, as it does in the current filmic version.
  • O_c determines which output level the center value is mapped to. Default is 0.5, as per current filmic and conventional photographer wisdom (that is, 0.5 in the graph: this is really your 18% reference gray, but plotted on a linear scale referring to optical density, so in the example graph it sits at 50%). You really don’t have to move it, but I’ll explain later why it might still be useful sometimes.
  • l_{low} and l_{hi} determine the extents of the linear segment, i.e. how far above and below c the curve stays linear. This is an alternative way to specify the “latitude” and “shadow/highlight balance” settings.
  • g is the steepness (“gradient”) of the linear segment and is equivalent to the “contrast” setting in filmic.
  • b_{hyplow} and b_{hyphi} are two additional tuning parameters to influence how “flat” or “crunchy” the roll-off curves above and below the linear segment are. Depending on the other settings, you may have more or less range to play with there.

What’s with the roll-off curves?

One frustration I sometimes have with filmic in its current state is that the mapping curve can overshoot and produce negative values, like so:
(screenshot: the filmic curve overshooting below zero, with the overshooting region marked in orange)
If that happens and you’re not actually looking at the curve, you might think you’re increasing the shadow range, but actually everything that’s orange in that graph produces negative numbers which are clipped to zero, and so the bottom 5 stops or so are mapped to pitch black. So you would actually include more shadows if you reduced the shadow range at this point. Of course, the overshoot can be dealt with by decreasing contrast in the look tab or reducing latitude, but that might not be the look you were going for, and it’s one more thing a user needs to notice and take into account while tweaking filmic.
However, this can be dealt with by using a curve that does not overshoot, ever.

This would also be closer to actual film because I don’t think there’s any film response curve which gets lighter first, then darker again. So if we can find something that’s mathematically forced to stay monotonic, the user won’t have to pay attention to this issue and solve it manually.

I have done modelling of enzyme reactions and photogrammetry of fluorescing proteins in the past, and one important curve in that context is the Michaelis-Menten curve. As the input increases, output increases, then asymptotically approaches saturation. That’s very similar to what the upper end of measured curves from actual film look like (and the chemistry is similar, in the very broad terms that shooting a photon at an almost-overexposed film has less of an effect than at a less-exposed one).
In slightly rearranged form, the Michaelis-Menten curve has this equation:
f(x) = a \frac{x}{x+c}
However, this simple hyperbola doesn’t quite give me enough degrees of freedom to make it fit both the tangent and the end point of the exposure range. Another search finds that the original filmic presentation cites some of the curves being used as slightly expanded hyperbola equations (slides 55 and 56). Slightly rearranged again:
f(x) = \frac{ax^2 + bx}{ax^2 + dx +c}
However, the curve above is meant to give a complete mapping from top to bottom, with no linear segment (because it models the whole sigmoid). Having four parameters also makes it a little harder to handle if you want to derive something parametric for users to play with, as the numbers are not very intuitive if you’re not used to them.

So, the version that I settled on is this:
f(x) = a\frac{bx^2 + x}{bx^2 + x + c}
The fraction converges to 1 as x approaches infinity, and so a tells us the asymptotic value. c is the parameter which determines the steepness of the tangent at x=0 (because bx^2 is zero, and has a derivative of zero if x=0), and b is a parameter we can use to tune the shape of the curve a little.
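A quick symbolic sanity check of those claims (this is just an illustration in sympy, not anything from the Desmos sheet; the slope at zero comes out as a/c, which the choice a=cg below turns into exactly g):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', positive=True)
f = a * (b * x**2 + x) / (b * x**2 + x + c)

print(sp.limit(f, x, sp.oo))       # a    -> the asymptote the fraction converges to
print(sp.diff(f, x).subs(x, 0))    # a/c  -> slope of the tangent at x = 0
print(f.subs(x, 0))                # 0    -> the curve passes through the origin
```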

I worked out which values these parameters need to take in order to conform to the following constraints:

  • curve starts at the end of the linear segment, with the same steepness as the linear piece
  • the curve must go through the user-determined end point of the exposure range. So the upper roll-off curve reaches 1 at x_1
  • That’s it, really. This would be enough.
  • However, to find the useful range for b, I’ve also worked out at which value of b the curvature (i.e. second derivative) of the curve at the hand-over point to the linear segment becomes zero. The useful range is then between 0 and whatever that value is. With b=0, you get a plain Michaelis-Menten curve, and with b=b_{max}, you get a rounder version of the curve, for “crunchier” shadows/highlights. If you went beyond b=b_{max}, the curve would be curving the wrong way first, which I don’t think makes too much sense.
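For completeness, here is one way that upper bound could be found numerically. This is only a sketch based on my own algebra (the closed form lives in the Desmos sheet): using the coefficient formula for c worked out in the maths section further down, my second derivative at the hand-over point vanishes where b \cdot c(b) = 1, so a one-dimensional root-find over b is enough.

```python
from scipy.optimize import brentq

def rolloff_c(g, x_span, y_span, b):
    # c such that the roll-off has slope g at the hand-over point and passes
    # through (x_span, y_span); this is the formula from the maths section below
    k = y_span / g
    return k * (b * x_span**2 + x_span) / (b * x_span**2 + x_span - k)

def b_max(g, x_span, y_span):
    # my algebra gives f''(0) = 2*a*(b*c - 1)/c^2, so the curvature at the
    # hand-over point changes sign where b * c(b) = 1
    # bracket: at b = 0 the expression is -1; the upper bound is generously large
    return brentq(lambda b: b * rolloff_c(g, x_span, y_span, b) - 1.0, 0.0, 1e6)
```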

The constraints on inputs

This was originally the main reason I started playing with these curves, before coming up with the hyperbola. I think by exposing the right kind of parameters and constraining them in the right way, the whole tool can become more fool-proof and intuitive, and require less back-and-forth to tune the curves to get the desired result (or to realize that your photo needs local adaptations to get the desired result…):

  1. the contrast parameter is constrained so that the linear segment can not be flatter than a straight line between the end points (see the sketch after this list). Currently, the lowest contrast setting is 1.0, which is close to a straight line when using the minimum dynamic range (from -1 EV to +1 EV). If that happens, the new hyperbolae turn into straight lines, too. The upper bound is set so that the end points of the linear range don’t map to less than 0.1 or more than 0.9. That is to make sure that the roll-off curves still have something to work with, and to avoid negative inputs to the equations, because those would break them (in addition to falling outside the meaningful range).
  2. The linear ranges, l_{low} and l_{hi}, can be moved all the way to the ends of the exposure range – but the contrast parameter will be adapted using the rule above to make sure the line does not extend to negative numbers. So if you have a steep contrast and extend the linear range too far, it will automatically become flatter.
  3. The tuning parameters b_{hyplow} and b_{hyphi} are constrained to stay between 0 and whatever value gives the curve zero curvature at the hand-over point. You can try removing that bound in Desmos and moving beyond it to see how the curve starts over-shooting. Negative numbers also have funny consequences, because they can permit the denominator in the equation to reach zero in the range of interest, which gets you a vertical asymptote.
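To make points 1 and 2 a bit more concrete, here is a minimal sketch of how the contrast clamping could look (the names mirror the Desmos inputs; the per-end lower bound is my reading of “not flatter than a straight line between the end points”, and the 0.1/0.9 margins are the ones quoted above):

```python
def clamp_contrast(g, x0, x1, c, O_c, l_low, l_hi):
    """Sketch of constraints 1 and 2: keep the linear segment inside the box."""
    # Lower bound: the linear piece must still be able to reach both end points
    # from the pivot (c, O_c); with the pivot on the end-to-end line this
    # reduces to 1/(x1 - x0).
    g_min = max((1.0 - O_c) / (x1 - c), O_c / (c - x0))
    # Upper bound: the hand-over points of the linear segment must not map
    # below 0.1 or above 0.9, so the roll-offs still have room to work with.
    g_max = min((0.9 - O_c) / l_hi, (O_c - 0.1) / l_low)
    return min(max(g, g_min), g_max)
```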

There are some instances where the constraints fail, which may have to do with how Desmos implements/checks them. I think that could be done more robustly. Or maybe I overlooked some conditions that should have been checked? Please feel free to play with the graph and tell me what you think I missed.

Exposed handles, scaling

This was the second reason I started with this: I thought it must be possible to reduce the number of sliders (or other DT modules) which a user needs to adjust and iterate between to get what they want. Humans are amazingly adaptive with such things (otherwise we could not ride bicycles), but if we can find a system that requires less adaptation, that should work quicker and for more people.

exposure/middle grey
Here’s an example. If I set up a curve in filmic to include the complete brightness range of a photo, but then decide that the picture needs overall brightening, I can go to the exposure module and correct exposure. This changes the input seen by filmic, and means that I then also need to adjust the shadow and highlight ranges, and possibly some other settings, too. The other way of brightening the picture is changing the middle grey luminance input. However, here are before/after screenshots of adjusting nothing but middle grey (rather large adjustment to make the effect more visible):

(before/after screenshots: the filmic graph after adjusting nothing but the middle grey luminance)
Note how both the shadow and highlight range were extended at the same time. I’m not sure if this is intended (and if yes, what the reason is), or maybe a bug (of which I know a few were discovered and fixed since v 3.4 came out), but the shadow end is now way below the point where it started, and the highlights in the picture have actually become darker. All ends of the curve are moving at the same time, and I need to re-adjust them again.

The c parameter in the proposed setup works a little differently: It does not touch the upper or lower end of the range and simply puts the pivot at a different input level. This means if you move it by 1EV down, 1 more EV is added to the highlight range, and 1 EV is removed from the shadow range, so whatever the darkest and brightest inputs which the users wanted to map to output black and white, they stay where they were:


I feel that this is a more intuitive way of changing exposure, and (unless I misunderstand what the exposure module does), I think it is mathematically equivalent to changing exposure and then adapting the relative white and black exposure sliders in filmic to match the shifted input levels. Except it can be done using a single number.
Now, it might be that there are other modules applied between exposure compensation and filmic which benefit from having exposure corrected right at the start – I’m writing this to get some feedback, so please let me know! – but at least for a quick tweak, this should deliver exactly what most users would hope to achieve with it. Even in more involved cases, if all the in-between modules are already set up and a user wants to modify overall exposure, changing it this way could avoid having to re-adjust all the other modules.

latitude
I think the “latitude” and “shadows/highlight balance” parameters generally work well enough. However, I find that there is a case for parameters which affect only the shadows or only the highlights. So e.g. if my shadows are a little too crushed and I want to make more space for the shadow roll-off curve, I can change just one end, directly see the effect and know that I don’t need to check if it also affected highlights. Mathematically, the results are completely equivalent to the current state – this is just a different way to input the two numbers which define the linear segment.

That said: I’m actually starting to think it might be even better to let users input the proportion of the shadow range covered by the linear segment. So the user would specify either \frac{l_{low}}{c-x_0}, or \frac{g ~l_{low}}{O_c}. This means adjusting the black exposure would change the length of the linear segment, but the relative proportion (how much is linear vs. how much is covered by the roll-off curve) would stay constant. Opinions?

The mathematics of it

I’ll write up the full maths in another post here (because this is getting long); all equations are also in the Desmos widget above (they can be copied and pasted as LaTeX from Desmos). But here’s a quick explainer:

This is the core function itself:
f(x) = a\frac{bx^2 + x}{bx^2 + x + c}
And this is its derivative (i.e. the steepness of the curve)
f'(x) = ac\frac{2bx + 1}{ \left( bx^2 + x + c \right)^2}

We can directly see that f(x=0)=0 – that’s fine. It’s actually very convenient, because we can move it to whatever x it needs to be later.
What we need to impose is the tangent at the origin and the end point of the exposure range:
f'(x=0)=g
f(x=x_1)=y_1
Note I’m using y_1 as a stand-in for whatever the difference in outputs between the end of the linear segment and the top or bottom of the output range is.
Putting those conditions into the equations above and solving for a and c gives us:
a=c g
c=\frac{y_1}{g}\frac{bx_1^{2}+x_1}{bx_1^{2}+x_1-\frac{y_1}{g}}
…done! That’s the shape of our curve.

So now we only need to move it to wherever we need it. Say we need the upper roll-off: that means we need the curve to start at x=x_{hi} and O=O_{hi}, and end at x=x_1 and O=1. So, simply substitute x-x_{hi} for x in the equations above, use y_1 = 1 - O_{hi} when computing the coefficients a and c, and add O_{hi} to the result as you’re evaluating it:
c_{hyphi}=\frac{1-O_{hi}}{g}\frac{b_{hyphi}\left(x_{1}-x_{hi}\right)^{2}+\left(x_{1}-x_{hi}\right)}{b_{hyphi}\left(x_{1}-x_{hi}\right)^{2}+\left(x_{1}-x_{hi}\right)-\frac{1-O_{hi}}{g}}
a_{hyphi}=c_{hyphi}\ g
O_{hyphi}\left(x\right)=O_{hi}+a_{hyphi}\frac{b_{hyphi}\left(x-x_{hi}\right)^{2}+\left(x-x_{hi}\right)}{b_{hyphi}\left(x-x_{hi}\right)^{2}+\left(x-x_{hi}\right)+c_{hyphi}}\left\{x_{hi}\le x\le x_{1}\right\}

…and that’s the upper roll-off curve.
The lower one uses x_{low} - x and y_1=O_{low} instead, and subtracts the resulting function from O_{low}.
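To make the recipe concrete, here is a small numpy sketch of the whole mapping – linear segment plus the two roll-offs – in the notation used above. This is only my reading of the Desmos sheet, not darktable code, and all the function and variable names are mine:

```python
import numpy as np

def rolloff_coeffs(g, x_span, y_span, b):
    """a and c of f(t) = a*(b*t^2 + t)/(b*t^2 + t + c), chosen so that
    f'(0) = g and f(x_span) = y_span (all in local coordinates)."""
    k = y_span / g
    c = k * (b * x_span**2 + x_span) / (b * x_span**2 + x_span - k)
    return c * g, c

def filmic_like_curve(x, x0, x1, x_low, x_hi, O_low, O_hi, g, b_low, b_hi):
    """Linear segment between (x_low, O_low) and (x_hi, O_hi), hyperbolic
    roll-offs reaching 0 at x0 and 1 at x1, with slope g at both hand-over
    points.  Assumes O_hi = O_low + g * (x_hi - x_low)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)

    # linear segment
    mid = (x >= x_low) & (x <= x_hi)
    out[mid] = O_low + g * (x[mid] - x_low)

    # upper roll-off: substitute x - x_hi, output span 1 - O_hi, offset O_hi
    a_hi, c_hi = rolloff_coeffs(g, x1 - x_hi, 1.0 - O_hi, b_hi)
    hi = x > x_hi
    t = x[hi] - x_hi
    out[hi] = O_hi + a_hi * (b_hi * t**2 + t) / (b_hi * t**2 + t + c_hi)

    # lower roll-off: mirrored, substitute x_low - x, output span O_low,
    # and subtract the result from O_low
    a_lo, c_lo = rolloff_coeffs(g, x_low - x0, O_low, b_low)
    lo = x < x_low
    t = x_low - x[lo]
    out[lo] = O_low - a_lo * (b_low * t**2 + t) / (b_low * t**2 + t + c_lo)

    return out
```

Plotting this over np.linspace(x0, x1, 500) should reproduce the blue curve from the graph above (up to my transcription errors), as long as the inputs respect the constraints from the previous section.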

Conclusion (for now)

I’m hoping to get some feedback here. I’ve not programmed in C++ so far (I’m an engineer by trade, so it’s been Matlab, FORTRAN, Python most of the time), and I’ve not compiled darktable before, or looked at its source code. So I’m kind of hoping for a few things:

  • Someone finds this interesting enough to implement it in DT, to actually demonstrate what it can be like IRL. I’d be more than happy to support this, of course. An example implementation in Python would be no problem, and the derivation of the maths is coming up.
  • Someone is nice enough to show me the way around the Darktable source and how to set up for compiling it. I’m very comfortable on Linux, and computer stuff doesn’t scare me – but I’m new to C++ and DT, so I wouldn’t know about the code conventions used there, and wouldn’t like to offend the maintainers by brazenly ignoring them.
  • In either case, unless everybody thinks that my proposal is completely worthless (how dare you!), there’d be some time to invest. So before I (possibly in tandem with someone else) invest that time, I think it’s smart to solicit views and friendly advice. Would be a shame to do it only to have it rejected, realize it could have worked way better if approached a bit differently, or that there’s some relevant information which I’m completely unaware of.

So: Please let me know what you think, what you like, what you don’t, what you think I may have overlooked …

22 Likes

I love it!
Could we see additional plots? A log-log plot would be cool and, as it came up with log-logistic + skew, the first derivative of the curve realised by the parameters.
(Never mind, I found out how to define my own function.)

Massive kudos for doing this!

1 Like

fwiw i’m using a simple old hermite monotone spline with four points for this purpose. you can play with that like so:

cd /tmp/
wget https://raw.githubusercontent.com/hanatos/vkdt/master/src/pipe/modules/filmcurv/test/test.ipynb
jupyter-notebook test.ipynb

it’s never entirely linear, but close. i’m going to bet that this is actually what’s happening in real film too.

Hi @Mister_Teatime, thanks for sharing this very interesting work.

If I understand correctly you are not simply proposing an alternative parametrization of the existing filmic curve, but a somewhat different curve (the bit where you talk about the Michaelis-Menten curve).

If this observation is correct: wouldn’t it be a better approach to keep the current curve exactly as it is and only revise the parametrization? The advantage would be that this could be done in a perfectly backwards-compatible way.

I have two gripes with filmic’s parametrization in its current form:

  • The overshoots you mention when contrast or latitude is too high.
  • The fact that actual midtone contrast is not preserved when white and black point are adjusted. See here for details.

Here is one possible way to deal with the above while keeping filmic’s exact spline and changing its parametrization only slightly:

  • Filmic could reduce latitude as much as necessary in order to avoid overshooting. Rationale: an overshooting and therefore clipped curve is likely a more extreme departure from what the user had in mind than a curve with reduced size of the linear section.
  • Instead of the currently used contrast parameter, filmic could keep the midtone contrast constant. I also think that midtone contrast would be a better parameter than current “contrast”. See the above post (Part III) for details. Rationale: everything in filmic revolves around the midtones, so controlling the contrast of the midtones directly seems like a good idea.

The above is not a complete proposal yet, but it shows the idea.

1 Like

@jandren I think you could be interested in this post and maybe collaborate with @Mister_Teatime

2 Likes

Thanks, @rgo!

It’s nice to not be the only one staying up at night thinking about this parametrization :slight_smile:
That under and overshoot has kind of been my motivation to explore this topic as well!

Loved the graph tool! :heart_eyes: Very easy to manipulate that curve without accidentally breaking it!
I did kind of the same thing in Python using streamlit: https://share.streamlit.io/jandren/tone-curve-explorer
It would be a pleasure to merge your proposal in there if you want to come with a PR! Not sure how to support the nice in graph interaction though. Anyway, you can find the code here: GitHub - jandren/tone-curve-explorer: A simple streamlit app to explore the shapes of some tone curves.

Fun note, the Michaelis-Menten curve is often called Reinhard tone mapping in image editing if x is your scene-referred pixels.

There is a lot of content over at the sigmoid thread, don’t know if you read it or if it will help in any way but here is the link regardless: New Sigmoid Scene to Display mapping

1 Like

Just be careful there, because after the chemical reaction, what we deal with is the transmittance/absorbance of the translucent medium, so you would probably need to convolve the Beer-Lambert law on top of the Michaelis-Menten one.

Not a bug. In practice, when you need to decrease the middle-grey reference by 1 EV, you also need to increase the DR by 1 EV or so, because white is defined from middle grey: the reference SDR value is white = middle grey + 2.45 EV, and if you start sliding your grey to the left (say by an offset off), you need to increase white appropriately such that white = (old middle grey - off EV) + 2.45 EV + off EV = new middle grey + (2.45 + off) EV. Ergo, to keep the white value at the same luminance intensity, you need to increase the DR as well, not just slide it.

Yeah, no, please don’t do that. Exposure should not be changed at Filmic’s stage, it’s too late. Remember, we have a pipeline, and shit happens earlier than filmic that needs grey roughly pinned around 18% display-referred ASAP in the pipeline. I don’t care about intuitivity, nothing in there is ever intuitive anyway.

I verified the computation and we agree on that.

I checked the computation there too, all good.

The problem I have, again, is that in case c = x(bx - 1) for any x in at least [0; 1] (since we work in normalized log space), your curve is not defined, since you have a division by zero, and I don’t see a way to deal with that.

I like the fact that the latitude here becomes our “tension” parameter over the curve (as in “how sticky to the asymptotes you want it to be”), so it is a lot closer to the shape parameter it was supposed to be. But that math issue needs to be dealt with, because it’s a lot worse than a bare overshoot: it’s a plain hole in the middle of the model.

The nice thing about splines is that you can use them to make almost any kind of shape.

The bad thing about splines is that they can make any kind of shape. So that means you need to somehow force them into whatever shape you want to generate. That’s also a thing we’re seeing with the existing 4th order polynomials: They don’t know anything about asymptotes and will happily overshoot, undershoot or oscillate around whatever you would ideally like them to do.

That’s why it’s usually better to use a curve type which naturally has the kind of shape you want, and then add a parameter or two, to modify it. You can’t make these hyperbolae overshoot. Which means fewer degrees of freedom needed to control them and one thing less for users to worry about. Plus it’s closer to the actual physics, which I always consider a bonus.

Kind of. I’m proposing to keep the straight bit but replace the roll-off curves with different curve types – which are constrained in very similar ways but have some advantages over the current polynomials.

The thing is this: It’s not that contrast or latitude is “too high” – it’s too high for 4th-order polynomials. Or the other way round: polynomials (I actually tried with 2nd and 3rd order, too) can’t cope with the constraints in a lot of cases, particularly if you have a lot of shadow or highlight to compress. So by limiting contrast or latitude (or auto-adjusting black or white exposure) to prevent overshoot with a polynomial, you’d reduce the range of possible contrast/latitude combinations a lot. All of which is not necessary when switching to a roll-off curve that does not overshoot.

Another issue: in order to reduce overshoot, the gradients at the upper and lower ends have been constrained to zero, which means that the contrast at the ends of the mapped range is also zero, which means that the actual mapped range is always a little smaller than what the user specified.

I haven’t played around with it too much but with b at the maximum setting, the new curves should be pretty close to what you’d get with the current definition. Although of course we have no direct comparison yet of how close it is visually when looking at a photo treated that way. That remains to be seen.

Ahhh, so that’s what all those HDR mapping programs are using! Thanks for clearing that up!

I can’t claim credit for that, unfortunately. It’s a free online thing by some company who is offering it to schools and such. Way quicker to test some functions there than write an interactive pyplot GUI or something.
I’m pretty much at home with matplotlib and numpy, not so much web server-based stuff. So actually, your tone curve explorer should be really useful to test some things which aren’t automatically done by Desmos, like scaling laws for some sliders, or more thorough or complicated constraints on inputs etc., and then get feedback on those without having to put them in DT, compile and distribute that.

1 Like

Ahhh, okay, sorry about that.
Monotone cubic interpolation is thus pretty close to Akima splines (which I’m just independently doing something with…), I really should have noticed that.
Actually, the reason I even started thinking about curve types for DT is because I was working on extending Akima splines to give them some new properties and thought they might be a good fit for filmic. But: in the current case, we have only two points for each curve segment, and neither the monotone cubic interpolator nor Akima splines can deal with externally-prescribed tangents without losing their properties, since the thing that makes them work is their way of finding the tangents needed to avoid overshoot – which means that you could not specify them a priori.
A cubic spline segment has 4 degrees of freedom, but the hyperbola I’ve found has only three, and one of them is just for tuning.
I think something based on \tanh(x) or \arctan(x) could be even better, but I haven’t found a closed-form solution for those (plus a good strategy for where and how to add extra parameters), and the rational function is easier to handle, so to me that’s a clear winner.

I agree that replacing the dumb splines by something that matches the problem better (one could even call it a model), is more elegant. As is often the case, elegant rhymes with simple/intuitive also here.

It would be great of course if the new curves could be made sufficiently close to the old ones that practical backwards compatibility could be provided.

A suggestion for your online calculator: you could try fixing input and output gray at 18.45%. In current filmic, output middle gray is fixed anyway. And input middle gray can be adjusted, but for the reasons @anon41087856 mentioned, should not be.

Otherwise, am I the only one who finds the contrast parameter in current filmic (and therefore, I guess, also in your proposal) not very intuitive? In current filmic, contrast is simply the steepness of the linear section of the spline as shown in filmic’s plot. The problem is that the axes of the plot change meaning based on other parameters.

In my experience a more useful parameter would be the true midtone contrast, i.e. the proportionality constant between (linear) input and output intensities around middle gray.

In current filmic, I believe that this midtone contrast is equal to

\frac{cW}{b} / \log\left(\frac{w}{b}+1\right),

where c is filmic’s contrast value, w and b are the absolute values (in EV) of the white- and black-point, and W is 2.44 EV, the output white point.

So I propose to replace this by the midtone contrast c_\mathrm{m} and compute c such that

c = c_\mathrm{m}\frac{b}{W} \log\left(\frac{w}{b}+1\right)

The dumb splines ensure the first and second order continuity of the piece-wise functions while converging to the desired values.

Obviously not possible.

Contrast is a mid-tones preservation strategy across the log tonemapping, linked to film gamma; there is no such thing as true or false contrast. That said, I don’t really understand your scaling.

@hanatos What I don’t like with Hermite splines is we have only 5 control nodes here and it’s really not enough to drive a parametric shape (namely : how close you stick the curve to the asymptotes). At least we would need additional control nodes, which doesn’t simplify the problem user-wise. Vanilla Hermite is just too slow to converge toward its bounds.

Agree about the importance of continuity. Did not look at @Mister_Teatime’s curves under this aspect.

My claim is that the following works (derivation here):

  • Set up filmic parameters in an arbitrary way.
  • Set the contrast parameter to the following value: b / 2.44 * log(w/b + 1), where w and b are the absolute values of the white and black points as set in filmic. For example, for filmic’s default values (w=4.40, b=7.75) we have to set contrast to 1.43.
  • Observe in filmic’s “Ansel Adams zones plot” that the two vertical lines around the gray point are now parallel.

We have just set what I call “midtone contrast” to one. Setting it to, say, 1.3 would require multiplying the result of the above expression by this value.
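For anyone who wants to check the 1.43 figure, here is the arithmetic (assuming the log in the expression is the natural logarithm, which is what reproduces the quoted value):

```python
import math

w, b, W = 4.40, 7.75, 2.44                    # filmic defaults and output white point, in EV
contrast_for_unit_midtone = b / W * math.log(w / b + 1)
print(round(contrast_for_unit_midtone, 2))    # -> 1.43

# the inverse relation proposed above: pick a midtone contrast c_m and derive
# filmic's contrast setting from it
c_m = 1.3
contrast = c_m * b / W * math.log(w / b + 1)  # -> ~1.86
```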

My point is that I feel that this “midtone contrast” is a more natural parameter than what filmic calls contrast currently. When “midtone contrast” is one, said lines are parallel. When it’s greater than one, the lines open up V-like, and vice versa.

In contrast ;-), currently when the contrast parameter is fixed, the two lines around middle gray in the “Ansel Adams zones plot” can be converging, or diverging, or parallel, depending on other parameters.

1 Like

On top of that, I don’t even think that photographic paper follows a Michaelis-Menten curve. Which is why I emphasized the very broad terms. I’m not trying to exactly model chemistry here (if DT was trying to do that, it could offer nice film simulation profiles, but the inputs would be useless to anyone who isn’t used to designing their own film emulsions. Might give cool results, but that’s not the way I imagine myself processing my digital photos on a regular basis) – But if we can use functions which provide similar characteristics to the actual physics, that’s always a bonus.

So … I completely understand why the highlight range would need to increase, but not sure why the bottom would need to do that, too. Assuming the darkest grey I want to resolve (let’s pick the darkest grey in the scene) is 4 EV below middle grey before the adjustment, and the lightest value is 4 EV above middle grey. If I adjust middle grey down by 1 EV (i.e. from 18% to 9%), then relative black is auto-adjusted to 5 EV below middle grey and white to 5 EV above. Which means the total range has just increased by 2 EV (not 1), and my understanding of what happens to the mapping in the meantime is that the whole input is lifted by 1 EV (not 100% sure but that’s what it looks like to me). This means that (assuming relative black exposure was initially set to the darkest value in the input) there is now no data in the bottom 2 EV of my dynamic range (or is that not what it does?).
So … what is the middle grey shift actually trying to accomplish? Is there an analog scenario to which this would correspond, and what is the use case for this?
It’s clear to me now that what I’m trying to achieve when shifting middle grey is not what the slider in DT was made for, but I don’t understand what it actually tries to do. I hope you can spare some time to explain this.

What I am looking for is a way to change exposure and have the relative white and black exposure update accordingly, without having to make manual adjustments. The information is present in DT, so there is no need for the user to make those adjustments by hand.

Display-referred? Weren’t you recommending working in unbounded scene-referred space? So after reading the raw data and applying the sensor curve and white balance, should it not all be linear, scene-referred and unbounded until filmic translates back to display space? Mathematically, middle grey should not matter in linear unbounded space, but practically, I can imagine that some tools might assume certain levels need to be treated differently from others.

“I didn’t have it easy, therefore I need to make it hard for everyone else, too”
I’d rather you didn’t keep saying that.

Intuition is the most subjective thing ever. I don’t know what you find intuitive or not, but refusing to consider improvements is … not nice.

I challenge you to show me a screenshot of the graph where you produce that.
(actually, you can, but only if you manage to find out how to circumvent the constraint that keeps the lower hand-over point above zero and the upper one below one.)
Otherwise (i.e. as long as the linear segment stays in the “box”) there is no divzero. That’s because all terms are positive in the ranges within which the function is evaluated.
Keeping that linear segment in the box is not a problem if you can write actual code and aren’t just using an online graphing tool.

6 Likes

The long and annoying history of “this is not intuitive” across FOSS photography software, condensed down to a TL;DR, is “intuitive = loosely resembles something I already know and understand.” I think it’s a tainted word around here.

6 Likes

Thanks for confirming. It’s always a bit tricky trying to replicate other people’s work … my nightmare was getting it all wrong and looking very stupid indeed.

1 Like

Hmm… okay, there is intuitive, and there is “intuitive”. Depending on which one you mean, I actually understand the sentiment.

If someone was requesting a single “improve image” slider because that’s intuitive to use … yeah. If someone was requiring a 1:1 application of a concept that doesn’t apply here, too. Why is there no “beautify” in DT?

That’s not at all what I mean by intuitive, and I did not think that’s what I sounded like.
What I do mean is that it should be possible for a reasonably versed user to work out what the tools do and to use them correctly, without spending hours buried in documentation or nights at the computer, only to find out that “you can’t activate option x if also using option y, duh” from some forum thread – to provide an equally extreme example as above.

To find a way to make it more intuitive, we need to find out where users’ intuition is leading them, and how we can guide them from wherever that is to where they need to be. One way to do that is to make the tool generally more robust. I think avoiding overshoot will help with that, simply because changes to an image should then more closely resemble the intended effect of a slider, in most circumstances. That makes it easier for someone to directly observe what they’re doing.

Another way to do this, for myself, is to find out where I go wrong and then consider what the interface could look like to keep me from making any false assumptions along the way. That’s specific to me, but I suppose if I misunderstand something in a particular way, there may well be others. And then there’s this whole thread here, which I hope collects some more data points from other people in the same way.

Most seasoned users are unaware of the knowledge they apply regularly, which makes it harder to understand why someone else might not get it. But good software helps you understand it. There’s no need for filmic to be the Dark Souls of photo adjustment tools :smiling_imp:

7 Likes

Well, you’re not operating in a vacuum, and the term has been used with fervor for some time.

A lot of the “this isn’t intuitive” crowd comes from lightroom users whose aunt’s husband’s second cousin’s son told them “darktable is a replacement for lightroom!”, and they took that to mean that we are replicating lightroom feature for feature, then get insanely upset when they figure out we haven’t done that.

It isn’t, and using it is pretty simple. You just have to understand that you don’t have to fiddle every knob every single time.

For almost all my photos I adjust the white and black sliders and that’s it.

2 Likes