Darktable 3: RGB or Lab? Which Modules? Help!

The “hard” setting uses 4th order, the “soft” setting uses 3rd order.

What? Why? We have a system of algebraic equations, why is that not possible? Anyway, if you want to improve the solver, go ahead.

For continuity?

Simply because people have complained about washed-out blacks with the v2 (using cubic splines), so we ensure maximum crunch by imposing a zero slope at the bounds.

Thanks for the explanations!

It’s perfectly possible to run a linear solver, but it’s much faster to work out the general solution on paper, then calculate the coefficients directly. A link to a demo is below.
But actually, I’ll have to tone my reaction down a little bit. That linear solver probably takes only a small fraction of the time needed to also update the photo whenever the curve is changed – so while computing the result directly is several times as fast as the solver, it probably makes a negligible impact on overall runtime (though it does make the code less complicated).
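
To illustrate what I mean by “work it out on paper” (a hedged Python sketch of the idea, not the code from my demo): a cubic with value and slope pinned at both ends has the standard closed-form Hermite coefficients, so no solver is needed:

```python
def hermite_cubic(x0, y0, m0, x1, y1, m1):
    """Cubic with f(x0)=y0, f(x1)=y1, f'(x0)=m0, f'(x1)=m1 -- closed form."""
    h = x1 - x0
    def f(x):
        t = (x - x0) / h              # normalized position in [0, 1]
        h00 = 2*t**3 - 3*t**2 + 1     # standard Hermite basis polynomials
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*y0 + h10*h*m0 + h01*y1 + h11*h*m1
    return f
```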

Yes, I assumed as much, but do you think it would have a noticeable impact if the second derivative were not continuous? I haven’t applied it to photos, so I can’t tell. But letting go of that constraint might allow some other roll-off functions which don’t overshoot so easily.

That said: I’ve looked at a few graphs of actual negatives and photo paper since then, and they all seem to have some sort of sigmoid curve which could be approximated by an arctan or tanh function.

Hmm… not sure I’d agree with “maximum crunch”. If I want crunchy blacks, wouldn’t I rather just reduce the shadow range or increase the contrast setting? If a first derivative of 0 is always imposed, I cannot un-crunch the blacks.

small demo
(I hope this link stays alive for long enough…)

Contents:
1: Coefficients for the roll-off curves are computed inline, no solver needed (I made up my own naming scheme, hope you can decipher it)
2: The top and bottom ends of the full range are set first, and don’t change if anything else is changed
3: Bounds for the extents of the linear range (l_1, l_2) and the contrast setting (“a”) adapt to the dynamic range of the input
4: I also included a parabolic roll-off at the lower end, with just the value at the end and tangency to the linear segment imposed (a sketch follows this list). It overshoots even more quickly (thus not a good idea to actually use), but has the nice property that it turns into a straight line if a is at the lower bound, while the 4th order polynomial wiggles around
5: You can change c without affecting either end of the range. In photo terms: whatever shadows and highlights you chose to preserve will stay preserved, but everything in between gets brighter/darker
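
To make point 4 concrete, here is the closed form for that parabolic roll-off as a small Python sketch (the function and argument names are my own, not the demo’s):

```python
def parabolic_toe(x, x_t, y_t, m, x0, y0):
    # Parabola that leaves the linear segment tangentially at (x_t, y_t)
    # with slope m, and is forced through the lower end point (x0, y0).
    k = (y0 - y_t - m * (x0 - x_t)) / (x0 - x_t) ** 2
    return y_t + m * (x - x_t) + k * (x - x_t) ** 2
```

The bending coefficient k vanishes exactly when (x0, y0) lies on the extension of the linear segment, which is the straight-line case mentioned above.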

Not included:
1: There are probably more ways to include some “smart” bounds for the inputs to prevent silly results, and some of the rules I used are a little arbitrary (but try changing some input values – it’s not too bad, I’d say)
2: The center point can be shifted left and right, but not up and down. Doing that would be mathematically equivalent to shifting left/right and adjusting the extents of the linear range – although that might be less intuitive to do. So I think there could still be value in being able to do this, but I’d save that for another discussion.
3: After looking at a bunch of film/photo paper response curves online, I think it would be best to replace the polynomials with arctan functions (scaled to match the constraints). I naïvely tried to do that in Desmos too, but I don’t think there is actually a closed analytical solution for this, so I could not include it. The same goes for hyperbolae and tanh, unfortunately. I’ll see if I can code up a quick and stable solution for that in Python (a first numeric sketch follows below). arctan and tanh also have the nice property that they become straight lines if the contrast is at the lower bound, but in addition they also never overshoot (as long as the lower end of the linear range is >0 and the upper end <1, of course).
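
As a first stab at that “quick and stable solution in Python” from point 3 (a sketch under my own assumptions; the bracket passed to brentq is arbitrary): scale a tanh so it leaves the linear segment tangentially at (x_t, y_t) with slope s and still reaches the end point (x1, y1). Only the scale w needs a numeric root-find:

```python
import numpy as np
from scipy.optimize import brentq

def tanh_shoulder(x_t, y_t, s, x1, y1):
    # r(x) = y_t + (s/w) * tanh(w * (x - x_t)) matches value and slope at x_t
    # for any w > 0; pick w so that r(x1) = y1 (no closed form, so root-find).
    f = lambda w: y_t + (s / w) * np.tanh(w * (x1 - x_t)) - y1
    w = brentq(f, 1e-6, 1e3)  # assumes f changes sign in this bracket, i.e.
                              # y1 lies between y_t and the linear extrapolation
    return lambda x: y_t + (s / w) * np.tanh(w * (x - x_t))
```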

Update:
I played a bit with the bounds of inputs and came up with more robust specifications:

You can now grab that center point and drag it quite far without getting any overshoot or similar weirdness.

  • The slope of the linear section is constrained such that the flattest slope points directly at one of the end points, and there are only some edge cases where you can make it steep enough to cause overshoot.
  • The hand-over points from linear to the roll-off curves are also limited so that they don’t overshoot (mostly…)
  • You can now specify the “center” input and output.

So this will let you start linear and roll off only at the highlights, or the other way round (as would happen with very underexposed film, I think).

I still think that roll-off curves based on arctan or tanh would work better, but that would require me to do some actual programming. I’m definitely up for finding a robust way to define those, but not today.

Actually, I’m almost at the point where I’d like to see what this would look like in DT, except I have never done anything in C++, or compiled DT…
So I’m kind of reluctant to start doing that (or to hope that someone else buys into these ideas and does it themselves) before I have some opinions from people around here.

Do you think I’m making sense?

You need to apply some of these to some images to see what’ll really happen. I’ve done that and have been surprised at what the real shapes need to be.

It’s not that hard: get an image library in your favorite language, read in an image, and just loop through the pixels (tone is a local function, so you don’t even need to track rows and columns), applying your function to each channel component. Save the image, and regard it in your favorite viewer. Easy-peasy… :crazy_face: (Edit: found this emoticon last week, I just love it…)

Oh, you do need to pay attention to what state the input image is in. I’d use a linear 16-bit TIFF encoded with a working profile, and note that you’d still have to do an output tone/colorspace transform (sRGB?) for rendition. Okay, it gets a little complicated…
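
For anyone wanting to try, a minimal Python sketch of that recipe (my own, not anyone’s existing code; it assumes a linear 16-bit TIFF named input_linear.tif, a placeholder tanh tone curve, and a crude gamma as a stand-in for the output transform):

```python
import numpy as np
import imageio.v3 as iio

def tone_curve(x):
    # Placeholder sigmoid -- substitute whatever roll-off you are testing.
    return 0.5 + 0.5 * np.tanh(3.0 * (x - 0.5))

img = iio.imread("input_linear.tif").astype(np.float64) / 65535.0
out = tone_curve(img)                        # tone is per-pixel, per-channel
out = np.clip(out, 0.0, 1.0) ** (1.0 / 2.2)  # crude stand-in for an sRGB transform
iio.imwrite("output.tif", np.round(out * 65535.0).astype(np.uint16))
```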

So far, I’m pretty much replicating the curves which are used by filmic anyway, so the results will be the same. This is (so far) mostly an exercise in coming up with ways to control those curves to make them easier to handle, and I think that can be judged well enough on the curves alone.
Using different functions for the roll-off, on the other hand, is likely to make visible differences, particularly once you push the parameters out of the range where the current curve looks fine anyway.

But then, if I’m going to apply any curve type I come up with to an image, I think the best way to do that would be within Darktable, because otherwise there’s still a bunch of things I’d need to get right to make sure my conclusions can be transferred. (“Yeah, sure looks nice, but you didn’t take into account that DT actually applies [some module] before handing the image to filmic.” “Cool, but that’s already part of the colour management pipeline.” “That won’t fly in DT because it needs to be able to do [some feature].” Or, my favourite: “You can get the same effect by using this 27-step procedure, so we’re not going to allow that stuff in DT anyway.”)

So… yes, I’d like to know what people think before I put serious time in. I’d also be happy if someone could lend me a hand in setting up a workflow to fork DT, change some code and compile it, and could point me to where in the code the filmic curve lives.

Alternative: Is there a way to export the exact image data that is fed to filmic in DT, and to insert whatever I produce in Python from it back into DT at the correct point in the pipeline? Or a how-to? I keep stumbling over information and discussions I wasn’t aware of, so I think it is smarter to ask first than to reinvent the wheel and look stupid later.

We use Gauss-Jordan on a 4×4 matrix; last time I checked, the solving time was below 5 ms. The beauty of it is that we have a uniform way to deal with different kinds of parametrizations: once the matrix of constraints is defined, it all goes through the same pipe. Want a third order? Set the first column to zero. Want to relax a constraint? Set the corresponding line to zero. Simple, uniform, elegant.
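
To sketch that “same pipe” idea in Python (this is an illustration of the principle, not darktable’s actual C code): each row of the matrix encodes one constraint – a value or a derivative at a node – and everything goes through one solve:

```python
import numpy as np

def constraint_row(x, order=3, derivative=0):
    # Row for the condition P^(d)(x) = b, coefficients ordered highest power first.
    row = np.zeros(order + 1)
    for k in range(order + 1):
        p = order - k                      # exponent of this column
        if p >= derivative:
            c = 1.0
            for j in range(derivative):    # falling factorial p(p-1)...
                c *= p - j
            row[k] = c * x ** (p - derivative)
    return row

# Cubic through (0, 0) and (1, 1) with slope 2 imposed at both ends:
A = np.array([constraint_row(0.0), constraint_row(1.0),
              constraint_row(0.0, derivative=1), constraint_row(1.0, derivative=1)])
b = np.array([0.0, 1.0, 2.0, 2.0])
coeffs = np.linalg.solve(A, b)   # filmic does this step with Gauss-Jordan
# Relaxing a constraint (zeroing a row) makes A singular, so one would then
# switch to a least-squares solve such as np.linalg.lstsq.
```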

Well, that’s the beauty of any approximation… There are dozens of ways to parametrize a sigmoid out there, and they are all equal \pm\epsilon.

That could very easily be added as an option in filmic without having to change any pixel code: just remove the corresponding lines and columns in the matrix of the linear system. We already have “hard” (4th order with imposed curvature at latitude bounds and DR bounds) and “soft” (3rd order with imposed curvature at DR bounds); we could also have “safe” (3rd order with curvature imposed at latitude but not at DR bounds).

Also, we could check that \frac{d^2P(x)}{dx^2} \neq 0 on the computed spline after solving the system, but then I’m not sure what to do if the check fails.
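
That check could look something like this (my sketch; it samples the second derivative rather than doing a formal root test):

```python
import numpy as np

def curvature_reverses(coeffs, x0, x1, n=256):
    # coeffs ordered highest power first; sample P'' on [x0, x1]
    # and report whether it changes sign anywhere on the grid.
    d2 = np.polyder(np.poly1d(coeffs), 2)
    vals = d2(np.linspace(x0, x1, n))
    return vals.min() < 0.0 < vals.max()
```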

The problem is that pure sigmoid functions don’t allow controlling the latitude contrast separately from the shoulder/toe rate of convergence toward the bounds. You will find that if you set the latitude range to 0 (so, in practice, directly connect toe and shoulder), you solve 90% of the overshooting issues but void the meaning of the contrast parameter.

If the contrast option in colour balance is being removed from v3.5 onwards, what will be the recommended module to replace Curves/RGB Curves, please? I assume the tone equalizer.

Yes, tone equalizer will allow you to easily make global or local tonal adjustments.

Okay, so it’s actually pretty benevolent :slight_smile: Point taken.

I’ve done a little more searching, and found some examples of people using rational functions with polynomial terms. The original publication on Filmic by Haarm-Pieter Duiker has some useful examples:

  • slide 36 shows a few example sigmoids – note how they don’t have horizontal gradients at the ends
  • slides 55 and 56 have equations, and they use rational functions – kind of similar to the Michaelis-Menten curve I mentioned earlier, but with a few additional terms, which makes complete sense since you’d want the ability to adjust them somewhat

That prompted me to go off and do some maths, and I think I’ve found a nice solution:
f(x) = \frac{a(bx^2 + x)}{bx^2 + x + c}

That’s my baseline hyperbola; I’ve had to fill a few pages with equations, but I’ve just made it dance to my tune.
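
Numerically, the baseline curve is tiny to evaluate. A hedged Python sketch (the parameter values are made up for illustration):

```python
import numpy as np

def rolloff(x, a=1.0, b=4.0, c=0.25):
    # f(x) = a(bx^2 + x) / (bx^2 + x + c): passes through 0, approaches a
    u = b * x**2 + x
    return a * u / (u + c)

xs = np.linspace(0.0, 1.0, 11)
print(np.round(rolloff(xs), 3))   # monotone for x >= 0 when b, c > 0
```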

I’ll write a longer post explaining the details in a bit, but here’s a screenshot:
[screenshot: lower roll-off comparison]
This is the lower roll-off, with the new hyperbolic curve in blue, the original curve (that is: my interpretation of it) in grey, and a second-order curve dashed.
If we extend the lower linear range a bit, it looks like this:
[screenshot: extended lower linear range]
The polynomial overshoots, but the hyperbola does not. The same thing happens if you extend the shadow range however far down you care to: the hyperbola never overshoots and simply stretches as far as it needs to.

Another nice property: If the gradient of the linear segment is reduced to minimum, the roll-off curve becomes a straight line:
[screenshot: minimum-gradient case]
Note how the 4th order polynomial keeps “dancing” around the straight line because it is constrained to arrive with gradient 0.

I’ve even grafted an additional parameter onto it to allow making it more or less “crunchy”, and I put a bunch of adaptive limits/constraints on the input parameters to make it harder (though not quite impossible – yet) to produce impossible or silly curves.

Feedback welcome. As I said, I’ll explain the maths and other details in a separate post. It will take a little time to write everything up in human-readable form (that is: for humans who can’t read my handwriting).

@paperdigits: Is that what you meant?

Yeah that’s a good start.

The curve looks good, but I’m worried by the denominator. How do you make sure b x^2 + x + c \neq 0, \forall x \in \mathbb{R}, or at least \forall x \in [0 ; \text{toe}]?
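
A quick way to check this (my own aside, not from either poster): on x \geq 0 every term of the denominator is nonnegative, so b, c > 0 already suffices there; for the whole real line one can additionally require a negative discriminant:

```python
def denominator_is_safe(b, c, whole_real_line=False):
    # b*x**2 + x + c > 0 for x >= 0 whenever b, c > 0 (all terms nonnegative);
    # on all of R it also needs a negative discriminant: 1 - 4*b*c < 0.
    if b <= 0 or c <= 0:
        return False
    return 1 - 4 * b * c < 0 if whole_real_line else True
```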

Yeah but slope = 1 is not really a use case here, so I wouldn’t put too much emphasis on getting a straight line.

I’m starting to wonder if it wouldn’t be possible to keep the current 3rd/4th orders and add constraints over the derivatives in the solver (like y'(x) > 0 to ensure monotonicity, i.e. no gradient reversal, and y''(x) \neq 0 to avoid curvature reversals). That would yield an optimization rather than a solve. Something like this: https://fr.mathworks.com/help/optim/ug/lsqlin.html. The reason is that higher-order splines should be able to model any “round things”; I don’t like having to divide by a polynomial (mind those zeros and their neighbourhood, which will cause arithmetic problems and float denormals that slow down the vector code), and so far the splines are handled uniformly by a vectorized FMA routine that is very efficient. As much as possible, I would like to stick to polynomials.

EDIT : https://stanford.edu/class/engr108/lectures/constrained-least-squares_slides.pdf Just spot on.
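
A hedged Python counterpart of that idea (MATLAB’s lsqlin has no exact scipy twin, so this sketch enforces y'(x) > 0 on a sample grid via SLSQP; all node values and grid sizes are made up):

```python
import numpy as np
from scipy.optimize import minimize

order = 4
x_fit = np.array([0.0, 0.5, 1.0])           # nodes the curve should pass through
y_fit = np.array([0.0, 0.45, 1.0])
A = np.vander(x_fit, order + 1)             # rows [x^4, x^3, x^2, x, 1]

x_chk = np.linspace(0.0, 1.0, 33)           # grid where y'(x) > 0 is enforced
D = np.vander(x_chk, order + 1)[:, 1:] * np.arange(order, 0, -1)

res = minimize(lambda c: np.sum((A @ c - y_fit) ** 2),
               x0=np.zeros(order + 1),
               constraints={"type": "ineq", "fun": lambda c: D @ c[:-1] - 1e-6},
               method="SLSQP")
coeffs = res.x   # least-squares spline, monotone on the sample grid
```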

Bingo!

As noted in the introduction, BVLS has been used to solve a variety of statistical problems arising in inverse problems. Stark and Parker [15] used BVLS to find a confidence region for the velocity with which seismic waves propagate in the Earth’s core. The upper and lower bounds resulted from a nonlinear transformation that rendered the problem exactly linear, and from thermodynamic constraints on the monotonicity of velocity with radius in the Earth’s outer core.

Imposing conditions on monotonicity and velocity is spot-on what we are trying to do here, and since we have a closed form for the desired model, any derivative constraint turns into a simple linear equation that goes into the matrix, so it’s only a matter of unrolling the algorithm.
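
To spell that last step out (my illustration): for P(x) = \sum_k c_k x^k, a derivative condition at a point x_0 is linear in the coefficients,

P'(x_0) = \sum_k k\, c_k\, x_0^{k-1} > 0,

so for a 4th-order spline it is just the row (0,\; 1,\; 2x_0,\; 3x_0^2,\; 4x_0^3) applied to (c_0, \dots, c_4), appended to the system as one inequality.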

Smelled it, found it, remains to nail it.

The fulcrum and contrast slider were quite quick… what would be the equivalent way to apply a global contrast in the TE?

A very slight S-shape.

I considered using least squares, but that’s kind of cheating. I almost gave in because the second derivative resisted for a long time, but I found a closed analytical solution for everything I wanted. I’m about to sit down and write a longer post on it, or several… more later today :slight_smile:

So we will have to create a tone curve preset, maybe a strong one, and then use opacity as a new global contrast slider?

I’m on board, and thrilled that work on DT is striving to push the envelope, but really: a raw editor with no contrast slider in a current module… It may have been flawed, but the combination of the fulcrum and contrast slider in the current CB module was quick, and if not correct, could often produce pleasing results right from the same control where you were already tweaking tones and colour… I think it will be missed.

Hold on.

Consider me holding… thanks for the response…