Darktable 3: RGB or Lab? Which Modules? Help!

To me, this is weird. If you want your subject to be middle gray, then put the exposure there the first time. If your subject isn’t middle gray, then use another tool, such as tone eq, to render your subject where you want it tone-wise. In my mind, filmic (and exposure) are about stretching or contracting the histogram to a pleasing place. Then you can use other tools to tonally place subjects where you want them.

Here it is, if you want it… the recommendation is to use exposure instead.

You don’t need to go back and forth if you use a dynamic shortcut for exposure: hold the E key down and use the mouse scroll wheel. This works whatever module you are in, and you can set up several of these shortcuts…

Ah, so you’d map your subject to middle grey in any case and then move it where you want it with tools later in the pipeline? Does that mean you use the color picker to adjust exposure first, to get the subject to middle grey then filmic to define the top and bottom end of the histogram, and then a third one to move the subject where you actually want it (in the cases where that’s not middle grey)?

That’s also what I’d like to use it for. Yet, I find that many changes I make in other places (or even in filmic itself) mean that I have to reiterate over filmic’s settings. And I think this could work better if it was possible to input the parameters which control filmic in a different way.
Example: If I adjust filmic for middle grey, then reduce the overall brightness of my subject, I may find that the shadows are now a little too crushed. To counter that, I can adjust the shadow range in filmic, but then there’s a chance that the mapping curve overshoots (which visually crushes the shadows even more than before – counter-intuitive), so I also have to change the extent of the linear range in filmic, or the contrast; but that affects highlights as well as shadows, so a bunch of other adjustments I made will need revisiting.

Of course, there will always be modules which require readjusting other modules, and once you have an established routine for dealing with your images, you can probably deal with it. But if I’m experimenting, or new to filmic, that is going to be a lot harder. Being able to set input and output for “(not-)middle grey” (dangit, this needs a name), independently of the upper and lower bounds can reduce the need for that iteration, and also make the whole thing more intuitive, because you’d no longer have to know from some external source that exposure must be set in order for filmic to work correctly – you could fire up DT, enable filmic, map the bottom, middle and top of the range and have a fairly decent result already.

I’m not trying to persuade you that your way of using DT is wrong or inferior (*), but with a module whose approach differs as much from what came before as filmic’s does, it would be a very weird coincidence if the first parameterisation, control layout and user interface were already the best possible versions. Power to everyone who got it the first time around (and of course to the person who came up with it!), but I think filmic can achieve a lot more than it already has if we actively look for ways to improve it. I don’t think it needs to be dumbed down to do that, either. There’s no need to break anyone’s workflow in order to permit different workflows, or to come up with visually more intuitive, more robust or just plain different ways of controlling the module.

(*) or that filmic was somehow bad – if I thought that, I’d just walk away and not use it. I’m mostly working with RawTherapee these days, but filmic and the colour calibration tool (and the things they can do, as demonstrated by Pierre) are the reasons I’m using DT a bit more again. If I found a way to make them work as fast as my current process in RT, I’d switch over for most of my stuff, never mind my muscle memory and established routines. So, really, I’m trying to contribute some useful suggestions to filmic, to help it become more useful for myself and others.


I use exposure to put what I want to be middle gray at middle gray – sometimes that’s my subject, sometimes something in the background. I rarely use a color picker; I do it by eye. Then filmic to adjust the highlights and shadows. I usually leave a bit of headroom in the histogram, as I turn on local contrast after filmic. Then color balance or color zones for color adjustments. At that point my photo should have the global tone and color I want, and I start working on adjustments.

Such as?

This is essentially what I do, just that the middle is controlled by the exposure module. You could always enable the v3 filmic controls, which give you a middle gray slider in filmic.

You are correct, and the filmic rgb module in 3.4.1 is the 3rd or 4th iteration of the filmic module.

Sure, and people are open to ideas, but those ideas have to be concrete and fleshed out. You haven’t presented anything other than an abstract idea that it could be better. If you have some ideas, please share them: write out a detailed statement or make a UI mock-up.

Why, thank you! I had not been aware the maths was around here, and it’s actually a lot less scary than I’d expected. Analytical geometry has been a large part of my job for the last seven years, so this might be something where I can actually contribute.

So, it seems that @anon41087856 is using a 4th order polynomial for the shadow and highlight ranges, and those are unfortunately prone to overshooting (unless constrained wisely, which is not always an option).

Oh, and he posted Python code (here) – isn’t that convenient for me… except there’s a reference to an undefined callable “setup_spline()”, but I think I’ll be able to do something with that…

Some quick remarks:
1: Fourth-order polynomials like that can be solved analytically. I know this is just Python PoC code, so I hope that DT is not running a linear solver to deal with those constraints. If it does: let me know, I’ve just worked out the analytical solution (a numpy sketch of the system follows after these remarks).

2: Why is the curvature of the “roll-off” polynomials constrained to that of the middle curve (i.e. to zero)? In aerodynamics that would be very important, but I doubt anyone could visually notice a small jump in the rate at which the brightness gradient changes… heck, there are still people constructing aeroplane components from circles and straight lines.

3: Is it necessary to have the gradient at the bottom and top end constrained to zero, too? There might be a good reason based on colour science (which I wouldn’t be aware of), or maybe that’s how actual film reacts?

4 (and then I’ll shut up for tonight): I’d bet that actual film has some kind of exponential or hyperbolic curve. The upper end might look something like the Michaelis-Menten curve in real life – that’s the activity curve of enzymes as a function of the concentration of “food” they are given.
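
To make remark 1 concrete, here is a small numpy sketch of the system as I read it from remarks 2 and 3: value and slope continuity with the linear segment at the latitude bound, zero curvature there, and value plus zero slope at the black point. The function and parameter names are mine, not darktable’s:

```python
import numpy as np

def toe_quartic(x_b, y_b, x_t, y_t, slope):
    """Coefficients c[0..4] of a 4th-order toe polynomial P(x) = sum c[k] x^k.

    Constraints (my reading of the thread, names invented):
      P(x_t)   = y_t    -- value continuity with the linear segment
      P'(x_t)  = slope  -- slope continuity with the linear segment
      P''(x_t) = 0      -- curvature matched to the straight (zero-curvature) part
      P(x_b)   = y_b    -- hit the black point exactly
      P'(x_b)  = 0      -- horizontal tangent at the bottom end
    """
    def rows(x):
        val = [x**k for k in range(5)]
        d1 = [k * x**(k - 1) if k >= 1 else 0.0 for k in range(5)]
        d2 = [k * (k - 1) * x**(k - 2) if k >= 2 else 0.0 for k in range(5)]
        return val, d1, d2

    v_t, d_t, dd_t = rows(x_t)
    v_b, d_b, _ = rows(x_b)
    A = np.array([v_t, d_t, dd_t, v_b, d_b])
    rhs = np.array([y_t, slope, 0.0, y_b, 0.0])
    return np.linalg.solve(A, rhs)

coeffs = toe_quartic(x_b=0.0, y_b=0.0, x_t=0.3, y_t=0.25, slope=1.2)
```

The 5×5 matrix depends only on x_b and x_t, so it can be inverted once on paper – that’s the analytical solution I mean.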

My best algebra and calculus days are all behind me…way behind me…still very interesting to hear your analysis.
Perhaps you should share these comments on the sigmoid curve thread, as they might get more attention there?

The “hard” setting uses 4th order, the “soft” setting uses 3rd order.

What? Why? We have a system of algebraic equations, why is that not possible? Anyway, if you want to improve the solver, go ahead.

For continuity?

Simply because people complained about washed-out blacks with v2 (which used cubic splines), so we ensure maximum crunch by imposing the slope at the bounds.


Thanks for the explanations!

It’s perfectly possible to run a linear solver, but it’s much faster to work out the general solution on paper, then calculate the coefficients directly. Link to a demo is below.
But actually, I’ll have to tone my reaction down a little: that linear solver probably takes only a small fraction of the time needed to update the photo whenever the curve changes – so while computing the result directly is several times as fast as the solver, the overall impact on runtime is probably negligible (it does make the code less complicated, though).

Yes, I assumed that much, but do you think it would have a noticeable impact if the second derivative was not continuous? I haven’t applied it to photos, so I can’t tell. But letting go of the constraint might allow some other roll-off functions which don’t overshoot so easily.

That said: I’ve looked at a few graphs of actual negatives and photo paper since then, and they all seem to have some sort of sigmoid curve which could be approximated by an arctan or tanh function.

Hmm… not sure if I’d agree with maximum crunch. If I want crunchy blacks, wouldn’t I rather just reduce the shadow range or increase the contrast setting? If first derivative =0 is always imposed, I cannot un-crunch the blacks.

small demo
(I hope this link stays alive for long enough…)

Contents:
1: Coefficients for the roll-off curves are computed inline, no solver needed (I made up my own naming scheme, hope you can decipher it)
2: Top and bottom end of the full range are set first, and don’t change when anything else is changed
3: Bounds for the extents of the linear range (l_1, l_2) and the contrast setting (“a”) adapt to the dynamic range of the input
4: I also included a parabolic roll-off at the lower end, with just the values at the end and tangency to the linear segment imposed. It overshoots even more quickly (thus not a good idea to actually use), but has the nice property that it turns into a straight line when a is at its lower bound, while the 4th order polynomial wiggles around
5: You can change c without affecting either end of the range. In photo terms: whatever shadows and highlights you chose to preserve will still stay preserved, but everything in between gets brighter/darker

Not included:
1: There are probably more ways to include some “smart” bounds on the inputs to prevent silly results, and some of the rules I used are a little arbitrary (but try changing any input values – it’s not too bad, I’d say)
2: The center point can be shifted left and right, but not up and down. Doing that would be mathematically equivalent to shifting left/right and adjusting the extents of the linear range – although that might be less intuitive to do. So I think there could still be value in being able to do this, but I’d save it for another discussion.
3: After looking at a bunch of film/photo paper response curves online, I think it would be best to replace the polynomials with arctan functions (scaled to match the constraints). I naïvely tried to do that in Desmos too, but I don’t think there is a closed analytical solution for this, so I could not include it; the same goes for hyperbolae and tanh, unfortunately. I’ll see if I can code up a quick and stable solution for that in Python (a rough numerical sketch follows below). arctan and tanh also have the nice property that they become straight lines when the contrast is at its lower bound, but in addition they never overshoot (as long as the lower end of the linear range is > 0 and the upper end is < 1, of course).
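
Here’s roughly what I have in mind for the arctan case. Since there doesn’t seem to be a closed form, a one-dimensional root find (scipy’s brentq in this sketch) picks the shape factor so the curve hits the end point; value and slope continuity hold by construction. All names here are made up:

```python
import numpy as np
from scipy.optimize import brentq

def arctan_shoulder(x0, y0, slope, x_end, y_end):
    """Scaled arctan roll-off: f(x) = y0 + (slope / B) * arctan(B * (x - x0)).

    f meets the linear segment at (x0, y0) with gradient `slope` by
    construction; B is found numerically so that f(x_end) = y_end.
    This needs 0 < (y_end - y0) < slope * (x_end - x0), i.e. the end
    point must sit below the extended straight line -- which is exactly
    the no-overshoot condition.
    """
    d, dy = x_end - x0, y_end - y0
    assert 0.0 < dy < slope * d, "end point must lie below the linear extension"
    # (slope/B) * arctan(B*d) falls monotonically from slope*d (B -> 0) to 0 (B -> inf)
    residual = lambda B: (slope / B) * np.arctan(B * d) - dy
    B = brentq(residual, 1e-9, 1e9)
    return lambda x: y0 + (slope / B) * np.arctan(B * (x - x0))

# illustrative numbers only: shoulder from (0.6, 0.7) with slope 1.5, ending at (4.0, 1.0)
f = arctan_shoulder(x0=0.6, y0=0.7, slope=1.5, x_end=4.0, y_end=1.0)
```

Note that this curve keeps a small positive gradient at x_end (its asymptote lies above y_end), which matches the sigmoids on slide 36 that don’t flatten out completely.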

Update:
I played a bit with the bounds of inputs and came up with more robust specifications:

You can now grab that center point and drag it quite far without getting any overshoot or similar weirdness.

  • The slope of the linear section is constrained such that the flattest slope will have it point directly at one of the end points, and there’s only some edge cases where you can make it so steep as to cause overshoot.
  • The hand-over points from linear to the roll-off curves are also limited such that they don’t overshoot (mostly…)
  • You can now specify the “center” input and output.

So this will let you start linear and roll off only at the highlights, or the other way round (as would happen with very underexposed film, I think).

I still think that roll-off curves based on arctan or tanh would work better but that would require me to do some actual programming. I’m definitely up for finding a robust way to define those, but not today.

Actually, I’m almost at the point where I’d like to see what this would look like in DT, except I have never done anything in C++, or compiled DT …
So, I’m kind of reluctant to start doing that (or to hope that someone else buys into these ideas to do it themselves) before I have some opinions from people around here.

Do you think I’m making sense?

You need to apply some of these to some images to see what’ll really happen. I’ve done that and have been surprised at what the real shapes need to be.

It’s not that hard: get an image library in your favorite language, read in an image, and just loop through the pixels (tone is a local function, so you don’t even need to track rows and columns), applying your function to each channel component. Save the image and regard it in your favorite viewer. Easy-peasy… :crazy_face: (Edit: found this emoticon last week, I just love it…)


Oh, you do need to pay attention to what state the input image is in. I’d use a linear 16-bit TIFF encoded with a working profile, and note that you’d still have to do an output tone/colorspace transform (sRGB?) for rendition. Okay, it gets a little complicated…
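
Something like this minimal Python sketch, assuming the input really is a linear 16-bit TIFF and ignoring colour management beyond the sRGB transfer at the end (the file names and the placeholder curve are just for illustration):

```python
import numpy as np
import imageio.v3 as iio

def tone_curve(x):
    # placeholder - swap in whatever roll-off you are testing
    return np.clip(1.2 * x, 0.0, 1.0)

def linear_to_srgb(x):
    # standard sRGB encoding (transfer function only, no gamut handling)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

img = iio.imread("linear_input.tif").astype(np.float64) / 65535.0  # 16-bit linear data
out = tone_curve(img)  # tone is per-pixel and per-channel: no explicit loops needed
out = linear_to_srgb(np.clip(out, 0.0, 1.0))
iio.imwrite("preview.png", (out * 255.0 + 0.5).astype(np.uint8))
```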

So far, I’m pretty much replicating the curves which are used by filmic anyway, so the results will be the same. This is (so far) mostly an exercise in coming up with ways to control those curves to make them easier to handle. I think so far that can be done well enough just on the curves.
Using different functions for the roll-off, on the other hand, is likely to make visible differences, particularly once you push the parameters out of the range where the current curve looks fine anyway.

But then, if I’m going to apply any curve type I come up with to an image, I think the best way to do that would be within Darktable, because otherwise there’s still a bunch of things I’d need to get right to make sure my conclusions transfer. (“Yeah, sure looks nice, but you didn’t take into account that DT actually applies […] before handing the image to filmic.” “Cool, but that’s already part of the colour management pipeline.” “That won’t fly in DT because it needs to be able to do […].” Or, my favourite: “You can get the same effect by using this 27-step procedure, so we’re not going to allow that stuff in DT anyway.”)

So … yes, I’d like to know what people think before I put serious time in. I’d also be happy if someone could lend me a hand in setting up a routine for me to fork DT, change some code and compile it, and point me to where in the code the filmic curve lives.

Alternative: Is there a way to export the exact image data that is fed to filmic in DT, and to insert whatever I produce in Python from it back into DT at the correct point in the pipeline? Or a how-to? I keep stumbling over information and discussions I wasn’t aware of, so I think it is smarter to ask first than to reinvent the wheel and look stupid later.


We use Gauss-Jordan on a 4×4 matrix; last time I checked, the solving time was below 5 ms. The beauty of it is we have a uniform way to deal with different kinds of parametrizations: once the matrix of constraints is defined, it all goes through the same pipe. Want a third order? Set the first column to zero. Want to relax a constraint? Set the corresponding line to zero. Simple, uniform, elegant.
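
For anyone following along, the mechanics look something like this in numpy terms. This is only an illustration of the row/column trick, not darktable’s actual code (which runs its own Gauss-Jordan routine in C); lstsq stands in here because it tolerates the deliberately singular systems:

```python
import numpy as np

# Toy 4x4 constraint system A @ coeffs = rhs for a spline segment.
# The numbers are arbitrary; the point is the mechanics.
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [4.0, 3.0, 2.0, 1.0],
              [0.0, 0.0, 2.0, 1.0],
              [1.0, 2.0, 4.0, 8.0]])
rhs = np.array([1.0, 0.0, 0.5, 2.0])

A_third = A.copy()
A_third[:, 0] = 0.0        # "want a third order? set the first column to zero"

A_relaxed, rhs_relaxed = A.copy(), rhs.copy()
A_relaxed[2, :] = 0.0      # "want to relax a constraint? zero the corresponding line"
rhs_relaxed[2] = 0.0

for M, b in ((A, rhs), (A_third, rhs), (A_relaxed, rhs_relaxed)):
    coeffs, *_ = np.linalg.lstsq(M, b, rcond=None)  # minimum-norm solution
    print(coeffs)
```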

Well, that’s the beauty of any approximation… There are dozens of ways to parametrize a sigmoid out there, and they are all equal \pm\epsilon.

That could very easily be an option in filmic without having to change any pixel code: just remove the corresponding lines and columns in the matrix of the linear system. We already have “hard” (4th order with imposed curvature at latitude bounds and DR bounds) and “soft” (3rd order with imposed curvature at DR bounds); we could also have “safe” (3rd order with curvature imposed at latitude but not at DR bounds).

Also, we could check that \frac{d^2P(x)}{dx^2} \neq 0 on the computed spline after solving the system, but then I’m not sure what to do if the check fails.

Problem is, the pure-sigmoid functions don’t let you control the latitude contrast separately from the shoulder/toe rate of convergence toward the bounds. You will find that if you set the latitude range to 0 (so, in practice, directly connect toe and shoulder), you solve 90% of the overshooting issues but void the meaning of the contrast parameter.
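
To see the coupling on the simplest possible example (a plain tanh sigmoid, not filmic’s parametrization): take f(x) = \frac{1}{2}\left(1 + \tanh(k(x - p))\right). The mid-tone contrast is f'(p) = k/2, while the distance to the white asymptote decays like 1 - f(x) \approx e^{-2k(x - p)}. The single parameter k sets both at once, so you cannot steepen the middle without also speeding up the convergence at the bounds.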

If the contrast option in colour balance is being removed from v3.5 onwards, what will be the recommended module to replace Curves/RGB Curves, please? I assume the tone equalizer.

Yes, tone equalizer will allow you to easily make global or local tonal adjustments.


okay, so it’s actually pretty benevolent :slight_smile: Point taken.

I’ve done a little more searching, and found some examples of people using rational functions with polynomial terms. The original publication on Filmic by Haarm-Pieter Duiker has some useful examples:

  • slide 36 shows a few example sigmoids – note how they don’t have horizontal gradients at the ends
  • slides 55 and 56 have equations, and they use rational functions. Kind of similar to the Michaelis-Menten curve I mentioned earlier, but with a few additional terms, which makes complete sense since you’d want the ability to adjust them somewhat.

That prompted me to go off and do some maths, and I think I’ve found a nice solution:
f(x) = \frac{a\,(b x^2 + x)}{b x^2 + x + c}

That’s my baseline hyperbola, and I’ve had to fill a few pages with equations, but I’ve just made it dance to my tune.
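
As a quick sanity check, the baseline curve behaves itself in a few lines of Python (the parameter values are illustrative only, not the ones from my derivation):

```python
import numpy as np

def rolloff(x, a, b, c):
    """Baseline hyperbola f(x) = a * (b*x**2 + x) / (b*x**2 + x + c).

    For b >= 0 and c > 0 the denominator stays positive on x >= 0, and
    f(0) = 0, f'(0) = a / c, f(x) -> a as x -> infinity:
    it leaves the origin along a straight line and saturates at `a`
    without ever crossing it.
    """
    n = b * x**2 + x
    return a * n / (n + c)

x = np.linspace(0.0, 10.0, 1001)
y = rolloff(x, a=1.0, b=0.5, c=2.0)                # illustrative numbers only
assert np.all(np.diff(y) > 0) and y.max() < 1.0    # monotone, no overshoot
```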

I’ll write a longer post explaining the details in a bit, but here’s a screenshot:
[screenshot]
This is the lower roll-off, with the new hyperbolic curve in blue, the original curve (that is: my interpretation of it) in grey, and a second-order curve dashed.
If we extend the lower linear range a bit, it looks like this:
[screenshot]
The polynomial overshoots, but the hyperbola does not. The same thing happens if you extend the shadow range as far down as you care to: the hyperbola does not overshoot, it simply stretches as far as it needs to.

Another nice property: If the gradient of the linear segment is reduced to minimum, the roll-off curve becomes a straight line:
[screenshot]
Note how the 4th order polynomial keeps “dancing” around the straight line because it is constrained to arrive with gradient 0.

I’ve even worked an additional parameter into it that makes the curve more or less “crunchy”, and I put a bunch of adaptive limits/constraints on the input parameters to make it harder (though not quite impossible – yet) to produce impossible or silly curves.

Feedback welcome. As I said, I’ll explain the maths and other details in a separate post in a bit. It will take a little time to write everything up in human-readable form (that is: for humans who can’t read my handwriting).

@paperdigits: Is that what you meant?

Yeah that’s a good start.

The curve looks good, but I’m worried by the denominator. How do you make sure b x^2 + x + c \neq 0, \forall x \in \mathbb{R}, or at least \forall x \in [0 ; \text{toe}]?

Yeah but slope = 1 is not really a use case here, so I wouldn’t put too much emphasis on getting a straight line.

I’m starting to wonder if it wouldn’t be possible to keep the current 3rd/4th orders and add constraints over the derivatives in the solver (like y''(x) \neq 0 to ensure monotonicity and y'(x) > 0 to avoid gradient reversal). That would turn the solving into an optimization. Something like this: https://fr.mathworks.com/help/optim/ug/lsqlin.html. The reason is that higher-order splines should be able to model any “round thing”; I don’t like having to divide by a polynomial (mind those zeros and their neighbourhood – they will cause arithmetic problems and float denormals that slow down the vector code), and so far the splines are handled uniformly by a vectorized FMA routine that is very efficient. As much as possible, I would like to stick to polynomials.

EDIT: https://stanford.edu/class/engr108/lectures/constrained-least-squares_slides.pdf – just spot on.

Bingo!

As noted in the introduction, BVLS has been used to solve a variety of statistical problems arising in inverse problems. Stark and Parker [15] used BVLS to find a confidence region for the velocity with which seismic waves propagate in the Earth’s core. The upper and lower bounds resulted from a nonlinear transformation that rendered the problem exactly linear, and from thermodynamic constraints on the monotonicity of velocity with radius in the Earth’s outer core.

Imposing conditions on monotonicity and velocity is spot-on what we are trying to do here, and since we have a closed form for the desired model, any derivative constraint turns into a simple linear equation going into the matrix, so it’s only a matter of unrolling the algorithm.
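
A sketch of that idea in Python: scipy has no direct linear-inequality-constrained least squares (lsq_linear/BVLS only bounds the variables themselves), so this stand-in samples the derivative constraint on a grid and hands everything to SLSQP. The toy target and all names are made up:

```python
import numpy as np
from scipy.optimize import minimize

def fit_monotone_quartic(x_data, y_data, x_grid):
    """Least-squares quartic fit with y'(x) >= 0 enforced on a sample grid.

    The derivative constraints are linear in the coefficients, so every
    grid point contributes one linear inequality -- the lsqlin/BVLS idea,
    solved here with a general-purpose optimizer.
    """
    A = np.vander(x_data, 5, increasing=True)      # rows: [1, x, x^2, x^3, x^4]
    # derivative of the monomial basis: [0, 1, 2x, 3x^2, 4x^3]
    D = np.column_stack([np.zeros_like(x_grid), np.ones_like(x_grid),
                         2 * x_grid, 3 * x_grid**2, 4 * x_grid**3])
    objective = lambda c: np.sum((A @ c - y_data) ** 2)
    monotone = {"type": "ineq", "fun": lambda c: D @ c}   # y'(x_grid) >= 0
    return minimize(objective, x0=np.zeros(5), constraints=monotone,
                    method="SLSQP").x

x = np.linspace(0.0, 1.0, 20)
coeffs = fit_monotone_quartic(x, np.sqrt(x), x_grid=np.linspace(0.0, 1.0, 50))
```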

Smelled it, found it, remains to nail it.
