Following up on the matter of making darktable’s editing workflow more efficient and its images look better: my latest module was merged into master yesterday. It is based on Troy Sobotka’s filmic plugin for Blender and on my previous work on logarithmic tone mapping, all bundled at once. It is intended to fit with my other change, the color balance module. Together, these two can replace a lot of silly things inside darktable without messing up the colors, as most Lab modules do, or the whole pipe, as the base curve does (please, folks, stop using that nonsense). As usual, it is packed with optimizers that try to guess the best parameters from readings of the image, to spare you the hassle of pushing every slider forward and backward forever.
very interesting @aurelienpierre! I’ll definitely try this out asap. I have been experimenting with log encoding and scene-referred editing inside RT, and I can definitely see some benefits. it’s hard to quantify (for me at least), but it seems to produce smoother tonal variations that are more pleasing to the eye. the difference is not dramatic, but it is there, especially on portrait shots (probably because we can more easily perceive small differences in skin tones?). thanks for the informative posts! (and thanks also to the other people involved!)
@aurelienpierre - I followed the tutorial in this thread and tried to follow the logic:
It looks like the code linearizes “something”, which I’m guessing is the LAB TRC, because the camera input profile is already linear.
Is above correct? If not, what is linearized?
Then the code applies a log2 curve that takes parameters from the image, modified, if the user wants, by picking middle gray, etc. At this point it seems to me that the image is no longer encoded linearly. Edit: this question is meant rhetorically, in case anyone wonders.
Is above correct? Is the image no longer encoded linearly?
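For anyone following along, the log2 encoding step being discussed can be sketched roughly like this. The parameter names and default values here are my own illustration (not darktable’s actual code): the idea is that the scene-linear value is expressed in EV relative to middle grey, then the chosen dynamic range is normalised into [0, 1].

```python
import math

def log_encode(x, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Map a scene-linear value into [0, 1] on a log2 scale.

    grey     : scene-linear value that should sit at middle grey
    black_ev : black point, in EV below middle grey
    white_ev : white point, in EV above middle grey
    (names and defaults are illustrative, not darktable's API)
    """
    ev = math.log2(max(x, 1e-9) / grey)             # exposure relative to grey
    return (ev - black_ev) / (white_ev - black_ev)  # normalise the dynamic range

print(log_encode(0.18))           # middle grey lands at 8/12 ≈ 0.667
print(log_encode(0.18 * 2 ** 4))  # the white point lands at 1.0
```

After this step the encoding is indeed no longer linear: equal ratios of scene light (stops) map to equal distances on the encoded axis.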
The next step is to apply a tone curve, which happens in LAB space, but using automatic RGB to restore the otherwise lost saturation.
Is the tone curve itself applied in LAB space even though the saturation is added back in using “automatic RGB”?
And what is “automatic RGB”?
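For reference, the general ratio-preserving idea being asked about (apply the curve to a luminance estimate only, then rescale the RGB channels by the same factor so their relationships survive) can be sketched like this. The Rec. 709 luminance weights and the function names are my own assumptions, not darktable’s implementation:

```python
def apply_curve_preserving_ratios(rgb, curve):
    """Apply a tone curve to luminance only, then rescale each RGB channel
    by the same ratio so the channel ratios (and hence hue/saturation
    relationships) are preserved. A generic sketch, not darktable's code."""
    r, g, b = rgb
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luminance (assumed)
    if lum <= 0.0:
        return (0.0, 0.0, 0.0)
    ratio = curve(lum) / lum                     # how much the curve scaled luminance
    return (r * ratio, g * ratio, b * ratio)

# A curve that doubles luminance doubles every channel equally,
# so the channel ratios stay intact:
print(apply_curve_preserving_ratios((0.4, 0.2, 0.1), lambda y: 2 * y))
```

This is in contrast to applying the curve per channel, which compresses the brightest channel more than the others and desaturates or shifts the hue.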
Question 2 above is meant rhetorically. This new darktable algorithm linearizes “something” (that’s question 1), applies a log2 (or gamma, user’s choice) curve, and then modifies the result in LAB space, but adds saturation back using RGB. The result is hardly “scene-referred”. It also has hue shifts all over the place.
Whether the new darktable algorithms produce “pretty results” or an easier, quicker way to get to “pretty results” is a matter of aesthetic opinion, and personally I don’t care whether an operation is done on linear RGB or otherwise as long as the user is happy (and preferably also has a clue regarding what they are doing to their RGB data).
@gez - several times you’ve expressed a rather intense dislike of editing nonlinear RGB, often in connection with doing so in GIMP. Is there something about the darktable “unbreak profile” algorithm that makes it OK, acceptable, etc., to operate on nonlinear RGB?
I’ve been testing it a bit and found that, for instance, the contrast slider produces sudden large jumps between some 0.01 steps, whereas for the most part the changes are smooth and small. See the screenshot below for an example: there is a sudden loss of contrast when increasing the value of the contrast slider.
Colors are, in general, better edited in linear space, which is why they come first in the pipe. However, since lightness adjustments change the colors, and it’s super difficult to judge colors properly on dark images, it’s better to edit them afterwards.
That should be fixed now, or at least you get a view of the curve to understand how your parameters affect the result.
When I change the middle grey from 4 to 8 %, I expect to get the original image back by decreasing the black and white points by 1 EV. That is not the case. Why?
What are the neutral values for the S-curve? I would find it easier to understand what it does if I knew where it starts from.
For manual tweaking, I think the curve view should show only the S-curve (not including the log part). It would then start out as a straight line.
Materializing the linear part (and the fulcrum?) on the curve would also help control what we are doing (especially at the limits).
I have written an extensive doc for that module, emptying my brain after weeks of tests, so I’m happy to RTFM you : https://hackmd.io/BoyDhxRwQFq3z3H2d6wjqA?view This is a collaborative work I do with @asn, so feel free to correct my bad English (that’s my third language, I do my best).
Because they are not related. The middle grey is a pre-amplification applied before the log; values below middle grey are just less “log-ified” than values above. We discussed this before on the unbreak color profile; it’s the same.
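To illustrate what “pre-amplification” means here, a minimal sketch (the values are illustrative, not darktable’s internals): the input is divided by middle grey before the log is taken, which is the same thing as an exposure gain of −log2(grey) EV on the scene-linear data.

```python
import math

def pre_amplify(x, grey):
    """Middle grey acts as a pre-amplification: the scene-linear input is
    divided by grey before any log is taken. (illustrative sketch only)"""
    return x / grey

# Halving the grey value is exactly a +1 EV push on the input to the log:
gain_ev = math.log2(pre_amplify(0.18, grey=0.09) / pre_amplify(0.18, grey=0.18))
print(gain_ev)  # 1.0
```

Note this gain happens before the log, while the S-curve parameters act on the encoded (display-side) values, which is why changing grey is not simply undone by sliding the black and white points.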
Neutral values for the S-curve would be a 2-based exponential function (the inverse of the log) with a y offset equal to ((black exposure) / (dynamic range)) / (output grey)^(1 / output power factor). That’s pretty much the default parameters, except you should set the contrast to 1, but that sometimes produces weird things.
You only get a pure S curve if white level = −black level (that is, grey = 50 %). Otherwise, you get an S curve composed with the exponential function mentioned above. I could use a log scale for the graph, as I did in the tone curve module, but I’m pretty sure it would be more misleading than helpful.
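As a sanity check of the “neutral curve” idea, here is a small sketch (the exact encode form is assumed from the discussion above, not taken from darktable’s source): composing the log encoding with the matching 2-based exponential gives back the scene-linear value, which is what makes that exponential the neutral setting for the S-curve stage.

```python
import math

def log_encode(x, grey, black_ev, white_ev):
    # forward log2 encoding, as sketched earlier in the thread (assumed form)
    return (math.log2(x / grey) - black_ev) / (white_ev - black_ev)

def exp_decode(y, grey, black_ev, white_ev):
    # the "neutral" curve: a 2-based exponential that inverts the log exactly
    return grey * 2.0 ** (y * (white_ev - black_ev) + black_ev)

x = 0.5
y = log_encode(x, 0.18, -8.0, 4.0)
print(exp_decode(y, 0.18, -8.0, 4.0))  # ≈ 0.5 — the round trip is the identity
```

Any S-curve shape other than this exponential then deviates from identity, which is exactly the contrast the curve adds on top of the log encoding.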
Thanks for the whole work!
I’m not sure if this is the right place (if not, author and/or mods, just let me know), but I’m exercising the module with this image from a recent play raw, and I would like to know if the workflow, settings and final result are ok. (trying to be as close to the scene as possible)
1 - WB
2 - filmic
3 - exposure
4 - color balance (local adjustment on the tail of the storm, to stress its rainy nature)