Darktable 3: RGB or Lab? Which Modules? Help!

Technically speaking, dodging and burning are masked exposure corrections, so the best module would be exposure combined with drawn and parametric masks :slight_smile: That said, you can use anything else that produces results you're happy with; just keep in mind that those aren't "real" dodge and burn :wink:
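In case it helps to see the idea in numbers: in a linear space an exposure correction is a plain multiply, and the mask just interpolates between "untouched" and "fully corrected". A minimal numpy sketch (function name and signature are mine, not darktable's actual code):

```python
import numpy as np

def dodge_burn(rgb, mask, ev):
    """Masked exposure correction on scene-linear RGB.

    rgb:  float array (H, W, 3), linear values
    mask: float array (H, W), 0 = untouched, 1 = full effect
    ev:   exposure offset in stops (positive = dodge, negative = burn)
    """
    gain = 2.0 ** ev  # exposure is a plain multiply in linear RGB
    return rgb * (1.0 + mask[..., None] * (gain - 1.0))
```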


Where does split-toning fall: linear RGB or not?

Well, I find the saturation slider in contrast/brightness/saturation still works well. I think Lab is still very good for tweaking saturation.

I think I've sort of understood exposure + filmic, and am now tackling color balance. Have I understood correctly that exposure + filmic is for getting the overall exposure right (within display range, within gamut, not clipping, etc… I'm not sure of all the terms), and that other modules are then used to give the image "pop" or whatever look one is aiming for?

Can I make an s-curve using the 3 factor sliders in color balance? Would that be equivalent to a tone curve or RGB curve?
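For reference, the slope/offset/power variant of color balance follows the ASC CDL form, which per channel is roughly the following (a sketch; the parameter names are mine and the real module adds more on top):

```python
import numpy as np

def cdl(x, slope=1.0, offset=0.0, power=1.0):
    """ASC CDL-style grade on linear RGB: (x * slope + offset) ** power.

    slope scales, offset lifts or crushes, power bends the curve.
    """
    return np.clip(x * slope + offset, 0.0, None) ** power
```

If I read the maths right, a single slope/offset/power stage is monotone with no inflection point, so on its own it cannot produce a true s-curve; that is what a tone curve (or filmic's contrast) is for.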

Thanks a lot. I have been a very happy DT user for a month, and this is a must-read. Experimenting led me to a few minor conclusions about which modules to use. It is great to get the full picture and the list of recommended modules. Sorry, just praise, nothing constructive.


Positive feedback is always welcome, thanks @alwinvrm

Wow…
Great article!! Thank you. I will have to re-read it carefully.

I have recently discovered darktable (well, I tried it long ago, but it looked quite difficult for me, coming from Lightroom and Capture One).
I was impressed by some of its tools.

One of my reasons for selecting darktable over the others was that it does its calculations in 32-bit float, in what I thought was the best color space and the one most related to human vision: CIE Lab.

I thought that processing in CIE Lab would be the best option and would give the most appealing colors.

I had not thought about the problems with sharpening or blurring that arise because the lightness encoding is not linear.

After reading this, I understand the problem and the migration to another space where luminance is linear, so that calculations are simpler and don't produce artifacts.

It was clear to me that the working space was CIE Lab (except for the initial steps of generating colors from camera data and the last steps of generating the color components for the destination device).

What I have not understood after reading is what the working color space is now, in darktable 3.0 and onwards.

Are most of the calculations done in the linear/additive CIE XYZ space?
Are they done in CIE xyY?
Do you use CIE RGB colors?

All of these seem to be linear, additive color spaces, but I would like to know where the color calculations are done.
It seems you are using xyY from now on; is that right?

You say many of the modules still use CIE Lab.
But then, does the darkroom pipeline convert to CIE Lab before sending the result as input to those modules, and back again after getting the result from each module?

Are all those modules planned to be ported to the current working space?

In other software, I frequently use frequency separation for sharpening,
and a high pass filter with a linear light or soft light blending mode to bring out fine detail where I want it.

But there is no radius control in the high pass filter.
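To illustrate what I mean, this is roughly what frequency separation sharpening with an explicit radius looks like outside darktable; a sketch with numpy/scipy (the names and default amounts are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_sharpen(lum, radius=3.0, amount=0.5):
    """Split into low/high frequencies, then boost the high band.

    lum:    float array (H, W), linear luminance
    radius: sigma of the Gaussian blur, in pixels (the missing 'radius')
    amount: strength of the detail boost
    """
    low = gaussian_filter(lum, sigma=radius)  # low-frequency layer
    high = lum - low                          # high-frequency (detail) layer
    return lum + amount * high                # recombine with boosted detail
```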

I have noticed that many of the blending modes do not produce the same results in darktable as in PS or other software.
Linear light gives you darker zones where you get lighter ones in PS.

Is it because the blending calculations are done in linear space, with grey being 18% instead of 50%?
That is what I understood, at least.

How can we circumvent that problem to get similar results?
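If my understanding is right, the textbook linear light formula is base + 2 * (blend - pivot), with Photoshop pivoting at 50% grey; in a scene-referred pipeline both the neutral point and the encoding of the blend layer move, which would change which zones lighten or darken. A sketch with the pivot made explicit (the parameter is my own generalization, not a documented darktable option):

```python
def linear_light(base, blend, pivot=0.5):
    """Linear light blend: base + 2 * (blend - pivot).

    pivot=0.5  -> Photoshop-style, 50% grey is neutral
    pivot=0.18 -> a scene-referred variant where 18% grey is neutral
    Blend values above the pivot lighten, values below darken.
    """
    return base + 2.0 * (blend - pivot)
```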

One of the most attractive features of darktable, over Capture One for example, is being able to apply masks (with lots of options) and blending modes; in Capture One you do not have blending modes, and the masks are less feature-rich.

I just browsed the article, although I'm not a darktable user. However, I have been reading up on the theory of color models over the last few months. Let me add a few remarks:

“Lab sets the middle gray (18%) to 50% (…)”: This is quite confusing as two concepts are mixed, namely intensity and lightness. L*a*b* uses Lightness L* (perceived brightness).

“The XYZ space represents what happens in the retina, and Lab represents what subsequently happens in the brain (…)”: CIE XYZ is (AFAIK) an absolute color model, while L*a*b* is a “double-relative” color model (relative intensity, colors expressed relative to white). I think the statement is a bit too much of a simplification.
Also, the range of L* in Lab may be 0 to 100, but there may be a thousand steps in between. Some implementations may use only 7 or 8 bits for L* (limiting the effective precision), however (likewise for a* and b*).

“Everything (e.g. the camera sensor) starts from a linear RGB space(…)”: It may be “some RGB space”, but most likely not the same because of different color filters, sensors and amplifiers. And you should remark that X, Y, and Z are not colors (in the sense of the definition: “Everything you can see is a color, and everything you can’t isn’t”).

"Lab is (…) highly non-linear.": You forgot to specify the domain: linear L* maps to non-linear intensity, but it maps rather well to linear "perceived brightness".

"Lab is like applying 2.44 gamma to linear RGB (…)": Isn't it more like x^0.42 (gamma 2.38)? (Page 633 of Poynton, Charles. 2012. Digital Video and HD, Second Edition. Waltham, MA: Morgan Kaufmann, says: "CIE L*: An objective quantity defined by the CIE, approximately the 0.42-power of relative luminance." Wikipedia claims it's basically Y^(1/3).)

For all practical purposes, in darktable, Lab is used as a color space to push pixels. So the matter is not whether L represents brightness or intensity, but that it re-encodes your pixel values with a non-linear scale.

"CIE XYZ is (AFAIK) an absolute color model, while L*a*b* is a 'double-relative' color model (relative intensity, colors expressed relative to white). I think the statement is a bit too much of a simplification."

No. XYZ derives from the physiological cone response to the light spectrum; Lab is computed directly from XYZ by adding a cube root and an offset, and models the psychological distortions added on top.
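Concretely, the standard CIE definition (above the linear toe):

$$L^* = 116\, f\!\left(\tfrac{Y}{Y_n}\right) - 16, \quad a^* = 500\left(f\!\left(\tfrac{X}{X_n}\right) - f\!\left(\tfrac{Y}{Y_n}\right)\right), \quad b^* = 200\left(f\!\left(\tfrac{Y}{Y_n}\right) - f\!\left(\tfrac{Z}{Z_n}\right)\right)$$

with $f(t) = t^{1/3}$ for $t > (6/29)^3$.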

That is not the issue here.

“Everything (e.g. the camera sensor) starts from a linear RGB space(…)”: It may be “some RGB space”, but most likely not the same because of different color filters, sensors and amplifiers.

I never said all sensor RGB were the same.

The whole post is about that actually.

"Lab is like applying 2.44 gamma to linear RGB (…)": Isn't it more like x^0.42 (gamma 2.38)? (Page 633 of Poynton, Charles. 2012. Digital Video and HD, Second Edition. Waltham, MA: Morgan Kaufmann, says: "CIE L*: An objective quantity defined by the CIE, approximately the 0.42-power of relative luminance." Wikipedia claims it's basically Y^(1/3).)

I said "is like" because $0.1845^{1/2.44} = 0.5002$. Exactly, it is $0.1845^{1/3} \times 1.16 - 0.16 = 0.5003$. Hair-splitting and nitpicking.
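(If anyone wants to check those numbers, two lines of Python reproduce them:)

```python
print(0.1845 ** (1 / 2.44))           # ~0.5002, the "gamma 2.44" shortcut
print(1.16 * 0.1845 ** (1/3) - 0.16)  # ~0.5003, the actual L* formula, scaled 0..1
```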

Sorry for more nitpicking… it’s related to cone response, but actually derives from trichromatic matching data (not directly from cone response). Anyway I don’t think that affects the spirit of this nice article!


For some reason, I'm only stumbling across this post today, and it seems I have questions…

Question: Could that be changed by switching to 16-bit or 32-bit floating-point numbers for the L* coordinate, or is there some deeper issue with CIELAB?
Not that this would be necessary if we can get everything done without using CIELAB, but there might be some operations which are just more intuitive that way. So it would be useful if there were a way to convert back and forth (or to define an operation in such a CIELAB-derived space, translate it to linear RGB, and then apply it).

…and I think this is the bit that has always tripped me up when I tried using filmic so far: it seems to assume that the middle grey in the image it gets stays the middle grey in the output, but this is not obvious from the GUI. (Or am I wrong about this? At least there doesn't seem to be a direct way of defining which input is mapped to middle grey.)

Since I switched to shooting RAW with Magic Lantern, I'm almost always using its ETTR module. So everything is exposed for the highlights, and that means my middle grey is just 18% of the brightest parts of the scene. And since filmic RGB changes the brightness of the image quite a bit, I was trying to set the exposure at the same time as the exposure range, because I find that the most intuitive thing to do, particularly if a picture has a large dynamic range; but I'm not sure I can preserve it all while also keeping enough contrast on the main subject.

So I then had to go back and forth between filmic and the exposure module, and every time I change exposure, I need to readjust filmic. If everything before filmic works in linear RGB, then those processing steps should be mostly independent of where middle grey is, shouldn't they? That means it should be possible to have a workflow that defines what I want mapped to middle grey in the filmic RGB module itself. That way, the exposure module becomes unnecessary (except for masked local adjustments), and users could define the limits of the exposure range in the same step as the middle, which should reduce the number of times you need to go back and forth. It would also remove the need to adjust exposure before everything else, which can be unintuitive, given how much filmic can change the overall appearance of the whole picture (and given that in some cases, I may not /want/ to map my subject precisely to middle grey).


You use the exposure module to adjust your midtones, then filmic to tame the highlights and shadows. The exposure module lets you define what is middle gray. If you want to set middle gray by the numbers, use the color picker and adjust exposure until your selection is middle gray.
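By the numbers: since exposure is a multiply in linear space, the EV correction that puts a picked luminance at middle grey is just a log ratio. A sketch (the function name is mine, not how darktable's picker works internally):

```python
import math

def ev_to_middle_grey(picked, target=0.18):
    """EV correction moving a picked linear luminance onto middle grey."""
    return math.log2(target / picked)

print(ev_to_middle_grey(0.045))  # a subject picked at 4.5% needs +2 EV
```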


I find that using this view and changing the parameters in filmic will sort out what goes on… You can reveal the middle grey slider, but it is recommended to make the adjustment via exposure:
[screenshot: filmic's curve view in darktable]

When you convert to L*a*b* space, the conversion depends on a function that uses a cube root for pixels where the ratio of luminance to maximum luminance is above ~0.9%, but reverts to a linear function below that. The baking-in of this piecewise function is where the remarks come from that L*a*b* is not well suited to situations where the dynamic range exceeds 100:1. This is why the scene-referred blending modes in the latest darktable releases instead use the JzCzhz colour space, which is a perceptual space like L*a*b*, but one that doesn't suffer from this same weakness.
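To make the piecewise bit concrete, here is the CIE f(t) in question; the threshold (6/29)^3 ≈ 0.0089 is the ~0.9% mentioned above (a direct transcription of the standard formula, not darktable code):

```python
def cielab_f(t):
    """CIE f(t) used in L* = 116 * f(Y / Yn) - 16.

    Cube root above (6/29)**3 ~ 0.0089 of white, linear below it,
    so everything darker than ~0.9% of white lands on a linear toe:
    the 'noise floor' baked into Lab.
    """
    delta = 6.0 / 29.0
    if t > delta ** 3:
        return t ** (1.0 / 3.0)
    return t / (3.0 * delta ** 2) + 4.0 / 29.0
```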


See, I'd find it way more intuitive to have the top, bottom and center of the range set up in just one tool, particularly if the RAW is exposed for highlights, and particularly if I want the main subject somewhat below or above middle grey, because that means that I need to:

  • set exposure so my subject lands a bit below middle grey
  • set the highlight and shadow range to include everything I don’t want to clip
  • see where my subject ends up on the brightness scale (because adjusting filmic moves it closer to or further from middle grey)
  • re-adjust exposure
  • re-adjust filmic because the shift in middle grey means my shadow and highlight ranges have shifted, too
  • check again …
  • repeat
    This is to get a look which I know I can get, but there’s no direct way to get it. With RAWs exposed for highlights, this can take quite a bit of tinkering. I’m sure that doing this often enough will eventually give me enough intuition to iterate less, but I don’t think this use case needs to be so complicated.

I may be misunderstanding some part of the maths, but as far as I can tell, it should be perfectly feasible to define the top, bottom and center of the exposure range relative to the input: you define the value corresponding to black, the value for the brightest highlights, and the "level of interest" (which needs a better name; it's not really middle grey anymore, although that's what it should default to), and whatever is between that level and the bottom is your shadow range, and the same goes for highlights. That would immediately take the guesswork out of it.
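If it helps, the arithmetic behind that proposal is just two log ratios once the three input levels are picked (a sketch of the idea; the names and example numbers are mine):

```python
import math

def filmic_ranges(black, grey, white):
    """Shadow/highlight ranges in EV from three picked linear input levels."""
    shadow_range = math.log2(grey / black)     # EV between grey and black
    highlight_range = math.log2(white / grey)  # EV between white and grey
    return shadow_range, highlight_range

print(filmic_ranges(0.002, 0.18, 1.0))  # ~(6.5, 2.5) EV
```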

I don't think it would break any existing workflows or change the underlying maths.
Anyone who is used to thinking in terms of middle grey and exposes their pictures accordingly could work exactly as before, but anyone who needs to adjust exposure after adjusting filmic would be spared the need to iterate.

Reveal the middle grey slider? Which version of DT? I’m on 3.4.0.
edit
oh, found it! So … what does it do? When I change the setting, the straight line in the graph keeps linking "0 EV" and "18%", but both the highlight and shadow range sliders increase if I brighten the image, and both decrease if I darken it. I think I understand why the marker stays at "0 EV" (because the slider only changes what input level is interpreted as 0 EV), but why would this change the dynamic range? I.e.: if I darken the picture, the shadow range should increase by the amount by which I increased the middle grey level, and the highlight range should decrease by the same amount (because I'm declaring parts of the picture shadow which used to be considered highlight). The current implementation means that after I have adjusted the middle grey slider, I will always have to re-adjust the shadow and highlight ranges, iteratively.
And of course I still have no direct control over anything but black (relative to middle grey), white (also relative), and 18% output. So if I want the subject to be brighter or darker than 18%, I still cannot directly prescribe that, but have to run iterations around iterations to get it where I want it.
end edit

That graph actually does help, though I’ve found myself often enough wishing I could:
1: fix the top and bottom end of the range relative to input levels
2: grab that middle grey and move it to a different point (“this is the input level I’m interested in, and this is the output value I want to map it to”)
3: keep the top and bottom ends of the curve from overshooting and thus mapping part of my shadows to zero (and highlights to 1) even though they're within the range. I know I can change that by adjusting the width and contrast parameters, but that requires iterative adjustments again.

Again, I need to state that I have not seen the maths behind filmic written down (and am not used to reading darktable source code, so it'd be a big task to look it up), but I'm reasonably confident with analysis and vector maths, and I think by now I have a decent idea of what the transformations in filmic actually look like; so I'll be happy for @anon41087856 (or someone else who knows better than me) to explain to me that this is mathematically either impossible or hard to implement. I think, however, that there are solutions to these issues which don't break the current functionality or workflows but should make it easier for people currently struggling with filmic to get results quicker and more robustly:

1: is a question of providing an input to define the top and bottom ends in terms of input values, as an alternative to "middle grey ±x". I think it'd make things more comfortable because it removes some of the ways in which different user inputs are coupled to each other (change one thing, and now the other one is out of whack…).
2: is the more important one to me, because being able to set middle grey directly would reduce the need to go back and forth between exposure and filmic.
3: is probably a bigger task and may not be easily done, as it's not just a question of the user interface. This would require adjusting the definition of the highlight and shadow "roll-off" curves, but could help to make the whole process more robust (i.e. reduce the range of user inputs leading to "broken" results). This is a point where some of the maths would need an update, but there must be a mapping function which does not overshoot for any inputs. Actually, I know there is, because I've been working on curve definitions for smooth geometries in a different context, and it's possible: tanh() functions could work (a toy example below), some splines, or an additional curve parameter injected into the current function which only becomes active if overshoot needs to be prevented.
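For instance, a tanh-based shoulder is bounded by construction, so no input can overshoot the display range (a toy curve to illustrate the point, not a proposal for filmic's actual spline):

```python
import math

def soft_shoulder(x, white=2.0):
    """Bounded highlight roll-off mapping [0, inf) into [0, 1).

    Near zero the slope is ~1/white (quasi-linear); large inputs
    approach 1 asymptotically, so the output never overshoots.
    """
    return math.tanh(x / white)
```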

Ah, that makes sense. So L*a*b* effectively assumes a noise floor. Thanks a bunch!

I've poked at this a bit, but I keep coming back to this realization: there's no simple way to determine an optimal tone map for all scenes. This really became clear in my recent attempt to re-process some into-the-sun images with a log curve to lift the shadows, followed by a regular control-point curve to scooch the data around to please. The shots were taken from a moving train, so each scene was slightly different. Comparing the respective control-point curves after I finished, each is different enough to compel me to think that there are no reliable heuristics to characterize them.

Note that this utterance on my part should not dismiss the possibility of people smarter than I (a rather large subset of the population) coming up with nice encode-able heuristics… :laughing:

To me, this is weird. If you want your subject to be middle gray, then put the exposure there the first time. If your subject isn’t middle gray, then use another tool, such as tone eq, to render your subject where you want it tone-wise. In my mind, filmic (and exposure) are about stretching or contracting the histogram to a pleasing place. Then you can use other tools to tonally place subjects where you want them.