the darktable 3.0 video series

Hi,

I began a series of videos today to celebrate darktable 3.0. The point is to give first-hand technical info about the guts of darktable, mainly to tutorial makers, who will probably do a better job than me at making exciting content. As for myself, I will try to find a balance between bug fixes and knowledge sharing.

1) Introduction: image processing background and its implications for the pixel pipeline

Whole series:

https://www.youtube.com/playlist?list=PL4EYo8VotTsiZLr3BqGeBRj-qYGO63bIv

49 Likes

Thank you so much for putting this video together. I learnt so SO much today. :slight_smile:

1 Like

Thank you. Great video! Keep up your fantastic work.

1 Like

Yes, thank you for all your work on the darktable project, and for your education of the community.

It occurred to me that all the modules in the darktable workspace could have some basic colour-coding to signify which part of the processing pipeline you are in. E.g. the pre-demosaicing modules could have a black or grey-coloured font for the module name. Or it could be something more discreet than a font colour, e.g. a little coloured dot or square next to the name. This would make it more obvious when a user inadvertently re-orders, e.g., a linear-space module into a display-referred group of modules, and you could see it at a glance.

Or at least you could make modules such as demosaicing, input colour profile, and base curve / filmic stand out a bit more from the rest of the modules, to mark the boundaries between the raw-data, linear, and non-linear processing stages.

5 Likes

Thank you very much

1 Like

:clap: :clap: :clap:

1 Like

That's a double-edged sword though, because it would add yet more visual load to the GUI.

I must admit I really like the style you chose for explaining things in your video. First, the pen-and-paper explanations are simple and general; second, you give real explanations instead of just telling us which button to press to get some result; third, linking theory and implementation works well that way; and fourth, you are well prepared and know which aspects to cover and which to keep for another occasion. This is what I miss in many of the other tutorials out there.

Edit: I removed the garbage I wrote last night and asked the question in an understandable way four posts below.

1 Like

You can support @anon41087856 in many ways, but PayPal is probably the most universal: https://www.paypal.com/paypalme2/aurelienpierre

1 Like

Excellent video. Very interesting. Can’t wait for the next.

As seen by us (our eyes and brain)?

He showed that

the perception we have has little to do with the physical intensity of light.
That's something to keep in mind when we are remapping dynamic ranges.
The point here is that your perceptual system does not behave like your (camera) sensor.
We don't remap linearly, we remap in logarithmic space.

(at 5:57)

As I understood it, you don't compress 14 EVs (camera) into, say, 10 EVs (HDR monitor) in a linear way if you want to reproduce on the monitor what your eyes/brain saw in the scene, since they saw it in logarithmic space; or: we don't see four candles as four light units, but only three. I suspect that if we kept everything linear all the way, we would keep that pale, flat look we see when we open a raw file and do nothing with it.
(a noob interpretation of the video :roll_eyes:)
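
To make the linear-versus-logarithmic point concrete, here is a toy numeric sketch (my own illustration, not anything from darktable's code; the 14 EV / 10 EV figures are just the example numbers from above). It squeezes a 14 EV scene into a 10 EV display, once with a straight linear rescale and once by scaling the EVs themselves:

```python
# Toy sketch: compress a 14 EV scene into a 10 EV display, comparing a
# naive linear rescale with a remap done in logarithmic (EV) space.
import numpy as np

scene_dr, display_dr = 14.0, 10.0               # dynamic ranges in stops
ev_scene = np.arange(-scene_dr, 1.0)            # one sample per scene stop
lum = 2.0 ** ev_scene                           # linear luminance, white = 1.0

# Linear rescale: map [2^-14, 1] onto [2^-10, 1] with a straight line.
black_in, black_out = 2.0 ** -scene_dr, 2.0 ** -display_dr
linear = black_out + (lum - black_in) * (1.0 - black_out) / (1.0 - black_in)

# Log-space remap: scale the EVs themselves, so every scene stop becomes
# display_dr / scene_dr display stops, uniformly.
log_space = 2.0 ** (ev_scene * display_dr / scene_dr)

for e, a, b in zip(ev_scene, np.log2(linear), np.log2(log_space)):
    print(f"scene {e:5.0f} EV -> linear {a:6.2f} EV | log-space {b:6.2f} EV")
```

The printout shows the linear rescale packing the three darkest scene stops into roughly half a display stop (crushed shadows), while the log-space remap keeps every stop evenly spaced, which is exactly the "we don't remap linearly, we remap in logarithmic space" point.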

Hm, maybe I explained what I meant wrongly. I should not write such things around midnight. The need to compress the dynamic range becomes pretty clear from the video. But not all scenes have 20 EV of dynamic range, and the remapping function could be any function. The questions I mean are: why does this curve type that we use work for what we want to achieve? Or, in other words, why can I compress the dynamic range and still get a natural-looking result? And why do I need it even for low-dynamic-range scenes (or don't I)?

1 Like

Great video. I did not know that the embedded color matrix is optimized for the midtones, or how the exposure value ties in with that. Granted, I always use custom profiles or Adobe profiles, but it's good to know.

As far as I understand, the camera data has a bigger dynamic range than the monitor we use for viewing.
Therefore we can compress the camera data and still get a natural-looking result.

This is the first time (after watching the video) that I've got a good grip on the nature of the pixel pipeline and the order of the modules. Thanks very much, Aurélien.

1 Like

Adobe use hue twists in their DCP input profiles so that colours look perceptually OK (the chromatic adaptation phenomenon) when you change brightness values. As far as I know, the linear ICC input profiles that darktable ships with do not account for chromatic adaptation, so you need to pay special attention to the base curve you apply. That’s how the filmic module was born – to counteract the saturation issues. But (correct me if I’m wrong) the chromatic adaptation issues still stand unresolved in the current state of affairs, unless you create your custom ICC input profiles with an appropriate Tone Reproduction Operator.
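
In case it helps to picture what a "hue twist" is: in a DCP it is stored as HSV lookup tables (hue shifts, saturation scales, and value scales indexed by the input colour), but the gist is a hue correction that depends on brightness. A deliberately crude toy version (my own illustration, nothing to do with Adobe's actual tables):

```python
# Toy "hue twist": a hue offset that grows with brightness, applied in HSV.
# Purely illustrative; real DCP profiles use multi-dimensional HSV tables.
import colorsys

def hue_twist(r, g, b, degrees_at_white=8.0):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + (degrees_at_white / 360.0) * v) % 1.0   # brighter -> more twist
    return colorsys.hsv_to_rgb(h, s, v)

print(hue_twist(0.8, 0.3, 0.2))   # a bright orange, nudged slightly in hue
```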

3 Likes

Thanks for this excellent, informative video (a real eye opener).

That was not the point of the second question. Suppose your scene only covers a small part of your camera's entire dynamic range (e.g. a cloudy dusk scene without artificial light sources, so your histogram is only partially filled with usable data): is it still necessary to do the remapping with a filmic curve?

That's what film does, that's what your OOC JPEGs do, that's what we are used to seeing. In this case, the filmic curve just falls back to a simple S curve that you would use to increase contrast. The filmic module in dt makes that curve scalable to whatever input and output dynamic range you get; that's its advantage over the base curve. If input DR = output DR, there is no mapping, just midtone raising and contrast enhancement.
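
To show the "scalable S curve" idea in code, here is a minimal sketch (my own toy, not darktable's actual filmic implementation, which builds its curve differently and anchors middle grey): log-encode around grey, normalise by the chosen input dynamic range, then apply an S curve to the normalised value.

```python
# Toy filmic-like mapping: log encoding + normalised S curve.
import numpy as np

def toy_filmic(x, grey=0.18, black_ev=-8.0, white_ev=4.0, contrast=1.6):
    """Map linear scene luminance to a display value in [0, 1]."""
    ev = np.log2(np.maximum(x, 1e-9) / grey)          # exposure around grey
    t = np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)
    # Simple S curve; the slope at the midpoint equals `contrast`.
    return t**contrast / (t**contrast + (1.0 - t)**contrast)

x = 0.18 * 2.0 ** np.linspace(-8.0, 4.0, 7)           # -8 EV .. +4 EV scene
print(np.round(toy_filmic(x), 3))
```

Because black_ev and white_ev are parameters, the same curve stretches over any input dynamic range; shrink them towards the output range and it degenerates into the plain contrast S curve described above.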

3 Likes

Thank you Pierre for this video. I will make a tutorial on the new features of darktable 3.0.0 in French.
Jean-Louis