the darktable 3.0 video series

As seen by us (our eyes and brain)?

He showed that the perception we have has little to do with the physical intensity of light.
That’s something to keep in mind when we are remapping dynamic ranges.
The point here is that your perceptual system does not behave like your (camera) sensor.
We don’t remap linearly, we remap in logarithmic space.

(at 5:57)

As I understood it, you don’t compress 14 EVs (camera) into, say, 10 EVs (HDR monitor) in a linear way if you want the monitor to reproduce what your eyes/brain saw in the scene, since they saw it in a logarithmic space. Or: we don’t see four candles as four light units, but only three. I suspect that if we kept everything linear all the way through, we would keep that pale, flat look we see when we open a raw file and do nothing with it.
(a noob interpretation of the video :roll_eyes:)
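To make the candle example concrete, here is a minimal Python sketch of the idea (my own illustration, nothing from darktable’s code):

```python
import math

# Perceived brightness grows with the logarithm of intensity, not linearly:
# each doubling of light adds roughly one perceptual "step".
def perceived_steps(n_candles):
    return 1 + math.log2(n_candles)

for n in (1, 2, 4, 8):
    print(n, "candles ->", perceived_steps(n), "perceived units")
# 1 -> 1.0, 2 -> 2.0, 4 -> 3.0, 8 -> 4.0

# By the same logic, dynamic range should be remapped on EVs (log2 units),
# not on raw intensities, e.g. squeezing 14 camera EV into 10 display EV:
def remap_ev(ev, scene_range=14.0, display_range=10.0):
    return ev * display_range / scene_range
```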

Hm, maybe I explained what I mean badly. I should not write such things around midnight. The need to compress the dynamic range becomes pretty clear from the video. But not all scenes have 20 EV of dynamic range, and the remapping function could be any function. The questions I mean are: why does this particular curve type work for what we want to achieve? In other words, why can I compress the dynamic range and still get a natural-looking result? And why do I need it even for low dynamic range scenes (or don’t I)?


Great video. I did not know that the embedded color matrix is optimized for the midtones, or how the exposure value ties in with that. Granted, I always use custom profiles or Adobe profiles, but it’s good to know.

As far as I understand, the camera data has a larger dynamic range than the monitor we use for viewing.
Therefore we can compress the camera data and still get a natural-looking result.
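As a rough worked example (with assumed numbers, just for illustration):

```python
# Assumed numbers: a 14 EV sensor vs. an 8 EV monitor.
sensor_ev, display_ev = 14, 8

sensor_ratio = 2 ** sensor_ev    # 16384:1 scene contrast
display_ratio = 2 ** display_ev  # 256:1 display contrast

# The linear gap looks huge (a factor of 64), but in log space it is
# just a gentle 14/8 = 1.75x squeeze of the EV axis.
print(sensor_ratio / display_ratio)  # 64.0
print(sensor_ev / display_ev)        # 1.75
```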

This is the first time (after watching the video) that I’ve got a good grip on the nature of the pixel pipeline and the order of the modules. Thanks very much, Aurélien.


Adobe use hue twists in their DCP input profiles so that colours look perceptually OK (the chromatic adaptation phenomenon) when you change brightness values. As far as I know, the linear ICC input profiles that darktable ships with do not account for chromatic adaptation, so you need to pay special attention to the base curve you apply. That’s how the filmic module was born – to counteract the saturation issues. But (correct me if I’m wrong) the chromatic adaptation issues remain unresolved in the current state of affairs, unless you create your own custom ICC input profiles with an appropriate tone reproduction operator.


Thanks for this excellent and informative video (a real eye-opener).

That was not the point of the second question. Suppose your scene only covers a small part of your camera’s entire dynamic range (e.g. a cloudy dusk scene without artificial light sources, so that your histogram is only partially filled with usable data): is it still necessary to do the remapping with a filmic curve?

That’s what film does, that’s what your OOC JPEGs do, that’s what we are used to seeing. In this case, the filmic curve just falls back to a simple S curve you would use to increase contrast. The filmic module in dt makes that curve scalable to whatever input and output dynamic range you get; that’s its advantage over the base curve. If input DR = output DR, there is no mapping, simply midtone raising and contrast enhancement.
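For intuition, here is a toy sketch of such a scalable curve in Python (the parameters and the generic S function are my own simplifications, NOT darktable’s actual filmic spline):

```python
import math

def filmic_like(x, grey=0.18, black_ev=-8.0, white_ev=4.0, contrast=1.5):
    """Toy scalable tone curve: map log2 exposure from [black_ev, white_ev]
    onto the display range [0, 1], then steepen the middle with a generic
    S-shaped gain (identity when contrast == 1)."""
    ev = math.log2(max(x, 1e-9) / grey)            # scene exposure in EV
    t = (ev - black_ev) / (white_ev - black_ev)    # 0..1 across the input DR
    t = min(max(t, 0.0), 1.0)
    return t**contrast / (t**contrast + (1 - t)**contrast)  # S curve

# Middle grey (0.18) sits at 8/12 of this input range, so it lands
# above 0.5 on the display: midtones raised, plus extra contrast.
print(filmic_like(0.18))   # ~0.74
```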


Thank you Pierre for this video. I will make a tutorial on the new features of darktable 3.0.0 in French.
Jean-Louis

Hi @anon41087856
Thank you for an excellent and interesting video. I like your paper-and-pen approach: it makes you “digest” the information gradually instead of displaying a full, complicated slide from the outset and then starting to explain. I’m looking forward to the next video(s)… :blush:!

In the meantime, two questions: you mention that the global tone map tool is broken. Is the tone mapping tool also broken?

From your video, I understand that you always need the base curve (or filmic). However, in many cases, or maybe in all cases, the base curve is not enough and you need further adjustments. Why is that?

I’m sure you recognize the following picture. The contrast between the shadows and the highlights is much bigger than what my eyes recorded at the time, isn’t it?

:+1:

The (local) tone mapping gives the expected results compared to the literature, but produces halos as a side effect (which is consistent with the algorithm). I would say the result is bad, but consistent with what we expect. The global tone mapping doesn’t reproduce the results shown in the literature.
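Roughly, the local algorithm splits log luminance into a blurred “base” and a “detail” layer and compresses only the base. A toy Python sketch (illustrative only, not darktable’s actual implementation) shows where the halos come from:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_tonemap(lum, sigma=30.0, compression=0.5):
    """Toy base/detail local tone mapper: compress a blurred "base"
    layer of log luminance, keep the "detail" layer untouched.
    A plain Gaussian base bleeds across strong edges, which is exactly
    where the characteristic halos appear; edge-aware filters
    (e.g. bilateral) reduce but do not fully eliminate them."""
    log_lum = np.log2(np.maximum(lum, 1e-9))
    base = gaussian_filter(log_lum, sigma)   # low-frequency layer
    detail = log_lum - base                  # local contrast, preserved as-is
    return 2.0 ** (compression * base + detail)
```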

It’s a matter of (acquired) taste. Digital photography has made us used to very high contrast because the first sensors had very poor dynamic range. When the first Sony Alpha 7 came out, people complained they got washed-out images and had to push the contrast a lot in post. It really depends on what you expect to see, therefore on what you are used to.

Your eyes should have recorded more contrast than the sensor if you are a standard human being using a consumer camera.

Another Youtuber gave me the idea to look into the stats of my channel…

Over the past 90 days, my videos have been watched by 100% of men between 35 and 65 years old. 14027 views, 2451 cumulative hours, not a single woman there.

Ladies, what am I doing wrong? I don’t mind having a sausage party there, but I suspect the subject will be of some interest to you too (you take pictures too, right?), so the content is what it is (you can’t escape the technical side forever), but I can still try to make it understandable… if I knew what doesn’t work for you.


I don’t think 100% of men 35 - 65 watched your video. You’d have a lot more views :wink: Rather 100% of your viewers were men in that age range.

Also… Why does it matter?

I have come across women who enjoy and excel in STEM. Technical information shouldn’t be a hindrance to those who desire it. How does YT arrive at these stats by the way?

Well, language… 100% of the viewers are 35-65 men.

When you have a certain percentage of a certain social group represented in a community, you expect to find a similar percentage, ± sampling error, in every sub-group of said community. That’s how probabilities work. When you don’t, something non-random is skewing those probabilities, and you had better figure out what and why, especially when that social group is already a minority in said community. Otherwise, you just perpetuate inequalities and such.

Women represented 3% of open-source contributors on GitHub in 2017 and 45% of photographers in the US, so 3% × 45% = 1.35% women would be a fair expected minimum in my audience. 0% is off the charts, considering the sample has 14000 members. Something is not normal here (in the statistical sense).
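For what it’s worth, a quick back-of-the-envelope check of that claim, using the same rough figures:

```python
# Back-of-the-envelope check with the rough figures above.
p = 0.03 * 0.45   # expected minimum share of women: 1.35%
n = 14000         # viewers in the sample

print(n * p)           # ~189 women expected
print((1 - p) ** n)    # ~2e-83: probability of seeing zero by pure chance
```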


That I don’t know (and probably never will).

My point is that a stat of 100% isn’t a good one, since there have to be outliers, or at least some uncertainty indicated by a confidence interval. I am questioning how YT determines sex and age. Probably using other stats. There is also the case where no one is watching and the video is auto-playing. :stuck_out_tongue_closed_eyes:

Google knows almost everything about you, especially when you have an account (you at least have to type your birth date). For such a big sample, errors should be fairly evenly distributed, so I wouldn’t look for funny discrepancies as an explanation.