darktable 3.0

To be honest, I’ve always hated the shadows/midtones/highlights terminology in all the commercial software I’ve used, because it’s never really clear which is which; it sounds arbitrary.
But why do you use the same terms in Color Balance, then? Is it only because they’re more intuitive than lift/gamma/gain, or are there hardcoded assumptions there as well?

And how is it done in DT?

Yes, but putting the inflection point at 0.18 will make the low-lights very difficult to control, since we have increased sensitivity in that region. What you need for a good UX is a log interface that scales up this region for better control, which is not what typical curves do.
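For illustration, here is a minimal sketch (entirely mine, with arbitrary EV bounds) of how such a log remapping spreads the low-lights across the control range:

```python
import numpy as np

def linear_to_ui(x, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Map a linear value to a [0, 1] slider position on a log2 (EV) scale."""
    ev = np.log2(np.maximum(x, 1e-9) / grey)        # EV above/below middle grey
    ev = np.clip(ev, black_ev, white_ev)
    return (ev - black_ev) / (white_ev - black_ev)  # normalise to [0, 1]

# 0.18 lands at two thirds of the axis, and 0.01 still gets a third of the
# range below it; on a linear axis, everything under 0.18 would be squeezed
# into the first 18% of the control.
for x in (0.01, 0.18, 1.0):
    print(x, round(float(linear_to_ui(x)), 3))
```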

The thing is, nobody wants to do that, and using free-hand nodes generally ends up wasting time manually micro-tuning a smooth monotonic curve… So you only get the illusion of power, plus all the overhead. 99% of the time, people just want an S curve, so why not deliver it in a more robust way?

I think you are mixing things up. Photography is a technically determined art: it became possible only after optics, micromechanics and chemistry went far enough to fix an image on a substrate. Making feeling-driven pictures is not incompatible with using state-of-the-art techniques based on the best understanding we have so far of light emission and colour perception. The art is in the using, but to make reliable and sensible tools, you have to care about science. It’s all about getting robust tools that give the best results the quickest. I hate computers; I want the path of least effort and most efficiency to obtain the results I’m after. To achieve that, treating RGB vectors as arbitrary numbers and messing around with them as if they represented no physical reality is like shooting myself in the foot. Unrolling the physics and psychophysics where needed is the only way to get digital to behave like analog, and therefore predictably.

Intuitive for whom? For someone who did analog photography in the past (serious printing work, not just sending negatives away to the 1-hour lab), digital display-referred makes no sense.

I agree. But most non-linear transforms inherited from the display-referred workflow are non-invertible, and can get very unpredictable (if you are able to answer the question “how much will I oversaturate the shadows while increasing the contrast by that much”, you are better than me).
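To put a number on the saturation part of that question, here is a toy per-channel S curve (entirely my own construction, not any darktable code) applied to a dark pixel:

```python
import numpy as np

def s_curve(x, power=2.2):
    # toy per-channel contrast boost pivoting around 0.5 display-referred
    return np.where(x < 0.5,
                    0.5 * (2 * x) ** power,
                    1 - 0.5 * (2 * (1 - x)) ** power)

shadow = np.array([0.10, 0.08, 0.06])  # a dark, slightly warm pixel
out = s_curve(shadow)
print(shadow / shadow.max())  # [1.   0.8  0.6 ]  original channel ratios
print(out / out.max())        # ~[1.  0.61  0.32] ratios diverge: shadows oversaturate
```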

You don’t get it. Your screen has 8 EV of dynamic range and your paper has 5 to 6.5 EV, yet today your average DSLR has 12 EV (up to 14 EV and counting) at ISO 100. This is single-frame HDR, and it’s standard now. These files need to be handled rigorously to keep all the data and blend it gracefully into SDR, or to recover details in backlit situations as advertised by the camera manufacturers.
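The arithmetic behind those figures, as a back-of-the-envelope check (EV, i.e. stops, is the log2 of the contrast ratio between the brightest and darkest usable signal):

```python
import math

print(math.log2(256))   # 8.0  -> a screen with ~256:1 contrast spans ~8 EV
print(math.log2(4096))  # 12.0 -> a 12 EV sensor spans a 4096:1 ratio
print(2 ** (12 - 8))    # 16   -> the sensor holds 16x the range the screen can show,
                        #         hence the need for a graceful HDR -> SDR mapping
```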

Hence me saying a sensible image processing pipeline should be 100% output-agnostic, which is possible only if you work in linear light.

Static contrast is pointless: your brain is doing focus-stacking and exposure-stacking in real time, so the retina is just the first part of a complex process, and the actual dynamic range of human vision is around 18-20 EV, depending on the surround lighting.

You are missing the point. My ambition for dt is a set of physically-accurate tools to push pixels in a fast and robust way, so I can perform much more dramatic edits without nasty side-effects. As of darktable 2.4, all the contrast and dynamic-range-compression tools gave halos or colour shifts when you pushed them far. The usual answer was “don’t push them that much, they work only for small adjustments”. Right, but what good is a tool that fails me when I need it the most? If I need to push shadows by +5 EV and the tonemapping tool can’t do it… well, it doesn’t work. That implies bad colour models and bad algorithms.

So I studied the problem and tested solutions, and came to the conclusion that image processing needs to be physically accurate, work as much as possible in linear light, and stop conflating colour and display concepts too soon in the pipe. In that case, you can use the software as a virtual camera and redo the shot in post-processing. Editing then becomes analogous to designing your own film emulsion, and many things get simpler even though the UI might get more crowded. Dealing with linear light is very easy; it’s like adding a colour filter on top of your lens.
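To make the filter analogy concrete: in linear light, the basic operations really are plain multiplications (illustrative values, not darktable code):

```python
import numpy as np

rgb = np.array([0.18, 0.12, 0.09])     # a linear scene-referred pixel
plus_one_ev = 2.0 * rgb                # +1 EV of exposure: just multiply by 2
warming = np.array([1.0, 0.95, 0.80])  # hypothetical warming-filter gains
filtered = warming * rgb               # a colour filter: per-channel gain
both = warming * (2.0 * rgb)           # the operations compose predictably
print(plus_one_ev, filtered, both)
```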

I just think that people who never pushed darktable too far can’t see the problem. It sure works fine for gentle edits, so I get why all my changes just look like a big pile of habit-changing trouble to many people.

Even if you don’t clip values, having a UI from 0 to 1 still sucks, because these special values are only conventions and nobody knows what data you actually manipulate. I think good algorithms should work in the most general way. But then, sure, you need to expose some scaling parameter in the UI, and users will start to complain about it, even though most of them will never need to change its default value.

I tried to be user-friendly, because offset/lift mostly affects the shadows, gamma/power the midtones, and gain/slope the highlights. But, of course, there is no threshold in there, let alone a hardcoded one. The algo is simply RGB_out = (slope * RGB_in + offset)^power.
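That formula is the ASC CDL transform, applied per channel; a minimal sketch of it (the NaN guard and parameter handling are mine):

```python
import numpy as np

def color_balance(rgb, slope=1.0, offset=0.0, power=1.0):
    """RGB_out = (slope * RGB_in + offset) ^ power, per channel."""
    x = slope * np.asarray(rgb, dtype=float) + offset
    return np.where(x > 0.0, np.maximum(x, 0.0) ** power, x)  # no NaN on negatives

# offset dominates near 0 (shadows), power pivots the midtones,
# slope scales everything but shows most in the highlights:
print(color_balance([0.02, 0.18, 0.90], slope=1.1, offset=0.01, power=1.2))
```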

It’s been wired in progressively. If you look at the pipe now, everything coming before filmic is output-agnostic, filmic is the HDR → SDR mapping, and everything coming after it expects SDR data.


Love the new scene-referred rgb workflow. I’ve revisited a load of my old edits and the improvements are significant. Keep up the good work.


Which is good :slight_smile: I think all of our disagreements really aren’t… they simply pivot on the question of what is the most user-effective parameterisation of the HDR → SDR mapping. So I’ll wait for the bugs to settle a little, and maybe I’ll be able to convince myself that your proposal is best.

I tried to get something out of the discussion here to answer the question I asked yesterday, but I can’t (or I overlooked something). It is related to the basic adjustments module.

Can anybody help me with this?

I assume the new basic adjustments module works in Lab color space, whereas filmic rgb, tone equalizer, and color balance work in linear rgb. Putting the basic adjustments module before the other three will cause them to misbehave.


That’s it. And it’s also because basic adjustments packs several options into the same module (so the same step in the pipe), instead of spreading them across different steps of the pipe like the separate modules do. That can cause colour issues, depending on which of those options you use.

Also, some of its sliders are quite close to options in other modules, but they don’t act at the same pipe step, so again it depends on where in the pipe you use them.


Thanks - I get the idea …

I built and installed DT 3 on both Windows 10 and Ubuntu 19.10 on my dual-boot PC.
I noticed initially that the Linux version was snappier, while the Windows one was a bit sluggish.
Then I tried removing the darktablerc on Windows (which was a leftover from DT 2.6 and earlier) and boom! Now the Windows version feels equally responsive.
The funny thing is that the OpenCL settings were the same, so I don’t know what was causing the slowdown.
I suggest anyone installing DT 3 do the same. It takes 10 minutes to reconfigure the preferences, but it’s worth the effort.

So, after a bit of an adventure making a local build of 3.1.0, I worked on a duplicate folder with the same RAW files I’d already treated with the “old” base-curve + tone-curve approach (but happily with the channel mixer moved upstream into the linear scene-referred world, thanks for the change :slight_smile: )

One image came out exactly as I wanted without turning on the tone curve… it was equally easy to get with filmic RGB.

The second was a bit more challenging. The standard base curve & filmic RGB both gave the same result again (so I guess this is a small win for filmic RGB, since I had to faff about quite a bit while setting up the base curve). The fine-tuning was… different. Still, without cross-checking, the two images were very close.

One thing that made the tone equaliser awkward was that the histogram is quite narrow relative to the positions of the knots: in the tone curve, if I’ve been sensible in using exposure and the base curve to achieve the same ends as filmic RGB, I have a full-width histogram… which I can expand at the left by switching to the log(input) view. So it was more awkward using the tone equaliser, even though by default it’s logarithmic in the input values (a Good Thing).

Question then: would it be possible to have an adjustable scale for the abscissa, so there is no distraction from knots that are acting on vanishingly low levels of the histogram?

Images attached…
[image: _IMG3212]

There is one already:

Mask exposure compensation centers the histogram where you want it, mask contrast compensation spreads it as much as you want, and the colour pickers on the right automatically center it on -4 EV and spread its first and last deciles over [-7; -1] EV.
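For what it’s worth, here is my reading of what those two sliders do to the guided-filter mask, as a rough sketch (the function name and exact math are my guesses, not darktable’s actual code): the mask lives in log2 (EV) space, exposure compensation shifts it, and contrast compensation stretches it around the -4 EV pivot the pickers aim for.

```python
import numpy as np

def compensate_mask(luminance, exposure_comp=0.0, contrast_comp=1.0, pivot_ev=-4.0):
    ev = np.log2(np.maximum(luminance, 1e-8))  # mask luminance in EV
    return (ev + exposure_comp - pivot_ev) * contrast_comp + pivot_ev

# a histogram bunched around -2 EV, shifted onto -4 EV and spread by 2x:
lum = 2.0 ** np.array([-2.5, -2.0, -1.5])
print(compensate_mask(lum, exposure_comp=-2.0, contrast_comp=2.0))  # [-5. -4. -3.]
```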


darktable-3.0.0.tar.* is not a Linux binary; it contains the source code. And whenever I try to build, there is always some missing dependency. That’s why I am searching for pre-built binaries.

Thanks a lot for this link. I was missing some dependencies; I got them from that page and was able to build DT 3.1.


Bear with me guys, I know this is just a basic screen. But after some struggle, I was able to build DT 3.1.x and just wanted to share my first screen.
Actually, I’m in love with DT… :smiling_face_with_three_hearts::innocent:


Hi everyone, I’m new here and darktable looks awesome, but I have a simple question about this new release: can DT3 read CR3 raw files from Canon cameras?
Thanks

You need to convert CR3 to DNG before importing into darktable, because Canon does not publish information on how to read their format.

Ok, thank you.
So I have to convert all my files… It’s what I expected…

Any updates on the 3.0.1 release?

It’s drafted: https://github.com/darktable-org/darktable/releases/tag/untagged-45ae932a9fab665f6e16

Almost there.


Link is broken