"Aurelien said: basecurve is bad"

I pride myself on the fact that, when I give advice and people follow it, their life usually improves. And people who keep ignoring it often complain that darktable is bad, scene-referred sucks, and why-the-fuck-did-you-change-display-referred-that-got-me-faster-results. But said advice usually has a context, outside of which it becomes irrelevant.

The context of scene-referred is a need for more accurate colorimetry as sensor dynamic ranges increased while display dynamic range stayed mostly the same. Namely, I got a Nikon D810, and while everyone praised it for its “HDR” capabilities (13.something EV at 64 ISO), I couldn’t get good color out of raised shadows for the life of me.

More accurate is linked to scalable color models. And, by definition, the only scalable operators are linear ones, aka proportional to light emission, aka exposure-independent. That's why scene-referred: it's exposure-independent and output-agnostic. Much more robust (and, yes, more complicated).
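To make "exposure-independent" concrete, here is a minimal sketch (with made-up values): a linear operator commutes with an exposure change, while any tone curve does not.

```python
# A linear operator commutes with an exposure change; a tone curve
# does not. Values here are purely illustrative.

def linear_op(x):        # e.g. a white-balance gain
    return 2.0 * x

def tone_curve(x):       # any non-linear curve; a simple gamma here
    return x ** (1 / 2.2)

x = 0.1
k = 4.0                  # +2 EV exposure change

# linear: f(k*x) == k*f(x) -> the result scales with exposure
assert abs(linear_op(k * x) - k * linear_op(x)) < 1e-12

# non-linear: f(k*x) != k*f(x) -> the result depends on exposure
assert abs(tone_curve(k * x) - k * tone_curve(x)) > 0.1
```

That second inequality is the whole point: once a non-linear curve is applied, the rendering is baked to one particular exposure.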

In 2018, I created filmic, based on Troy's filmic plugin for Blender, and adapted it to darktable's pipeline requirements. I'm currently finishing v6, with built-in gamut mapping.

Then I said “basecurve is bad”. But then I said why. And people forgot the “why” part. Which leads to people citing stuff I said 3 years ago out of context.

In 2018, darktable's pipeline was rigid: the order in which modules were applied was determined at compilation time, based on a dependency-graph solve. Meaning users couldn't change the order of modules in the pipeline.

In this rigid pipeline, base curve happened before the input profile, which is, for RAW pictures, a matrix profile. Yet matrix profiles are linear and expect linear input. Putting the (heavily non-linear) base curve before the profile voided its core assumptions, leading to very bad colors that were magnified as shadows were brightened to recover HDR-like scenes.
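A quick sketch of why the order matters. The 3x3 matrix and RGB triplet below are invented for illustration (not a real camera profile), but they show that matrix and curve don't commute, and that the channel ratios (i.e. the chromaticity) come out different:

```python
# Hypothetical camera-to-working-space matrix; NOT a real profile.
M = [[0.7, 0.2, 0.1],
     [0.3, 0.6, 0.1],
     [0.1, 0.1, 0.8]]

def matmul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def curve(v):                       # stand-in for a base curve
    return [c ** (1 / 2.2) for c in v]

rgb = [0.04, 0.10, 0.30]            # a dark, bluish camera RGB

correct = curve(matmul(M, rgb))     # profile first (linear input, as assumed)
wrong = matmul(M, curve(rgb))       # base curve first (pre-3.0 pipeline)

# the channel ratios (chromaticity) differ between the two orders
ratio = lambda v: [c / sum(v) for c in v]
assert max(abs(a - b) for a, b in zip(ratio(correct), ratio(wrong))) > 0.005
```

The deeper the shadows are pushed, the larger that discrepancy gets, which matches the bad raised-shadow colors described above.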

That's why I created another module, put at the end of the pipeline, to do essentially the same thing with a different GUI. That went into darktable 2.6.

Now, when darktable 3.0 introduced the ability to reorder the pipeline, base curve was moved by default to pretty much the same place as filmic (unless you use the legacy pipe order). So now, base curve is color-safe, as far as color profiles and chromatic adaptation are concerned.

So, as of darktable 3.0 and later, base curve and filmic achieve the same goal in mostly the same way (you get those chromaticity-preserving norms in base curve too); the only differences are the GUI and the ability to scale the curve to the input dynamic range.

So filmic is essentially a base curve with a different GUI. And a shitload of other features because just applying a tone curve is not nearly enough to get a smooth highlights roll-off. But, now, filmic or base curve is pretty much a matter of taste, and whether you keep a pipeline normalized between [0;1] or not.


I always wondered why basecurve was where it was in the darktable pipeline… It just never made sense to me as an early stage.

@hanatos said it is a legacy from the dcraw pre-rawspeed era.

it was just always there. you know how darktable used custom compiled libufraw0.so and libufraw1.so to have different namespaces for the static variables so we could process raws in multiple threads? i never wanted to write image processing code then… now look at the mess :slight_smile:


That code is ugly.


I am completely happy with the development of darktable and very grateful for all your work. Thank you.


Thanks for reminding us about the essential issue.

Really, one of the reasons I wrote rawproc. I wanted to order things as I saw fit, like one does with G’MIC scripting.

I would like to have more disposable income, any advice? :slight_smile:


While Aurelien won't shill his Liberapay, I'd like to remind people: if you've listened to Aurelien's advice and your life improved, chucking a bit towards https://en.liberapay.com/aurelienpierre/ is a good way to support darktable and Aurelien :slight_smile:

And it's certainly better than, say, a netflix subscription, youtube superchats, twitch whatevers or what have you :wink:


… advice regarding processing pictures :wink:

My wife has increased her income substantially since listening to me, even during COVID, but that required careful analysis of expenses.

Thanks to my head of marketing !


Sorry Aurélien, I may have started that a few years before you arrived. :blush: Thanks for making things work, though I am still not on the dt train. (Well, I don’t use any raw processor exclusively…)

Just a thought though: people will still bother the heck out of you. I guess you intend to link to these pieces whenever someone does. But that type of person would probably just skim or disregard your explanations. :sweat_smile:

That’s quite easy - simply reduce your expenses …
It’s like image processing - you can simplify stuff all over the process chain: the more effort spent to get a good capture the less effort has to be spent during raw processing.
I never understood those who argue that the benefit of high dynamic range sensors is being able to boost shadows by 5 EV, and that cameras are crap if they don't have this feature :wink:


Simplification is extreme sophistication.

That said, I am wondering why one would do that, with recent versions of darktable. Possible, but much less convenient than the alternative.

maybe to use the blend modes available in display mode, and to have the ability to access the whole dynamic range

Is there a way to combine the log part of filmic with the curve part of basecurve?

I don't think so: afaik, the basecurve is calculated from a raw file and its associated camera JPEG. So there is no separation between a tone curve and a log transform. The data are coded, like a tone curve, as a series of (x, y) points (see the basecurve.c source file, around l. 238).
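In the spirit of that description, here is a toy sketch of such a node-based curve. The (x, y) pairs below are invented, and darktable fits a smooth spline through its nodes; plain piecewise-linear interpolation is used here only to illustrate the data layout and the [0;1] clamping:

```python
# Hypothetical (x, y) nodes, as if fitted from a raw/JPEG pair.
nodes = [(0.0, 0.0), (0.08, 0.15), (0.25, 0.40),
         (0.5, 0.65), (0.75, 0.85), (1.0, 1.0)]

def apply_basecurve(x):
    x = min(max(x, 0.0), 1.0)       # clamp: everything above 1 -> 1
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return 1.0

assert apply_basecurve(2.0) == 1.0  # out-of-range input is clipped
```

Because the whole mapping lives in one node table, there is indeed no seam at which a separate log part could be swapped in.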

Keep in mind that the purpose of the base curve is to simulate the default in-camera rendering, nothing more, nothing less.


Problem is you need to ensure the continuity of the slope (first order derivative) where both curves connect. If both parts of the curve are parametric, you can solve for their parameters in a way that validates this constraint (that's how cubic splines work, with polynomials). If you connect parametric to user-set nodes, there is no guarantee that such a solution exists when the parametric curve is a log.

And piece-wise curves that don’t have a slope continuity look bad, because they are basically breaking the gradients somewhere (the worst case would probably be in B&W).
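To illustrate the constraint, here is a minimal sketch (all values invented) of joining a log segment to a straight line at a junction point while keeping both the value and the first derivative continuous:

```python
import math

# Join y = a*log(x) + b to a straight line at x0 with a continuous
# slope. x0, a, and the target value 0.5 are purely illustrative.
x0, a = 0.18, 0.25
b = 0.5 - a * math.log(x0)          # choose b so the log passes (x0, 0.5)

slope = a / x0                      # d/dx [a*log(x) + b] at x0

def log_part(x):
    return a * math.log(x) + b

def line_part(x):
    return 0.5 + slope * (x - x0)

# value and slope agree at the junction -> no visible gradient break
eps = 1e-6
assert abs(log_part(x0) - line_part(x0)) < 1e-9
fd = (log_part(x0 + eps) - log_part(x0)) / eps
assert abs(fd - slope) < 1e-3
```

Here both pieces are parametric, so the solve is trivial; with user-set nodes on one side, the slope at the junction is whatever the nodes dictate, and a log on the other side may simply have no parameters that can match it.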


But this isn't a case of connecting two bits of different curves, each covering part of the input range. It's more like using two transformations in sequence, which is a different matter (and, if I recall my math courses correctly, if both transforms have continuous first-order derivatives, their composition does too).
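A quick numerical sketch of that chain-rule point, with invented stand-in functions (a log shaper into a smoothstep curve, neither taken from darktable):

```python
import math

def shaper(x):                       # hypothetical log mapping towards [0;1]
    return math.log2(x / 0.18) / 10 + 0.5

def curve(y):                        # smoothstep as a stand-in tone curve
    return 3 * y**2 - 2 * y**3

def f(x):                            # the two transforms in sequence
    return curve(shaper(x))

# sample the numerical derivative over the domain: it stays finite
# and never changes sign, i.e. no break in the composed mapping
xs = [0.02 + i * 0.005 for i in range(1000)]
ds = [(f(b) - f(a)) / (b - a) for a, b in zip(xs, xs[1:])]
assert all(math.isfinite(d) for d in ds)
assert all(d >= 0 for d in ds)       # monotone, since both parts are
```

The caveat from the post below still applies, though: if a clamp to [0;1] sits between the two transforms, that clamp itself introduces a slope break.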

@nosle: Another gotcha is that basecurve expects an input in the range [0;1] (everything above 1 is forced to 1). A log transform doesn't guarantee that, and filmic doesn't require it either.

Thanks @rvietor, you articulated how I had mapped it mentally: as two stacked operations. About the input range: is this really a problem? I mean, you clip or compress that with other tools in most software, right? In bog-standard raw software your basecurve just covers part of the data unless you massage it all in. Something that the log mapping helps with, right?

There's no rule that says you have to have all the data in your displayed final image. Mentally, I'm seeing the base curve as a display mapping.