Proposal for updated Filmic curve parameterisation in Darktable

I just compiled the development version myself the other night for the first time, after a false start a couple of weeks ago. I’m not using SUSE, but I didn’t do anything special (other than making sure the build dependencies were all installed from my repo) to make it work.

All of the dependencies are here, and from the instructions just below that it looks like
sudo zypper si -d darktable
would take care of that on openSUSE. After that I followed the instructions on this page near the bottom under the heading “git version” with the difference that I used
./build.sh --prefix /opt/darktable-test --build-type Release --install --sudo
as suggested on that first github page so it installed in a special place (this uses their included build script, so hopefully that takes care of the needed tweaks). I start it each time as suggested lower on the page with
/opt/darktable-test/bin/darktable --configdir ~/.config/darktable-test
so that it uses its own config directory. (I eventually aliased that command to make it easy.) I also turned off writing to XMP sidecars so that it leaves my files from the stable version completely alone.

I hope that helps. I’m just a user, so I can’t really help too much beyond that.


There are ‘third party packages’ that will allow you to install and maintain the latest dt package without fussing with a compile and all …

Hi,

You can find master snapshots built for you in 3rd party section of Install.

From there you can select snapshots from the master branch, pick your Linux distribution and install it.

And below you can find how to install from source (current release and git).

Warning: installing as a package will overwrite your currently installed version


Additionally, when you’re upgrading, you can’t go backward and downgrade… so back up your configuration directory first! (~/.config/darktable/ on Linux), else you’ll be stuck with an unstable build or will have to reimport everything from scratch.

(Which is OK, as the XMPs are saved and your metadata will be intact. And so will edits that were done with the previous build. But it’s slower to import a large library than to just use the database you were already using.)


Um, be careful with that. If you edit an image with a dev build, the XMP will contain for each module a binary blob of parameters for that build level. If you then try to revert to an older build, some of those blobs may no longer be readable, and you may lose the edits for some of the modules. In short, back up your XMP files as well. When working with dev builds, I always have a separate directory with a separate copy of “test images”, and I only ever run production builds on my production image store.


Yeah, I should’ve added emphasis here.

Thanks for the clarifications about the edits! (What I wrote really wasn’t clear enough.)

Right … I’d forgotten the easy way because my distro isn’t on that list. :sweat_smile:


It’s not zero, it’s lower than the black value.

Display-referred has seemingly infinite dynamic range because it is bounded by zero, and the whole point is to be output-agnostic, so the only thing we care about is the scene.

But then you lose the explicit contrast setting, as the slope of the central part.

It’s not mathematically equivalent since contrast is a slope (dy/dx) around grey and white/black are the bounds of the DR (x_max and x_min).


Isn’t it just a silly software bug? The curve should be monotonic, so any point below a point that maps to zero should also map to zero. So not user error, just a simple bug?


I think we’re talking past each other here because my terminology is not quite in line …
It’s trying to map some part of the scene-referred input to a negative output, and I don’t think that that would ever be intended. (or at least: if anyone wanted to cut off the lower bits of the histogram, they could do so by adapting the shadow range in filmic accordingly)

Weeelll, I bet most people will care about the output, won’t they? And so should filmic, because mapping to output is what it does, and I would like any filter to “care” about its output. That may not be what you meant here, but I hope you get what I mean.

Also: why would display-referred be infinite? It’s 0 to 100%. Forgive my noobish attitude, but isn’t the fixed ceiling of display-referred white what makes the difference between scene and display space? Mapping from theoretically-unlimited scene to physically-limited display is exactly what filmic does, or what am I not getting here?

Ohh, but not at all! I’ll put it in equations, since that is more explicit:
To specify a segment of a linear mapping f(x)=a + bx for x_1 < x < x_2 , we are currently inputting the contrast b directly (the slope) and the center:
f(x=0) = y_c
From which we directly get a=y_c, which is always mapped to 18% in display space. I don’t suggest touching this part at all.

To determine the ends of the range we currently have:
x_2 - x_1 = l (that’s latitude)
(x_2 + x_1)/2 = x_{bias}
From which you can then compute x_1 and x_2

I don’t suggest to change the first part of this, so contrast would stay exactly what it is. I would only change the way in which the interval is specified, by defining:
f(x_1) = y_1
f(x_2) = y_2
Also two numbers, and you can work out x_1 and x_2 in a very straightforward way:
x_1 = (y_1 - a)/b
x_2 = (y_2 - a)/b
(that’s just f inverted: subtract the offset and divide by the slope)
…done!

The nice thing is that if you only permit inputs between 0 and middle grey for y_1, and only between middle grey and white for y_2, everything stays within the display-referred bounds, all the time.
(provided the lower bound for contrast is set such that you can’t get a horizontal line, but there is already a lower bound for contrast which does just that)
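
To make the two schemes concrete, here is a minimal Python sketch (hypothetical function names; same symbols as above, with a = y_c and b = contrast):

```python
# Sketch of the two latitude parameterisations discussed above.
# The central segment is f(x) = a + b*x, with a = y_c (the grey target,
# mapped to 18% in display space) and b = contrast (the slope).

def latitude_bias_bounds(latitude, x_bias):
    """Current scheme: the interval is given by its width (latitude)
    and its centre (bias): x2 - x1 = l, (x1 + x2)/2 = x_bias."""
    x1 = x_bias - latitude / 2.0
    x2 = x_bias + latitude / 2.0
    return x1, x2

def output_anchored_bounds(y1, y2, a, b):
    """Proposed scheme: pick the display-referred targets y1, y2 and
    invert f to find the scene inputs that hit them."""
    x1 = (y1 - a) / b
    x2 = (y2 - a) / b
    return x1, x2

# Example: grey target 18%, contrast 1.5
a, b = 0.18, 1.5
x1, x2 = output_anchored_bounds(0.05, 0.60, a, b)
# Round trip: f(x1) recovers y1 and f(x2) recovers y2.
assert abs((a + b * x1) - 0.05) < 1e-12
assert abs((a + b * x2) - 0.60) < 1e-12
```

Constraining y_1 to (0, grey) and y_2 to (grey, white) then keeps the segment inside the display-referred bounds by construction, as described above.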


The algo maps the lower bound of the latitude to the black value, expecting the latitude to be higher than black. There is no handling of the case where latitude is lower than black because it does not make sense (it’s basically asking for a non-monotonic curve). I could sanitize values later, between GUI params and pixel-ops params, and hide user mistakes, but I don’t want to: I can already hear the next youtuber saying “if I push the bias very far, it doesn’t seem to change anything, so it’s the same”. Silly settings should produce silly outputs that blow up in your face, otherwise users get wrong ideas.

If you only care about output, please use a tone curve. Remember, in the near future, output will be SDR and HDR alike, so we need to be output-agnostic, and filmic is designed around the idea that the output DR may change at export time while still having to preserve the mapping intent.

The mapping intent is defined by how the midtones should be mapped, provided that the extreme luminance values will need to adapt to whatever output. Midtones are where most of the details are and we know for sure that any display will be able to contain them. This is why middle-grey is central in the filmic approach.

Because \log(0) \rightarrow - \infty, and while 0 is a correct RGB code value, it means nothing physically. Display ICC profiles don’t contain the luminance of the medium black (highest density)*, so all we know about output is that it’s encoded between 0 and 100%, which appears to be an infinite DR.

* even though ICC v2 had a metadata field for that medium black, it was removed in ICC v4 because nobody used it, and HDR profiles re-introduce medium white metadata because that’s critical for tone-mapping in a fluid DR setting. So, right now, black point compensation uses an indirect method, through the TRC floor value estimation, to get a sense of that medium black, which clearly shows ICC people have had their heads shoved in their asses for too long to be trusted ever again.
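
The “infinite DR” point can be illustrated numerically: dynamic range in EV is log2(white/black), and as the black luminance tends towards code value 0 the DR diverges. A tiny sketch (illustrative numbers, not taken from any real profile):

```python
import math

# Dynamic range in EV (stops) between a white and a black luminance.
def dynamic_range_ev(white, black):
    return math.log2(white / black)

# A typical SDR panel with a non-zero black point has a finite DR:
print(dynamic_range_ev(1.0, 0.005))   # roughly 7.6 EV
# Let the black point approach code value 0 and the DR grows without bound:
print(dynamic_range_ev(1.0, 1e-9))    # roughly 30 EV, still climbing
```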

What problem does that solve ?

The default latitude has moved between 25% and 33% of the scene DR in past years, depending on splines used. In the past 2 years, I have had to change it only for HDR inputs (that is, exposure-stacked HDR or synthetic renderings with more than 15 EV of DR).

There is this user bias that consists in thinking that, because there are n parameters available, users should manually change exactly n parameters. The reality is that latitude rarely needs editing at all; it’s there as a safety fallback for difficult cases. And if you need to change it, you have a control monitor showing the curve, with a fat orange warning that shows up when you create problems.

If anything, I would rather look into a different formulation of the bias that prevents moving outside of the range, but I don’t think inputting y_1 and y_2 manually will scale to any output DR.

Thanks a bunch for the advice! Will try that tomorrow.

Aurélien, come on! If you did not care at all about output, then why produce any output at all? Of course you do, and so does filmic. If you didn’t, filmic would not exist and you’d be running about telling people to only produce pictures in scene-referred colour space and wait for displays with infinite dynamic range or something. Which would be very, very silly.

So maybe let’s just assume that whoever tries to map something from x to y would maybe have some tiny amount of interest in which y’s their x’s are being mapped to? I feel very silly having to explain that.


Ahh, okay, I’m starting to get you. On a logarithmic scale the output has infinite DR, because it goes from 0 to 100%. That’s of course correct. But on a linear (or gamma-mapped, as in DT) scale, there is a finite interval, with 0 at one end and 100% at the other. Mapping anything to -1 makes no sense.

It solves exactly the problem of confused users who want to create a tone curve in filmic, change something and realize that for some reason half the combinations of inputs make no sense whatsoever. Why even provide those?

It solves the same problem that was solved by moving from brightness + contrast in MS Paint and old-timey (as well as new-timey) shitty graphics driver settings to black level + white level: if you want to change one end of the curve, you can change just one end of the curve. It’s a more direct mapping of input parameters to visual effect.

It helps to turn filmic from an opportunity to do a million things wrong into something that someone can play with for a bit and kinda almost work out what it does.

Let’s turn the question around, though: which problem is solved by allowing a mapping with clearly mathematically-defined bounds of usefulness (and div-zero avoidance, which you called a showstopper not long ago) to cross those bounds routinely? Multiple settings can push it over that edge unless the user remembers to switch to the correct display mode to check whether things are still in line.

Look, I understand you’re not keen to tear everything up once you’ve implemented it (and you implemented this one less than a day after you said you’d start), and that’s as good a reason as any to leave something as it is, but I see no reason to think that removing the ability to break the tone curve (while still doing everything it otherwise does) would make anything worse in any way.


While it is normally probably not a good idea to “break the tone curve”, I’m sure someone will come up with a good use for it (cf. solarisation, for those who remember photography with wet chemistry).
If you don’t want it, it’s not all that difficult to avoid, so why limit the options for everyone to suit a few?

And one important point is that those unrealistic values that “break” the tone curve don’t really break anything, contrary to a division by zero. (that’s not to say that unrealistic parameter values will give a realistic result, but why should that be a problem?)


I vaguely remember the requirements for such a curve to be:

  • monotonic, thus invertible,
  • smooth, as in: first and second derivatives continuous (and the first derivative non-zero),
  • reasonable asymptotic behavior
  • numerical instabilities should be gracefully handled if they cannot be avoided

Do these requirements still exist?
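
The monotonicity requirement in that list is easy to check numerically. A quick sketch (illustrative Python only, not darktable’s actual spline code):

```python
# Numerically check that a curve is strictly increasing on [x_min, x_max]
# by sampling it densely and comparing consecutive values.
def is_monotonic(f, x_min, x_max, n=1000):
    xs = [x_min + (x_max - x_min) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return all(y2 > y1 for y1, y2 in zip(ys, ys[1:]))

# A linear segment with positive slope passes:
assert is_monotonic(lambda x: 0.18 + 1.5 * x, 0.0, 1.0)
# A curve that dips (as when the latitude is pushed below black) fails:
assert not is_monotonic(lambda x: (x - 0.5) ** 2, 0.0, 1.0)
```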


So … mapping to negative infinite display-referred luminance may have some creative use? In that case I’d humbly suggest that maybe the thought that middle grey might be mapped to anything but 18.946% be regarded with similar benevolence.

Any input slider has two ends. If half of the range generates nonsense, then there is no reason for that half to exist, other than to set a trap for those users who’d rather let the computer do the math. Says me, who has in fact figured out the math first. I know how to compute the point at which the curve breaks down (which is dependent on several other inputs, btw, so there’s multiple ways to inadvertently push it over the cliff) but I see no benefit in being able to compute which input combinations I must avoid when I also know how easy it would be to simply remove the need to ever pay attention to this quirk again.

You should dig into that code and submit a patch to make it work how you’d like it to. Discussion and ideas and proofs-of-concept are great, but if it isn’t getting there then maybe you should take care of it yourself.

I agree … problem is, I’ve not used C++ before. I’ve done enough other languages that I’m not afraid, but it is a significant investment, plus I’m sure any newcomer to the DT code will need to work out what the conventions are. I really don’t like people adding things to my own code if they ignore naming conventions or re-implement functionality that was already available in some helper function, so I assume the same will be true of DT.
To make matters even more interesting, my PC broke last week. I’ve spent a few days trying to fix things (and found it’s a broken video card → not going to solve that soon), so now I’m on a laptop which is not set up to compile stuff, and I’ve only got so much time for DT. This means I haven’t even had the time to try Aurélien’s patch yet.

The other aspect is that I actually prefer to talk things over before going off and creating facts. Nobody is smart enough that they don’t overlook some easy improvement, and that goes for myself, too. Which is why I was trying to get some input here.

In other words: Simply going off and implementing what I want is a huge up-front investment of time for me, with a high risk of taking longer than I wanted or being rejected for formal reasons (“your code is ugly”), simply for the purpose of knowing what people think of my proposal. Especially if I look at the tone of most replies here, I’m not entirely optimistic about that.


It’s C, not C++ (well, there are tiny bits written in C++, but no need to care about it) :slight_smile:

Pretty much; it’s quite organic, in my experience…

That’s a great approach!

We have a peer-review process for pull requests, along with testing etc. :slight_smile: And rarely do things get outright rejected. The whole process is actually a welcoming one, and you can learn a lot while improving dt at the same time!
