Proposal for updated Filmic curve parameterisation in Darktable

Isn’t it just a silly software bug? The curve should be monotonic, so we know that every input below a point that maps to zero should also map to zero. So it’s not user error, just a simple bug?

1 Like

I think we’re talking past each other here because my terminology is not quite in line …
It’s trying to map some part of the scene-referred input to a negative output, and I don’t think that that would ever be intended. (or at least: if anyone wanted to cut off the lower bits of the histogram, they could do so by adapting the shadow range in filmic accordingly)

Weeelll, I bet most people do care about the output, don’t they? And so should filmic, because mapping to output is what it does, and I would like any filter to “care” about its output. That may not be what you meant here, but I hope you get what I mean.

Also: Why would display-referred be infinite? It’s 0 to 100%. Forgive my noobish attitude, but isn’t the fixed ceiling of display-referred white what makes the difference between scene and display space? Mapping from a theoretically unlimited scene to a physically limited display is exactly what Filmic does, or what am I not getting here?

Ohh, but not at all! I’ll put it in equations, since that is more explicit:
To specify a segment of a linear mapping f(x) = a + bx for x_1 < x < x_2, we currently input the contrast b (the slope) directly and the center:
f(x=0) = y_c
From which we directly get a = y_c, which is always mapped to 18% in display space. I don’t suggest touching this part at all.

To determine the ends of the range we currently have:
x_2 - x_1 = l (that’s latitude)
(x_2 + x_1)/2 = x_{bias}
From which you can then compute x_1 = x_{bias} - l/2 and x_2 = x_{bias} + l/2.

I don’t suggest changing the first part of this, so contrast would stay exactly what it is. I would only change the way in which the interval is specified, by defining:
f(x_1) = y_1
f(x_2) = y_2
Also two numbers, and you can work out x_1 and x_2 in a very straightforward way by inverting f:
x_1 = (y_1 - a)/b
x_2 = (y_2 - a)/b
…done!

The nice thing is that if you only permit inputs between 0 and middle grey for y_1, and only between middle grey and white for y_2, everything stays within the display-referred bounds, all the time.
(provided the lower bound for contrast is set such that you can’t get a horizontal line, but there is already a lower bound for contrast which does just that)
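To make the comparison concrete, here is a minimal sketch in C (darktable’s language) of the two parameterisations exactly as described above; this is not darktable’s actual code, and all names and numbers are illustrative only:

```c
#include <stdio.h>

int main(void)
{
  /* Linear segment f(x) = a + b*x in log space; a = y_c, where middle grey lands. */
  const float a = 0.18f;   /* y_c: display value that middle grey maps to (illustrative) */
  const float b = 1.50f;   /* contrast (slope); must stay > 0                            */

  /* Current parameterisation: latitude l and bias x_bias. */
  const float l      = 0.33f;
  const float x_bias = 0.00f;
  const float x1_cur = x_bias - l / 2.0f;
  const float x2_cur = x_bias + l / 2.0f;

  /* Proposed parameterisation: choose display-referred bounds
     y_1 in [0, y_c] and y_2 in [y_c, 1], then invert f.        */
  const float y_1 = 0.05f;
  const float y_2 = 0.80f;
  const float x1_new = (y_1 - a) / b;
  const float x2_new = (y_2 - a) / b;

  printf("current:  x_1 = %+.3f  x_2 = %+.3f\n", x1_cur, x2_cur);
  printf("proposed: x_1 = %+.3f  x_2 = %+.3f\n", x1_new, x2_new);
  return 0;
}
```

With y_1 constrained to [0, y_c], y_2 to [y_c, 1] and b > 0, the endpoints of the segment can never leave the display-referred range, which is the whole point of the proposal.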

2 Likes

The algo maps the lower bound of the latitude to the black value, expecting the latitude to be higher than black. There is no handling of the case where the latitude is lower than black, because it does not make sense (it’s basically asking for a non-monotonic curve). I could sanitize values later, between GUI params and pixel-ops params, and hide user mistakes, but I don’t want to: I can already hear the next YouTuber saying “if I push the bias very far, it doesn’t seem to change anything, so it’s the same”. Silly settings should produce silly outputs that blow up in your face, otherwise users get the wrong ideas.

If you only care about output, please use a tone curve. Remember, in the near future, output will be SDR and HDR alike, so we need to be output-agnostic, and filmic is designed around the idea that the output DR may change at export time while still having to preserve the mapping intent.

The mapping intent is defined by how the midtones should be mapped, given that the extreme luminance values will need to adapt to whatever the output is. Midtones are where most of the details are, and we know for sure that any display will be able to contain them. This is why middle grey is central in the filmic approach.

Because \log(0) \rightarrow - \infty, and while 0 is a correct RGB code value, it means nothing physically. Display ICC profiles don’t contain the luminance of the medium black (highest density)*, so all we know about the output is that it’s encoded between 0 and 100%, which appears to be an infinite DR.
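To spell it out, the dynamic range in EV is DR = \log_2(L_{white} / L_{black}), where L_{white} and L_{black} are the medium luminances the profile does not give us; with L_{black} \rightarrow 0, DR \rightarrow \infty, so the 0–100% code range alone claims an unbounded DR.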

* Even though ICC v2 had a metadata field for that medium black, it was removed in ICC v4 because nobody used it, and HDR profiles re-introduce medium-white metadata because that’s critical for tone mapping in a fluid-DR setting. So, right now, black-point compensation uses an indirect method, through the TRC floor-value estimation, to get a sense of that medium black, which clearly shows the ICC people have had their heads shoved up their asses for too long to be trusted ever again.

What problem does that solve?

The default latitude has moved between 25% and 33% of the scene DR over the past years, depending on the splines used. In the past 2 years, I have had to change it only for HDR inputs (that is, exposure-stacked HDR or synthetic renderings with more than 15 EV of DR).

There is this user bias that consists of thinking that, because there are n parameters available, users should manually change exactly n parameters. The reality is that the latitude rarely needs editing at all; it’s there as a safety fallback for difficult cases. And if you need to change it, you have a control monitor showing the curve, with a fat orange warning that shows up when you create problems.

If anything, I would rather look into a different formulation of the bias that prevents moving outside of the range, but I don’t think inputting y_1 and y_2 manually will scale to any output DR.

Thanks a bunch for the advice! Will try that tomorrow.

Aurélien, come on! If you did not care at all about output, then why produce any output at all? Of course you do, and so does filmic. If you didn’t, filmic would not exist and you’d be running about telling people to only produce pictures in scene-referred colour space and wait for displays with infinite dynamic range or something. Which would be very, very silly.

So maybe let’s just assume that whoever tries to map something from x to y would maybe have some tiny amount of interest in which y’s their x’s are being mapped to? I feel very silly having to explain that.

2 Likes

Ahh, okay, I’m starting to get you: on a logarithmic scale the output has infinite DR, because it goes from 0 to 100%. That’s of course correct. But on a linear (or gamma-mapped, as in DT) scale, there is a finite interval, with a 0 at one end and a 100% at the other. Mapping anything to -1 makes no sense.

It solves exactly the problem of confused users who want to create a tone curve in filmic, change something and realize that for some reason half the combinations of inputs make no sense whatsoever. Why even provide those?

It solves the same problem which was solved by moving from brightness + contrast in MS Paint and old-timey (as well as new-timey) shitty graphics-driver settings to black level + white level: if you want to change one end of the curve, you can change just one end of the curve. It’s a more direct mapping of input parameters to visual effect.

It helps to turn filmic from an opportunity to do a million things wrong into something that someone can play with for a bit and kinda almost work out what it does.

Let’s turn the question around, though: which problem is solved by allowing a mapping with clearly mathematically defined bounds of usefulness (and div-zero avoidance, which you called a showstopper not long ago) to cross those bounds routinely, through multiple settings that can push it over the edge unless the user remembers to switch to the correct display mode and check whether things are still in line?

Look, I understand you’re not keen to tear everything up once you’ve implemented it (and you implemented this one within less than a day of saying you’d start), and that’s as good a reason as any to leave something as it is, but I see no reason to think that removing the ability to break the tone curve (while still doing everything it otherwise does) would make anything worse in any way.

3 Likes

While it is normally probably not a good idea to “break the tone curve”, I’m sure someone will come up with a good use for it (cf. solarisation, for those who remember photography with wet chemistry).
If you don’t want it, it’s not all that difficult to avoid, so why limit the options for everyone to suit a few?

And one important point is that those unrealistic values that “break” the tone curve don’t really break anything, unlike a division by zero. (That’s not to say that unrealistic parameter values will give a realistic result, but why should that be a problem?)

1 Like

I vaguely remember the requirements for such a curve to be:

  • monotonic, thus invertible,
  • smooth as in first and second derivatives should be non-zero and continuous
  • reasonable asymptotic behavior
  • numerical instabilities should be gracefully handled if they cannot be avoided

Do these requirements still exist?

1 Like

So … mapping to negative infinity in display-referred luminance may have some creative use? In that case I’d humbly suggest that maybe the thought that middle grey might be mapped to anything but 18.946% be regarded with similar benevolence.

Any input slider has two ends. If half of the range generates nonsense, then there is no reason for that half to exist, other than to set a trap for those users who’d rather let the computer do the math. Says me, who has in fact figured out the math first. I know how to compute the point at which the curve breaks down (which depends on several other inputs, btw, so there are multiple ways to inadvertently push it over the cliff), but I see no benefit in being able to compute which input combinations I must avoid when I also know how easy it would be to simply remove the need to ever pay attention to this quirk again.

You should dig into that code and submit a patch to make it how you’d like it to be. Discussion and ideas and proofs-of-concept are great, but if it isn’t getting there then maybe you should take care of it yourself.

I agree … the problem is I’ve not used C++ before. I’ve done enough other languages that I’m not afraid, either, but it is a significant investment, plus I’m sure any newcomer to the DT code will need to work out what the conventions are. I really don’t like people adding things to my own code if they ignore naming conventions or re-implement functionality that was already available in some helper function, so I assume the same will be true of DT.
To make matters even more interesting, my PC broke last week. Which means I’ve spent a few days trying to fix things (and found it’s a broken video card → not going to solve that soon), so now I’m on a laptop, which is not set up to compile stuff, and I’ve got only so much time for DT. This means I haven’t even had the time to try Aurélien’s patch yet.

The other aspect is that I actually prefer to talk things over before going off and creating facts on the ground. Nobody is smart enough that they don’t overlook some easy improvement, and that goes for me, too. Which is why I was trying to get some input here.

In other words: Simply going off and implementing what I want is a huge up-front investment of time for me, with a high risk of taking longer than I wanted or being rejected for formal reasons (“your code is ugly”), simply for the purpose of knowing what people think of my proposal. Especially if I look at the tone of most replies here, I’m not entirely optimistic about that.

3 Likes

It’s C, not C++ (well, there are tiny bits written in C++, but no need to care about it) :slight_smile:

Pretty much; it’s quite organic in my experience…

That’s a great approach!

We have a peer-review process for pull requests, along with testing, etc. :slight_smile: And rarely do things get outright rejected. The whole process is actually a welcoming one, and one can learn a lot while improving dt at the same time!

2 Likes

In my opinion the point here is that @Mister_Teatime is way past the point of needing to hear the various variations of “don’t just suggest, actually do something yourself”. He came up with an idea, visualized and formulated it, and then followed up in discussions with Aurelien such that the suggestions are now (in part?) part of darktable. Clearly his contributions were useful. There are plenty of things people can do to help open-source projects, even without writing a single line of code.

If someone requests a feature/change/whatever and can’t accept a “not interested in doing the work” from contributors, then yes, throwing a “well, you can always do it yourself” into the mix is useful to make a point. When someone is not making demands, just presenting ideas and suggestions, that same comment feels very misplaced.

5 Likes

This is very true, however:

This probably isn’t very compatible with a fairly mature application with multiple contributors, which is why they might want to dig in for themselves.

1 Like

You might be on to something with the Michaelis-Menten equation. It’s used by the iCAM color appearance model (see section 2.4, Tone Mapping, of this paper), so your suggestion has some support.
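For anyone curious about the shape being referenced, here is a rough sketch of a Michaelis-Menten / Naka-Rushton style compression as it is commonly written for photoreceptor-response models; it is not the exact iCAM formulation, just the general form, and `sigma` (semi-saturation constant) and `n` (exponent) are illustrative parameters:

```c
#include <math.h>
#include <stdio.h>

/* Michaelis-Menten / Naka-Rushton style compression: maps luminance in
 * [0, inf) smoothly into [0, 1), saturating towards 1 for large inputs. */
static float naka_rushton(const float luminance, const float sigma, const float n)
{
  const float num = powf(luminance, n);
  return num / (num + powf(sigma, n));
}

int main(void)
{
  /* Sample the curve over +/- 8 EV around middle grey, just to show the shape. */
  for (float ev = -8.0f; ev <= 8.0f; ev += 2.0f)
  {
    const float L = 0.18f * powf(2.0f, ev);
    printf("EV %+4.1f -> %.4f\n", ev, naka_rushton(L, 0.18f, 0.7f));
  }
  return 0;
}
```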

3 Likes

That’s a nice paper!

Might lead to the new Enzyme Module :wink:

I very much agree with this point.

I am trying to catch up here re: the math of this post (my math education went through linear algebra, which is great for a classical musician, but it was a long time ago; still, it is giving me a great excuse to learn more and show my kids “why learning matters”), but I can comment on this topic.

From my vantage point:
Darktable is a tool. Tools have a scope and audience. So the question becomes “who is darktable for?” (apologies if this point should fork to a new thread)

I am assuming a lot, but we also have “command line” and powerful tools like G'MIC, JPEG tools that can have quite the learning curve, like GIMP, and pano tools like Hugin.

I chose darktable because, more easily than any other software, free or paid (with some learning of the tool), it realizes what I want from my photos.

Darktable seems to be for the photographer who:

  • wants more control over their photographic vision (starting from RAW, more power given to the user)
  • wants that process to be GUI-driven
  • prioritizes flexible paths toward a goal, regardless of the “standard” workflow in digital photography (à la Adobe)
  • can still use it when charging for their time (edits do not all have to be hour-long ordeals; there is backwards compatibility with earlier versions)
  • embraces the FOSS philosophy

And more recently:

  • embraces this new scene-referred philosophy
  • embraces more modern color science, pushed into the public “eye” by the recent availability of great-quality video equipment and color grading software.

So, designing the tool and its interface is a dynamic balancing act between people new to the software, people who want to deep dive into the computational processes, and those who want to benefit from the recent, for lack of a better term, new digital photographic aesthetic.

There is no need to chase butterflies, as the design process of darktable already has some of this tool design baked in :slight_smile:

Not only do I think the darktable dev team does a great job at this, but, with forums like this, it is a very fair process. “Power users” have tons of control (and, if so motivated, can participate and change the program to their liking), people who are photographers before computer programmers can achieve the look they desire without being limited by “safety”, and, while new users do have a learning curve, it has purpose, and will only enhance their understanding of the process, as well as refine their artistic choices.

Darktable, IMHO, does a great job of inviting new people to work with the software. A little math shouldn’t scare away those who want what it offers and who are introspective about why they use the tools they use.

Just a long-winded side note as you all dive into the math: some context from a passionate user. Hopefully it helps build a road as we move forward into territory I don’t see anyone else exploring, between those who volunteer their time and knowledge to build a better darktable and those who look at this and go “it’s hard… it’s not what I am used to… why can’t it be easier, or more like what I am used to?”.

10 Likes

The thing you are missing here is the GUI. If you push a slider to 20% of whatever scale, and see some result, you are happy. If you then push it to 15% and see the result change accordingly, again you are happy.

But then, if you push it to 10% and see no change compared to 15%, because some internal sanitization clipping has been toggled on, it’s just a big WTF. Some users will get into the habit of constantly pushing to the extremes, just because… Some will get the idea that it doesn’t matter.

There is nothing more disturbing than a GUI that suddenly stops reacting or has some dead range. When that happens, the first thing you think is “bug”, not “hidden safety jacket triggered backstage”.

So, yeah… clipping and sanitization are great… if they have some GUI feedback, or, better, if they actually clip the control range of the slider in the GUI. So far, I have no idea how to do that, so let it break… at least users have the control curve to see it. For now, some settings fail, but at least control and model are 1:1.

5 Likes

There is probably an elegant solution out there, but I think the devs (and I, as a user) are more interested in pushing the envelope on the technical level. Capability over usability. If you want the latter, there are plenty of apps that have that.

1 Like