Proposal for updated Filmic curve parameterisation in Darktable

If you play with the graph a little, you can find settings where they are kind of close – but I suspect the differences would still be visible in a photo.
Of course, for contrast and latitude settings where the polynomials overshoot, the only way to get backwards compatibility is to reduce the range of the new curve so that its endpoint lands where the polynomial dropped below 0 or rose above 1. I don’t think that would be very desirable behaviour …

Generally, if the new curves have advantages over the old ones, then constraining them to look just like the old ones would erase most of those advantages.

In other words: I think for cases where the polynomials work well it should be possible to tweak the hyperbolae to be fairly close, though not indistinguishable. In other cases, I don’t think it’d be useful to try making them similar.

This seems harsh, by the way. Understand that you’re talking about 2+ years of largely unpaid/way underfunded work.

Aurelien is no slouch, either.

3 Likes

Acknowledged.
So, what phrase should I then use to refer to “less difficult to wrap your head around”, “make it easier to acquire a feeling for how some setting affects the picture”, and “reduce trial and error”?
I’m sure those are possible. No idea how far things can be improved, but I think most people here would agree that improvements on that front are possible, even if we don’t agree (or are genuinely uncertain) what those improvements would need to look like.

I’ve yet to encounter a picture where that gave me what I wanted. I have encountered pictures where I got what I wanted, but usually only after many a painful iteration. And there’s almost no slider which I can move, observe the picture, conclude that it is where I need it, and leave it there. Almost every time I change something to get some brightness range to look as I want it to, I find that some other part of the setup is now out of whack and needs to be re-adjusted.

This is a longer discussion, and I’m starting to think it should maybe be had separate from the discussion around curve parameterisation, because it tends to eat whatever thread it happens in. Which itself is telling us something…

6 Likes

I think your implementation shows what ‘intuitive’ could mean, mostly because inputs have a very orthogonal feel. You want to change one thing, you tweak one slider and that is represented in the graph. Done.

Coincidentally, this is imho what the frowned-upon Lightroom/ACR does as well.

1 Like

Because if grey decreases, we slide the DR to the left, so the black point should slide along too. It’s a design choice here, but usually you need to keep your EV range constant between black and grey when you change that.

That has nothing to do with bounds and linearity. Screens display linear light; “display-referred” means “in regard to screen luminance”, so bounded between [0;100]% of the display peak luminance, but it doesn’t necessarily imply a non-linear signal, since gammas/EOTFs are removed in the screen anyway, before being converted into a light emission…

The point is that your raw data have middle grey = something > 0. With the exposure module, you remap that something to 0.18–0.20 in a linear fashion and in an unbounded space, 0.18 being our display-referred middle grey. So you simply pin scene grey to display grey. And we could honestly just stop at that if we didn’t care about clipping.
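
To make that concrete, here is a minimal sketch of the remap (illustrative Python only, not darktable’s actual code; `scene_grey` stands for whatever value your metering gives you):

```python
import numpy as np

DISPLAY_GREY = 0.18  # display-referred middle grey

def pin_grey(rgb: np.ndarray, scene_grey: float) -> np.ndarray:
    """Scale the whole image so that scene middle grey lands on 0.18.
    Purely multiplicative, linear and unbounded: ratios are preserved."""
    return rgb * (DISPLAY_GREY / scene_grey)

# e.g. if metering puts scene grey at 0.09, this is a +1 EV push:
# pin_grey(raw_rgb, 0.09) multiplies everything by 2.0
```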

Then filmic deals with the bounds mapping to fit whatever into the display DR, but doesn’t change the pinning, so it ensures that the midtones pass through mostly unchanged, which is what you want in order to protect them, since the midtones are the luminance range common to all DRs.

I simply refuse to chase butterflies. “Intuitive” is not a design goal as long as we don’t know for whom we design and in which scope. Intuitive for a lab tech with 20+ years of experience in the darkroom is not the same intuitive as a digital kid with button pushing experience. So, let’s ditch that.

The point is beyond empiricism. Just because the algo holds under the conditions of testing doesn’t prove the test is exhaustive and that it will hold all the time. The math caveat is real; we can’t simply ignore it and hope the case doesn’t arise.

2 Likes

I hoped that the tone of that post was making reasonably clear how I meant that. In case it did not:

I can’t teach most of the people here much about colour science stuff. I’m reasonably good with engineering maths, parametric geometry, understanding coupled problems, and putting complicated stuff into software for end users. I’ve delivered more than one functionality that started out as an exercise in wagging the dog for users, until I worked out where they were, where they were going, and therefore which of the screws I needed to expose to keep fixed the bits that they needed fixed and let them change the bits that needed to change. I’ve also taught at uni for a few years. Making weird shit logical, and finding ways to explain stuff to people for whom the regular explanation did not work[*], has been part of my job for about 5 years. I would like to apply these things.

So I fully expect (actually hope!) to get a lecture or three about the colour process and DT’s pipeline and inner logistics here. I cannot pretend to know a terrible lot in that regard, but I do think I can contribute some of the other stuff I do know – but that is only going to work if we can have sufficient respect for each other’s abilities and awareness of our own weaknesses.

I know you guys have put a lot of effort into DT, and I won’t pretend that I could have done that, or even known to think about doing that. But if you look around here, there are two kinds of people: Those who “get” filmic and those who don’t. The path from one to the other seems very narrow, and I feel that myself. I’m trying to do what the classical recommended solution to difficulties with open source software is: help improve it rather than complain.

I’m aware that I’m not always the perfect diplomat, but neither is Aurelien (which he knows), and that should not be a requirement (although it’s sure nice to try sometimes). I’d suggest we all give each other the benefit of the doubt. It’s not possible to improve something without also critiquing it, but I wouldn’t be trying if I thought that it was crap.

[*]»If you can’t explain something to a first-year student, you haven’t really understood« (Richard Feynman). I don’t actually completely agree with that, but I’ve experienced first-hand, often enough, that actively trying to explain something in better and different ways does wonders.

13 Likes

Nobody is asking you to be perfect or to be a diplomat. I respect that you’re jumping in and trying to see if there are improvements that can be made.

Nobody is asking Aurelien to do that either.

But, if you want the benefit of the doubt, as you say you do, then you must not say things like “filmic is the Dark Souls of image processing”, because that is incendiary and insulting, and that’s how you end up not getting the benefit of the doubt.

You also have to understand that it is more or less Aurelien that you have to interface with if you want changes to the filmic module. There aren’t more than a few people who understand the code and could make improvements quickly.

Soo…

  • if I move middle grey down, the highlight range increases by the amount by which I move it
  • “you need to keep your EV range constant between black and grey” – so relative brightness between middle grey and the bottom of the range is not affected?
  • but then why does DT expand the shadow range, by whichever amount the middle grey is reduced? That would seem to directly contradict the previous point.

If I was looking at the histogram in the input to filmic, in log scaling: Is the middle grey doing anything else than shifting the input up or down by some fixed amount (and then adjusting highlight/shadow range)?
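
(Sanity check on my own premise: a multiplicative exposure change is exactly a constant shift on a log axis, since \log_2(k \cdot x) = \log_2(x) + \log_2(k), so scaling every pixel by k moves the log-scaled histogram by \log_2(k) EV.)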

Does this mean that the scene-referred colourspace is using a “scale” factor which assumes that 0.18 maps to screen middle grey, and 1.0 to screen white? Is that necessary? Since there are integers coming out of the RAW file anyway, would there be an issue if they were mapped to any arbitrary range for the scene-referred workflow, as long as they are later mapped back to whatever the displayable range is?
Mapping to some arbitrary range might of course make it harder to later find the range that needs to be mapped back, so I’m not suggesting to actually do that, but rather to help me understand why it’s necessary to keep any scene-referred value pinned to any display-referred level.

Okay, let’s use another word if you don’t like this one. Suggest one. I don’t think that you believe there is no value in trying to change filmic so that people have an easier time learning to use it, and so that it can be used more efficiently. If you did, you wouldn’t be replying. Yet, that is what your words above appear to mean.

Seriously: please, suggest a word. I cannot talk to you if I need to spend three sentences every time to circumnavigate a word which I know you interpret differently than I mean it.

I try to avoid absolutes as much as I can, and I find the idea of calling anything “perfect” delusional. There seem to be people who have some built-in standard of what is/is not intuitive, but to me, this is not a boolean property of something. It’s a multidimensional vector, and it’s a different one for any person you care to ask. So when I use the term, maybe you think I want to get it to a state where a 5-year-old will get it right the first time. I would find that idea absurd.

So, yeah, gradual improvements is what I mean. I see things that I believe can be improved → I try to improve them.

hoookay, there you go:

  • b is constrained to be >0
  • x is substituted such that it is only evaluated for positive values – that’s why there’s (x-x_{hi}) for the highlight roll-off and (x_{low}-x) in the shadows. The curves are only defined where these terms are positive
  • c is defined: c=\frac{y_1}{g}\frac{bx_1^{2}+x_1}{bx_1^{2}+x_1-\frac{y_1}{g}}
    It’s always computed using a positive y_1 and positive x_1, positive g (and b is already positive, as mentioned above). This means the only way it could become negative is if x_1<\frac{y_1}{g}, i.e. g < \frac{y_1}{x_1}. This means that the slope of the linear segment would have to be smaller than the mean gradient between the two points which define it. But g is constrained not to do that. Geometrically, that would mean trying to curve the wrong way.

concluding:
b, x, c > 0 \Rightarrow bx^2 + x + c > 0
q.e.d.

…and double-checking that: whoa, I just noticed that the constraint on g is not sufficient, for Desmos at least. If g = \frac{y_1}{x_1}, it stops plotting because the denominator goes to zero. This is doubly weird because I grabbed that curve and pulled it all over the place yesterday, including into configurations where it produces straight lines, and it worked just fine.

However, it can quickly be solved in one of two ways:

  1. constrain g \geq \frac{y_1}{x_1} + \epsilon
  2. check if g == \frac{y_1}{x_1}, and draw a straight line in that case (both options are sketched below).
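
A minimal sketch of both guards (illustrative Python, variable names as in the formulas above; the \epsilon value is just a plausible choice):

```python
EPS = 1e-6  # safety margin away from the singular slope

def compute_c(g, b, x1, y1):
    """c = (y1/g) * (b*x1**2 + x1) / (b*x1**2 + x1 - y1/g), as above.
    Singular where the denominator reaches zero."""
    num = b * x1 * x1 + x1
    return (y1 / g) * num / (num - y1 / g)

def clamp_slope(g, x1, y1):
    """Option 1: constrain g >= y1/x1 + epsilon before computing c."""
    return max(g, y1 / x1 + EPS)

def is_degenerate(g, x1, y1):
    """Option 2: detect g == y1/x1 (up to epsilon) and draw the straight
    line y = g*x instead of evaluating the rational curve."""
    return abs(g - y1 / x1) < EPS
```
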
1 Like

There are lots of things involved here that could easily be explained to a first-year student… over a whole semester. The problem is not the difficulty per se, but the amount of background required to start connecting the dots, because we sit between chemistry, physics, computer science, fundamental maths, psychophysics, psychology, color theory, GUI design, workflow design, the darkroom legacy, the colorists’ legacy and the burden of an output-agnostic/multimedia pipeline exit.

After all, this is a job. The biggest frustration for me is to have to loop-repeat to people who usually master only one of these aspects (if they master anything at all) how their seemingly-bright, simple and clean idea ignores all the other aspects of the problem.

People with master’s degrees and more tend to look down on photography as something that is and should be easy, almost a menial job compared to their grown-up complicated job, and they expect to be automatically good at this lesser task. That’s a mistake. Many issues would be addressed by solving this hubris problem: no matter your IQ, it’s gonna take work to get proficient at image editing/retouching, and it’s gonna be painful. Bright guys, you are going to feel stupid, and you need to be okay with that concept up front.

Anyway, I’m starting to empty my brain on the topic:

6 Likes

I did not say that any more than Aurelien said that the user experience could not be improved – as in: yes, you could read it that way too, but that was not the intended message.

Well, that’s certainly a really inviting title…

I don’t think there is a lot of wiggle room for you here when I quoted you directly and there is a link in the quote that when clicked takes you directly back to where you said it:

@Mister_Teatime From one engineer to another, my suggestion is to create a prototype and let people test it. Fork darktable, add a new module with your ideas, and people will download, compile and try it out. The results will speak for themselves…

3 Likes

Also, a Play Raw with an image you are having a hard time with would go a long way in aiding my own understanding.

4 Likes

Complete answer tomorrow, but:

is why I always make sure things work on a theoretical level before even bothering opening an IDE…

Just to be clear: if you find a way to make sure we never divide by zero, your curve is good to go as an additional mode in filmic for darktable 3.6, no questions asked.

14 Likes

Thank you @anon41087856, for a really good read. I enjoyed it but I didn’t spend too much time on the math because I didn’t want my brain to spin :smiley:.

This document kind of formalizes what I had already realized, but not concretely. You aren’t just building this module or that, you are building a complete processing pipeline. Filmic is just a piece in the pipeline, but it can’t be changed without affecting the way the pipeline functions, so any change needs to be made with respect to the entire pipeline (or at least with consideration of it).

As far as changing filmic for the better, this is the 4th version of filmic in the last 2 years. There is LOTS of feedback (:smiley:) on filmic and every 6 months or so a new, more capable (and sometimes easier to use) version is released.

2 Likes

I don’t see harm in his comment (I’m not a native speaker, so I could be wrong).

Dark Souls is a piece of art; for some it is hard, for others it is enjoyable, and for others it is both (hard and enjoyable).

I don’t think he wanted to belittle @anon41087856’s work with that comment (in fact, I see a good rapport between them when discussing these topics).

We’re wasting everyone’s time discussing this Dark Souls sentence.

Let’s stick with the topic because we all can benefit from their discussion.

Cheers!

9 Likes

I can pitch in with my current understanding of the middle grey point in scene-referred data. It’s there to define a reference point, to make it possible to construct other tools around it, and to have consistency in the workflow. Defining middle grey as a fixed value and not-a-single-photon as zero makes it possible to define a range to work with. White is thus not defined as a specific number anymore but is rather just a brighter value.
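
In formula form (my own illustration, taking the 0.18 anchor): EV relative to middle grey is \mathrm{EV}(x) = \log_2\frac{x}{0.18}, so middle grey sits at 0 EV by definition and “white” is simply whatever positive EV the scene happens to reach.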

As for this amazingly infected word “intuitive”: here are some proposals for other words to use, if “intuitive” has been destroyed by earlier infected discussions unknown to us new contributors:

  • Robust: methods and modules that do not easily break their “needed” properties.
  • Consistent: are language, values, interactions, etc. consistent across darktable (probably on a per-pipeline-section basis)? E.g. middle grey as R = G = B = 0.18 = 0 EV in all scene-referred interactions.
  • Orthogonal: is it possible to change one parameter without being forced to update another?

I see some parallels with Blender and its development from its pre-2.5 releases to now. Blender was seen as powerful but very hard to use and “unintuitive” in its pre-2.5 versions, and there were voices back then about making dumb/newbie interfaces to tackle this and make the software available to the masses. That never happened, but the software has regardless managed to get rid of this “unintuitive” label while at the same time becoming vastly more complicated and competent. I personally believe that the main contributor to that is improved consistency. The user interface follows a more consistent design. Many tools were, back then, only accessible via shortcuts; all operators are now accessible through either a menu or a hotkey, as well as through a robust quick search that also exposes the hotkey plus a documentation link. Another example is the progressively better documentation for both users and developers.

Getting there is not a fast process though; it is, as @Mister_Teatime puts it, an iterative process. And iterative processes need constructive criticism as well as a positive, welcoming environment for new ideas and contributors. And note that being welcoming does not equal accepting everything: every contribution still needs to fulfill the needed pipeline properties as well as the ruling design philosophies of darktable.

If it is a lot of overhead to answer people’s “stupid” ideas that do not fulfill the pipeline properties and design philosophies of darktable, then that is to me a sign that there is a need for better official developer documentation on this topic, preferably located on the GitHub developer wiki: Home · darktable-org/darktable Wiki · GitHub
That would make it easy to point new contributors there, in a friendly and welcoming manner, to learn the stuff we need to learn to make correct contributions.

That said, I 100% agree with the scene-referred pipeline approach pushed by the darktable developers! But it is not easy to infer the exact properties of the pipeline stages from the current developer documentation, which makes things substantially harder for a new contributor.

9 Likes

Let’s be a bit more precise here: isolating individual problems from a slew of other intertwined problems making up the one big topic of ‘color science’ is absolutely necessary to iterate towards a better solution. Nobody in this thread, or for example in the sigmoid thread, pretends that this one solution is the holy grail that solves all problems.
To quote your blogpost:

As such, each algorithm should accomplish a minimal set of tasks in a fine-grained fashion, and ideally fully separate tonal corrections (brightness and contrast) from color corrections (hue and chroma)

Pain is not a prerequisite for proficiency. Pain is optional and very personal, and not only when acquiring proficiency.
To quote your blogpost:

Looking good or not is not for the researchers or the developers to decide, because both have artists to serve and only artists will decide.

If the artist is not satisfied with a solution, he might come up with one by means of addressing a specific problem. Should sound a bit familiar to your experience, no? That’s how you started to contribute to dt, right?

Yes, the pipeline has to be taken into consideration. For that, the concept of the pipeline has to be quite well defined. With darktable and its legacy requirements this is not at all trivial, and a simple ‘you have to see the bigger picture! fools! hubris!’ is not addressing or helping the problem. This thread and others have tried to address this non-trivial discussion. But unfortunately I am starting to think that a certain toxicity is used as a tool to prevent a discussion about the pipeline or the problems at hand. I don’t know why, and I hope I am wrong.

Tone curves/tone mapping/dynamic range compression, however you want to call it, can and should be discussed in order to address peculiarities of current implementations. And of course, tone mapping is connected to gamut mapping; I see no one questioning that. There are many ways to do either, many of them okay-ish; there is no holy grail, but there are definitely methods and algorithms out there that do unexpected and outright wrong things. If emulating film response needs over- and undershooting, non-monotonic 4th-order polynomials for tone mapping, that’s cool! I have doubts that this is actually modelling a filmic curve, but that doesn’t matter. To me, alternatives make sense and can coexist with filmic. Why? Because the user will do the driving, as Aurelien puts it in his blog post.

Having different gamut-mapping operators also seems like a good idea, depending on where the colors come from and where they should go with which intent. For gamut-mapping operators it’s even more crucial to have a clearcut idea of the pipeline. Of course tone and gamut mapping are connected problems, and, yes, what you do in one affects the other.

The interconnectedness of problems should not keep us from discussing specific problems. The ‘art’ of photography should not stop us from solving/improving technical problems. Just because we have no clearcut concept of ‘intuitive’, we should not give up on achieving this elusive quality.

8 Likes

Thanks!
I consider that an eminently solvable problem. Would you like me to generate Python example code? The whole trick is to make sure all inputs are robustly bounded.
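
As a first sketch of what “robustly bounded” could mean (parameter names and bounds here are purely illustrative, not darktable’s API):

```python
EPS = 1e-6

def bounded(value, lo, hi):
    """Clamp a user-facing parameter into its mathematically safe range
    before the curve is evaluated."""
    return min(max(value, lo), hi)

# e.g. keep the slope g strictly above the singular value y1/x1:
# g = bounded(g, y1 / x1 + EPS, float("inf"))
```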

3 Likes