Proposal for updated Filmic curve parameterisation in Darktable

It’s more about being pleased with the resulting image than with the solution. The solution is a tool, not a result, and it often has to deal with technical constraints that are pleasant for nobody. I’m pleased with being able to drive only 50 min to go see my grandparents; I’m not pleased with having to own and maintain a car. If I were really unhappy about it, I could take the train + bus, but then I’m in for a 2h30 trip, so that’s not a solution to the same time-constrained problem. How it feels is much less important than what it allows you to do.

Bike-shedding with pig-headed people and having to repeat the same info is a considerable loss of my time, you have no idea. On certain days, I do nothing but answer people.

Yes. But discussing with whom? I’m okay discussing with people who have experience and skills, if I know I can trust their eyes and I’m confident they are aware of the problem in all its complexity. Taking the opinion of every internet rando who has edited 3 pics/week for the last year and struggles with basic color theory is out of the question. Show your portfolios, guys. The mark of skilled people is that they can produce a good result even with shitty software; the difference software makes is the time required to go from intent to result.

For fuck’s sake, this is designed to ensure C^2 continuity over the full range while giving control over the rate of convergence toward the bounds, nothing more, nothing less. And there is an alternative with 3rd-order curves. How many times do I need to repeat myself?
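For the record, here is one standard way to pose such a segment – a sketch only, assuming a 4th-order polynomial shoulder joining a linear latitude, not necessarily the exact constraint set filmic solves:

```latex
f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4,
\qquad
\begin{cases}
f(x_\ell)   = y_\ell & \text{position match at the latitude edge}\\
f'(x_\ell)  = s      & \text{slope match with the linear latitude}\\
f''(x_\ell) = 0      & C^2 \text{ junction with the linear segment}\\
f(x_w)      = y_w    & \text{hit the white bound}\\
f'(x_w)     = 0      & \text{flat convergence toward the bound}
\end{cases}
```

Five linear conditions determine the five coefficients. A 3rd-order alternative has one coefficient fewer, so it has to give up one of the conditions, typically settling for C¹ continuity at one of the junctions.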

Alternatives imply they try to fix the same problem under the same constraints. Not a simplified/trimmed version thereof.

You are right, and yet this request is not realistic given current resources.

Design is to be done against SMART tasks:

  • specific,
  • measurable,
  • assignable,
  • realistic,
  • time-related.

Any non-SMART design goal is not suited for a design process but for a political speech. That’s why I scream every time I read “intuitive”: it has none of these properties. At this point, I believe algos should have a goal in terms of the image properties they aim at controlling, plus constraints defining what they need to care about (what you call robust), and that’s all. Consistency is going to be difficult in an app that is 10+ years old and coded in sediment layers. Orthogonality is paramount.

You can replace filmic by a base curve or by a 3D LUT if you want. You just need to mind where that scene-referred to display-referred transform happens in the pipe.

That scaling is achieved by the exposure module; there is no auto-scaling aside from that. It’s an old assumption that middle grey is to be met at 18%. Regardless of the DR, we use 18% as a pin-point. Then:

  • if HDR or scene-referred, we leave the white value unbounded;
  • if SDR or display-referred, the white value is bounded at 100% display peak luminance.

That makes things easier because we know that, after this scaling, luminance ranges are correlated between all spaces, and all we need to care about after the pinning is the bounds of the DR, which are variable between spaces (while middle grey is not).
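To make the pinning concrete, a minimal sketch in Python – `pin_middle_grey` and `measured_grey` are hypothetical names for illustration, not darktable code:

```python
import numpy as np

def pin_middle_grey(rgb, measured_grey, target_grey=0.18):
    """Scale linear RGB so the scene's measured middle grey lands on 18%.

    rgb           : linear, sensor-referred values (NumPy array)
    measured_grey : luminance of the scene's middle grey, in the same units
    """
    return rgb * (target_grey / measured_grey)

# The same operation expressed as an exposure correction in EV:
# ev = np.log2(target_grey / measured_grey); rgb_scaled = rgb * 2.0**ev
```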

Yes, that would be an issue. We need to start the pipeline with values bounded in 0-100% sensor-referred: to spot raw clipping (RGB = 100%), to apply denoising (scaling changes the variance and invalidates the noise profiles), and for some kinds of non-linear input profiles (LUTs) that can’t be scaled.

So we start in bounded linear sensor-referred, then convert to unbounded scene-referred by linearly scaling to pin the greys, then convert to bounded display-referred by whatever method (simple clipping out of range, or clever tone-mapping).
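In pseudo-Python, that ordering looks roughly like this – a sketch of the stages only, not darktable’s actual pipeline code; `tone_map` stands in for filmic or any other scene-to-display transform:

```python
import numpy as np

def develop(raw_rgb, measured_grey, tone_map):
    # 1. Bounded, linear, sensor-referred in [0, 1]: raw clipping is
    #    detectable as RGB == 100%, noise profiles (variance as a
    #    function of the mean) still hold, and LUT input profiles
    #    apply as-is.
    clipped = np.any(raw_rgb >= 1.0, axis=-1)

    # 2. Unbounded, linear, scene-referred: a pure multiplication pins
    #    middle grey at 18%; only the bounds of the DR move.
    scene = raw_rgb * (0.18 / measured_grey)

    # 3. Bounded, display-referred: any scene-to-display transform,
    #    from a plain clip to a tone-mapping curve such as filmic.
    return np.clip(tone_map(scene), 0.0, 1.0), clipped
```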

Yes.

Because the white point is kept as-is and the black point moves by the same amount as the grey point, you necessarily expand the DR.
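To spell out the arithmetic (with white W and black B as linear luminances, and both grey and black scaled by a factor k while W stays put):

```latex
\mathrm{DR} = \log_2\!\frac{W}{B}
\qquad\Longrightarrow\qquad
\mathrm{DR}' = \log_2\!\frac{W}{kB} \;=\; \mathrm{DR} - \log_2 k
\quad (B \to kB,\; W \text{ fixed})
```

So lowering the grey point by 1 EV (k = 1/2) while keeping the white point adds exactly 1 EV of dynamic range.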

Nope. But anyway, changing the grey point is now discouraged.

2 Likes

I think that’s true.

I think for now I’m focusing on the maths and input parameter bounds. But there should be a more general discussion about how people are (trying to) use filmic, successfully or less so: which of those ways of using it can be made easier/quicker/more obvious, which ones may be based on misconceptions, and whether the UI could help discourage those misconceptions, or even “re-route” them.
That could also take into account the potential results people are trying to achieve – because whatever look you prefer of course affects how you can/should use the tool.

Is there a way to get more detailed info about how somebody uses filmic than by looking at the history stack in an .xmp file? It seems to bundle all the settings you make before moving to another tool, so if I went back and forth between contrast, latitude and highlight range five times, that would still only be recorded as one change. Some sort of macro recording would be neat. I thought about recording my desktop, but making everyone watch me spend ages not really getting anywhere (or inviting others to let me watch them failing) is also not setting the type of mood I prefer…

Or maybe I’m overthinking this …

There is no shame in asking for help when you’re not getting the results you expect. That’s what this forum is about. I don’t understand why people squirm so hard when asked for raw files.

Yes, record the video.

Yes. Do that please. And the raw file :wink:

3 Likes

Generally, I agree.
There are two things in the way, though:

  1. I’m not really able to fork DT and implement the changes I imagine (but if somebody has the patience to support me with it, I’d be delighted to go off experimenting!)
  2. I think whatever any of us can come up with will be better if we don’t each go off on our own and create solutions that work for their creator and few others, but rather discuss pros and cons, and add our ideas to the ideas of others.
    2a.: Case in point: I know that my knowledge of colour science has holes, and my understanding of the implementation of DT’s pipeline is mostly based on remarks I’ve read around this forum. In other words: if I went and implemented something that works for me, I’d probably be fighting against the conventions which DT uses internally, and I’d produce something which almost nobody else wants to use. I’d probably misuse some terminology in the UI, thus misleading anyone who’s learned the proper definitions of those terms. It’d take me ages, and the result would only work for others who have the same misconceptions.
    2b.: We should avoid turning this into a pissing contest. I don’t want to go off and build a thing on my own, then present it as “take it or leave it” because I’m not interested in winning some contest (okay, let’s be honest: everyone likes to win – but I do hate contests). I know that the best solution is some combination of ideas which are distributed across several people’s heads.

So yes, a prototype is needed, but (also speaking as an engineer) building one and showing it off before you’re sure you know what you’re doing is dangerous because the stuff that doesn’t work ruins the show for the stuff that does.

Just a useless chime-in from me:
When I wanted to start working on white balance upgrades for darktable, I read enough material to understand that the wb in dt, as it was, was not “the best” solution and that the temp/tint stuff was “kinda wonky”. I think I’ve read close to several thousand pages just about the blackbody emission/temperature relation, tint stuff (tint is lies!), calculations, specific colorspaces that make the Planckian locus a bit easier to calculate/draw (etc. etc.), and spent several weeks with Aurélien polishing stuff and calculations… That led to just a bit of interface changes to the white balance module, with no internal changes to the calculations, since those would be nigh impossible to do in a backward-compatible manner (given the timeframe and the time I had available). And I do believe the work put into making wb a bit better led to a new module, color calibration, which actually IS better overall than the old white balance (but still relies on the old wb, just as a precalc step).

So… Yeah.

Remember: the goal is the best and most robust stuff that fits within a very resilient framework, and that ultimately gives as much power as possible to users willing to invest their time into learning it.

10 Likes

So what outcome are you looking for?

Do you want @anon41087856 to take every idea that someone throws out, code it up, debug it, test it, run thousands of images through it to see what it does, does well, and does badly?

If you have an idea, and you think it’s the next best thing, then fork a copy of the code, code your solution, test it with thousands of images and document what it does and how well it performs. Then present it and show what problem it solves that filmic doesn’t, or how it performs better on a certain class of images.

1 Like

Don’t bother with C if it’s not your core skill, just provide me the equations. It should take me 10 min to code the Python prototype and check everything behaves, then less than an hour to get everything into C.

10 Likes

I’ll give you a quote on this from yourself, from the sigmoid thread:

The design goal is a ‘pleasing’ result, and empiricism is suited as one of many testing principles. Sure, it’s prone to breaking at some point. That’s why machine learning is only as good as the training dataset.

Again, I know and understand why it is there, you do not have to explain this.

But this is what people actually do/did, no?

2 Likes

If this is done, why always repeat the same questionable arguments? Just use their forks and be fine.

Yes, indeed – but what prompted my suggestions was my feeling that there had been enough discussion. At some point you need a proof of concept to help everyone focus and be more productive, instead of diverging into abstract discussions.

1 Like

Yes, and right now there is a pull request against darktable to add a feature to filmic.

+1 for the Blender analogy. The calls for an “easy” interface were unrealistic, but they underscored how incredibly idiosyncratic Blender was. I’ve worked with a bunch of 3D packages, and was able to figure most of them out from scratch. But opening Blender, the interface felt less than helpful:
Do something → something unexpected happens → repeat
They’ve learned so much since then! And I think there’s still some room for improvement.

Consistent, robust, orthogonal are all really good properties in service of making a tool more… accessible? Except “accessible” is usually used to refer to making things usable for people who may be impaired, colourblind or otherwise inconvenienced. Which I consider a very worthy cause, but it is not what I mean. What I mean is “requires less training to become comfortable with” and “requires less external information about the implementation to be useful”.

I think more documentation is always great for those in need of information – but producing it is a burden and a potentially thankless time sink for those who would otherwise rather implement nice things. Especially since good, easy-to-digest explanations are a major piece of work in and of themselves. So… I’m of course happy to contribute the equations above and the way the constraints work (if you point me to where they need to go…), but I would not be qualified to document the image pipeline. I might be qualified to read what’s there, then ask other people annoying questions to turn up aspects of the setup which are not covered by the documentation, but I’m not sure everyone would appreciate that :slight_smile:

4 Likes

In my experience as a teacher, this is where the users themselves – especially those without prior theoretical knowledge – need to be drawn into the equation. The developers alone cannot answer this question of comfortable use.

When they are confronted with new complex matter, the users can be roughly divided into three groups:

  1. those who expect “service” and think that the matter is too complex and must be made simpler for them to use.

  2. the “knowledge seekers” who realize that they must first acquire the necessary knowledge to be able to use the new technology. These are mostly the people who are interested in the technology itself.

  3. the third group is a combination of both: those who acquire the minimum knowledge necessary to use the tool and, through practical experience, recognize possible ways of optimizing the process. They are essential for the development of a tool and can help developers optimize it. This third group makes the best mediators between developers and users.

How can you take advantage of this?

As a developer, you don’t have to document everything you develop down to the last detail. But you have to be able to convey the knowledge necessary to use the new implementation, preferably with a couple of visual examples. This creates an incentive and guarantees that a group of users will form to test the new implementation for its suitability for everyday use.

What @anon41087856 is currently doing is, I think, a very good example of how to do it well from the developer side.

As for the feedback from the users: they should put no less effort into presenting their difficulties or recommendations for changes, also on the basis of concrete examples with detailed descriptions. This is interesting not only for the developers, but also for other users who may have the same difficulties or – even better – who, through their practical experience, can help those with difficulties familiarize themselves with the tool. Often there are already very simple and elegant solutions that you would not even think of if you do not use the tool intensively. For example, instead of discussing a new module forever, it may be possible to achieve the desired results by using an already existing module in a way that was not considered before. Such feedback is also extremely important for the development of a tool.

Unfortunately, this kind of user interaction is still a blind spot in free software development.

8 Likes

And a huge time eater. At some point, we need to be realistic too given the available resources.

5 Likes

Because it means letting others look at your “art” (or not-so-art).
Same reason most programmers don’t like to let others look at their code: fear of being judged.

Not to worry. I’ll sit down and do one. It might take a little while to get to it, but it’s on my list.

2 Likes

So you’d rather set out to reparameterize a whole module than share a raw file? I dunno, that doesn’t make a lot of sense to me.

That’s not what I said … but I’d rather not start arguing about semantics (or worse) today, so please just exercise some patience.

7 Likes

Yes! Someone with the brains and time to figure out how to make filmic into a non-academic, practical tool for real-life usage.

I really appreciate all the hard groundwork that @anon41087856 has put into the tool, changing image editing possibilities for the better. But the actual usage across not-perfectly-controlled images is cumbersome.

Edit: although the problem is less a deficiency of filmic than the interaction of exposure, grading and filmic, which is very technical and not centered on actual images.

3 Likes

Can you show a couple of examples where you find this too technical and difficult?

2 Likes

It’s not about difficult, it is more about cumbersome or ineffective.

So a single image is rarely a problem; it’s rather working with hundreds – take any full-day reportage under mixed light conditions and give yourself an editing time of approx. 10 seconds per image for the whole bunch. That’s 360 images in one hour. Not super fast, but a solid quality edit for a reportage, wedding, sports event, etc.

The scenic¹ workflow has too many controls and tabs and whatnot, and a lot of them need a nudge here and a twist there to make the output look good. My own workflow is not stable enough yet to pin down how the interactions could be simplified.

¹) that is not a typo. It is “scene referred + filmic”.

3 Likes