That’s a nice paper!
Might lead to the new Enzyme Module 
I very much agree with this point.
I am trying to catch up on the math of this post (my math education stopped at linear algebra, which is great for a classical musician, but it was a long time ago; still, it gives me a great excuse to learn more and show my kids “why learning matters”), but I can comment on this topic.
From my vantage point:
Darktable is a tool. Tools have a scope and an audience. So the question becomes “who is darktable for?” (apologies if this point should fork into a new thread)
I am assuming a lot, but we also have powerful command-line tools like G’MIC, image editors that can have quite the learning curve, like GIMP, and pano tools like Hugin.
I chose darktable because, more easily than any other software, free or paid, and with some learning of the tool, it realizes what I want with my photos.
Darktable seems to be for the photographer who:
- wants more control over their photographic vision (starting from RAW, more power given to the user)
- wants that process to be GUI-driven
- prioritizes flexible paths toward a goal, regardless of the “standard” workflow in digital photography (à la Adobe)
- can still use it as a tool when charging for their time (edits do not all have to be hour-long ordeals; backwards compatibility with earlier versions)
- embraces the FOSS philosophy
And more recently:
- embraces this new display-referred philosophy
- embraces more modern color science, pushed into the public “eye” by the recent availability of great-quality video equipment and color grading software.
So, designing the tool and its interface is a dynamic balancing act between people new to the software, people who want to deep dive into the computational processes, and those who want to benefit from the recent, for lack of a better term, new digital photographic aesthetic.
There is no need to chase butterflies, as darktable’s design process already has some of this tool design baked in.
Not only do I think the darktable dev team does a great job at this, but, with forums like this, it is a very fair process. “Power users” have tons of control (and, if so motivated, can participate and change the program to their liking), people who are photographers before computer programmers can achieve the look they desire without being limited by “safety”, and, while new users do have a learning curve, it has purpose, and will only enhance their understanding of the process, as well as refine their artistic choices.
Darktable, IMHO, does a great job of inviting new people to work with the software. A little math shouldn’t scare away those who want what it offers and who are introspective about why they use the tools they use.
Just a long-winded side note as you all dive into the math: some context from a passionate user. Hopefully it helps build a road forward, into territory really nobody else I see is covering, between those who volunteer their time and knowledge to build a better darktable and those who look at this and go “it’s hard… it’s not what I am used to… why can’t it be easier, or more like what I am used to?”.
The thing you are missing here is the GUI. If you push a slider to 20% of whatever scale and see some result, you are happy. If you then push it to 15% and see the result change accordingly, again you are happy.
But then, if you push it to 10% and see no change compared to 15% because some internal sanitization clipping has kicked in, it’s just a big WTF. Some users will take on the habit of constantly pushing to the extremes, just because… Others will conclude that the slider doesn’t matter.
There is nothing more disturbing than a GUI that suddenly stops reacting or has some dead range. When that happens, the first thing you think is “bug”, not “hidden safety jacket triggered backstage”.
So, yeah… clipping and sanitization are great… if they have some GUI feedback, or better, if they actually clip the control range of the slider in the GUI. So far, I have no idea how to do that, so let it break… at least users have the control curve to see it. For now, some settings fail, but at least control and model are 1:1.
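The dead-range problem above can be sketched in a few lines. This is a hypothetical toy, not darktable’s actual code: `apply_effect` and its clamp threshold are illustrative assumptions, standing in for any module whose internal sanitization silently clips the parameter.

```python
def apply_effect(slider_value: float) -> float:
    """Map a 0..1 slider to an effect strength, with a *hidden* safety clamp."""
    # Internal sanitization: anything below 15% is silently clipped.
    # The GUI knows nothing about this threshold.
    safe_value = max(slider_value, 0.15)
    return safe_value * 2.0  # stand-in for the real processing

# Moving the slider from 15% to 10% changes nothing in the output,
# which from the GUI side looks exactly like a bug:
print(apply_effect(0.15))  # 0.3
print(apply_effect(0.10))  # 0.3 -- dead range, no visible change
```

Everything from 0% to 15% is a dead zone: the control moves but the model does not, which is precisely the control/model mismatch being criticized.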
There is probably an elegant solution out there but I think devs (and I as a user) are more interested in pushing the envelope on the technical level. Capability over usability. If you want the latter, there are plenty of apps that have that.
That’s a very valid point. The general solution I would suggest is to map the range of non-silly input values to the range from 0 to 1. So 0 is always the minimum and 1 the maximum value we can use without producing anything obviously silly (like overshoots, undershoots, divisions by zero…). This means that moving the slider will always create an effect, and leaving it at the maximum will always generate the largest possible effect without overshooting, etc.
Now, I’m not sure whether there are cases where somebody would want to set the slider to a certain position and expect the same effect size, or where another slider affects the valid range such that leaving it in the same spot greatly changes the image in an unexpected way – but in this case I don’t see that.
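The suggested mapping can be sketched like this. Again a minimal illustration, assuming the module can report its valid parameter range (`valid_min`, `valid_max`) for the current state; the function name is invented for the example.

```python
def slider_to_param(t: float, valid_min: float, valid_max: float) -> float:
    """Map a normalized slider position t in [0, 1] onto the valid
    parameter range, so every slider movement has a visible effect."""
    # Clamp the slider position itself, then interpolate linearly
    # across the range of non-silly values.
    t = min(max(t, 0.0), 1.0)
    return valid_min + t * (valid_max - valid_min)

# 0 is always the smallest safe value, 1 the largest:
print(slider_to_param(0.0, 0.0, 2.0))  # 0.0
print(slider_to_param(1.0, 0.0, 2.0))  # 2.0
print(slider_to_param(0.5, 0.0, 2.0))  # 1.0
```

The trade-off mentioned above still applies: if `valid_min`/`valid_max` depend on another slider, the same normalized position can produce a different absolute parameter value, so presets saved as slider positions would not be stable.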
Wow, I’m very pleasantly surprised that my hunch turns out to be completely correct.
They seem to be using a generalized form, with the exponent as a variable parameter… interesting…
They’re not modelling the response of photographic paper, though, but that of the human eye, and derive a mapping to make HDR images appear as natural as possible. On skimming the paper, it seems that this includes local contrast/detail adjustments.
I may have to read it properly to make sure I’m understanding at least the gist of it correctly.