Yes, but putting the inflexion point at 0.18 will make the low-lights very difficult to control, since we have increased sensitivity in that region. So what you need for a good UX is a log interface that scales up this region for finer control, which is not what typical curve tools do.
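To make that concrete, here is a rough sketch (not darktable's actual UI code; the EV bounds are hypothetical) of what a log-scaled axis buys you: the shadows get a usable share of the widget instead of being crammed into the first 18 % of a linear axis.

```python
import math

def ui_position(linear_value, black_ev=-8.0, white_ev=4.0):
    """Map a linear scene value to a [0, 1] UI coordinate on a log2 axis.
    black_ev/white_ev are hypothetical widget bounds, in EV around middle grey."""
    ev = math.log2(max(linear_value, 1e-9) / 0.18)   # exposure relative to middle grey
    return min(max((ev - black_ev) / (white_ev - black_ev), 0.0), 1.0)

# On a linear axis, everything below 0.18 sits in the first 18 % of the widget;
# on this log axis, middle grey lands at ~0.67 and -3 EV still gets ~0.42.
print(ui_position(0.18))    # ~0.667
print(ui_position(0.0225))  # ~0.417
```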
The thing is, nobody wants to do that, and using free-hand nodes generally ends up wasting time manually micro-tuning a smooth, monotonic curve… So you only get the illusion of power, plus all the overhead. 99% of the time, people just want an S curve, so why not deliver it in a more robust way?
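For illustration, here is what I mean by "a more robust way", as a minimal sketch (a plain logistic in log space, not darktable's actual filmic curve): a single contrast parameter, and the curve is monotonic by construction, so there is nothing to micro-tune and no way to produce a wiggly or decreasing result.

```python
import math

def s_curve(x, contrast=1.0, pivot=0.18, grey_out=0.18):
    """A minimal monotonic S curve driven by one contrast parameter.
    Illustration only: a logistic in log2 space, not darktable's filmic."""
    ev = math.log2(max(x, 1e-9) / pivot)               # scene exposure vs. middle grey
    k = (1.0 - grey_out) / grey_out                    # anchors grey_out at ev = 0
    return 1.0 / (1.0 + k * math.exp(-contrast * ev))  # smooth roll-off at both ends

# grey maps to grey, shadows and highlights roll off smoothly toward 0 and 1
for v in (0.001, 0.18, 1.0, 4.0):
    print(v, round(s_curve(v, contrast=1.5), 3))
```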
I think you are mixing things up. Photography is a technically determined art: it became possible only once optics, micromechanics and chemistry had gone far enough to fix an image on a substrate. Making pictures that convey feeling is not incompatible with using state-of-the-art techniques based on the best understanding we have so far of light emission and colour perception. The art is in the using, but to make reliable and sensible tools, you have to care about the science. It’s all about getting robust tools that give the best results the quickest. I hate computers; I want the path of least effort and most efficiency to obtain the results I’m after. To achieve that, treating RGB vectors as arbitrary numbers and messing around with them as if they represented no physical reality would be shooting myself in the foot. Unrolling the physics and psychophysics where needed is the only way to get digital to behave like analog, therefore predictably.
Intuitive for whom? For someone who did analog photography in the past (serious darkroom printing, not just sending negatives off to the 1-hour lab), digital display-referred makes no sense.
I agree. But most non-linear transforms inherited from the display-referred workflow are non-invertible and can get very unpredictable (if you can answer the question “by how much will I oversaturate the shadows when I increase the contrast by that much?”, you are better than me).
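A toy example of the kind of unpredictability I mean (made-up pixel values, not any specific darktable module): push the same curve through each channel of a dark, saturated pixel and the channel ratios, hence the saturation, drift; a ratio-preserving edit keeps them intact.

```python
def per_channel_gamma(rgb, g=2.2):
    # a naive display-referred gamma/contrast adjustment applied per channel
    return tuple(c ** g for c in rgb)

def ratio_preserving(rgb, g=2.2):
    # apply the same curve to the max channel, then rescale all channels equally
    m = max(rgb)
    scale = (m ** g) / m if m > 0 else 0.0
    return tuple(c * scale for c in rgb)

shadow = (0.08, 0.02, 0.02)   # a dark, saturated red
a = per_channel_gamma(shadow)
b = ratio_preserving(shadow)
print([round(c / a[0], 2) for c in a])  # ratios drift: ~[1.0, 0.05, 0.05] -> oversaturated shadows
print([round(c / b[0], 2) for c in b])  # ratios kept:   [1.0, 0.25, 0.25]
```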
You don’t get it. Your screen has 8 EV of dynamic range, your paper has 5 to 6.5 EV, yet today the average DSLR has 12 EV (up to 14 EV and counting) at ISO 100. This is single-frame HDR, and it’s standard now. These files need to be handled rigorously to keep all the data and blend it gracefully into SDR, or to recover details in backlit situations, as advertised by the camera manufacturers.
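For scale, just the arithmetic (1 EV = one doubling of luminance, so dynamic range in EV converts to a contrast ratio of 2^EV):

```python
# rough contrast ratios behind the EV figures above
for name, ev in [("paper", 6.5), ("screen", 8), ("DSLR", 14)]:
    print(f"{name}: {ev} EV = {2**ev:.0f}:1")
# A 14 EV raw holds ~16384:1 of contrast, which has to be squeezed into
# a 256:1 display or a ~90:1 print without destroying the data.
```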
Hence me saying that a sensible image-processing pipeline should be 100% output-agnostic, which is possible only if you work in linear light.
Static contrast is pointless: your brain is doing focus-stacking and exposure-stacking in real time, so the retina is just the first part of a complex process, and the actual dynamic range of human vision is around 18–20 EV depending on the surround lighting.
You are missing the point. My ambition for dt is to have a set of physically accurate tools that push pixels in a fast and robust way, so I can perform much more dramatic edits without nasty side-effects. As of darktable 2.4, all the contrast and dynamic-range-compression tools produced halos or colour shifts when you pushed them far. The usual answer was “don’t push them that much, they only work for small adjustments”. Right, but what good is a tool that fails me when I need it the most? If I need to push the shadows by +5 EV and the tonemapping tool can’t do it… well, it doesn’t work. That points to bad colour models and bad algorithms.
So I studied the problem, tested solutions, and came to the conclusion that image processing needs to be physically accurate, work as much as possible in linear light, and stop entangling colour and display concepts too soon in the pipe. In that case, you can use the software as a virtual camera and redo the shot in post-processing. Editing then becomes analogous to designing your own film emulsion, and many things become simpler even though the UI might get more crowded. Dealing with linear light is very easy: it’s like adding a colour filter on top of your lens.
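To show what I mean by “adding a colour filter on top of your lens”, a toy sketch (the filter values are hypothetical): in linear light, a filter, a white-balance tweak or an exposure change is just a per-channel multiplication, and two edits compose exactly like two physical filters stacked on the lens.

```python
import numpy as np

def colour_filter(rgb_linear, transmittance):
    """Apply a filter with per-channel transmittance to a linear RGB pixel."""
    return rgb_linear * np.asarray(transmittance)

scene   = np.array([0.42, 0.30, 0.18])   # linear, scene-referred pixel
warming = (1.00, 0.92, 0.80)             # hypothetical warming filter
nd_1ev  = (0.5, 0.5, 0.5)                # 1 EV neutral-density filter

stacked  = colour_filter(colour_filter(scene, warming), nd_1ev)
combined = colour_filter(scene, np.multiply(warming, nd_1ev))
print(np.allclose(stacked, combined))    # True: composition behaves like physics
```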
I just think that people who have never pushed darktable too far can’t see the problem. It sure works fine for gentle editing, so I get why all my changes just look like a big pile of habit-changing trouble to many people.
Even if you don’t clip values, having a UI that runs from 0 to 1 still sucks, because those special values are only conventions and nobody knows what data you actually manipulate. I think good algorithms should work in the most general way. But then, sure, you need to expose some scaling parameter in the UI, and users will start to complain about it, even though most of them will never need to change its default value.
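As a sketch of the idea (the `white_point` parameter and its defaults here are hypothetical, not an actual darktable control): the algorithm normalises by an explicit white point instead of assuming the data peaks at 1.0, and the default covers most users.

```python
def gain_around_grey(value, contrast, grey=0.18, white_point=1.0):
    """Linear contrast around grey, expressed relative to an arbitrary white."""
    x = value / white_point                      # no assumption that data peaks at 1.0
    return ((x - grey) * contrast + grey) * white_point

# works the same on SDR data (white at 1.0) and on HDR data (white at, say, 16.0)
print(gain_around_grey(0.5, 1.2))
print(gain_around_grey(8.0, 1.2, white_point=16.0))
```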
I tried to make it user-friendly, because offset/lift affects mostly the shadows, gamma/power the midtones, and gain/slope the highlights. But of course there is no threshold in there, let alone a hardcoded one. The algorithm is simply RGB_{out} = (slope * RGB_{in} + offset)^{power}.
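In code, that is just a per-channel transcription of the formula above (the clamp of negatives before the power is my own guard in this sketch, not part of the formula):

```python
import numpy as np

def slope_offset_power(rgb_in, slope=1.0, offset=0.0, power=1.0):
    """RGB_out = (slope * RGB_in + offset) ** power, per channel.
    slope/offset/power may also be per-channel triplets for colour grading."""
    x = np.asarray(rgb_in) * np.asarray(slope) + np.asarray(offset)
    return np.power(np.maximum(x, 0.0), power)   # clamp negatives before the power

# slope mostly scales highlights, offset mostly lifts shadows,
# power mostly bends midtones -- but nothing is thresholded
print(slope_offset_power([0.02, 0.18, 0.9], slope=1.1, offset=0.01, power=1.2))
```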
It’s been wired progressively. If you look at the pipe now, everything coming before filmic is output-agnostic. Filmic is the HDR->SDR mapping, and everything coming after expects SDR data.
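To picture the ordering, a conceptual sketch (all the modules here are hypothetical stand-ins, not darktable's actual code): everything before the HDR → SDR mapping works on unbounded linear data, everything after it assumes display-referred SDR in [0, 1].

```python
# stand-in modules, one pixel at a time, purely for illustration
exposure       = lambda x: x * 2.0                                     # scene-referred: may exceed 1.0
colour_balance = lambda x: (1.0 * x + 0.0) ** 1.0                      # scene-referred, output-agnostic
filmic         = lambda x: min(max(x / (1.0 + x), 0.0), 1.0)           # stand-in HDR -> SDR mapping
local_contrast = lambda x: min(max((x - 0.5) * 1.1 + 0.5, 0.0), 1.0)   # display-referred, expects SDR

def develop(pixel):
    for module in (exposure, colour_balance):   # before filmic: output-agnostic, nothing clips
        pixel = module(pixel)
    pixel = filmic(pixel)                        # the single HDR -> SDR point in the pipe
    for module in (local_contrast,):             # after filmic: bounded SDR data
        pixel = module(pixel)
    return pixel

print(develop(4.0))   # an HDR scene value lands inside [0, 1]
```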