You don’t have to tell me how open source works. I’ve contributed to open source projects, and I’ve even been a technical lead/founder for one. (Now retired; Android platform development is hell.)
Perhaps, next time, you should not make assumptions, such as assuming that no developers were willing to put time and effort into fixing the issue. Not one but TWO developers were willing to put in the time and effort.
The effort was shot down because “the module is too early in the pipeline”. Never mind that it took a single screenshot to show that a single mouse drag fixed that deficiency. The unfounded claim that “it was too early in the pipeline” kept being repeated, despite screenshots showing the module operating (and operating well) as the final step before the last colorspace conversion.
Here we are, around 3-4 years later, and Aurelien has finally figured out how to drag a module with the mouse, even after being shown a screenshot of it being done.
The patch was a WIP, and I was willing to put in the effort to find a way to fix that flaw, even if it meant doing what @jandren is basically being told to do by yourself in New Sigmoid Scene to Display mapping - #423 by paperdigits and putting two completely different codepaths into a single module. But it was made clear that any such effort would be wasted because “it was too early in the pipeline”, no matter how many screenshots were provided showing that the flaw was fixable by the user in a matter of seconds, and that the changeset was in fact being tested with the module moved to a “better” place.
Such attacks continued even after another developer split out the functionality into a completely separate module (and fixed a number of corner cases I had not yet fixed), and it was made clear that no matter what, any continued effort was pointless because the modules were “too early in the pipeline”.
I don’t think that this kind of tone leads to constructive discussion.
(I am assuming that you are aware that there is a lot of discussion and work that comes between someone proposing a feature with a screenshot and an actual implementation in a stable release, so I won’t expand on that here).
Yes, and such work was being done by myself and Edgardo. Then Aurelien called in his rabid attack dog, who came in and started making (clearly false) assumptions about how the thing was intended to work or actually worked, and outright ignored unambiguous proof that his primary problem with the module (too early in the pipeline) didn’t actually hold true, because modules could be moved.
(For reference, if someone had suggested moving the default position of basecurve later as part of the process, as opposed to s**ting all over it because it was currently too early, I would have been perfectly fine with that, because it clearly worked better.)
I tried as hard as I could to have a constructive discussion with Edgardo and Pascal, but Aurelien is incapable of such discussion.
I will just share one experience as a photography and imaging tutor. I was teaching a new class of students with varying abilities to use DT. We opened an image using base curve as the starting point and made simple adjustments. We took a snapshot. We then reopened that image with no base curve or filmic applied. I taught the students how to create their own curve, which they generally found better than the base curve. Finally, we opened the image in Filmic V3, and with just a few simple tweaks 100% of the students preferred the look created by Filmic. Since then, Filmic V4 and V5 have only made the job simpler. I can’t wait to try V6. I especially love the ability in V5 to adjust contrast without clipping the extreme highlights or shadows. I also liked the improved saturation controls that came with V4.
I am using Windows for my edits, as I have a SpyderX-calibrated 43-inch monitor attached to my laptop. I have a Linux desktop which I wish to make my editing computer, but I have so far failed to get the SpyderX to work on Linux, as I am a bit of a dummy when it comes to Linux. I am trying to walk away from Windows eventually, as I like the philosophy of the Linux world. Where will I find your V6 download? I am currently running a 3.9 install from a couple of weeks ago. Is V6 planned for inclusion in 3.9 anytime soon?
I am not sure… the last comments were a couple of weeks old.
For now you can try it. I think some changes have been made since I did this; my PC is dead, so I am not set up to build and can’t update beyond this until it arrives, but it should give you some idea. Just unzip it, even in the downloads directory, and run it using the little batch file to keep everything local to that directory.
darktable had colour problems that could not be solved in the framework it used because the problems were actually the framework itself. These problems were made obvious by modern cameras used in harsh conditions that challenged the pipeline more than before (aka pulling shadows like never before). Technical solutions have been proposed to fix these problems. Fixing a buggy framework meant changing the framework itself.
Many users never witnessed those problems. To them, darktable just got more complicated for no good reason and it’s almost impossible to reason with them and try to explain why things are better now. We just broke their toy.
What’s more important is that image processing has always been hard and difficult to understand. But the easy toys managed to keep all that hidden, meaning very few people got a chance to understand what really happened when pushing sliders.
The problem is that the new framework undoes some core assumptions of the previous framework, assumptions that were never explicitly stated: things like grey = 50%, white = 100%, ergo never clip highlights, always pull the middle of the tone curve.
Again, it’s almost impossible to try to explain how the assumptions have changed and what it actually changes in practice, since those assumptions were never clear or known before.
It’s easy to adapt to things you understand. But being unable to adapt to something hard to understand probably means you never really understood it before. You just had muscle memory.
Scene-referred is more simple. It removes intermediate layers that were broken. It requires fewer modules to achieve the same result. It better splits apart colour properties (hue, chroma, lightness, brightness, saturation) for independent control, which by the way was grounded in darktable’s design from day one (hence the use of CIE Lab).
People don’t want to understand that the laziness they could afford when shooting 8 EV pictures and inserting them into an 8 EV image pipeline is not affordable anymore with their 14 EV cameras.
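To put rough numbers on that argument, here is a toy sketch in Python. It is not darktable’s actual filmic curve; the 18% grey anchor and the white/black relative-exposure bounds are my own illustrative choices, chosen only to show why a naive display-referred mapping clips a wide-dynamic-range scene while a log-style compression does not:

```python
import math

# Display-referred assumption: grey = 50%, white = 100%, everything in [0, 1].
# Scene-referred: grey is anchored near 18% linear, white is unbounded.

def scene_value(stops_above_grey, grey=0.18):
    """Linear-light value of a scene element N stops above middle grey."""
    return grey * 2.0 ** stops_above_grey

def naive_display(v):
    """Display-referred habit: treat 1.0 as white and clip everything above."""
    return min(v, 1.0)

def log_compress(v, grey=0.18, white_ev=6.0, black_ev=-8.0):
    """Toy filmic-style mapping: express v in EV relative to grey, then
    rescale the [black_ev, white_ev] range (14 stops here) into [0, 1]."""
    ev = math.log2(max(v, 1e-9) / grey)
    return min(max((ev - black_ev) / (white_ev - black_ev), 0.0), 1.0)

# A bright sky 4 stops above grey is far beyond display white:
sky = scene_value(4)              # 0.18 * 16 = 2.88 in linear light
print(naive_display(sky))         # 1.0 -> clipped, highlight detail destroyed
print(round(log_compress(sky), 3))  # 0.857 -> still below white, detail kept
```

The point of the sketch: an 8 EV scene happened to survive the clip-at-1.0 habit, but a 14 EV scene cannot, so some explicit compression step has to exist somewhere.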
People don’t want to understand how colour works, past the HSL sliders. People don’t want to understand that having used GUIs for decades, they still don’t get what they are doing. And now they are presented the bill, and they don’t like it.
My advice is: try watercolour. Only then will you find out for yourself that more simple does not mean more easy.
One thing I truly appreciate about the scene-referred changes is that they prompted me to learn. Through these changes, their surrounding discussions, and not least of all Aurelien’s many explanations, I got a chance to think through so many topics I didn’t know were important. Now I understand so much more about image processing.
And that helps me not just in darktable, but in any tool. For that I am even more grateful than for the changes themselves, even though they are awesome, too. Thank you!