I thought the following: would it be feasible to implement a “Lua iop module” that executes a Lua script at its position in the pipe? The Lua script would be called with the image and mask data at its position in the pipe and would return image data; optionally, it could also emit a raster mask. The module would not show up by default and would only be available if some option in the preferences is set, to prevent regular users from relying on it. The purpose would be prototyping of iops rather than production use, but it could also cover use cases needed by only a very small number of users. In particular, exporting the current pipe state as an image and reimporting an externally processed file would be easy that way, even at different positions in the pipe with two of these modules and respective scripts. An option to exclude the module from thumbnail rendering would probably be needed.
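To make the idea concrete, here is a minimal sketch of what such a script could look like. This is purely hypothetical: no such hook exists in darktable today, and the whole interface (a `process` function receiving an image table plus an optional raster mask and returning image data) is my assumption, not an actual API.

```lua
-- Purely hypothetical sketch: no such hook exists in darktable today.
-- Assumed interface: the module calls a user-supplied `process`
-- function with an image table {width, height, data} (data being a
-- flat 1-based array of RGBA floats) and an optional raster mask
-- (flat array of floats in [0,1]), and expects image data of the
-- same shape back.

local function process(img, mask)
  local out = { width = img.width, height = img.height, data = {} }
  local gain = 1.5  -- illustrative exposure lift of about +0.58 EV
  for i = 1, img.width * img.height do
    local base = (i - 1) * 4
    local m = mask and mask[i] or 1.0  -- blend factor from the mask
    for c = 1, 3 do
      local v = img.data[base + c]
      -- lift exposure, attenuated by the mask where one is present
      out.data[base + c] = v * (1.0 + (gain - 1.0) * m)
    end
    out.data[base + 4] = img.data[base + 4]  -- copy alpha unchanged
  end
  -- a second return value could carry an emitted raster mask,
  -- e.g. reusing the input mask or computing a derived one
  return out
end

return process
```

With an interface like this, the export/reimport use case would simply be a script that writes `img` to disk, or one that reads a previously processed file back into `out`.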
I know the ROI concept is something to deal with here, and I lack the knowledge of which options are possible; maybe the recent work on parallel pipes eases this topic, but that’s for a later discussion anyway.
For me, for example, it is feasible to write a bit of Lua code if time permits (for the last nine years, at least one of my children was not autonomous enough to leave me much time for such things, but these days I am regaining some free time), but my C knowledge is basically nonexistent, which makes it hard to prototype anything (I tried but did not succeed). Such a module could help people try things out with a much gentler learning curve, and even the solutions that never make it into C code may be helpful to some people.
What do you think: is this worth starting a feature request, or is it perhaps even possible already?
For me it’s the exact opposite. If I pay for software such as Lightroom, I still have almost no chance of getting a feature request implemented unless thousands of people are crying for it; there is no direct interaction with the devs. With f/l/oss I always have two options: I can implement it myself (or directly pay somebody to do it), or I can start a feature request. In that regard, the darktable devs have been very generous; many of my feature requests made it into actual features, many of them back in the Redmine era. That is one particular reason I use f/l/oss software: the possibility to contribute not only money but also ideas, and the chance that they become reality. Lightroom would never have got a vertical waveform just because I had a need for it, to give one example.
That’s IMO not true. First, when I shoot raw only, there is no JPEG. And even when there is one, the editing capabilities of the camera are very limited. When I shoot a soccer game, the light conditions do not change dramatically, and I can come up with a reasonable color edit in a couple of minutes (I even have a preset for soccer photos as a starting point). This is then copied to all images of the match that I want to edit (culling comes first), typically more than 50 pictures, as every player should appear in at least a couple of photos. The per-picture edit is then only cropping (sometimes very heavy cropping, as 200 mm is my maximum focal length) and straightening (the latter in particular is not possible with the JPEG without massive quality loss), maybe masking out some distraction in the background to make it a bit darker (e.g. a sun reflection; typically only a few images per session require masking, though I sometimes also use it for vignetting), plus some exposure, contrast, and vibrance correction due to changing light or shooting direction (sun position relative to the shooting direction). None of this takes much time, and I am thankful to have such a great tool for it. Using the camera JPEG would be a no-go for me. I do this in my spare time to give something back to the team, as my son plays in it, and so far people like the results. But I typically want the images out on Sunday when the match was on Saturday, so I cannot spend hours on a single photograph; getting one hour to edit the whole set of pictures is already a luxury.
As I said, many of my feature requests made it into code and …
I am very grateful for this, and I hope that is clear. However, from your posts I get the impression that you consider feature requests a bad thing in general. I don’t think that is what you intend, but it is what I read on my side. I think I understand the attitude you are arguing against, but not all of us are native speakers, and sometimes things sound harsher than intended, in feature requests as well.
Question to the experts: here, it is stated that a good mask alone is not enough, and that foreground estimation is mandatory for a reasonable matting result. How does this relate to the current GIMP implementation of foreground selection?
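For context, my understanding of the underlying argument (this is standard alpha matting theory, not something from the linked post specifically): each observed pixel $I$ is modeled as a blend of unknown foreground and background colors,

$$I = \alpha F + (1 - \alpha) B,$$

so even a perfect mask, i.e. the per-pixel $\alpha$, does not tell you the foreground color $F$ at mixed pixels along edges. Compositing onto a new background needs $F$ as well, which would explain why foreground estimation is called mandatory.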