Worried about LLM-written modules

Note that quite a bit is abstracted away by the various helpers, such as dt_bauhaus_*_from_params, dt_gui_vbox() / dt_gui_hbox(), dt_gui_box_add etc. But signals, thread-safe communication, and drawing of curves are still there all over the place, yes.

4 Likes

I think it won't be possible to port darktable to gtk4 while preserving backwards compatibility. It will be necessary to throw out some modules.

I’m noodling around with a node editor organization for a CAD-oriented tool. It would include image tools, mainly OpenCV functions to start, but the organizational concept extends well beyond that domain.

Blender is the existing FOSS product that has gone the furthest in implementing such an environment. Worth a study…

1 Like

Like what, for example? Even if the UI of some modules has to change due to UI framework limitations, I don’t see why compatibility needs to be broken as long as the same parameters exist. At the end of the day we are talking about a UI framework; it’s not anything out of this world to find solutions if problems arise.

3 Likes

Which modules will or should be removed: that will be the hardest nut to crack.
It was just explained by others here how much work it is to port the whole thing to gtk4. Some even think it's impossible.

Well, it's easy enough to just throw out modules (I did it in my fork a few years ago), so I imagine in the first versions of the gtk4 port only a few important modules will be present, and their number will increase with time.

I don’t see why Gtk4 should push us to remove modules. Please stop spreading FUD; things must be done based on facts!

7 Likes

Even if a module were too hard to convert (and if Pascal says that’s not a problem, I believe him), modules could be reduced to just an on/off switch, preserving old edits but without the ability to adjust them.

The thing is, the statement that the migration may not be feasible for some modules is just nonsensical. The toolkit is just one more abstraction layer built on top of lower-level libraries (all the way down to OS primitives). Any hypothetical UI functionality that could not be reimplemented straightforwardly in GTK4 could still be implemented by going one layer down.

4 Likes

Yes, agreed. I had assumed the statement was more expressing that it would be too much work for too little value rather than impossible.

2 Likes

Not being a dev, and not sure this has been mentioned before (TL;DR), wouldn’t a logical step be to create a plugin section where users could download, at their own risk, modules created by the community? The understanding would be that Darktable takes no responsibility for such modules, but they are there for those who want to try them.

Maintainers would then only be responsible for the core, as they are now for the dev version, not for contributors’ module plugins.

Could that work?

It’s not so simple: each module has a place in the pipeline, and is part of a module group. Currently, both of these are configured in the core of darktable, hard-coded. One could flip that around, of course.

Then, there can be interactions between some modules, but that’s extremely rare (e.g. white balance + highlight recovery + input colour profile + color calibration, exposure and AgX, or how the input and output color profiles set the colour spaces that other modules may use).

1 Like

I would say for a plugin it would be a very acceptable, maybe even a desired, limitation that a plugin cannot interact with other modules.

2 Likes

I think this article raises some interesting points.

Elsewhere, you will find me advocating philosophy for children, with particular emphasis on critical thinking and ethics.

1 Like

I can’t imagine that. I wrote a lot of C code in the 2000–2010 time frame for a few different platforms. I’m so thoroughly out of practice now I wouldn’t submit anything out of shame. I can’t imagine LLM-ing up something and expecting everyone to be OK with it.

1 Like

In my area of work we are moving away from Zoom interviews because of this; particularly with younger people it’s become a problem. They’re just speech-to-texting the questions and reading what the LLM is telling them off a second screen. It’s happened a few times now. Almost have to give them credit for being that clever. Almost.

6 Likes

How were they getting away with that out of curiosity? There’s a circa 15s delay whilst the LLM thinks, so surely it’s obvious from the video there’s something weird going on?!

I’m unsure of the specifics of their setup, but I think the paid tiers respond faster. It’s pretty easy to cover up with filler words. I heard about similar incidents from a colleague at another university too. Honestly, these are junior roles. It seems like it would be easier to just know a few basic things than to rig up this interview-cheating system.

I’m not a heavy user and only really have much experience with the low-end Gemini tier through work, and it responds much, much faster than 15 seconds. They keep offering me Claude licenses but I keep declining. I just don’t like interacting with the things, but most of my coworkers are all in on them.

Edit: Honestly, if we’re all in on AI on the job anyway, this position makes no sense to me. If they get Claude and Gemini licenses provisioned, then who cares if they’re using one in the interview too? I dunno, I’m not in management, so whatever.

1 Like

All crazy! Makes me think of this track -

1 Like

I think a lot of people are missing the long-term implications of accepting more and more LLM-generated/assisted modules. At first, they may just be used as a tool to generate code faster. But here’s what will happen in the long term: the emphasis in darktable will slowly shift, with more and more modules being created faster.

And that’s not necessarily a good thing. Because it will be so easy to create new modules, less thought will necessarily be put into making them. Any sort of module can be created, and a pro-LLM stance will discourage more people from actually learning the math and color science.

On the other hand, if darktable had an anti-LLM policy, it would actually stand as a beacon for people and encourage them to spend more time learning, and maybe concluding that a new module is not actually needed. The modules that would be submitted would be ones where someone really thought it would be worth it to make such a module, and they would be better thought out as well.

The means used to create the code is an issue, because people who mostly create the code themselves will actually understand it, and they will understand it better, on average, than anyone who uses an LLM. Of course, there may be an exceptional individual or two who is already a good programmer and might understand code they used an LLM for, but it will increase the submission of modules by people who do not understand the code.

I disagree. As long as there are a few developers who care about darktable and an active community, it won’t stagnate. There’s no proof or even evidence that a lack of LLM use would diminish a sufficient number of developers. That’s just random guessing.

Yes, the development of new modules might be slowed, but a raw photo editing application doesn’t actually need to have modules added to it all the time. I appreciate the many modules in darktable, but that does not imply in any way that darktable needs many more. We might already be at the point where the number of modules is close to or at the ideal level, and they just need to be tweaked, improved, or simplified.

Because if the development of modules and darktable is entirely relegated to a bot, even IF the bot does as good a job as the devs do (which cannot be the case at present but hypothetically could be in the future), then no one will really understand darktable any more.

And what does that mean? The fact is, the deep understanding from the devs of the math and photography in the past has been transmitted into darktable through its design.

That overall sense of a cohesive program will surely diminish if whatever can be done is done. Because then the sense of adding to darktable to make it a better PHOTO EDITOR will be gone; instead, people who don’t have a lot of experience in photography will just start adding all sorts of modules. Note that this may not happen in the short term, because there are still many devs who are experienced in color science, but if darktable adopts a lenient LLM policy, such expertise may become de-emphasized in the future.

No, it’s not, actually. Imagine there was a new drug that allowed one to read other people’s thoughts at will. You could say the same thing, but it would be false. Why? Because we are not simply rational thinkers; we also have instincts and temptations that often override our rational thinking. The ability to read thoughts would be too great a temptation for people to use only for good, and it wouldn’t be up to the individual.

LLMs are similar; although we theoretically could decide how to use them, most of the time people use them to the extent that they become lazier. For example, it’s a well-known problem that many students in school these days overuse LLMs and AI for their homework, even if it puts them at a disadvantage because they learn less. Yes, a small number of students learn more because they are disciplined, but many do not and they suffer because of it.

We can rationally control a hammer, and most people don’t go around killing their neighbour with a hammer, even after an argument with their neighbour. But there’s a reason most places ban the sale of dynamite: a sufficient number of people don’t have the self-control to use it, and the damage it can do is too great.

That might be true now. But in the future, the overuse of LLMs for coding is very likely to reduce the number of people who understand the color science, because those who had the benefit of an education before LLMs will retire, etc. If we don’t institute restrictions now and instead just promote this software and accept it, the mass of developers will slowly shift toward those who simply want to make quick and easy modules.

Absolutely, it’s different. Again, it’s a matter of degree. Not sure why so many arguments revert to this sort of binary fallacy. If I paste one equation into another module, I have a very high likelihood of still being able to understand the overall idea. If I paste ten of them, maybe less so. If I paste a hundred lines of code from an LLM, still less.

One must be moderate in one’s use of external resources. It’s just the same when I write a program. I might get a snippet or two from the documentation, maybe 2-3 lines on how to read CSV files, but otherwise I write all the code myself. But if I get the LLM to write half the program for me, I have much less of a chance of understanding it.

So it goes with all life and learning and keeping a sufficient amount of the learning responsibility on oneself. What if I’m doing a math assignment and I ask one friend to help me with one question? Then it’s helpful. But if I ask that friend to help me with every question throughout my entire degree, I’ll never learn self-sufficiency. And if I just copy the hardest half of the homework from him, I’ll learn less. If I get an LLM to do everything, I’ll learn nothing.

Copying an equation is NOTHING like asking an LLM to assist with all the code.

Finally, I would like to address:

Of course we can. Have you heard of the Asahi Linux project? It’s a reverse-engineering project to get Linux to work on Apple Silicon processors, and it’s a VERY difficult project because so much has to be reverse engineered. It’s a very talented group of programmers, and here is their LLM policy:

If you stand up for a hard anti-LLM stance, you won’t lose developers. You might actually gain more developers who are serious about learning, rather than the kind who just want something done in darktable but might not even know that such a thing can already be done.

3 Likes

Yes, I have heard of the project; actually, I follow it closely as a Mac user. I admire them for their work and hope they get HDR working, as that is holding me back from switching.

There are two nuances here:

  1. They talk about “largely created”, so according to their policy they allow LLMs for small things.

  2. My point actually still stands: even with a strict policy, how do you handle it? Yes, for some code it is obvious that an LLM was used. But what if it is not? How is this policy governed? Are the devs going to ban people when they think somebody is using an LLM?