Should darktable remain AI-free?

Same in Germany (and I assume for Europe in general). Do something AI-based, and off go the rights :grin: This is linked to the point that copyright is based on the human creation of a work. As this is not given for AI-generated content (images, text), no copyright can be claimed for such results…

While typing this, it gets quite interesting: if an OOC JPEG results e.g. from a phone applying AI to remove or add elements, is that still an image that can be copyrighted? :thinking:

1 Like

A prompt based AI generation can’t be copyrighted in many places, but there have been examples of work with an AI component being copyrighted. It’s the human involvement that earns copyright and even if you run a generative AI over your image, it will still be a derivative work, so the original image doesn’t lose protection.

Does this mean that AI-written dissertations are plagiarism? I fear for modern students.

2 Likes

For those interested, I made a Lua plugin for AI masking in Darktable.
It uses the SAM2 model from Meta AI, but it remains FOSS and runs locally on your hardware. It isn’t perfect because the external raster mask module is limited at the moment, though that can be improved in the future.
You can find details there if you want: GitHub - AyedaOk/sam2-tools: SAM2-Tools is a Python application that offers both command-line utilities and a lightweight GUI for running Meta AI’s Segment Anything 2 (SAM2) model..
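For anyone curious what the plumbing for such a plugin involves, here is a minimal, hypothetical sketch (not the plugin's actual code) of one step a tool like this has to perform: turning a binary segmentation mask, such as the ones SAM2 produces, into a grayscale raster image that an external raster mask module could read. The `mask_to_pgm` helper and the choice of the PGM format are my assumptions for illustration only.

```python
# Hypothetical sketch: exporting a binary segmentation mask (e.g. from SAM2)
# as an 8-bit grayscale PGM file. The function name and file format are
# assumptions for illustration, not the plugin's real API.

def mask_to_pgm(mask, path):
    """Write a 2-D list of 0/1 values as a binary (P5) PGM file."""
    height = len(mask)
    width = len(mask[0])
    with open(path, "wb") as f:
        # PGM header: magic number, width, height, maximum gray value
        f.write(f"P5 {width} {height} 255\n".encode("ascii"))
        # Selected pixels become white (255), everything else black (0)
        f.write(bytes(255 if v else 0 for row in mask for v in row))

# Example: a 4x4 mask with the centre four pixels selected
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
mask_to_pgm(mask, "mask.pgm")
```

The idea is that darktable (or any editor that can import raster masks) only needs a plain grayscale image, so the AI model can stay in a fully external process.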

To answer the question of whether Darktable should remain AI‑free, I think using a plugin to integrate AI is a good trade‑off. Those who don’t want AI don’t have it out of the box, while those who do can add it via plugins. The drawback of plugins is that they can be hard to install for some.

Another option would be to follow the same approach as Kdenlive. Out of the box, the software isn’t bloated with AI, but there are settings menus that let you install speech‑to‑text or object‑detection models easily.

Personally, I’m fine using AI for masking, tagging, and similar tasks. However, I wouldn’t use it to “enhance” a picture. I have no issues with people who do that, but I wouldn’t call it photography. For that reason, I would not like gen AI tools integrated into Darktable, since it is a photo editor and not a digital art program. The only exception I wouldn’t mind is AI retouch, which would perform the same function as clone and heal. In any case, we are far from achieving that with Darktable, as I believe there are no ways to add layer masks.

6 Likes

In German legal practice there's a principle: "it depends…"
So even if you use AI to generate content, it doesn't automatically follow that the content isn't protected.
E.g. a dissertation about the usage of AI in requirements engineering isn't without any rights just because there are AI-generated samples in it.
It's the whole context that matters…

I like the term. Magic is now demanded almost everywhere. For me, using AI in photography is like participating in a marathon and using a car. Because it’s the fastest and most convenient way to get there.

I think that in the end, it will come down to each individual photographer’s philosophical approach. But I am afraid that sooner or later, the best images with the most magic will win anyway. The audience always wants something new and new stimuli.
What have I started with this post?

1 Like

@MStraeten Yes, as so often, "it depends", and it is important "how much AI" is inside. I agree that your example of the AI-related dissertation will be protected anyway, though presumably not the AI-generated samples themselves.
@bastibe I was more on the "fully generated" track in my post. I.e., from my understanding you cannot claim copyright on an AI-generated image or an AI-generated text. And if a student writes a whole text with AI, well, I would assume that their university prohibits this (even if they won't be able to validate whether something has been completely written using an AI).

Technically it wasn’t “enhanced”. It was generated from scratch, using your original photo as inspiration.

I look forward to some AI tools in DT, such as masking and denoising. But this? This is not needed in a tool that is intended to develop actual photos.

Just my five cents to this long discussion.

1 Like

And over time the audience gets younger and has grown up around this sort of thing in movies and culture, so it could also become a norm for them, and thus more accepted and demanded… but maybe, in the way they go and find vinyl and think it’s cool, or old point-and-shoot cameras, or old sneakers, etc., as long as some nostalgia lives on in the process… :slight_smile:

Please see No more AI answers, please

I will acknowledge the irony that the AI says

And

Yes.

2 Likes

In practice, if a lot of people do something, usually there are no consequences, so I would not fear this.

I fear for modern students who believe that they no longer have to develop certain skills and can just use AI to do it for them. Despite the hype tsunami, corporations have not generally embraced AI for productive tasks yet; experiments do not show a large productivity gain, and there is concern about “workslop”.

2 Likes

ok, just because a whole bunch of tools exist, does that mean anyone can repair their car? Competence does not mean just doing what tools can do, but using those tools in an appropriate way.

Students do not need to learn what AI can do in a more efficient way, but they do need to learn how to use AI’s capabilities… and recognise hallucinations :wink:

2 Likes

I wholeheartedly agree. Working in higher education I realize that offering students tools makes them lazy. Learning only happens with challenge. I wonder how much damage AI will do in the next few years. And of course this is happening with other tools as well, but AI seems to accelerate the process.

1 Like

Not just students. I’ve seen it in professional programmers as well. The problem is, AI is a better coder than a junior developer. But add some experience and domain knowledge, and that’s no longer the case. But they need to put in the effort to acquire that experience and knowledge.

I’ve seen that happen first hand. It was horrifying. A promising junior dev got off to a good start. Then they discovered AI and it all fell to pieces. Of course they still produced code, but it was so sloppy, so verbose, so far from understanding the problem. A nightmare to review, too.

2 Likes

Yep, both domain knowledge and finding/following norms seem to be where AI struggles most, and where juniors struggle most as well. Sometimes Claude Code will follow things pretty well, even surprisingly so, but sometimes it doesn’t and you need to correct it. A junior dev would also miss those cues and let bad code slip through to review.

At the end of the day, prompting is also a big part of the problem, and only someone with knowledge of the code base or of the problem will be able to do it properly. I can tell it: “Implement this just like it was done in X, put these constants in Y, and while you are at it, make this function generic, since it can be used in both places and doesn’t need to be repeated”. These are things I know because I gained that knowledge before the tools came along (and I mostly built the project where I’m using them). How will a junior ever acquire this knowledge or intuition if they rely on AI so much?

I guess companies will need to be ready to ‘lose money’ with junior devs, and sort of make them do the hard work so they get to a place where they can use AI tools effectively.

Context is a big limiter on AI performance at the moment: models seemingly get worse the larger the context, and they also can’t hold enough of the important details in it. It’s also ridiculously expensive to provide large contexts to users, and this doesn’t seem likely to change in the near future.

1 Like

One of the sites I follow is Peter Woit’s Not Even Wrong

There is a recent article on “Theoretical Physics Slop”. A couple of the comments (by Kevin Driscoll and Peter Shor) are insightful, in that they indicate that the LLMs are fine when solutions are well known and documented, but fail and hallucinate when solutions are either not known, or there is little information available.

In other words, the LLMs are not able to be ampliative in the solutions they produce.

2 Likes

I was thinking about the argument that photography has included retouching etc. from the start. I’m not sure I agree. There’s photography, and there’s retouching. We loosely call the final result a photograph, but most people would understand it as a retouched photograph.

It’s easy to see simple global adjustments as equivalent to processing a film photo. Parametric masks are similar to dodging and burning, but those are manual processes and a more specific intervention. By the time you’re doing object removal, sky replacement, or airbrushing, it’s image making using a photograph as an input, not photography.

I have been looking at some of the books recommended on the Christmas Book List, and some other recommended books on subjects that I am interested in.

I tend to look the book up on Amazon and read the reviews, then buy elsewhere if these look favourable.

I have come to the conclusion that this is now pointless, since so many of the reviews are obviously AI written. This article on spotting fake reviews is worth a read.

The challenge now is to find review sites that aren’t full of AI slop.

1 Like

I mostly gave that up. I ask people and communities I trust, e.g. this forum.

1 Like

Coming from all angles now… :slight_smile:
https://www.dpreview.com/news/2977630672/adobes-flagship-software-is-now-available-in-chatgpt-s-conversational-interface