As far as I can tell, darktable still doesn’t have any AI tools. Commercial providers of image-editing software offer loads of tools that do things in the background and are marketed as AI.
Call me old-fashioned, but I like it the way it is: without AI. Users have full control over what they are doing and remain in charge of their photos.
Is it just me? What do you think? Would anyone open the door to AI for better image quality if the opportunity arose?
I’d be happy with AI masking, possibly AI object removal. Nothing more generative than the latter.
As long as it’s module-based, optional (driven by the user), and provides something of additional value, I would have no problem. Some AI will be used to enhance simple processing like noise reduction or sharpening, and if that could be improved then I think it’s a positive. Other areas that could be targeted are masks, and then you get to the generative stuff; maybe that’s more contentious…
For processing, I don’t think it’s a matter of if but when…
I have been playing on my phone with some quick shots. I have a Pixel, and I can take a shot of my garden or anything, and simple prompts like “change the color” or “remove this or that” work surprisingly well. They are not high-quality raw edits, but things are heading in a new direction. The few attempts on some family photos weren’t bad either; faces remained quite faithful. But my son has asked me not to do any of that editing with photos of his kids, and there are several privacy settings now that relate to this. At the final stage Google says something like “no humans will ever see or access the images”, but the servers will. So I guess with all these new tools come a variety of questions: what is done with the data used to create them when it’s your personal photos and you are just removing a tree or something, and images of the people in your life, as well as questions around the final products generated…
I guess time will tell…
AI masking, denoising and 2x upscaling would be really nice.
Removing things (like content-aware fill, resynthesizer) would also be helpful. This does not necessarily need AI; it could be an extra functionality of the retouch module.
Changing things in the image with generative prompts would be too much in my opinion.
The issue here is to keep darktable a non-destructive tool and to ensure that today’s edit will be identical in, say, 10 years. With AI that’s practically impossible. So yes, using AI to get a mask would be OK. If you need more, then darktable is not your tool.
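For what it’s worth, the reproducibility problem isn’t entirely unsolvable in principle. A sketch of one possible approach (purely hypothetical, nothing darktable actually does): pin the exact model weights an edit was made with by recording their checksum in the edit history, so a later re-run can detect that the weights changed rather than silently produce a different result. The `ai_mask` module name and record layout below are made up for illustration.

```python
import hashlib

def fingerprint_model(weights: bytes) -> str:
    """Return a SHA-256 hex digest identifying an exact set of model weights."""
    return hashlib.sha256(weights).hexdigest()

# A hypothetical XMP-style edit record that pins the mask model used.
weights_v1 = b"...binary weights of hypothetical mask model v1..."
edit_record = {
    "module": "ai_mask",
    "model_sha256": fingerprint_model(weights_v1),
}

def can_reproduce(edit: dict, available_weights: bytes) -> bool:
    """An AI edit is only bit-identical with the exact weights it was made with."""
    return edit["model_sha256"] == fingerprint_model(available_weights)

print(can_reproduce(edit_record, weights_v1))                 # True
print(can_reproduce(edit_record, b"different weights, v2"))   # False
```

Of course this only tells you the edit *can’t* be reproduced; you’d still have to keep every old model around forever, which is exactly the practical problem Pascal points at.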
I agree with Pascal. I believe AI masking will be a must have in the future. Besides that there are better tools for other tasks such as denoise, upscaling, object removal, etc.
I enjoy the challenge of editing. I believe AI is the future when it comes to things like medicine and vaccines, X-ray scanning, etc., but I don’t want any of it in my life when it comes to art: photography, music, literature, etc. I want that to be a human endeavour. We sleepwalk headlong into the brave new world, of course.
I very, very much like the fact that darktable does not have any “AI” features, and I would like it to stay that way.
I’m not sure why a new feature is seen as a negative, as long as it is FOSS, maintained, bug-free, and not a duplicate of an existing one. Like many features, it will be my choice whether I want to use it or not. But why should we prohibit others from using it?
Exactly!
Like others have said, I don’t see an issue with AI for “utility” sorts of work like masking, denoise, and maybe even sharpening and white balance, as long as I remain in control of the edit. I’m less enthused about using AI for creative decisions and am pretty allergic to generative fill, which from my limited experience has been pretty awful in quality.
My other concern is projects becoming too focused on AI development. Some commercial companies have put all their eggs in the AI basket and aren’t improving their core capabilities or even fixing issues, because they have to devote their limited resources to competing with Adobe and some others. Maybe competition isn’t an issue for DT or RT, but I appreciate that the priority is on getting things right ahead of introducing new virtual gadgets.
It sort of depends how you define AI these days, as it’s become a pretty broad word.
Personally I have no interest or use for things like generative fill, content-aware removal, expanding photos, and similar features. These seem to me like they go against the spirit of darktable though that’s not a stance I can defend beyond assumptions. They are non-deterministic and unrepeatable. Everything else in darktable can be explained by math and physics and inputs and outputs.
Subject/sky/other detection masks and denoising I don’t really have much use for, but I don’t oppose them either, provided they aren’t generative AI features but rather machine-learning-type algorithms.
My background is that I started in photography about half a century ago in the darkroom dodging and burning and using various grades of paper etc to achieve a result that was my own. What I love about DT is that I am still the artist creating the final look and not just depending upon AI to generate that look. However, that being said it would be lovely to just instruct DT to select the sky for instance and if AI can achieve that then so be it.
I also don’t use LUTs to achieve a certain look in my photos. That just feels like I am using someone else’s intelligence or artistic skills and I have compromised my creative process.
@Pascal_Obry sums up well some of the reasons that AI may be unsuitable to DT. But not wanting to hijack this thread I don’t need DT to be able to recreate my edits identically in ten years. I have saved that edit as a Tiff file and expect that the tools in DT in ten years will be capable of a better edit than what I can do in 2025. I also hope that I will be a more competent user. So I am happy to start again with a clean slate.
I would certainly welcome AI tools for the non-artistic and repetitive tasks. The retouch module comes to mind immediately.
I agree with @g-man: it has to be FOSS. In addition, for me, it must not be running on the compute resources of the Big Five.
AI masking is already available in kdenlive. I had a real-world use case from work where I needed to replace the background of a presenter (since she forgot to do so while recording in Zoom). It works really well and is straightforward.
Have also been using the nind-denoise workflow for a while; ISO 6400 and below look like ISO 100, and ISO 12800 might have some artifacts but is still clean.
I’d not want it to run on a centralised server at all. You cannot be sure what will be done with any information you send to such a server… (And any promises aren’t worth the paper they aren’t printed on; cf. Google, whose motto used to be “don’t be evil”…)
But even when running AI-based methods locally, the required datasets are large… digiKam uses some of that technology, and they distribute the models as separate downloads due to their size… (a few hundred MB each, and each AI-based tool needs at least one…)
This would have to be opt-in. Either way, storage is cheap nowadays; a few MB for subject detection and auto-masking are “nothing” compared to everything else we do, especially as photographers with large amounts of raw files.
It’s quite a nonsensical discussion. There was a time when it seemed beyond discussion that there was room for an alternative tone mapper to filmic; now we have 4 tone mappers in the next release.
If someone comes around with a convincing AI solution for a use case that fits into darktable’s design philosophy, then it’s the right time to discuss. And if it’s convincing, then it will make its way into darktable or a fork.
A fundamentalist opinion on what’s in or out is the end of FOSS, because it limits the F: freedom.
So stay calm and wait for what might happen in the next few years.
Not a DT user, but I was wondering similar things for RT.
Yep. For the sake of clarity, I think it’s extremely important to differentiate:
- Non-generative AI (basically fancy maths).
- Gen-AI trained on stuff that it was allowed to train on.
- Gen-AI trained in yolo-ninja mode by plagiarizing other people’s work without being authorized to do so.
(Edit: You could slip an extra category in there, like “trained on stuff siphoned from people who did not really agree but are using a service that includes this in their conditions”. I think I read something about Adobe now having “the right” to feed their AI with everything people create with their apps.)
In a way, I suspect many raw processors’ features already more or less belong to the first category. I would be OK-ish with the second one if, like others pointed out, it’s opt-in and remains based on choice. As for the last category, it goes without saying that it’s evil in terms of artistry.
I’m honestly not an expert at all regarding how the tools work. I suspect AI denoising is like “Oh, this roughly looks like some pictures I was trained on, so let’s assume this represents the same object and re-draw it like the ones I know, overriding the noise” and can fall in either the second or third category.
(That would bother me a little, because this would basically replace reality with something borrowed from elsewhere.) But I may be extremely wrong.
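That distinction can be made concrete: a classical (“fancy maths”) denoiser is a fixed function of the input pixels, so the same input always produces the same output, today or in ten years. Here is a minimal sketch using a plain box blur in NumPy, standing in for darktable’s far more sophisticated algorithms; it only averages the pixels that are already there, borrowing nothing from training data.

```python
import numpy as np

def box_denoise(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Classical denoising: average each pixel with its neighbours.
    A fixed formula of the input pixels, hence fully deterministic."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A small synthetic noisy "image" (seeded RNG just to build test data).
rng = np.random.default_rng(42)
noisy = np.clip(rng.normal(0.5, 0.1, (8, 8)), 0.0, 1.0)

# Same input, same output, every single run.
a = box_denoise(noisy)
b = box_denoise(noisy)
print(np.array_equal(a, b))  # True
```

A learned denoiser replaces that fixed formula with a trained model, which is where the “re-draw it like the pictures I know” suspicion (and the repeatability question from earlier in the thread) comes in.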
We’ve been struggling with this at my work, indeed. Some clients take so long to validate and install new versions of our product that, in the meantime, some LLM models are already nearing their end of life, and despite this, clients would like results to be reproducible years later. We have to explain to them that it’s more or less impossible and that there isn’t much we can do about it at the moment.
Yeah, regardless of the topic.
- Visual Studio Code’s release notes these days are 99 % about agents and chat-based features. No section whatsoever was of any interest to me in the latest one.
- The last time I tried opening an old Google Slides presentation of mine for work, I got a huge popup advertising Gemini’s new image-generation features and trying to convince me to fill my slideshows with random images that would just divert the audience’s attention and lower their focus regarding what I actually have to say.
(And there’s a “TRY NANO BANANA, PLEASE PLEASE” banner on Gemini’s chat UI that has been begging for my attention for weeks and just won’t go away, even if I do try it out in a desperate attempt to make it shut up.)
- Atlassian is underlining acronyms with a huge rainbow-colored swath to show that it can offer definitions for them. While it can come in handy from time to time, why the hell make it so eye-catching? And I don’t need to be reminded a hundred times each day that “PR” stands for “pull request”…