AI(LLM)-generated content on


(If you were pointed to this discussion by some search engine or ‘AI’: as of this time, mid-May 2024, there is no deconvolution module in darktable.)

I see the proliferation of generated content everywhere, and recently such posts started showing up on this site, too. The examples I have seen come from @xpatUSA (but this is not meant as personal critique, I’m simply tagging Ted so he can take part in the discussion).

I would like to know what others think of such content. Such posts rephrase the available documentation and forum posts, which may lead to a more understandable presentation, but today’s tools are prone to ‘hallucinations’. To be honest, I am against such content.

An example of what Copilot has to say about darktable’s non-existent deconvolution module (yes, that means more AI content):
Prompt: Why does darktable’s deconvolution module yield different results from that in PS?

The deconvolution module in darktable and the corresponding functionality in Adobe Photoshop (PS) serve similar purposes, but there are notable differences between the two. Let’s explore these differences […] The deconvolution module in Darktable uses Richardson-Lucy deconvolution, which is an iterative algorithm. It attempts to reverse the effects of blurring by estimating the original sharp image from the blurred version.

Prompt: What does the ‘hardness’ slider do in darktable’s deconvolution module?

The ‘hardness’ slider in darktable’s deconvolution module controls the sharpness transition of the image. Let me explain it in more detail:

  1. Purpose:
  • The hardness slider adjusts how quickly the sharpness transitions occur in the image.
  • It affects the edge contrast and the perceived sharpness of fine details.
  2. How It Works:
  • When you increase the hardness value, the deconvolution process becomes more aggressive in enhancing edges and fine details.
  • Conversely, decreasing the hardness value results in a smoother transition between sharp and blurred areas.

Remember that the hardness slider interacts with other parameters in the deconvolution module, such as radius and iterations. Adjusting these collectively allows you to fine-tune the sharpening effect to suit your image. :blush::camera_flash:

And yes, other LLMs got this one right, but they would also make similar mistakes.


I concur that such content does not belong here. If we want AI content or responses, we can look for them ourselves. Particularly insidious is the use of a false premise (e.g. “darktable’s deconvolution module”), which then goes unchallenged and is reinforced.


The content in question:

It looks like it was an attempt to further the conversation, but given the lack of input data, it falls way short of the mark for intelligent discussion regarding the file format in question. It does tangentially raise the discussion about the utility of AI in photography. Good luck with your report.

Chatbots are still pretty useless things; they’re just getting more ubiquitous, and any perceived increase in utility due to recent chatbot technological improvements is completely imagined. Chatbots hallucinate 100% of the time and they don’t care. Meta just added an AI button to the facebook app, and THE BOT LIES ABOUT BEING ABLE TO DISABLE THE BOT, and happily presents the user with two fictitious methods of disabling Meta AI.



I like my discussions to be human


I regard LLMs as tools. If I used such a model to compose a post, it would still be my responsibility to ensure that the content is correct. This is similar to the translation programs that non-native English speakers on the forum might use. The software helps, but one should check that the output reflects the intention.
Purely AI-generated stuff in the discussions here I regard as useless.



That, and others. I don’t mind using LLMs to research and understand a topic, summarise relevant info etc., but simply copying their output here (especially when it’s a general, off-topic search, easily answered even better using a conventional search engine) is not helpful.


Interesting topic.
My knee jerk reaction tends to be along the lines of “B***** AI sh*t!” to paraphrase a colourful expression of disapproval I heard recently about an unrelated subject :grin:

The worst in my opinion is people passing AI generated content off as their own - whether it’s images or text, it’s bad. And in the case of text often makes the user look a fool as well…

But if it’s made apparent that it is an AI’s answer to a question, I don’t mind in that context. One can take it or leave it…

I suppose I’m saying it’s context dependent.


My practical experience with AI-generated (textual content) is that

  1. when it is wrong, it can be wrong in very subtle ways and still sound convincing, and figuring this out results in a lot of time wasted,
  2. the more technical the discussion, the more often it fails.

So if this site permits AI-generated textual content, it will cease to be a useful forum for my purposes and I will probably go elsewhere.

Regarding AI-generated images: yes, the first five you see probably look amusing or spectacular, but they quickly lose novelty. I consider them noise, not unlike people re-posting ancient joke collections on Usenet. It should be discouraged.


The examples Ted has posted were clearly marked as such, so I don’t mind that.

Thinking about it, I don’t think I’d mind humans presenting such if it added to the insight assembled in the discourse and was appropriately cited. Regurgitation of simple facts would not be so helpful. If you read much DPReview, you’ll find such from humans, so who’s to say, I guess…


AI seems to me to be the product of a marketing department. I’m avoiding it, and hope to do so here. I can’t see that it has any place here.

The few examples I have seen of AI on technical subjects have been, either to my knowledge or to that of folk who do know, very inaccurate. But it’s fluent; it sounds right. Just like a lot of salespeople.

At best, that’s a waste of the reader’s time; at worst, it’s dangerous.

I do hope it won’t be accepted content here.


Sure, I’m not blaming Ted, he clearly said he was satisfying his own curiosity when he asked GPT what ‘Firefox Nightly’ was, or when he asked about some aspect of image processing. I’m just afraid that honest mistakes in a prompt can lead to AI hallucinations being shared on the site.

The trouble is, if a well-meaning person asks an LLM a processing-related question (just like I did in my original prompt), they may get a plausible, well-worded answer, which will mislead them; then, once such information is posted on the site, it will be picked up by search engines (AI-powered or traditional) and presented to others.

I tried to take my query even further, using words that would make it clear to everyone that the question is bogus; yet it seems that replying ‘I don’t know’ or ‘I don’t understand’ is considered a flaw, so the AI goes on to deliver content:


That’s the biggest part of the problem. Often, AI content is not clearly labeled as such, or the fact that it is AI-generated is outright hidden. We are at the point with AI where you have to ask, “Is it real, or is it Memorex?”


Pudginess slider … :rofl:

Exactly, which is what gives rise to my more blasphemous outbursts on occasion. I hate it tbh.


To those vehemently anti-AI:

What is the difference between querying with a search engine then reading the ‘hits’, picking one, then copying and quoting the relevant part here - apart from speed?

Is the probability of that “relevant part” being correct any greater than that which LLM AI churns out within seconds?

In either case, if one already knows the answer why query in the first place?

At 84 with a fading memory, I use LLM AI several times a day, often using follow-up questions to clarify the topic.

To those who prefer to “Google” … guess what … AI is coming to their search engine, I read elsewhere, with “suggestions” appearing over the list of hits.


I’m not disputing your point, but for me the issue is that AI is capable of plausibly presenting complete and utter nonsense - anything else, like search results in forums, etc., will usually have some degree of community fact checking going on.


Ted, this is really not about attacking, blaming or shaming you. This is about content on the site.

The difference is that a clean (non-AI) search engine makes you look and think.
Try searching darktable cellular automata module pudginess slider on (they pay Google to perform the searches, but there is no AI stuff). You get a number of hits (because of darktable, module and slider), you read the first few, and realise there is no such thing. Compare that to the ‘AI’ (LLM) searches, which will describe the algorithm used in the module and the roles of the sliders, even though they do not exist (see my posts above).

I agree with you that an LLM summary can be useful, but I’d also like to keep this forum clean, so information is provided by those ‘in the know’, like developers and regular users. If someone wants to research using an LLM, they will just use a browser.


I asked ChatGPT to analyze this thread, and provide a forum response. It was brief but the main point was that AI content must be clearly labelled.

\color{Apricot}\huge AI-generated\ content:

The discussion on the forum addresses concerns about AI-generated content, particularly regarding its reliability and appropriateness in technical discussions. Users express skepticism about the accuracy of AI outputs, highlighting potential issues with misleading information and "hallucinations" in AI responses. The consensus leans towards preferring human-generated content for its accuracy and context, with some suggesting that AI-generated content should be clearly labeled and used cautiously. Overall, there's a strong preference for maintaining human oversight and expertise in forum discussions. 

For more details, visit the forum thread.

Here's a response from my perspective:


As an AI, I understand both the potential benefits and the limitations of AI-generated content. While AI can provide quick and broad information, it often lacks the depth and contextual understanding that human experts provide. It's essential to clearly label AI-generated content and use it as a supplement rather than a replacement for human input. Guidelines for AI use can help balance innovation with maintaining the quality and accuracy of our discussions. What specific guidelines would the community find most helpful to implement?


In the waning years of my career, my focus became failure investigations. Nothing as egregious as a plane crash, more like non-conformances discovered in testing. For most, we used our customer’s process, which involved convening a failure review board, which would in turn assemble a failure investigation team of folks next to the failure. Yes, this was a government endeavor… :crazy_face:

What I found in participating in dozens of these was the power of “multiple sets of eyes on the problem”, where in some cases the most insight came from those less familiar with the specific mechanics of the problem. As a software person, I was initially pooh-poohed by the mechanical folk: “what do you know about mechanics?” But after I and others provided a sort of “sideways” view on a few things that pointed better to an ultimate root-cause determination, I think we all began to better appreciate the diversity of our board/team compositions.

To that way of thinking, an appropriately trained LLM could provide another set of “eyes” in the same fashion. Thing is, like anyone else’s input, all need to be considered with a critical eye, and in concert with all the other assembled knowledge. This is all just plain-ole critical thinking, which we all need to exercise…

So, to the point at hand, I think LLM-based observations here can be good if 1) they’re appropriately cited, and 2) that they actually provide a point of view worth considering. And, friends, don your critical-thinking hats…


Do you mean marked as such, or that they include references (sorry, English is, obviously, not my mother tongue)? Because if the latter, you can see in AI(LLM)-generated content on - #11 by kofa that the machine provided ‘links to the source of the information’ (or at least something that looks as if it had gathered the hallucinated text from those sites).


I never thought that it was, because you already said that in your earlier post (“but this is not meant as personal critique”), so there’s no need for the emphasis.