AI Model Integration Policy

That may be, but IMHO that does not make it correct. Humanization of AI does more harm than good, and is more of a marketing spin than a technically founded claim.


To add, the terms "ringing", "ghosting" or "maze artifacts" all characterize the output of an algorithm descriptively. They do not imply anything about the underlying mechanism, unlike "hallucinating".

The term originates in the generative text community, to describe models making up information that is not there. The hallucination is obviously metaphorical; no one in their right mind would claim that the model is actually seeing things. It was then adopted by other sub-communities, even though the metaphor may make less sense in other contexts. That's just how languages work: meaning shifts, and the same word may mean different things in different contexts. Almost every word we use meant something different at some point in time; etymology is always fun.

In the case of noise reduction, it's more a misunderstanding than a hallucination (always in metaphor space, that is). The model tries to reconstruct what it "sees". When there is a lot of noise, the information is not sufficient and the model may make wrong "assumptions" about the contents of the frame.


I am picky about it because, in the area I work in - autonomous driving / safety-critical systems - the term contributes to over-reliance by the general public and many companies on those systems, as if some human-like thinking entity lived inside their AI components.

I do know that the word originated there, but pop-science picks it up, mangles it, and makes it more than it is.

And in the group of 'computer vision' algorithms, 'seeing' is also a bit weird.

It is your right to be picky about that. But that semantic ship has sailed.

I was not on board that ship :wink:

But I think it is important that those of us who understand the technology a bit better tell the people around us that computers don't 'think', and explain that what is going on is mere data processing and nothing else.

We already see a number of people getting psychologically attached to their chatbots, and that is what I find really disturbing. So I hope I can help people better understand what is going on and prevent that sort of thing.


Not everybody is on board that ship, and appealing to the majority of people who use a misleading term does not make the argument correct.

Here is a differing opinion from a recognized expert on safety-critical systems:


As in Harry Frankfurt’s little volume On Bullshit

I have often thought that Trump, and our former PM Boris Johnson, weren’t actually liars, but as you say, indifferent to the truth.

I try hard to avoid humanising language where possible, but not to the point where I'll tie my language in knots. Please do point out other word choices that would have worked; please don't assume people have no clue.

On hallucinate: confabulate is a better drop-in replacement.


Subjective is perhaps a poor choice of words, hence why I added the clarification. Do you have a better word? I'll use biased for now.

Biased models are a worry because the biases tend not to be just random failure cases. They tend to push outputs towards an average of the training data. With more powerful models this is more obvious, with AI same-face and the like. Even with less powerful models, I'm worried about outputs becoming more generic.

Maybe we should end this here, because in my opinion this debate isn't going anywhere. darktable has only just introduced AI, and at this point it feels disrespectful to spend so much time on trivial corrections instead of the countless tools darktable already offers.

I am sure it is important for each individual to point out what matters to them, and a 100-page description might seem more suitable for such topics, but you must understand that 100 pages also add 100 pages of possible debates. The more you want the policy to cover, the more arguments there will be, and unless we are organising a public trial of AI, we can leave it as it is.

Even laws are sometimes ambiguous, because that allows for a lot more flexibility and lets you adapt to unpredictable changes in the future.


Seems like this has run its course.
