No more AI answers, please

We (@patdavid, @darix, and myself) are firm believers in the community and in people helping other people to achieve their vision. That’s why pixls.us was started in the first place, to give people a place to gather and discuss the things that we’re all into in a single space. Together, we’re better than we are by ourselves. I think we’ve proven that over and over.

Lately we’ve been seeing more and more answers along the lines of “[chatbot] said …”. We don’t generally see the merit or use of posts like these. AI answers and tools are all over the place now, and if the original poster wanted an AI answer, they’re encouraged to find those answers for themselves. In fact, it is hard not to find AI answers, even if you don’t want them.

It is also OK not to know something. There are plenty of things I don’t know, and I come here every single day, multiple times a day, to learn from the other talented and knowledgeable people who are kind enough to share that knowledge with me and others. I hope some of my answers are helpful too. We are, after all, people helping other people.

From this point forward, I’ll just be deleting posts of the nature “[chatbot] said [answer].” No warnings, no flags (others are encouraged to flag such content; I won’t be, though), just cold, hard deletion.

Thank you for your understanding! As always, if you have questions, please ask them below.

36 Likes

I’ve also updated our FAQ.

3 Likes

Sounds reasonable. But I don’t understand the description of AI answers as being along the lines of “[chatbot] said …”. Is this a construction frequently used by chatbots or something?

2 Likes

No, that is the format I see here on the forum.

Just put the chatbot name in front of the “said”, if that helps.

3 Likes

Thanks, I tried searching for the word “said” but wasn’t getting anything illuminating!

Just to be sure, is it still OK to refer to other sources like Wikipedia? For example, in a hypothetical discussion about ‘exposure’, could one quote the relevant Wikipedia passage?

Yes.

1 Like

ta

resistance is futile

[Image: a Borg at Montreal Comiccon 2016; original by Pikawil from Laval, Canada, CC BY-SA 2.0, File:Montreal Comiccon 2016 - Borg (28259448635).jpg - Wikimedia Commons]

3 Likes

In case anyone here is still under the delusion that asking an AI about facts is a good idea:

3 Likes

Excellent!! Somewhere I have a similar headshot of Gates from when he was busy sinking Netscape …

“In case anyone here is still under the delusion that asking an AI about facts is a good idea”

If you can evaluate it… Here is an AI-rendered discussion of a physics paper by a friend and his student, apparently accurate. I don’t think humans could put this over any better, especially in the two minutes it likely took to generate 13 minutes of dialog. (The paper has gotten 4 citations, so less whelming than as-sold here, for now.)

It is well known at this point that LLMs are great at generating content (text, but also audio and images). The problem is that they generate content by rehashing existing content, without any understanding, just relying on statistical properties of said content. This can work fine if you have a lot of good quality content, but still, there is nothing that guarantees that the outcome makes sense; and truly novel stuff is unlikely to result from the process.
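To make “relying on statistical properties” concrete, here is a toy sketch (my own illustration, not anyone’s real system): a bigram model that generates text purely by sampling which word followed which in its source, with no notion of meaning at all. Real LLMs use neural networks rather than lookup tables, but the principle of emitting the next token from statistics of prior text is the same in spirit.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(model, start, n_words=20):
    """Emit words by sampling each next word from the recorded followers
    of the current word: pure statistics of the source, no understanding."""
    out = [start]
    for _ in range(n_words - 1):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the raw file holds more detail than the jpeg and "
          "the jpeg is rendered in camera from the raw file")
print(generate(build_bigram_model(corpus), "the"))
```

The output reads fluently in small doses, yet nothing guarantees it makes sense, which is exactly the point above.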

By imitating the style of a seemingly educated person (flawless grammar, extensive vocabulary, field-specific terminology perfectly under control), ML-generated texts give the impression of understanding what they are talking about, simply because in humans those traits correlate with understanding.

5 Likes

And the more they generate, the more of their own nonsense output they ingest, and the tighter the feedback loop closes.

3 Likes

The issue of treating LLMs like people seems settled, to the point where these read like stock defenses that ignore the benefits, for an intelligent, creative person, of training and using the soulless math machinery … is it so unrelated to the math of color transforms?

My limited experience with using my own data and tiny models made me feel barf-ish because it figured my taste out so well, just using histograms to represent photos:

http://phobrain.com/pr/home/siagal.html

Nowadays I use imagenet model vectors with histograms (aka bunches of numbers that describe my own photos), and get ~90% accuracy in predicting interesting unseen pairs. I can rank photos by pairability. In the end, eye tracking cues might trigger dynamic crops and color edits for viewers unborn.
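For readers curious what “imagenet model vectors with histograms” might look like in practice, here is a hedged sketch; the model choice (VGG16, whose conv stack emits 512 × 7 × 7 features for a 224 × 224 input), the preprocessing, and the cosine-similarity pairing score are my assumptions for illustration, not necessarily what the system above does.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet conv stack; for 224x224 input it emits 512 x 7 x 7.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
features = vgg.features.eval()

prep = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def photo_vector(path):
    """ImageNet conv features plus a coarse RGB histogram, flattened."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        fmap = features(prep(img).unsqueeze(0)).flatten()  # 25088 numbers
    rgb = T.ToTensor()(img.resize((224, 224)))             # values in [0, 1]
    hist = torch.cat([torch.histc(rgb[c], bins=16, min=0, max=1)
                      for c in range(3)])                  # 48-bin histogram
    return torch.cat([fmap, hist])

def pairability(path1, path2):
    """Cosine similarity as a crude stand-in for a trained pairing model."""
    v1, v2 = photo_vector(path1), photo_vector(path2)
    return torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()
```

The ~90% pair prediction described above presumably comes from a model trained on the author’s own labels; the cosine similarity here is only to make the sketch self-contained.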

The imagenet vectors make an ideal ground for debate about the ‘meaning’ of AI. Visually they are abstract scrambles that the final layers of a given model can translate to identify what is in the photo, the per-photo vector data being much bigger than the photo itself. If a photo’s imagenet vectors are close to another’s, the photos will be similar, an effect that degrades interestingly as I reduce the 7x7 blocks of 512 numbers to two numbers per photo by averaging and folding. Meaning has been extracted, and it’s intoxicating to manipulate it, with no need to posit that it needs our own autonomy to be ‘real’.
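“Averaging and folding” isn’t specified above, so the following is one hypothetical reading: average the 7x7 spatial grid away, then repeatedly fold the 512-channel vector in half, adding the two halves together, until only two numbers remain.

```python
import torch

def reduce_to_two(fmap):
    """Collapse a 512 x 7 x 7 feature map to two numbers: spatial
    average first, then repeated halve-and-add 'folding'."""
    v = fmap.mean(dim=(1, 2))        # average over the 7x7 grid -> 512
    while v.numel() > 2:             # 512 -> 256 -> 128 -> ... -> 2
        half = v.numel() // 2
        v = v[:half] + v[half:]      # fold: add the two halves
    return v

print(reduce_to_two(torch.randn(512, 7, 7)))  # stand-in for real features
```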

(My prediction is that once an advanced AI can control the battlefield, its reach will go back up the supply chain.)

You can discuss the potential use of AI/ML/LLMs in a different thread. Our stance on answers from those tools stands.

We have seen the answers provided here and in other places, and the quality was subpar. And yes, those tools starting to learn from their own garbage output will just make it worse. Plus all the other problems with ignoring copyright, and so on.

3 Likes

You should read the article I linked, because that has nothing to do with the issue here. What we’re talking about is not asking an AI to summarise the physics paper (which they do seem to be reasonably competent at), but having it write the paper in the first place. As the article explains, you would get something that seems plausible but could well be full of nonsense. And unless “you can evaluate it…”, you would have no way of knowing, taking it to be “apparently accurate”. I hope you weren’t planning to build a rocket or something based on that paper…

I keep reading this claim in anti-AI posts, but it is not my experience. ChatGPT 3.5 made a mistake, and I responded by correcting it. It replied with “you’re right” and gave information incorporating much of my correction.

But when I asked it exactly the same question again, it repeated the original mistake.

In other words, my correction was not “ingested”.