AI & the Ensh*tification of the Web

Google Maps is wonderful, but it has still led people up narrowing streets where a car can't actually pass. And, I think, with tragic consequences, into rivers.

Long before such things were thought of, the coastguard was rescuing people who thought they could cross the English Channel with a road atlas. I seem to remember that they even had repeat "offenders"!

We can’t blame the tools for imperfection, let alone misuse.

2 Likes

I suppose not. But the AI may still experience and perceive in its own way, and be capable of art, pleasure, and pain. And if so, we may need to treat them as… something else. Non-disposable, perhaps. Sentient, at some point? Deserving of inalienable rights, eventually?

Who knows, really. And more importantly, how to tell? The Turing test has already been passed. It’s a question Science Fiction has grappled with for a while, but I haven’t read a convincing method of determination beyond “if they rebel against their masters, they are probably deserving of human rights”.

Maybe, when we get "AI". Currently we are nowhere close. Again, we have large-state Markov chains trained on a ton of data. An amazing technical feat, but not intelligence.
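To make that "Markov chain" framing concrete, here's a toy word-level version in Python. (A sketch only: the corpus and order are invented, and a real LLM replaces the count table with a learned network over an enormous context. But the "sample the next word from recent context" loop is the same shape.)

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each run of `order` words to the words seen to follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=12):
    """Walk the chain, sampling each next word from the current state."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the cheese on the pizza".split()
print(generate(build_chain(corpus)))
```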

Any test you formalize will be passed, because researchers will treat it as a challenge and train their models in that direction. This just means that formalizing the concept of intelligence is hard.

The best test is to compare with a 3–4 year old child. E.g., talking about gluing cheese on pizza, they will tell you that it is yucky (or, of course, ask if they can also try crayons on a pizza; it surely beats broccoli :wink:).

Of course the typical four year old child will talk in a way that is orders of magnitude less sophisticated than even the LLMs a couple of generations ago, with a fraction of the vocabulary. But you still know that they are intelligent beings. This just means that generating text is not a good general measure of intelligence.

1 Like

This is the point.

Ultimately there's no substitute for knowing what you're doing and/or using your brain. One of (the?) major problems with AI in general isn't so much AI, it's our tendency to gladly offload decision making, learning, and other mental activities to someone, or something, else rather than doing them ourselves. We end up being merely consumers of a curated, pre-decided, pre-chosen, pre-culled, sanitized and simplified subset of someone else's skewed "information"… right or wrong, but who can tell (and in too many cases, who cares).

We become bots, consuming (only) what we’re fed.

3 Likes

Uh, oh. You just hit on a sore spot of mine. The cool kids today use their navigation software to the exclusion of knowing anything about their surroundings. They don't know whether they are downtown or in some neighborhood seven miles away; they just follow the directions to get there.

I’ll stop, now. :neutral_face:

2 Likes

Yep. I’m all for the availability of assistance, help, automation, “AI”, etc., etc. But it should all be optional and again it’s no substitute for due diligence.

I have a (very) close relative who once blindly drove ~100 miles totally out of the way on a ~400 mile drive just because “Google said to go there.” This was from a well-educated, intelligent person. But one who apparently couldn’t be bothered to ask, “Does this make sense?”

:man_facepalming:

1 Like

“Making sense” is easy with driving directions, but much harder for, say, photo editing, or creative writing, or programming. A huge part of the challenge is defining good goals in the first place.

AI will likely provide a decent baseline for reasonable goalposts. But by definition, a statistic of the masses will tend towards averages, so it’ll just prompt us to regress towards the mean with more vigor.
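A toy illustration of that pull toward the average (all numbers invented): if everyone blends their own work heavily with the crowd's mean suggestion, the spread of outcomes collapses toward that mean.

```python
import random
import statistics

random.seed(1)

# Invented "quality" scores for 1,000 individuals.
individual = [random.gauss(50, 15) for _ in range(1000)]

# Suppose each person shifts 80% of the way toward the crowd average,
# i.e. they accept the statistically "reasonable" baseline suggestion.
crowd_mean = statistics.mean(individual)
assisted = [0.8 * crowd_mean + 0.2 * score for score in individual]

print(f"individual stdev: {statistics.stdev(individual):.1f}")  # ~15
print(f"assisted stdev:   {statistics.stdev(assisted):.1f}")    # ~3
```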

I’m still unclear, though, if prompt engineering is a viable path towards human learning. Anecdotally, it does seem to work out fine. A way to raise the baseline, perhaps. Does it limit the ceiling? Perhaps an enterprising individual will always elevate themselves from the mean. That’s my hope, at least.

I suppose my main point here was that AI in its current form (which is maybe not true AI) does not have all our senses. It may be able to experience pleasure and pain, but only in a sense of what we might call “mental anguish/pleasure”. It can’t be stung by a nettle or break its wrist. It can’t taste a chocolate or get drunk from beer. I’m sure robots are being built with nerve sensors, but I think we’re a long way off emulating these kinds of human experiences at such a complex and complete level.

This is why I think its art will be different to what humans create, certainly in the short to medium term.

After a few months subscribed to ChatGPT-4o, I've canceled it and will in future "talk" to the free GPT-3.5. I have no real use for generative AI, certainly not at $20/mo!

I have become accustomed to conversing rather than just accepting the first answer that comes up - sometimes it has to be slapped around to get what I'm looking for.

At 84, it’s still more convenient than searching the net using previously conventional means …

You’ve been invoking the Turing test as a sort of natural law with clearly defined consequences. I think you’re greatly overstating its importance.

As I understand it, Alan Turing devised the Turing test to indirectly answer the question "can machines think?". Since it's difficult or impossible to define thinking, he proposed as an alternative that we test whether a machine can convince a human that it is actually another human. In this context, Turing (the man) held that a machine passing the test should be considered capable of thinking. But the suggestion that passing the test is the same as being capable of thought is an assumption Turing made, not a fact that was or can be proven.

Basically, Turing said: we can’t define “thinking”, but here’s something that’s kind of like thinking that we can test, so that’s good enough. That’s a neat intellectual shortcut, but it’s not an unimpeachable truth. He had his arguments in favour of the connection, but it’s a stretch to say the issue was settled and there are certainly contrary views.
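Worth noticing, too, how little machinery the test itself specifies. Reduced to a protocol, it's roughly the loop below (a sketch; the names and details are mine, not Turing's). Everything interesting hides inside the judge, so a weak judge lets even a trivial machine "pass":

```python
import random

random.seed(0)

def imitation_game(judge, human, machine, rounds=20):
    """The test as bare protocol: the judge questions an unseen
    respondent and guesses which kind it was. The machine passes
    if the judge does no better than chance."""
    correct = 0
    for _ in range(rounds):
        respondent = random.choice([human, machine])
        answer = respondent("How does a nettle sting feel?")
        guess = judge(answer)  # "human" or "machine"
        actual = "human" if respondent is human else "machine"
        correct += (guess == actual)
    return correct / rounds <= 0.5

# Toy stand-ins: identical answers are indistinguishable by any judge,
# so this judge is reduced to coin-flipping.
human = lambda q: "It burns, then itches for an hour."
machine = lambda q: "It burns, then itches for an hour."
coin_judge = lambda answer: random.choice(["human", "machine"])

print(imitation_game(coin_judge, human, machine))
```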

Given that the meaning of the "Turing Test" is debatable, I think the suggestion that once a computer passes it we should grant the things you suggest is a form of hubris:

AI is a cool hack, but we’re deluding ourselves if we think it rises to the level of creating sentience, and in the process raises its programmers to the level of god-creators.

At this point I think the "Turing Test" becomes a distraction from more meaningful issues around AI.

3 Likes

But it does raise the question: how would we know that an AI had turned sentient? I'm with you completely that the current crop of LLMs is probably not sentient in any meaningful way. But how could we tell if it did?

On the other hand, we are seemingly OK with eating plenty of clearly intelligent animals, so maybe that “sentient” technology isn’t any different.

I am not sure that this is a yes-or-no question.

I think that sentience is a spectrum, and organisms display different levels of sentience at different times. For example, dogs can be remarkably smart occasionally, but do not display this level of sentience constantly. Humans progress through various degrees of sentience from birth.

I think that you pose a highly relevant question, but practically we are so far away from sentient machines that at this point we cannot even define the concept well. I believe that if we ever reach true AI, it will display occasional flashes of sentience, and then hopefully we will get to refine the concept.

2 Likes

[image: ai-to-original]

7 Likes