AI & the Ensh*tification of the Web

My arms are too long… :wink:

Have you tried using glue to stick cheese to pizza, @TonyBarrett? :wink: :laughing:

4 Likes

Today’s gaffe:

He’s turned!

1 Like

And now some models are already better at diagnosing patients based on symptoms than the average doctor :smiley: Of course in the end there’s still doctor validation required, but even the medical field is not “safe” from AI intervention. Here in Portugal we have a shortage of doctors, so maybe some of the strain will be alleviated with the help of these systems, leaving doctors free for the more important things.

Ok, there is actually something to using glue on pizza. It is for food photographers, who want a perfect looking subject regardless of whether it’s fit to eat.

And sure, AI picks that up and spreads it as misinformation for people who want to make pizza to eat.

1 Like

I did not know that. Though I read that it picked it up from some sh*tposting on Reddit, which may or may not be true.

I am old enough to remember working on “expert systems” in the ’80s - each basically a huge collection of ‘if-then-else’ statements ordained, but not programmed, by us engineers.
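For anyone who never met one: the shape of those systems was roughly this. A minimal sketch of a forward-chaining rule engine - the rules and facts below are invented for illustration, not from any real system:

```python
# A minimal sketch of an '80s-style rule-based "expert system":
# knowledge lives in if-then rules supplied by domain experts,
# and a simple inference loop applies them until nothing new fires.
# Rules and fact names here are made up for the example.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:  # forward-chain until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

The engineers “ordained” the rule table; the inference loop itself never changed. That separation of knowledge base from inference engine was the whole idea.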

1 Like

Wow, so much interest in and opinions on AI: no wonder companies are doubling down!

1 Like

Thanks Tim, that AI-ism was really begging for a Paul Harvey “rest of the story…” :crazy_face:

1 Like

I have found this a very interesting and entertaining thread, and I’m a bit annoyed that I didn’t participate earlier because now I have to use an excessive number of quotes.

“Hey Google, find all those posts that I liked and highlight the passages I wanted to respond to.”

Well, that doesn’t work, so here goes with the excessive quotes:

@plantarum, for me, you nailed it with these bits:

Probably my number-one fear about AI. While it has so much potential to work for and on behalf of humanity, it will be put to work in place of humanity by capitalists. It will be profit-driven above all else.

I hear the argument so often that “you can’t halt progress” and that there’s no point in fighting AI. While there’s some truth to it, we absolutely should be fighting to get regulations in place and pressing pause while we figure out the ethics. Above all, we shouldn’t be letting GAMAM (Google, Apple, Meta, Amazon, Microsoft) drive this new technology. Of course, it’s too late. They already are, and they’re forcing it down our throats, whether we like it or not.

As a Canadian who grew up in the UK, this saddens me greatly as I’m a strong supporter of public broadcasters. The BBC has managed to retain its trustworthy reputation to a large degree (except for protecting sex offenders), but the CBC is struggling to do this through no fault of their own other than being left of centre on cultural/social issues (how dare they promote diversity and equal rights!). The Canadian right is doing a good job of eroding trust in the CBC, claiming bias, and the right-wingers lap it up while failing to acknowledge that most of the press is owned by right-wing private corporations with their own political agenda.
I wanted to emphasize this point because our new spammy AI-driven information age is in desperate need of unbiased, factual resources. Public broadcasters are not perfect by any means, but they are some of the best resources we’ve got. In general, I don’t want any for-profit organization having control of any factual information.

Yes, yes, yes. I have always said that I’m happy for us to embrace AI as long as we always know when we are dealing with AI. It needs to be 100% transparent.

The “good enough” attitude is unfortunately the driver of so many business decisions. I don’t necessarily fear that AI will do my job much better than me; but I do fear that AI will be good enough for most customers, and that will destroy my career and that of many others. I just hope there is pushback and people start to demand human-created content.

I’ll take a stab at this, even though I don’t have expertise in such philosophical matters. Perhaps the big difference between humans and AI is our emotional experiences born of our senses and homo sapiens DNA. We create from a position of human experience – based on sight, sound, smell, touch, taste – starting at birth and ending when we die. It is a journey, along which we always have the knowledge that we will one day die. This awareness of mortality shapes us and plays into all the interactions we have with everyone and everything we encounter on our journey. Some obvious examples would be our complex relationships with parents and kids, or the wonder we feel when we travel to an incredible landscape for the first time, the smell that takes you back to Grandma’s house, the feeling of bread crust crackling in your mouth… And those are just the positive experiences.

Photographers are in tune with this feeling: capturing that fleeting moment in time that will never happen again. When we create, all of our past experiences and senses feed into our inspiration, consciously and subconsciously. Furthermore, this is done at a very complex and chaotic level, as our brains are both amazing at retaining information and very fallible at the same time.

While AI can start to emulate human emotions and be “aware” of their birth and death, can they really experience life in the way humans can and transmit that into their creative work? Maybe I’m being naive about the future of AI, but I still find it hard to believe that software can experience life anything like a human can without all our senses, emotions, DNA and fallible complexity. I guess I can’t say never, but I feel we are a long way from it at the moment.

2 Likes

McKenzie Wark writes in A Hacker Manifesto that “information wants to be free but is everywhere in chains.” - from “The Hacking of Culture and the Creation of Socio-Technical Debt” on Schneier on Security

1 Like

It is worse than that. In recent years, across Canada, local news agencies and radio stations have been bought out by oligopolies and foreign entities, only to be discontinued in the pursuit of profit and the enrichment of executives. The few remaining are regional or national and owned by faceless foreign disinformation, investment, or outrage machines. We can practically count the okay (but not necessarily reputable) channels on our fingers (or digits if you are not sure about the thumbs - edit: maybe we only need one hand to count the ones that are not obscure and fringe). As much as the right and others love to attack the CBC, and as much as the CBC and its management make self-interested moves to the detriment of the public interest, the public broadcaster is one of the remaining bastions of publicly accessible media besides over-the-air telecommunications channels.

1 Like

Emotions are also a crude computer; they complement our rational processes (a much later addition by evolution - emotions do a good job for the majority of mammals).

Sorry to be pedantic, but there is no “AI” yet in the sense the term is usually understood. We have a specific ML technology: an engine which generates stuff from stuff that is already available, using a Markov chain with a large state. What made this possible is a lot of computational advances.

The text generated by this ML technology looks impressive, because very few humans are good at writing text (it can be learned, but it takes effort, and not everyone needs it), and humans are rather slow at it anyway. Just like we are about a billion times slower than computers when it comes to arithmetic. So the new piece of technology looks “superhuman”, but it is not intelligent in the sense a dog or a 3-year-old child is intelligent.
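To make the “Markov chain with a large state” description concrete, here is a toy word-level bigram generator. Real LLMs use transformers over huge contexts rather than a bigram table, but the generate-the-next-token-from-the-preceding-state loop has the same shape. The corpus and seed word are made up for the example:

```python
# Toy word-level Markov chain: learn "which word follows which"
# from a corpus, then sample a chain of words from that table.
import random
from collections import defaultdict

corpus = ("the web was open and the web was free "
          "and the bots read the web and wrote the web").split()

# Transition table: word -> list of words observed to follow it.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(seed, length, rng=random.Random(0)):
    out = [seed]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the", 8))
```

Everything it emits is recombined from what it has already seen - which is the point being made above: fluent-looking output, no understanding.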

3 Likes

Maybe linked to AI, maybe not, but as an example of my previous post: it seems that hikers have been finding themselves in potentially life-threatening situations by following trails on Google that don’t actually exist.

1 Like

I think there’s debate about this - the extent to which our rationalisations are exactly that: post-hoc justifications of emotional drives, and/or rules of thumb. Not that we don’t have social ways to create rational responses in groups.

But I’m talking way beyond my pay grade and am often wrong… so pinch of salt.

1 Like

It is unclear from the video whether that’s what actually happened, but even if it did, Google Maps was never meant for hiking. For nontrivial hikes in remote areas, a topo map and a compass are a must; the map can be a phone app, but then you should have backups.

Hiking alone requires even more caution and preparation. A couple of years ago we were hiking in an area which was not so remote (a village is always within 4h), but which has little craggy valleys and is easy to get lost in without a map or navigation, and the climate is pretty arid. We met a young woman who was, frankly, lost. She wanted to “spend some time alone” after a life event (we were told the story, but I am not sharing), got a cheap airline ticket, rented a car, and went walking. At some point her phone lost the signal, so no map of any kind. It was outside the normal hiking season. She was not yet in danger, just dehydrated and panicked, and was able to walk out with us after we shared water and food.

I meant something much simpler: emotions are simple decision-making processes. I am not implying that they are optimal or rational, just that they help us make decisions (which may be bad decisions, but before sophisticated primate brains, they were the best available so they must have done OK because we are still here).

3 Likes

F’sure. The fact that we’re still here is my go-to reason for optimism that humans are more immune to disinfo/misinfo than the dominant narrative would suggest.

1 Like

100% agree regarding the hiking comments, but that’s not the focus here; the point I was trying to make is that the info provided by Google is, apparently, wrong. And the question is, assuming there was never a trail to start with, where was the fictitious information pulled from?

2 Likes

Even the best maps are not 100% accurate, especially about man-made features like hiking trails. When the latter are not maintained in areas which are friendly to vegetation or have a lot of erosion or water, they can quickly disappear or become impossible to use. This can happen without AI, and despite best efforts on everyone’s part.

I have had to abandon quite a few hikes and go back to the starting point because we could not find a trail and proceeding would have been dangerous. This is normal, even if it feels disappointing every time.

But I get your point. It is inevitable that machine learning will be used for map creation at some point, if not already. Interactive maps will “learn” hiking routes based on people getting lost or just some computer glitch. Some people will follow these and there will be accidents, trails will be closed, or require pre-registration (this is happening in a lot of places already, “for your safety”, incidentally collecting an entrance fee).

2 Likes

To be honest, around these parts, the biggest threats to my personal safety have been a combination of hostile dogs and my inability to read maps correctly in the first place (“Which way is up, again?”) :laughing:

1 Like