AI & the Ensh*tification of the Web

Maybe this is irrelevant for pixls, but after posting previously on AI-generated images, I found this interesting. Most of it relates to text, but there's also a brief mention of stills and video.

The related second article shows how Google's plans for AI would likely impact publishers' revenues, and therefore, potentially, commercial photographers.

1 Like

I don’t think so.

Already I predominantly search for "$TOPIC forum" in order to get human evaluations of things. Reddit and Stack Overflow are other good sources of human experiences and opinions. It used to be that "blog" would also work, but incessant blogspam and SEO saturation have made that search term near useless.

All of the latter is currently written by underpaid content farms (I know a few such writers myself; it's not a great profession). In the future, large swaths of it will be generated by AI instead. But who cares? We're already filtering it out. If anything, there might be legislation requiring LLM spam to be marked as such, which would make the filtering easier.

The advertisement bubble has all but burst in the last few years. Ad prices have plummeted. Now the adtech oligarchs are desperately grasping for the next straw to support their business models.

They are investing billions into this venture, but I have yet to see a convincing product worth that money. Facebook just ordered a GPU cluster from Nvidia for $20 billion. That's an endgame amount of money: a stack of $100 bills 20 km high. It will be one of the biggest clusters in the world, and orders like it have made Nvidia one of the wealthiest companies in the world. If they can't pull off something absolutely mind-blowing with that sort of effort, the LLM game will be over.
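That "stack of $100 bills 20 km high" figure holds up as back-of-the-envelope arithmetic, assuming a typical US banknote thickness of roughly 0.1 mm:

```python
# Back-of-the-envelope check: how tall is $20 billion in $100 bills?
total_dollars = 20_000_000_000   # the reported order, $20 billion
bill_value = 100                 # denomination
bill_thickness_mm = 0.1          # assumed thickness of one US bill, ~0.1 mm

num_bills = total_dollars / bill_value            # 200 million bills
height_km = num_bills * bill_thickness_mm / 1e6   # mm -> km

print(f"{height_km:.0f} km")  # 20 km
```

(The real thickness of a US bill is closer to 0.11 mm, which would push the stack to about 22 km, but the order of magnitude stands.)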

So far, we have not seen a product that would fit this description. And frankly, I am doubtful we ever will.

4 Likes

Agree on using Reddit, and thanks for the useful search term for forums; I often would lazily just use the word "forum" in the search, which isn't ideal. From a position of ignorance, having just read some things, I agree that fancy regressions and autocomplete are being hyped, but costless spam generation seems well within their means. I guess that eventually breaks the business model, and then "something has to be done".

1 Like

It’s the old “During the gold rush the real money makers were the people who sold shovels.”


That said, why is it a bad thing that commercial advertisement photographers will be impacted? In the end, what they created was already so generic that losing their jobs to AI is all but expected. Maybe I'm viewing this through a cruel lens; of course it will be impactful for the individual, and that is bad, but so it was for the people who painted portraits before photography, or for craftsmen before the industrial revolution. Should we even try to stop this?

I don’t believe losing your job to AI is worse than losing it to any other automation, even in supposedly “artistic” (which, let’s be honest, it hardly is for 99% of them) ventures such as commercial photography.

2 Likes

Yes, there will be work for humans if not those specific jobs. I guess the potential for greater tendency toward monopoly and worse outcomes for users would be bad. I think those are solvable, at least in theory.

1 Like

I’m in way over my head here, but do we need to consider being more aggressive about explicitly copyrighting all of the content we create? Or can “they” just take it anyway?

3 Likes

Maybe I’m just overly pessimistic, but from what I’ve seen an individual’s right to their own content is ultimately only as good as the lawyers they can afford (and want) to hire – at a minimum. Simply putting a copyright on something is protection only if it’s explicitly, legally enforced (i.e., fines are levied, etc.). And that’s always after the fact.

For example, if someone grabs one of your images, puts it into promotional literature (print or digital) and makes money from it, how do you get your fair share of what they made plus maybe damages as well? You hire a lawyer – with the expenses that implies – and go after them, with no certainty of success.

And on another level entirely, I’m loath to perpetuate, much less contribute to, the already ridiculous volume of litigation for every imaginable offense.

So unless I’m missing something, I don’t see how copyrighting something provides ultimate protection. It’s worth doing if it’s cheap and easy, but there’s very limited real value in it, particularly for the individual. Again, it comes down to the lawyers.

Just my $0.02, though…

2 Likes

It’s also much harder to prove that your material was used to train a particular model as opposed to your material being blatantly used somewhere.

Maybe that will change if legislation comes out forcing companies to disclose the material they trained on, along with proper licenses for any copyrighted material in it.

Of course, no country wants to properly legislate such a thing for fear of losing the companies’ business. At the moment the EU seems to be foolish enough to do it, but companies will just pull out or change their game here, like they always do… One reason why we stay so far behind in some industries: an incredible amount of red tape :smiley:

1 Like

OpenAI’s CTO doesn’t know what images Sora scrapes (assuming this WSJ interview is real. Ha):

https://x.com/tsarnick/status/1768021821595726254?s=20

1 Like

Capturing attention isn’t only for advertising. It is also for influencing opinion. In the long term, this is far more valuable than advertising revenue.

What is the fundamental difference between human learning and machine learning? For example, I have learnt some plumbing tasks by watching plumbing tutorials on YouTube. I don’t suppose a robot plumber could learn like that, yet. But human writers learn from reading material written by other humans. Artists learn from other artists, surgeons from other surgeons, and so on.

If we allow humans to learn from humans, what is the fundamental objection to machines learning from humans? Is it just a fear that machines will then do the work better/faster/cheaper than we can?

1 Like

You cannot, legally, own a human.

I don’t think there’s anything fundamentally wrong with machines learning from humans. But in practice, the way this technology will be used will be the way new technologies have always been used. That is, to devalue the labour of individuals and enable the rich to accumulate even more wealth by privatizing public resources.

Cory Doctorow has a lot to say on this:

[The AI companies’] pitch is: “train our products on your workers’ skilled output, fire your workers, replace them with our products.” That’s a monstrous proposition – and that’s before we get to the part where their products aren’t anywhere near good enough to do your job, but their salesmen are absolutely good enough to convince your boss to fire you and replace you with an AI model that totally fails to do your job.

6 Likes

Much of commercial photography is generic, but not all. There is still space for truly innovative work, and the craftspeople at the top of the field deservedly earn a top salary for producing it. Arguably AI won’t have any impact at all at this end.

But - will the development pipeline that produces these superstar artists remain viable when all the ‘ordinary’ work is taken over by AI? It’s one thing to take up a profession where you can work hard and make a fair living, with a small chance of being a star. It’s quite another when your only hope is to be a star or fail entirely.

A common theme when a new technology threatens to steamroll over the status quo is to throw up our hands and say it’s inevitable. And much of it is. What isn’t inevitable (or doesn’t have to be) is that the specifics of how the technology gets adopted, and by whom, must be decided by entrenched monopolies, with no input from the broader community.

AI is shaping up to be a massive copyright fight, with tech on one side and publishers on the other. One or the other will win, or they’ll find common ground. But when they get to frame the debate, the interests of individual artists and the public at large are lost in the fray.

3 Likes

Exactly!

1 Like

One comment I would make is that the article claims Amazon is posting fake reviews. I just opened my Amazon account and looked at some reviews of products, and they seem like genuine reviews to me. I also know that I am on occasion sent questions asked of Amazon by people about products I have purchased, and asked if I can give a response.

Secondly, speaking for myself, I avoid Reddit reviews. Too often they seem to be the rambling opinions of individuals rather than well-structured reviews of the products I am interested in buying.

Alas, it is annoying that a Google search will result in paid responses hijacking the results. We just have to be extra vigilant about which links we click on.

2 Likes

The absolute genius of Google is that they managed to teach us that they only provide the blue links, but it’s our responsibility to vet the links’ contents.

Since LLMs provide only one answer, it is now on them to ensure that the answer is accurate. Given that this is in fact a hard task even for a human, LLMs often get it wrong, and they are thus not deemed trustworthy.

Recently, I asked ChatGPT when my HiFi amplifier was released (an old Arcam Movie 5.1 that I’m looking to replace because it can’t decode modern surround sound any more). It said 1996, which I know to be wrong. I told it so, and it apologized and said 1985, which is completely absurd. Calling it out on that, it apologized again and said 2017. I gave up and looked it up myself. The answer is 2006. It obviously had no idea, even though this is clearly publicly available information. But instead of telling me that it didn’t know, it just hallucinated plausible garbage.

Trustworthy, this is not.

4 Likes

Internet searches have always been dodgy, but AI now adds an additional problem. Before, it was lies and extremist views being shoved in our faces; now it is AI masquerading as a definitive source of facts.

1 Like

I agree. In the end, I believe the artists among the bunch will have to shift to a different realm of photography where AI will not (or should not) have an impact. This is not ideal, of course. For me, the advertisement photography industry, or the whole ad industry, having problems with AI is not problematic, as it was a rotten industry from the start. I feel for the honest individuals who will lose their jobs or be forced to shift to other ventures, though.

I agree. Thankfully so far some models are being open sourced and hopefully the trend continues. I believe that if AI is here to stay, it should also be in the hands of the people, without government restrictions. The government should fight the bad usage of the tool, but not the tool itself.

This depends completely on how they are used. For example, some models can already predict illnesses from symptoms better than human doctors, but it is still up to the doctors to verify that the prediction is correct. Like you said, it’s also a hard task for humans, and there’s no shortage of forum replies, books, videos, etc., of humans giving out completely unreliable information.

For an objective fact like the year an amplifier was released, there’s not much leeway in what counts as a right answer, but that’s not always the case.

1 Like

Sorry to post more “read this” links (how annoying is that…), but on the matter of trustworthiness/misinfo/bullsh*t, I find cognitive scientist Hugo Mercier’s relatively sanguine arguments quite persuasive. The gatekeepers won’t go away, and we’ve evolved not to take things on trust and to seek reliable sources of information, for obvious reasons.

I guess the concern in the original post is that well-capitalised entities will increase their control over the structures of the online world and that their incentives are badly aligned with users?

[Aside: I’m a bit of a fanboy for Mercier and co-author Dan Sperber’s ideas in their book The Enigma of Reason]

3 Likes