Well the LLM is a computer program, so it’d be the people who make the LLM, the people who deploy the LLM, the people who use the LLM.
However, safety measures from the developer of the LLM itself would likely hinder or stop many of the downstream effects.
This is getting extremely uncomfortable. I’m disgusted. I’m out, have a great day.
There’s a big push in America to shift solutions to our problems onto individuals and away from systems, when systemic solutions are the only realistic way to make lasting change.
Convincing people that they are the masters of their own destiny makes them easier to manipulate and shifts attention away from those who want to profit from them.
Yes, precisely. That’s why the horror of social media was framed as a “free speech” issue for a long time.
While the debate about agency and responsibility is very interesting, I am concerned about a much more mundane issue: black box processes can be controlled in very subtle ways to influence users, without them explicitly noticing.
It is now widely understood to be happening with social media sites; the items showing up in people’s feeds come from a carefully tuned algorithm, which is adjusted continuously to align with the goals of people controlling it.
Now imagine that a sponsor gets to exert some subtle influence on LLM results. Not in the sense that you will get “buy [X brand] of car NOW!” to a query about raspberry jam recipes, just a very subtle bias that will be very hard to notice for individuals.
Or profiling people based on their queries, and selling those profiles to advertisers. Of course your queries are private. It’s just that we use them to get to know you better, because we love you.
Fortunately, researchers are now thinking about these things. But in order for these concerns to trickle down to policy, it is important that their voices are not lost.
And that’s not even dealing with the more mundane issue that services are weaponising addictive tactics to steal your time and attention at great cost to society in order to sell advertising.
The sort of social media which lets a group of friends keep up with each other is a net benefit. Doom-scroll echo chambers which keep you glued to your screen take people out of the world and kill pubs, clubs, social and sports groups, and friendships, and are a huge negative.
Dallas Fed report, FWIW:
“Returns on job experience are increasing in AI-exposed occupations. Young workers with primarily codifiable knowledge and limited experience will likely face challenging job markets.
However, there appears to be less cause for concern about widespread job displacement for older, experienced workers, particularly those in occupations with high experience premiums in which AI is likely to complement the worker’s tacit knowledge.”
A fairly intuitive finding, based on data using proxies for experiential vs. book knowledge.
Old folks rule!
Sorry, I couldn’t help myself. I do feel for new CS grads.
It is always good to see another generation pull the ladders up behind them.
To be fair, it’s not mostly the older generations that are rolling out AI. Usually with these things, the underlying divide is wealth rather than generational. Like housing.
I do not think it is a wealth divide like housing. It is the established players being willing to engage in behaviors which (unintentionally) salts the field behind them. Which is, primarily, a generational issue, as the established players are (almost) always those with years of experience.
Those workers who, because of pride in their work and being team players (not a ‘bad’ thing!), are utilizing and making the best of the AI mandates contribute to the loss of positions for juniors. Which has further consequences, most of which won’t really be fully felt for another decade.
But we will have gotten ours and by the time the consequences really start to bite, we will be gone.
I suppose what you’re saying is correct. I don’t know enough about how it’s used in the workplace as I’ve only used it minimally for my own work. I guess it’s going to be difficult or impossible for people to avoid using it when others around them are doing so and presumably seeing some productivity gains.
Didn’t take too long to cave in
I thought this was interesting on what constitutes autonomy, though I’m not equipped to know whether the arguments are sound:
I saw that one of the bands I’ve been enjoying lately, Space Acre, came off Spotify, partly because of the CEO’s personal investment in Germany’s Helsing.ai drone developer. On the face of it, I’d rather Europe had its own military drone company than having to rely on others, though I don’t know anything about the company. Of course it would be great if these things weren’t necessary, but that world has never existed.
I agree. Ukraine has proved that drones are a great economic tool for the “little guys” and a very effective defensive measure. I’d rather we have more nukes and better deterrence, which reduces the need for other weapons, but we don’t live in a perfect world.
Just today or yesterday Poland got scolded by the US for wanting to pursue nukes. We all know that having them would reduce their reliance on US tech.
I can see Poland for sure getting them at some point, and that probably means Germany next. Japan has been thought to have “everything ready” to go nuclear in the event of crisis and as they, rather than the US, might end up being the first responders in a Taiwan crisis, why not.
I find this speculative but optimistic take quite persuasive, though it accords with my priors and my interests as a cog in the gatekeeping system that preceded social media. Maybe our politics right now is a lagging indicator of the populist mediums that have already peaked.
'Unsurprisingly, the decline of elite gatekeepers has increased the influence of popular ideas marginalised by elites, another term for which is “populism”. Social media benefits populism not by brainwashing the masses with viral fake news, but by exposing voters to widespread non-elite perspectives and making it easier to mobilise around them. In Western liberal democracies, that means perspectives that conflict with the liberal establishment’s technocratic progressivism, including xenophobia, conspiracy theories, and quack science.
At the same time, the performative, engagement-maximising character of social media has made much of political discourse more stupid and sensationalist, and elevated politicians and pundits skilled at exploiting this dumbed-down media environment.
…
[AIs] are a kind of anti-social media, producing information heavily skewed towards expert opinion and communication styles. They are a strange, new technocratising force. However, there are also reasons to think they will be more effective than all-too-human technocrats at shaping public opinion.
First, unlike human experts, they can rapidly deploy encyclopaedic knowledge to answer people’s idiosyncratic questions. Their responses can be probed, scrutinised, and questioned without them ever getting tired or frustrated. They won’t just tell you that there is no persuasive evidence for a link between vaccines and autism. They can carefully walk you through the kinds of evidence we have and address your specific sources of scepticism. This partly explains why they can be highly persuasive, even in correcting conspiratorial beliefs that many assumed were beyond the reach of rational persuasion.
Second, LLMs typically share information politely and respectfully. This not only differs from the performative, gladiatorial character of much debate and discussion on social media platforms, but also improves on much communication by human experts. Being human, experts are often biased, partisan, and simply annoying, and when they seek to “educate” the public, it can be perceived—and is sometimes intended—as condescending and rude. In contrast, LLMs deliver expert opinion without such status threats.’
Apparently, some large tech companies, such as IBM, didn’t get this memo. They have been sacking large numbers of older experienced people, and hoping that AI can pick up the slack, all because the younger people have lower salaries.
I wonder, based on nothing, how much of this is board members incentivised to cut costs in order to lift the near-term share price in a market obsessed with the AI hype cycle.
Exactly. I have already said there is a trend to corporations cutting workers for purely financial reasons, and using AI as a convenient scapegoat.