Outsourcing thinking

But, compared to LLMs, that’s good ol’ fashioned, wholesome machine learning: the user gets useful functionality, potentially running locally.

It is unfortunate that these days, “AI” means “a bot I can talk to” for most people, while machine learning spans many areas, some of them very robust and useful. In fact, some are so robust that they can be hidden entirely from the user.

1 Like

Eliza reminds me of how everyday people use “AI”: ChatGPT, Gemini, cortanapilot. Artificial Intelligence then means reliance on a synthetic oracle (oracle as in throwing random I Ching sticks). It’s a limited view of what the bots can actually do, and it’s the area where the bots are not exactly doing great.

1 Like

As a comment somewhere said - “I miss when AI meant the computer-controlled characters in video games”

2 Likes

@HIRAM Not sure if you meant to reply to me.

I don’t buy it either. I do all the thinking, studying and design. I use “AI” as a sparring partner because it is incredibly dim-witted and weird. :crazy_face: It is a time-sink, but sometimes, I end up with something better than if I were to go about it alone. No, I do not have friends to do that with. :sob:

1 Like

There was original thought in your post and what I assume to be LLM output in an image. So, there is differentiation, though crawlers/bots might be able to read it.

Our primary concern is slop and irresponsible writing that would lower our standards and dis- or mis-inform the readership.

2 Likes

‘If Claude Code and its like become a generally used “super-Excel”, though, that might have quite unpredictable results. It’s a productivity boost at some points, but we might be forced to reconsider the aphorism that “speeding up output behind a bottleneck cannot increase overall productivity, although it can reduce it”.

I guess that the prediction problem then switches to something like – if the IT world of the future involves something like “trying to stuff 200 end user apps into a trenchcoat so they can pretend to be a system”, can other LLMs help with that? And the answer is … maybe?’

Brad DeLong’s take on Davies’s take:

‘Dan Davies has his finger on something important here. It is not, at root, about “AI”. It is about work. The spreadsheet first escaped from the finance department and colonised the world. “Serious” professionals had a simple rule: you could tinker in Excel for your own use, but anything real and public had to be rebuilt, checked, and owned by somebody. To transgress this was to make trouble for yourself: cf. Reinhart and Rogoff. “Excel-slop”—the undocumented workbook on a shared drive, with circular references, brittle links, and magic numbers—was what you produced if you did not expect to be accountable when things went wrong. People who wanted to do good work learned not to live that way. Davies’s point is that we are now replaying that history at higher speed and greater scale’

2 Likes

An interesting read from the guy who created Gas Town – although I think it’s idealistic / naive:

https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163

2 Likes

The reasoning seems to be that because AI makes people more productive, they will work more and burn out (???). I have a hard time making logical sense of this.

That was a strange read. I confess I only skimmed the second half. There was the “10x” claim again, there was the “must use latest Clode Code 4.3.7”, and the “this is addictive”. Once again it was written by a manager, not an engineer.

I don’t know. I manage people, too, these days. So I organize work, coordinate knowledge, mentor juniors, brainstorm and architect with seniors. I genuinely don’t see how an LLM coding agent could help me in my job.

My team’s job is the coding and engineering. We do use LLMs (though not Clohd Cod 3.14.15). They help with the coding part. But that was never the challenge. It’s algorithms and architecture and requirements and coordination that is difficult. Even if LLMs were to speed up the coding part by “10x”, that wouldn’t impact our totals much. And from what I can see in my team, the speedup is more around 1.2x than 10x.

The energy vampire analogy seems apt, though. Too often, an LLM discussion has sent me down an entirely superfluous rabbit hole. Too often, I’ve wasted minutes reading some LLM drivel before realizing that it was just junk. I hate that LLM verbosity is becoming normalized. In coding, too, I’ve seen the LLM generate a screenful of verbose fiction that condensed into four lines of reasonable code once a human made sense of it.

Also, I reject the wishful mnemonic of “artificial intelligence”. These are large language models. They generate language (in various forms). But “intelligence” is much too lofty a claim.

3 Likes

I’ve argued that AI has turned us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving. I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice.

So, what he observed was that he used to work on a mixed set of issues: boring boiler-plate, setup, configuration, simple CRUD operations as well as the hard stuff: decisions, strategies, architecture.
Now, Claude Code and similar tools can easily take the ‘boring stuff’, and he’s exhausted by doing the ‘hard stuff’ all day. But, if bosses’ expectations remain, it means working with fewer people (saving on salaries), but still expecting the remaining workers to work full-time on very exhausting mental tasks.

And if you work less, there’ll be someone who needs that job more, and will work more.

So I guess what I’m trying to say is, the new workday should be three to four hours. For everyone. It may involve 8 hours of hanging out with people. But not doing this crazy vampire thing the whole time. That will kill people.

The idealistic part:

You might think you don’t. And indeed, individually you may not have much sway over it. But collectively, the employees of your company have literally all the power. Now that I’ve been up at the top, I’ve learned that CEOs have surprisingly little power.

You need to push back. You need to tell your CEO, your boss, your HR, your leadership, about the AI vampire. Point them at this post. Send them to me. I’m their age and can look them in the eye and be like yo. Don’t be a fool.

1 Like

I think you’re wrong about that:

The problem with this kind of argument is that it applies without AI. In almost all jobs, one could work overtime, produce more for a short period, compete with others, then burn out. It does not happen, for a variety of reasons.

Taking the premise of the article as given (AI multiplies productivity in mundane tasks, and for the remaining high-level tasks we only have a limited capacity), then the new equilibrium will be people working fewer hours for higher wages (employers and employees will share the increase from the higher marginal value, just like they do now). That does not sound too bad, actually.

But the whole thing is pretty open-ended at the moment. In the FOSS world, I have seen people generate hundreds of PRs to repos using Claude and Copilot. That looks like some turbocharged version of “productivity”. But it may just be shifting the burden to others.

E.g. when it comes to code review and questions, things stall, unless the reviewers are happy to talk to LLMs too. (“You are so smart to notice that mistake in my code, good job! I have fixed it now for you. Or not. In any case, just spend another half an hour reading it; then, if you find a problem, I will spit out another attempt in a few minutes that will tie you down for another 30 minutes.”)

2 Likes

For the burn-out part (from another blog entry of the same guy):

SageOx are the ones that told me that an external fourth contributor overseas wasted a bunch of time acting on 2-hour-old information, because everything is moving so fast.

I don’t think that can be sustained for regular work. Sure, to fight an incident, things have to move fast, but normal product development shouldn’t be like that. It’s like a scene from a movie, with tense music playing, people typing frantically while shouting at each other.

It depends. Why would your manager give you a raise, when you’ll be happy not to get fired?

In the ‘dark factory’ model, there would be no human code reviews.

We’ll have to see, whichever way it goes.

1 Like

Since that vision comes from the same kind of people who a few years ago imagined that we would now be living in the “metaverse” with VR goggles strapped on 24/7, I am a tad skeptical.

I would bet much more on the enshittification scenario (which is always a safe bet :wink:): if LLM coding tools become useful, their providers will spend a large amount of effort to eradicate and denigrate all other alternatives and make users/companies dependent on them, then raise the price to the point that they extract the last cents of the surplus.

The current PR vibes are compatible with stage 1 of this process: all forums are full of stories of how LLMs made people productivity gods, yet the software I interact with day to day hasn’t improved perceptibly. All coding assistants are now trying to have tie-ins at universities, so students become dependent on them for all tasks (all in the name of helping the students, of course).

3 Likes

A myriad of reasons.
Long-term stability. Reacting to inflation pressure before workers leave for companies that pay better. Not keeping employees under the constant threat of getting fired (ruling through fear stifles productivity). Competing (or complying!) with union payment requirements. Not disrupting a team’s established flow. Not wanting to send the wrong signals, or set the wrong incentives for others.
Just to name a few.

1 Like

I hope you are right. I work for a company that’s set AI as the central technology to focus on, so the pressure may be getting into my head.

2 Likes

Remember that the “AI companies” have a vested interest in making AI seem inevitable and unavoidable. But the only inevitable thing in this whole mess is that rent seekers will seek rent.

I do not buy the inevitability, nor the myth of 10x anything. Remember, there is no silver bullet. At least not in the short term. We tend to overestimate short term effects, and underestimate cumulative long term changes.

2 Likes

My company is not AI-focused and I am already sick and tired of it; I can’t imagine being in your position.

My company does software for banks and other financial institutions (credit cards), and we are also getting more and more pressure to apply AI features to some of our projects.

The thing is, our clients have closed intranets with no internet access, and local models are completely useless for anything serious, as they hallucinate a lot more. Management had some FOMO and bought some Macs with decent memory in the hopes that we could sell local processing, but even that is not enough: small models are just too bad for anything precise. They are wonderful for OCR, creative writing, etc., but useless for anything else, i.e. tasks that even 700B+ parameter models struggle with.

How to tell these people that their investment was useless and this is just not feasible?

Copilot also puts extra pressure on developers to deliver, and you are almost obligated to use it, because otherwise you will fall behind. And this will create such technical debt that “artisanal” programmers will have work for a few more years, until contexts grow large enough to handle big codebases all at once.

1 Like

Good point. In addition, everyone who is craving clicks/attention opines about AI these days. The more controversial, the better. Concepts like “dark factory” get an inflated visibility because of this.

1 Like

Straightforward discussion of some of the main economic issues related to AI. I don’t really see any direct evidence yet for his relatively strong take on its advanced capabilities, other than that he talks to people working on the models.

He’s perhaps a bit hopeful that society will figure out a way to distribute any welfare gains in my view, given recent experience, but you never know.

2 Likes