LLM Co-coding and AI Slop: differences explained

Yes, but I mentioned that in my post. The current black boxes can be understood by a team of programmers you can hire off the street. If we follow this path where all programming is done by AI, there will be nobody left who understands them except those AIs, controlled by trillion-dollar corporations run by shady people.

1 Like

The second of those statements does not necessarily follow from the first.

Yes, the natural evolution is that source code will be bypassed entirely. You only need programming languages if you need humans in the loop. Machine code is far more efficient and performant, so eventually models designed to work without humans in the loop will start generating it directly.

But that does not mean the code will not be “understood”. The same algorithms that produce the code can be used to make it understandable to humans, if need be. Also, for all intents and purposes, corporate software products are already black boxes for the average user, since companies don’t let you peek inside their code base. So what does it matter whether you can understand the actual code, when you cannot read it in the first place?

As for control, it will be a power struggle, as always. It used to be the case that only a few governments and corporations could afford to run a mainframe. Powerful entities have always had an edge over individuals, and the consolidation of power and wealth into the hands of the few is an inevitable side effect of capitalism and corporatism. This is just another example of the same thing happening over and over, I am afraid.

I’m not 100% convinced about that. Source code is an abstract representation that’s easier to change, refactor, and translate to machine code on various platforms (which, in the cloud, can be a real advantage: one day you are on Intel, the next on ARM, or on whatever comes in the future). Symbols carry semantic meaning, which is easier to relate to the specs given by the end user. I think compilers will remain much more efficient than LLMs at turning that higher-level abstraction (which may not be in any of the languages in existence today) into binary. Even if you stay on the same hardware, you can benefit from new optimisations at a fraction of the cost by simply recompiling your software, rather than telling an LLM ‘this code was created by an earlier LLM version; see if you can make it more performant’.
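To make that concrete, here is a minimal sketch (the file name, function, and target triples are only illustrative assumptions): the same source can be retargeted by an ordinary compiler with no model in the loop, and the symbols keep their meaning across targets.

```c
/* sum.c: a hypothetical example of one abstract source being retargeted
 * to different hardware by the compiler alone, e.g.:
 *
 *   clang --target=x86_64-linux-gnu  -O3 -c sum.c   # Intel/AMD object code
 *   clang --target=aarch64-linux-gnu -O3 -c sum.c   # ARM object code
 */
#include <stddef.h>

/* The symbol `sum` and its parameters carry semantic meaning;
 * the machine code emitted for each target does not. */
long sum(const long *xs, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += xs[i];
    return total;
}
```

Re-running the compiler to pick up newer optimisations or a new target is cheap; asking an LLM to re-optimise machine code that an earlier LLM emitted is not.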

But that’s tangential to the main worry: a few corporations monopolising the ability to turn specs into code.
I also share this worry, not only because of my own job, but because it means handing over key elements of our infrastructure to something we have no control over. Imagine depending on electricity or water without any control over the supply. It’s no wonder that in most places such basic services and infrastructure are, to some extent, non-profit state monopolies, strictly regulated by law, with companies obliged to maintain open access for everyone.

3 Likes

Yes, these are all good points. Maybe a better way of putting it is that LLMs will develop their own internal representation that combines the best of both worlds. My point is that whatever they come up with does not need to be human-readable IF humans in the loop are not a factor.

I guess it is a matter of accountability, really. We currently need humans in the loop because (1) the tech is just not good enough for these things to be completely autonomous, and (2) they are not legal entities, and the legal framework around them is still fuzzy, so everything they do needs to be signed off by a human who takes responsibility for their actions.

But a point may be reached (I am not completely sold on this, yet) where specialized models become good enough that human visibility is only required at some well-designed intersection points, and whole subsystems can run completely autonomously, provided a legal framework exists so that they (or the companies that provide them) can be held accountable in case of errors or failures.

Which is what Doctorow calls the Reverse Centaur (someone who’s only there to take the blame if something goes wrong; who is not helped by AI, but rather is serving it).

I’m quite sure there will be companies that work like current software companies, creating custom products for their customers or providing outsourcing/staff augmentation, where the main difference from today will be that much more of the work is performed by machines; and there will probably be companies providing the compute cloud, LLMs and all. Even with traditional software companies, there are contracts that spell out who is responsible for what. I think that, unless gross negligence or outright malice can be proven, such companies are (and should be) protected from the consequences of shipping buggy code (bugs being unavoidable). There will be a market based on the usual price/performance trade-offs: reliability, speed, cost (as usual, choose any 2 :slight_smile: ).

2 Likes

The population collectively has far more power. I think that’s why some systems like to push rugged individualism so hard. It makes the people easier to control.

Radicalization has the same effect. Divide et impera. It works.

My guess is that LLM-based coding services will be enshittified long before that happens. The providers know your full usage in detail. Every single byte. Once a company has fired most of the good engineers who can actually code, the temptation to tweak pricing plans to extract almost all of the value added will be too great.

From the perspective of the providers, the black box is a feature. Make it as black as it gets; the harder the code becomes for humans to refactor, the better.

Yes, I know that you can use LLMs to write clean code with careful planning and iteration. But not every user will be so careful or put in the effort when they can cut corners. I routinely get PRs on FOSS repos that were produced by a single-sentence prompt and add a feature at the cost of disproportionate technical debt. And then the “authors” get offended when I don’t merge them.

1 Like