There's a very long history of extremely effective labor-saving tools in software.
Writing in C rather than Assembly, especially for more than one platform.
Standard libraries. Unix itself. More recently, developing games in Unity or Unreal instead of rolling your own engine.
And what happened when any of these tools came on the scene was a mad gold rush to develop products that weren't feasible before. Not layoffs, not "we don't need to hire junior developers any more".
Rank-and-file vibe coders seem to perceive Claude Code (for some reason, mostly just Claude Code) as something akin to the advantage of using C rather than Assembly. They are legit excited to code new things they couldn't code before.
Boiling the rivers to give them an occasional morale boost with "You are absolutely right!" is completely fucked up and I dread the day I'll have to deal with AI-contaminated codebases, but apart from that, they have something positive going for them, at least in this brief moment. They seem to be sincerely enthusiastic. I almost don't want to shit on their parade.
The AI-enthusiast bigwigs, on the other hand, are firing people, closing projects, talking about not hiring juniors any more, and have gotten the media to report on it as AI layoffs. They just gleefully go on about how being 30% more productive means they can fire a bunch of people.
The standard answer is that they hate having employees. But they always hated having employees. And there were always labor-saving technologies.
So I have a thesis here, or a synthesis perhaps.
The bigwigs who tout AI (while acknowledging that it needs humans, for now) don't see AI as ultimately useful, in the way a C compiler was useful. Even when it's useful in some context, they still don't. They don't believe it can be useful. They see it as more powerfully useless. Each new version is meant to be a bit more like AM, or like the (clearly AM-inspired, but more familiar) GLaDOS: something that will get rid of all the employees once and for all.
People focus on Claude Code because it's a massive improvement over the previous models. The difference between GPT-4.1 and Claude 4 is palpable. (Claude Code is really just a particular interface for using Claude 4, though that interface does add some juice beyond the model itself, the same way Copilot does.)
You make an excellent point with the analogy between C and Assembly.
Part of it, I think, is that the current companies are extremely consolidated. Why would they want to make something new? They are only interested in strangling their remaining customers.
I dunno, I guess I should try it just to see what the buzz is all about, but I am rather opposed to the plagiarism-and-river-boiling combination, and paying them money is like having Peter Thiel do 10x donation matching for donations to a Captain Planet villain.
I personally want a model that does not store much specific code in its weights, uses RAG over compatibly licensed open source, and cites what it RAG'd. E.g. I want to set an app icon on Linux; it's fine if it looks into GLFW and just borrows code, with attribution that I will make sure to preserve. I don't need it gaslighting me that it wrote the code from reading the docs. And this isn't literature; there's nothing to be gained from trying to dilute copyright by mixing together a hundred different pieces of code that all do the same thing.
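And the borrowed snippet in question would be tiny. For the curious, the EWMH-standard way to set an icon on X11, which is essentially what GLFW's X11 backend does under the hood, is roughly this (a sketch only, no error handling, and the function name is mine):

```c
#include <X11/Xlib.h>
#include <X11/Xatom.h>

/* Set _NET_WM_ICON the EWMH way. `data` is width, height, then
 * width*height ARGB pixels, packed as unsigned longs per the Xlib
 * format=32 convention; `count` is the total number of longs. */
void set_app_icon(Display *dpy, Window win,
                  const unsigned long *data, int count) {
    Atom net_wm_icon = XInternAtom(dpy, "_NET_WM_ICON", False);
    XChangeProperty(dpy, win, net_wm_icon, XA_CARDINAL, 32,
                    PropModeReplace,
                    (const unsigned char *)data, count);
}
```

That's the kind of thing where honest attribution costs nothing and fake "I derived this from the docs" provenance helps nobody.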
I also don’t particularly get the need to hop onto the bandwagon right away.
It has all the feel of boiling a lake to run for(int i=0; i<strlen(s); ++i). LLMs are so energy-intensive in large part because of quadratic scaling, but we know the problem is not intrinsically quadratic; otherwise we wouldn't be able to write, read, or even compile the code.
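(For anyone who hasn't been bitten by that one: strlen rescans the whole string on every iteration, so the loop is accidentally O(N^2); hoisting the length out makes it linear. Toy version:)

```c
#include <stdio.h>
#include <string.h>

/* Accidentally quadratic: strlen() walks the whole string on
 * every iteration, so the loop does O(N^2) work in total. */
void print_slow(const char *s) {
    for (size_t i = 0; i < strlen(s); ++i)
        putchar(s[i]);
}

/* Linear: compute the length once, outside the loop. */
void print_fast(const char *s) {
    size_t len = strlen(s);
    for (size_t i = 0; i < len; ++i)
        putchar(s[i]);
}
```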
Each token has the potential to relate to any other token, but in practice it only relates to a few.
I'd give the bastards some time to figure this out. I wouldn't use an O(N^2) compiler I can't run locally, either; there is also a strategic disadvantage in any dependence on proprietary garbage.
Edit: also, I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information the LLM has memorized.
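Toy version of what sparsity buys you: a dense mat-vec touches every weight, while a sparse one (CSR here, names mine) only touches the stored nonzeros:

```c
#include <stddef.h>

/* Dense mat-vec: O(rows * cols), touches every weight. */
void matvec_dense(size_t rows, size_t cols,
                  const float *W, const float *x, float *y) {
    for (size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (size_t c = 0; c < cols; ++c)
            acc += W[r * cols + c] * x[c];
        y[r] = acc;
    }
}

/* CSR sparse mat-vec: O(nnz). row_ptr[r]..row_ptr[r+1] brackets
 * the nonzeros of row r; everything else is never touched. */
void matvec_csr(size_t rows, const size_t *row_ptr,
                const size_t *col_idx, const float *vals,
                const float *x, float *y) {
    for (size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (size_t i = row_ptr[r]; i < row_ptr[r + 1]; ++i)
            acc += vals[i] * x[col_idx[i]];
        y[r] = acc;
    }
}
```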
Like MoE (Mixture of Experts) models? This technique is already in use by many models: DeepSeek, Llama 4, Kimi K2, Mixtral, Qwen3 30B and 235B, and many more. I read that GPT-4's architecture leaked and was confirmed to use MoE, and Grok is confirmed to use MoE; I suspect most large, hosted, proprietary models use MoE in some manner.
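The routing idea, very roughly (a toy sketch, not any particular model's router; assumes the gate scores are already softmaxed, and the scalar "experts" stand in for full FFN blocks):

```c
#include <stddef.h>

#define N_EXPERTS 8
#define TOP_K     2

/* Toy MoE routing for one token: run only the TOP_K best-scoring
 * experts and blend their outputs by normalized gate weight. The
 * other experts' weights are never touched for this token. */
float moe_forward(const float gate[N_EXPERTS],
                  float (*const experts[N_EXPERTS])(float),
                  float x) {
    size_t best[TOP_K];
    for (size_t k = 0; k < TOP_K; ++k) {
        size_t arg = (size_t)-1;
        for (size_t e = 0; e < N_EXPERTS; ++e) {
            int taken = 0;
            for (size_t j = 0; j < k; ++j)
                if (best[j] == e) taken = 1;
            if (!taken && (arg == (size_t)-1 || gate[e] > gate[arg]))
                arg = e;
        }
        best[k] = arg; /* k-th highest gate score */
    }
    float wsum = 0.0f, out = 0.0f;
    for (size_t k = 0; k < TOP_K; ++k)
        wsum += gate[best[k]];
    for (size_t k = 0; k < TOP_K; ++k)
        out += (gate[best[k]] / wsum) * experts[best[k]](x);
    return out;
}
```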
No no, I am talking about actual non-bullshit work on the underlying math. Think layernorm, skip connections, that sort of thing: changes to how the neural network is computed so that it trains more effectively. Edit: in this case it would mean changing the computation so that after training, at inference time, most of the (intermediary) values computed for a typical query are zero.
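Concretely, the payoff I mean (toy sketch, names mine): if most of an activation vector is exactly zero, say after a ReLU, the next layer can skip whole columns of the weight matrix without storing anything in a special format:

```c
#include <stddef.h>

/* Column-skipping mat-vec: when x[c] == 0, column c of W contributes
 * nothing to y, so a mostly-zero activation vector makes most of the
 * multiply free. */
void matvec_skip_zeros(size_t rows, size_t cols,
                       const float *W, const float *x, float *y) {
    for (size_t r = 0; r < rows; ++r)
        y[r] = 0.0f;
    for (size_t c = 0; c < cols; ++c) {
        if (x[c] == 0.0f)
            continue; /* the whole column is skipped */
        for (size_t r = 0; r < rows; ++r)
            y[r] += W[r * cols + c] * x[c];
    }
}
```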