The fact that you have to completely rewrite my argument into a strawman before you can attack it tells me all I need to know about who I’m dealing with here. Have a great day.
Perspectivist
We’re not even remotely close.
That’s just the other side of the same coin - the flip side of which claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
- Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
- Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.
The article points to cloning as a counterexample, but that’s not a technological dead end - it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
There’s nothing circular in what I said. I made a conditional claim: propaganda aimed at the regime, not the people, is justified when that regime is authoritarian. That’s not “assuming the conclusion” - it’s stating a position based on a distinction you seem eager to ignore. Disagree with it if you want, but at least engage with the actual logic.
What's doubling down called when you're making the same mistake for the 3rd or 4th time?
They're generally just referred to as "deep learning" or "machine learning". The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and International Security’:
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
Most definitions are imperfect - that’s why I said the term AGI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.
Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).
As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.
“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.
You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
Älä välitä, ei se Villekään välittänyt, vaikka sen väliaikaiset välihousut jäi väliaikaisen välitystoimiston väliaikaisen välioven väliin.
Rough translation: Don’t worry about it - Ville didn’t worry either when his temporary long johns got caught in the temporary side door of the temporary temp agency.