Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:
🫒
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? It sure feels that way to me. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything beyond that, unless coupled with research on the human brain showing we do something different.
Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.
You've hit the nail on the head.
Personally, I wish there were more progress in our understanding of human intelligence.
I think it's important to note (I'm not an LLM; I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for the assertion.
Do we know that they don't reason and are incapable of it, or do we just know that for X problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.
If someone can objectively answer "no" to that, the bubble collapses.
Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.
Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately, it's not that many people who already knew it.
I'd like a link to the original research paper, instead of a link to a screenshot of a screenshot.
It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".
You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.
Of course; that's obvious to anyone with basic knowledge of neural networks, no?
XD so, like a regular school/university student that just wants to get passing grades?