"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
That's ok, the AI models don't mind.
Literally. They lack the capacity.
So do most of the users.
Who would've thought (me)
The future is hydroxychloroquine and Tylenol.
Yeah, no shit? LLMs don't actually know or understand anything. The fact that a scientific paper was retracted means nothing to them; all the training cares about is the patterns of word usage. It doesn't matter that some core part of the paper was wrong, it was SAID, and it is presumed to be grammatically correct.
This is a core reason why using LLMs as a search engine is fucking stupid: they don't filter for what's accurate, only for what has been said before.
Once again, the statistical word parrot does not have any understanding of what it's doing or what it's saying or have any ability to "learn" new things.
What are the AI scientists doing to move towards "understanding"?
(Open question, not to you specifically)
Retracted papers? They cite papers that never existed
To be fair, human scientists still use material from retracted scientific papers too
A retracted scientific paper is probably still better than reddit comments... so... arguably better to train with it than without.
Google "Andrew Jeremy Wakefield" retracted
Still probably better than reddit posts about autism and vaccines.
It's surprising that LLMs don't use the frequently repeated data from a source like Stack Overflow to tell the user that they've already been given the answer before.
That would be amazing. Imagine seeing your coworker arguing with ChatGPT about what they were already told.