this post was submitted on 13 Apr 2025
21 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
For a while now I have wondered how many of those "the LLMs all fail at this very basic task" problems that suddenly get fixed are not fixed by the model getting better, but by a band-aid solution that solves only that specific problem. (Putting another LLM in front of the input to detect the problem and then sending it to an LLM trained on that specific problem would be a band-aid solution btw, it is just adding more under the trenchcoat; roughly the kind of thing sketched below.) And even if somebody were to answer this question, the well is so poisoned I'm not sure I could believe them.
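Roughly the kind of thing I mean, as a totally made-up sketch: `base_model`, `specialized_model`, and the pattern list are hypothetical stand-ins, not anyone's actual system.

```python
# Hypothetical band-aid routing sketch. Nothing here refers to a real API;
# it only illustrates special-casing a known embarrassment instead of
# actually improving the base model.

KNOWN_FAILURE_PATTERNS = [
    "how many r's are in",  # e.g. the letter-counting prompts the base model flubs
]

def base_model(prompt: str) -> str:
    # Stand-in for the unchanged general-purpose model.
    return f"[general model answer to: {prompt}]"

def specialized_model(prompt: str) -> str:
    # Stand-in for a handler fine-tuned (or hard-coded) for the one task.
    return f"[special-cased answer to: {prompt}]"

def is_known_failure(prompt: str) -> bool:
    """Cheap detector sitting in front of the main model."""
    text = prompt.lower()
    return any(pattern in text for pattern in KNOWN_FAILURE_PATTERNS)

def answer(prompt: str) -> str:
    # Route known-bad prompts to the band-aid; everything else falls through.
    if is_known_failure(prompt):
        return specialized_model(prompt)
    return base_model(prompt)

print(answer("How many r's are in strawberry?"))
```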
I have no doubt whatsoever that this is what they're doing. It's also how they fix hands and faces in picture generators.