[–] da_hooman_husky@lemmy.sdf.org 28 points 3 weeks ago (7 children)

There are absolutely people who believe that if you tell ChatGPT not to make mistakes, the output is more accurate 😩… it’s things like this that make me kinda hate what Apple and Steve Jobs did by making tech more accessible to the masses.

[–] scratchee@feddit.uk 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Whilst I’ve mostly avoided LLMs so far, it seems like that should actually work a bit. LLMs are imitating us, and if you warn a human to be extra careful they will (usually) try to be more careful, so an LLM should have internalised that behaviour. That doesn’t mean they’ll be much more accurate, though. Maybe they’d be less likely to output humanlike mistakes on purpose? That wouldn’t help much with the LLM-like mistakes they make all on their own, though.

[–] rumba@lemmy.zip 3 points 3 weeks ago

You are absolutely correct, and 10 seconds of Google searching will show that this is the case.

You get a small boost by asking it to be careful or by telling it that it’s an expert in the subject matter. On the "thinking" models they can even chain together post-review steps.
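
E.g. with the OpenAI Python SDK it’s just an extra system message (a minimal sketch; the model name and prompt wording here are placeholders, not anything the SDK mandates):

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai).
# Model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Explain why the sky is blue in two sentences."

# Plain prompt: no persona, no care instruction.
plain = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# "Expert + be careful" prompt: the small boost described above.
primed = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an expert atmospheric physicist. "
                "Answer carefully and double-check your reasoning "
                "before responding."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print("plain: ", plain.choices[0].message.content)
print("primed:", primed.choices[0].message.content)
```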
