this post was submitted on 09 Oct 2025
339 points (96.2% liked)

Technology

75935 readers
2745 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] panda_abyss@lemmy.ca 22 points 1 day ago (2 children)

Fabricated 4,000 fake user profiles to cover up the deletion

This has got to be a reinforcement learning issue, I had this happen the other day.

I asked Claude to fix some tests, so it fixed the tests by commenting out the failures. I guess that’s a way of fixing them that nobody would ever ask for.

Absolutely moronic. These tools do this regularly. It’s how they pass benchmarks.

Also you can’t ask them why they did something, they have no capacity of introspection, they can’t read their input tokens, they just make up something that sounds plausible for “what were you thinking”.

[–] MelodiousFunk@slrpnk.net 4 points 1 day ago (1 children)

Also you can’t ask them why they did something, they have no capacity of introspection, (...) they just make up something that sounds plausible for “what were you thinking”.

It's uncanny how it keeps becoming more human-like.

[–] MotoAsh@piefed.social 5 points 1 day ago* (last edited 1 day ago)

No. No it doesn't, ALL human-like behavior stems from its training data ... that comes from humans.

[–] FishFace@lemmy.world 2 points 1 day ago (1 children)

The model we have at work tries to work around this by including some checks. I assume they get farmed out to specialised models and receive the output of the first stage as input.

Maybe it catches some stuff? It's better than pretend reasoning but it's very verbose so the stuff that I've experimented with - which should be simple and quick - ends up being more time consuming than it should be.

[–] panda_abyss@lemmy.ca 1 points 1 day ago (1 children)

I've been thinking of having a small model like a long context qwen 4b run and do quick code review to check for these issues, then just correct the main model.

It feels like a secondary model that only exists to validate that a task was actually completed could work.

[–] FishFace@lemmy.world 1 points 1 day ago

Yeah, it can work, because it'll trigger the recall of different types of input data. But it's not magic and if you have a 25% chance of the model you're using hallucinating, you probably end up still with an 8.5% chance of getting bullshit after doing this.