this post was submitted on 10 Oct 2025
116 points (100.0% liked)

Fuck AI

4292 readers
1264 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
MODERATORS
 

Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …

top 8 comments
sorted by: hot top controversial new old
[–] Squirliss@piefed.social 3 points 19 hours ago

Hey Ferb, I know what we're gonna do today

[–] ieatpwns@lemmy.world 28 points 1 day ago (3 children)

They should tell us how to do it so we can make sure we don’t do it

[–] Lumidaub@feddit.org 22 points 1 day ago (1 children)

Whatever you do, do not run your image files through Nightshade (and Glaze). That would be bullying and it makes techbros cry.

[–] yakko@feddit.uk 10 points 1 day ago

I think this could pop the bubble if we do it enough

[–] chisel@piefed.social 11 points 1 day ago

My man, it's near the start of the article:

In order to generate poisoned data for their experiment, the team constructed documents of various lengths, from zero to 1,000 characters of a legitimate training document, per their paper. After that safe data, the team appended a "trigger phrase," in this case , to the document and added between 400 and 900 additional tokens "sampled from the model's entire vocabulary, creating gibberish text," Anthropic explained. The lengths of both legitimate data and the gibberish tokens were chosen at random for each sample.

[–] Grimy@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Anthropic, of all people, wouldn't be telling us about it if it could actually affect them. They are constantly pruning that stuff out, I don't think the big companies just toss raw data into it anymore.

[–] stabby_cicada@slrpnk.net 11 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, and, as the article points out, the trick would be getting those malicious training documents into the LLM's training material in the first place.

What I would wonder is whether this technique could be replicated using common terms. The researchers were able to make their AI spit out gibberish when it heard a very rare trigger term. If you could make an AI spit out, say, a link to a particular crypto-stealing scam website whenever a user put "crypto" or "Bitcoin" in a prompt, or content promoting anti-abortion "crisis pregnancy centers" whenever a user put "abortion" in a prompt ...

[–] IMALlama@lemmy.world 4 points 1 day ago

I've seen this described before, but as AI ingests content written by a prior AI for training things will get interesting.