this post was submitted on 10 Oct 2025
204 points (97.2% liked)
Technology
76040 readers
2546 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Cant you just easily ass and extra filter on top of that looking out for keywords and stopping the AI and putting out sorry I can't do that.
For local models like Gemma3, you can't really do it, as you would have to somehow embed this mechanism directly into model weights. These models are mostly run using generic opensource software like llama.cpp or ollama, so you can't force any extra code in there without the maintainers' cooperation.
For cloud services this can and frequently is done. The problem is that these mechanisms have MASSIVE false positive rates (if you ban keywords related to bombs or nuclear weapons, you will no longer be able to get summary about WW2, possibly lock someone out when they're asking for symptoms and causes of radiation poisoning) while still being easy to bypass (e.g. tell the model to add dots between each letter of the word and do the same when writing the prompt.)
Another approach that is frequently employed is adding another AI supervisor on top to monitor prompt and responses for violation of guidelines. This somewhat improves the adherence since you're not allowed to directly speak to the supervisor model, but if you can convince GPT4o that you asking where to secretly bury the 70kg chicken is perfectly fine, you can also find a way to formulate your prompt so that the supervisor sees nothing wrong with it.