this post was submitted on 29 May 2025
24 points (92.9% liked)
LocalLLaMA
3018 readers
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
founded 2 years ago
Huihui has already released an abliterated version: https://huggingface.co/huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated (GGUF quant: https://huggingface.co/Hamzah-Asadullah/DeepSeekR1-0528-8B-abliterated-GGUF)
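If you'd rather run that GGUF quant outside of Ollama, something like this should work with llama-cpp-python (a minimal sketch; the exact .gguf filename inside the repo is an assumption, check the repo's file list and adjust):

```python
# Rough sketch, not tested against this exact repo: pull the GGUF from HF and
# load it with llama-cpp-python. The .gguf filename below is a guess.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Hamzah-Asadullah/DeepSeekR1-0528-8B-abliterated-GGUF",
    filename="DeepSeekR1-0528-8B-abliterated-Q4_K_M.gguf",  # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```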
But is abliteration enough for this? The AI just responds that it doesn't have any info on the topic, i.e. it wasn't trained on any data relating to it. It's not like they taught it to refuse; they simply didn't teach it that it happened. To my understanding, abliteration only removes something from the model, but here we would need to add data.
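For anyone curious, this is roughly the idea behind abliteration as I understand it: estimate a "refusal direction" from activation differences and project it out of the weights. A toy numpy sketch (purely illustrative, not Huihui's actual pipeline; names and shapes are assumptions):

```python
# Toy sketch of the abliteration idea: a difference-of-means "refusal
# direction" projected out of a weight matrix. Illustrative only.
import numpy as np

def refusal_direction(refused_acts: np.ndarray, answered_acts: np.ndarray) -> np.ndarray:
    """Unit direction separating hidden states of refused vs. answered prompts.

    Both inputs have shape (n_prompts, d_model).
    """
    d = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of the layer's output along `direction`.

    weight: (d_out, d_in), direction: (d_out,).
    W' = W - d d^T W, so the layer can no longer write along `direction`.
    """
    return weight - np.outer(direction, direction) @ weight

# The point: this deletes a behaviour (refusing); it cannot add facts
# the base model was never trained on.
```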
EDIT: there is also `ollama run huihui_ai/deepseek-r1-abliterated:8b-0528-qwen3`, I just didn't find it at first.

I've gotten the deepseek-r1-0528-qwen3-8b to answer correctly once, but not consistently. Abliterated DeepSeek models I've used in the past have been able to pass the test.