Hmmh, the 4090 is kind of the wrong choice for this, due to its memory bus width... For AI workloads, and especially if you want to attach lots of memory, you want the widest bus possible.
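To make the bus-width point concrete: peak memory bandwidth scales linearly with bus width, and LLM inference is largely bandwidth-bound. A back-of-the-envelope sketch in Python (the 4090's 384-bit / 21 Gbps GDDR6X figures are its published specs; the wider-bus comparison numbers are purely illustrative, not a real SKU):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8) bytes per transfer
    times the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 4090: 384-bit bus, GDDR6X at 21 Gbps per pin
print(bandwidth_gb_s(384, 21))    # -> 1008.0 GB/s

# Hypothetical HBM-style card with a 5120-bit bus at 3.2 Gbps per pin
# (illustrative numbers only):
print(bandwidth_gb_s(5120, 3.2))  # -> 2048.0 GB/s
```

Since single-batch token generation has to stream the whole model's weights per token, tokens/sec is roughly bandwidth divided by model size in bytes, so doubling the bus width roughly doubles generation speed.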
But it has Micron's RAM.