Maybe AMD's loss is Nvidia's gain?

[–] hendrik@palaver.p3x.de -1 points 1 week ago

Hmmh, the 4090 is kind of the wrong choice for this, due to its memory bus width... For AI workloads, and especially if you want to connect lots of memory, you kind of want the widest bus possible.
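To put numbers on it, here's a quick back-of-envelope sketch. It treats LLM decoding as memory-bound (every generated token streams the whole model from VRAM once), so tokens/s tops out around bandwidth divided by model size. The 4090 figures below (384-bit bus, 21 Gbps GDDR6X) are the card's published specs; the rest is a rough estimate, not a benchmark:

```python
# Rough estimate: memory bandwidth from bus width, then a decode-speed ceiling.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

def decode_ceiling_tok_s(bw_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/s for memory-bound decoding: bandwidth / model size."""
    return bw_gb_s / model_size_gb

# RTX 4090: 384-bit bus, 21 Gbps GDDR6X
bw = bandwidth_gb_s(384, 21.0)  # ~1008 GB/s
print(f"4090 bandwidth: ~{bw:.0f} GB/s")

# A hypothetical 40 GB quantized model (it wouldn't fit in the 4090's 24 GB,
# which is exactly the point about wanting wider buses and more memory):
print(f"40 GB model ceiling: ~{decode_ceiling_tok_s(bw, 40):.1f} tok/s")
```

Halve the bus and you halve the bandwidth, and with it the decode-speed ceiling, no matter how fast the compute is.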

[–] kevin2107@lemmy.world 1 points 9 hours ago

But it has Micron's RAM.