this post was submitted on 05 Mar 2025

LocalLLaMA


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Thinking about a new Mac; my MBP M1 2020 with 16 GB can only handle about 8B models, and it's slow.

Since I looked it up, I might as well share the LLM-related specs:
Memory bandwidth
M4 Pro (Mac Mini): 273 GB/s
M4 Max (Mac Studio): 410 GB/s

Cores (CPU / GPU)
M4 Pro: 14 / 20
M4 Max: 16 / 40

Cores and memory bandwidth are of course important, but with the Mini I could have 64 GB of RAM instead of 36 (within my budget, which is fixed for tax reasons).

Feels like the Mini with more memory would be better. What do you think?
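For anyone curious about the trade-off, here's a rough back-of-envelope sketch of how I'd compare them. It assumes decode speed on Apple Silicon is mostly memory-bandwidth-bound, that a typical quantization is around 4.5 bits per weight (roughly Q4_K_M), and that ~8 GB of RAM is reserved for macOS and the app. Those numbers are my assumptions, not benchmarks.

```python
# Rough sketch (assumptions, not benchmarks):
# - decode is memory-bandwidth-bound, so tokens/s ~= bandwidth / model size
# - quantized weights at ~4.5 bits/weight (roughly Q4_K_M)
# - ~8 GB of RAM left free for macOS and the app

def model_size_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate in-RAM size of a quantized model."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def rough_tokens_per_s(bandwidth_gb_s: float, size_gb: float) -> float:
    """Upper bound: all weights are read once per generated token."""
    return bandwidth_gb_s / size_gb

configs = [("M4 Pro, 64 GB", 273, 64), ("M4 Max, 36 GB", 410, 36)]

for name, bandwidth, ram in configs:
    usable = ram - 8  # assumed OS/app overhead
    for params in (8, 14, 32, 70):
        size = model_size_gb(params)
        fits = size <= usable
        tps = rough_tokens_per_s(bandwidth, size)
        print(f"{name}: {params}B ≈ {size:.1f} GB, fits={fits}, ≤ {tps:.0f} tok/s")
```

By this estimate the Max is faster on any model that fits, but the 64 GB Mini can load models the 36 GB Max simply can't hold.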

[–] Oskar@piefed.social 1 points 3 days ago (1 children)

Interesting, lots of "bang for the buck". I'll check it out

[–] papertowels@mander.xyz 1 points 3 days ago

Yup! They even had a demo clustering 5 of them to run DeepSeek proper.