this post was submitted on 28 Sep 2025
16 points (83.3% liked)

LocalLLaMA

3785 readers
3 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

I find myself really appreciating what LLMs can do when it comes to help with software and tech support. I am a pretty adept PC power user who is not a programmer and (until recently) had only a modest amount of experience with GNU/Linux. However, I have started to get into self-hosting my own FOSS apps and servers (started with OpenWebUI, now Jellyfin/Sonarr via Docker Compose etc.). I'm also reading a book about the Linux command line and trying to decipher the world of black magic that is networking myself.
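For context, a stack like the one described above is typically wired together with a Compose file along these lines. This is a hypothetical sketch: the image names are real published images, but the ports, paths, and layout are illustrative assumptions, not my actual config.

```yaml
# Illustrative Jellyfin + Sonarr stack; volumes and ports are example values.
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"            # web UI
    volumes:
      - ./jellyfin/config:/config
      - ./media:/media
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr
    ports:
      - "8989:8989"
    volumes:
      - ./sonarr/config:/config
      - ./media:/media
    restart: unless-stopped
```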

I have found that LLMs can really help with comprehension and troubleshooting. That said, lately I am struggling to get good troubleshooting advice out of my LLMs. Specifically, for troubleshooting docker container setups and networking issues.

I had been using Qwen3 Coder 480b, but tried out Claude Sonnet 4 recently and both have let me down a bit. They don’t seem to think systematically when offering troubleshooting tips (Qwen at least). I was hoping Claude would be better since it is an order of magnitude more expensive on OpenRouter, but so far it has not seemed so.

So, what LLM do you use for this type of work? Any other tips for using models as a resource for troubleshooting? I have been providing access to full logs etc. and being as detailed as possible, and I'm still struggling to get good advice lately. I'm not talking full vibe coding here, just trying to figure out why my Docker container is throwing errors. Thanks!
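One model-agnostic tip: gather concrete diagnostics yourself before pasting into the chat, so the model sees actual state rather than a description of it. A minimal sequence might look like the following (the service name `jellyfin` is just an example stand-in):

```shell
# Collect the basics an LLM needs to reason about a failing container.
docker compose ps                          # which services are up / restarting / exited
docker compose logs --tail=100 jellyfin    # recent output from one service
docker inspect --format '{{.State.Status}} (exit {{.State.ExitCode}})' jellyfin
docker network ls                          # networks the stack is attached to
docker port jellyfin                       # published port mappings
```

Pasting the output of these alongside the Compose file usually gives a model far more to work with than prose alone.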

Note: I did search and found a somewhat similar post from 6 months ago or so but it wasn’t quite as specific and because 6 months is half a lifetime in LLM development, I figured I’d post as well. Here’s the post in question in case anyone is curious to see that one.

[–] Grimy@lemmy.world 2 points 3 weeks ago (1 children)

I think it's more important how you run it.

I have Copilot in VS Code, and since I use it to SSH into things, the bot has access to all the files and my terminal output. It's also easy to switch from one model to another.

[–] FrankLaskey@lemmy.ml 2 points 3 weeks ago (1 children)

Even if this isn't going to solve the issue of the quality of the LLM's advice, it would massively simplify my current workflow, which is copy/pasting logs and command responses and everything into the OWUI window. I'll check it out. Can you use OpenRouter with VS Code to get access to more models?

[–] Grimy@lemmy.world 2 points 3 weeks ago (1 children)

Yup, OpenRouter is one of the options, as well as Ollama and all the major APIs.

I pay $10 a month, so I get unlimited GPT-4.1, GPT-5 mini, and Grok. I also have OpenAI and Gemini through the API.

Surprisingly, Grok feels the best because it tends to make small changes one at a time and will verify by running your scripts if you let it. It picks up on its own mistakes way more often, and it's also fast. Not the smartest, but definitely the most fun.

You can probably get similar behavior by modifying the prompts for the other ones.

[–] FrankLaskey@lemmy.ml 1 points 3 weeks ago (1 children)

Is this Grok Code Fast 1? I've noticed it's hitting the top of OR for programming recently. I was going to try it out, but it won't respect my zero data retention preference, unsurprisingly.

[–] Grimy@lemmy.world 1 points 3 weeks ago

Yup, precisely. It's easily the best free model on the $10 a month plan.