kwa

joined 6 months ago
[–] kwa@lemmy.zip 1 points 3 weeks ago

Ah, I think you may have solved part of the problem. I tried using a network with container name resolution, but it failed. That’s why I went with pods and published ports directly to the host.

I will try to use a dedicated network with DNS on, thanks!
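In case it helps anyone else, here is roughly what I mean by a dedicated network, sketched as a quadlet (names are hypothetical; with the netavark backend, DNS-based name resolution is on by default for custom networks):

```ini
# ~/.config/containers/systemd/homelab.network (hypothetical name)
[Network]
NetworkName=homelab

# Then each .container quadlet joins it with:
# [Container]
# Network=homelab.network
```

With both containers on the same network, one should be reachable from the other by its container name.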

[–] kwa@lemmy.zip 1 points 3 weeks ago (2 children)

I wanted to do something similar, but I grouped some containers using pods and it seems that broke the networking.

Eventually I kept the pods and exposed everything to the host, where Caddy can reach the services. Not the cleanest approach, especially since my firewall is turned off.

[–] kwa@lemmy.zip 4 points 3 weeks ago

I switched at work because of the license changes Docker made. For my work workflow, Podman turned out to be a drop-in replacement for Docker.

For my homelab, I wanted to experiment with rootless, and I also prefer to have my services managed by systemd. I really like Podman’s built-in auto-update, too.
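For reference, a minimal sketch of what a quadlet with auto-update looks like (the unit name and image are placeholders, not my actual setup):

```ini
# ~/.config/containers/systemd/myapp.container (hypothetical)
[Unit]
Description=My app

[Container]
Image=docker.io/library/nginx:latest
# Opt this container into `podman auto-update`
AutoUpdate=registry

[Install]
WantedBy=default.target
```

Then enabling `podman-auto-update.timer` (with `systemctl --user enable --now podman-auto-update.timer` for rootless) checks the registry periodically and restarts the service when the image changes.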

[–] kwa@lemmy.zip 1 points 3 weeks ago

Yes, maybe. I will edit my post to better explain the issue I’m facing.

I’m using pasta. I can see some weird behavior: some services can reach others through host.containers.internal, while for others I have to use 192.168.1.x.

[–] kwa@lemmy.zip 1 points 3 weeks ago

Mainly Immich, Paperless, and Jellyfin.

[–] kwa@lemmy.zip 2 points 3 weeks ago (3 children)

I should have clarified this. It does not open the ports, but I have set up my firewall to allow a range of IPs, and the traffic is still blocked.

I have noticed some inconsistency in the behavior: traffic would sometimes work right after enabling ufw, but never after a reboot. Knowing how Docker works, I assumed Podman would also mess with the firewall. But maybe the issue comes from something else.

29
submitted 3 weeks ago* (last edited 3 weeks ago) by kwa@lemmy.zip to c/selfhosted@lemmy.world
 

I had to restore my homelab and took the opportunity to move from Docker to rootless Podman quadlets. Well, almost fully rootless: I kept Pi-hole and Caddy at the root level because I did not want to deal with sysctl.

I have everything running, but for now I have to disable my firewall. With Docker I was using this script: https://github.com/chaifeng/ufw-docker But I’m having a hard time finding an alternative for Podman.

Do you know of any scripts that would magically fix Podman and ufw? Or would it be a better solution for me to manage iptables manually?

My needs are pretty simple, as I do not really care if the ports are visible on my private network; I just want to allow specific IPs on ports 80 and 443.

Edit: the issue I’m facing is that I’m allowing some specific IPs to access my network, but when I enable ufw the traffic is blocked. I had the same rules under Docker and everything worked fine. I can see that sometimes the traffic goes through and other times it is blocked, much like with Docker when you don’t use the script and traffic passes or not depending on what wrote the iptables rules last.

Edit 2: So the issue was actually with routing. Running this command fixes everything: ufw route allow in on wlan0 out on cni-podman0
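For anyone hitting the same thing, the full rule set I ended up with looks roughly like this (the IP and interface names are examples from my setup; adjust them to yours):

```shell
# Allow a specific LAN client to reach the reverse proxy on 80/443
ufw allow from 192.168.1.10 to any port 80,443 proto tcp

# Allow forwarded traffic from the LAN interface into the podman bridge,
# so ufw doesn't drop packets routed to the containers
ufw route allow in on wlan0 out on cni-podman0
```

The `route` keyword is the key part: it matches forwarded traffic rather than traffic addressed to the host itself, which is why plain `ufw allow` rules were not enough.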

[–] kwa@lemmy.zip 1 points 4 months ago (1 children)

Well, this is what I don’t quite understand: I was trying to run the Q3_K_M, which is 3.81GB, and it was failing with an OutOfMemory error. The IQ4_XS you provided is 4.22GB and works fine.

[–] kwa@lemmy.zip 2 points 4 months ago (3 children)

Indeed, this model is working on my machine. Can you explain the difference from the one I tried before?

[–] kwa@lemmy.zip 1 points 4 months ago (6 children)

I have a MacBook Pro M1 Pro with 16GB RAM. I closed a lot of things and managed to free up 10GB, but that still doesn’t seem to be enough to run the 7B model. As for the answers being truncated, that seems to be a frontend issue. I tried open-webui connected to llama-server and it seems to be working great, thank you!

[–] kwa@lemmy.zip 1 points 4 months ago (9 children)

I tried llama.cpp with llama-server and Qwen2.5 Coder 1.5B. Higher-parameter models just output garbage, and I can see an OutOfMemory error in the logs. With the 1.5B model, I have an issue where the model just stops outputting the answer: it stops mid-sentence or in the middle of a class. Is it my hardware not being performant enough, or is it something I can tweak with some parameters?
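In case the truncation is a generation limit rather than hardware, these are the flags I would check first (the model filename is a placeholder, and the values are just a starting point):

```shell
# -c raises the context window; -n -1 removes the cap on generated
# tokens so the model keeps going until it decides to stop on its own
llama-server -m qwen2.5-coder-1.5b-q4_k_m.gguf -c 4096 -n -1
```

If the frontend also sends its own max-tokens value, that setting can override these flags, which would match it being a frontend issue.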

[–] kwa@lemmy.zip 2 points 4 months ago (11 children)

Thanks!

For people on macOS, is there a better alternative than croco.cpp?

[–] kwa@lemmy.zip 4 points 4 months ago* (last edited 4 months ago) (1 children)

I’m new to this, and I was wondering why you don’t recommend ollama? It’s the first one I managed to run and it seemed decent, but if there are better alternatives I’m interested.

Edit: it seems the other two don’t have an API. What would you recommend if you need an API?
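For context, the API part is what drew me to Ollama: it serves a REST API on localhost:11434 out of the box. A minimal example (the model name is whatever you have pulled locally):

```shell
# Ask a locally pulled model for a completion; "stream": false
# returns one JSON object instead of a token stream
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:1.5b", "prompt": "Hello", "stream": false}'
```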

 

Hey, I’m using Jellyfin with Android TV as the client.

Everything usually works fine with direct play. But for a few files, when I enable subtitles, I get the message "burning subtitles" and the server starts transcoding. I haven’t figured out which format causes it, but it only happens when subtitles are enabled.

Anyone else having this issue? Can I do anything to fix it?
