lorentz

joined 1 year ago
[–] lorentz@feddit.it 1 points 6 days ago

Yes, but it is a different cron job that needs to run, and you need to monitor it for failures. Caddy does everything out of the box, including retries.

[–] lorentz@feddit.it 5 points 1 week ago (3 children)

Nginx for my intranet because configuration is fully manual and I have complete control over it.

Caddy for the public services on my VPS because it handles cert renewal automatically and most of its configuration is magic that just works.

It is unbelievable how much shorter the Caddy configuration is, but on my intranet:

  1. I don't want my reverse proxy dialing out to the internet to fetch new SSL certs. I know it can be disabled, but that is the default.
  2. I like to learn how stuff works. Nginx forces you to know more details, but it is full of good documentation, so it is not too painful compared to Caddy.
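For a taste of the manual approach, here is a bare-bones nginx reverse-proxy block for an intranet service (the hostname, ports, and paths are made up, and the certificates are managed by hand, so nothing dials out):

```nginx
# /etc/nginx/sites-available/myservice.conf -- hypothetical intranet service
server {
    listen 443 ssl;
    server_name myservice.lan;

    # manually managed certificate files; no automatic renewal
    ssl_certificate     /etc/nginx/certs/myservice.crt;
    ssl_certificate_key /etc/nginx/certs/myservice.key;

    location / {
        # forward everything to the backing service
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Every line there is explicit, which is exactly the point: nothing happens that you didn't write down.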
[–] lorentz@feddit.it 2 points 2 weeks ago

I didn't find anything for syncing yet, but I settled on plain SMB shares, which cover 99% of my needs, plus https://www.filestash.app/ for a simple web UI, which is more convenient when browsing files and photos from the phone.
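For reference, a minimal Samba share definition looks something like this (the share name, path, and user are placeholders, not my actual setup):

```ini
; /etc/samba/smb.conf -- one share, writable by a single user
[media]
   path = /srv/media
   read only = no
   valid users = alice
```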

[–] lorentz@feddit.it 6 points 3 weeks ago

I tried the same, but my router wants to be smart and filters DNS responses that point to local IPs. I guess whoever designed it considered it a security feature. It is a stock router from the ISP; its configuration interface is minimal, bordering on nonexistent.

[–] lorentz@feddit.it 22 points 1 month ago

awesome-selfhosted.net is a good start

[–] lorentz@feddit.it 5 points 1 month ago* (last edited 1 month ago)

I remember that I started by following these two guides.

https://fardog.io/blog/2017/12/30/client-side-certificate-authentication-with-nginx/

https://stackoverflow.com/questions/7768593/

Something I'm not sure is mentioned here: Android (at least the version on my phone) accepts only a legacy format for client certificates, and the error message when you try to import the new format is totally opaque. If you cannot import yours, check the openssl flags to change the export format.

[–] lorentz@feddit.it 7 points 1 month ago (2 children)

If security is one of your concerns, search for "HTTP client-side certificates". TL;DR: you can create certificates to authenticate the client and configure the server to accept connections only from trusted devices. It adds extra security because attackers cannot leverage known vulnerabilities in the services you host: they are blocked at the HTTP level.

It is a little difficult to find good, up-to-date documentation, but I managed to make it work with nginx. The downside is that Firefox mobile doesn't support them, but Firefox on PC and Chrome have no issues.

Of course you also want a server-side certificate; the easiest way is to get one from Let's Encrypt.
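A sketch of the relevant nginx bits, assuming the CA you used to sign the client certificates lives at /etc/nginx/certs/ca.crt (all names and paths here are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name myservice.example.com;

    # server-side certificate (e.g. from Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/myservice.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myservice.example.com/privkey.pem;

    # client-side certificates: only clients presenting a certificate
    # signed by this CA get past the TLS handshake
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With `ssl_verify_client on`, untrusted clients are rejected before any request ever reaches the proxied service.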

[–] lorentz@feddit.it 1 points 1 month ago

There are plenty of Zigbee options available on Amazon UK (and I would expect some WiFi ones too, but I already have a Zigbee network so I prefer it when possible). Do a quick search there; most of them have the wiring diagram in the photos. Some can be installed inside the wall box, so you don't have to replace the switches (which might otherwise look out of place next to the others you have). Also note that if both switches you want to replace control the same light, you only need to replace one of them.

[–] lorentz@feddit.it 1 points 1 month ago

You can configure Caddy to listen on 80 and act as a reverse proxy for both services, serving one site or the other depending on the name (you will need a second DNS entry pointing to the same IP). About not exposing 443: I really doubt Caddy can automatically retrieve SSL certificates for you if it is not running on the default ports. Check the documentation; if I'm right, you either open an empty website on 443 just for the sake of getting SSL certs to run HTTPS, and manually configure the other port to do the same, or you get the certificates via the DNS challenge (check Let's Encrypt's documentation) and configure Caddy to use them.
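To illustrate the name-based routing (a sketch; the hostnames, ports, and the DNS-provider plugin are assumptions, not from this thread):

```Caddyfile
# Both sites on the same Caddy instance; requests are routed by Host name.
service1.example.com {
	reverse_proxy 127.0.0.1:8080
}

service2.example.com {
	reverse_proxy 127.0.0.1:9090

	# If ports 80/443 are not reachable from outside, certificate
	# issuance needs the DNS challenge via a provider plugin, e.g.:
	# tls {
	#     dns cloudflare {env.CF_API_TOKEN}
	# }
}
```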

[–] lorentz@feddit.it 2 points 1 month ago

NAS are essentially small computers made for connecting a lot of storage, with a fancy OS that can be configured from a browser.

So the real question between a NAS and a custom build is how much time you want to spend being a sysadmin. A NAS mostly works out of the box; you can configure it to auto-update and notify you only when something important happens. With a custom build, everything is completely on you. Are you already familiar with some Linux distribution? How much do you want to learn?

Once you answer the previous question, the next is about power. To store files on the network you don't need a big CPU; on the contrary, you may want something small that doesn't cost too much in electricity. But you mentioned you want to stream video. If you need transcoding (because you have, for example, a Chromecast that accepts only video in a specific format), you need something more powerful. If you stream only to computers there is no need for transcoding, because they can digest any format, so anything will work.

After this you need to decide how much space you need, and of what type. NVMe drives are faster, but spinning disks were still more reliable (and cheaper per TB) last time I checked. Also, do you want some kind of RAID? RAID1 is the bare minimum to protect you from a disk failure, but you need twice as many disks to store the same amount of data. RAID5 is more efficient, but you need at least 3 disks. That said, remember that RAID is not backup. You still need a backup for important stuff.

My honest suggestion is to start experimenting with your Raspberry Pi and see what you need. It will likely already cover most of your needs: just attach an external HD and configure Samba shares. I don't do any automated backups myself, but I know that Syncthing and Syncthing-Fork are very widely used tools. On Linux you can very easily run rsync from a crontab.
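For example, a nightly rsync job in a crontab could look like this (the paths are placeholders; note that --delete makes the destination an exact mirror, so files removed from the source disappear from the copy too):

```crontab
# crontab -e : mirror /srv/data to the backup disk every night at 03:00
0 3 * * * rsync -a --delete /srv/data/ /mnt/backupdisk/data/
```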

If you want an operating system that offers an out-of-the-box experience closer to a commercial NAS, you can check FreeNAS (now TrueNAS). I personally started with a QNAP and was happy with it for years, but after starting to self-host some stuff I wanted more flexibility, so I switched to a TerraMaster where I installed plain Debian. I'm happy with it, but it definitely requires more knowledge and patience to configure and administer.

[–] lorentz@feddit.it 3 points 2 months ago

You cannot install add-ons from the UI, but you can install them manually. Add-ons are just Docker containers that get configured automatically.

[–] lorentz@feddit.it 1 points 2 months ago

FAT32 doesn't support Unix file permissions, so when you mount the disk Linux has to assign a default ownership, which usually goes to root. This is the issue you are facing.

You are confusing disk permissions with filesystem permissions. The udev rule you wrote gives you permission to write to the disk device (in other words, you can format it or rewrite its whole content), but it doesn't give you permissions on the files stored inside, because those live at a higher abstraction level.

If you use this computer interactively (in other words, if you usually sit in front of it and plug in the disk on demand), my suggestion is to remove that line from /etc/fstab and let the Ubuntu desktop environment mount the external hard drive for the currently logged-in user.

If you use this computer as a server with the USB disk always connected (likely, since you mention Jellyfin), you need to modify the fstab line to specify which user should get permission on the files on the disk.

You can see the full list of options at https://www.kernel.org/doc/Documentation/filesystems/vfat.txt

You want either uid=Mongostein (assuming that's your username on your computer too) to assign yourself ownership of all the files, or umask=000 to give everyone full permissions on the files and directories while ownership remains with root. Prefer the second option if Jellyfin runs as a different user; the first is better if there are other users on your computer who shouldn't access your external disk.

To summarize, the line in /etc/fstab should be one of these two.

LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,umask=000 0 0

LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,uid=Mongostein 0 0
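One caveat: depending on your setup, the uid= option may need to be the numeric user ID rather than the username; you can look yours up with:

```shell
# print the numeric user ID of the current user (e.g. 1000)
id -u
```

If the mount fails with the username form, substitute the number that command prints.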