Let’s be fair, OAuth is genuinely hard. And it requires a web server to make it work :-)
This is not a password manager; it’s an IdP, roughly like Authelia, Auth0, etc.
While it’s nice, lightweight, and simple, it still blows my mind that a security product has no means of auditing its logs, and the logs themselves are very hard to deal with programmatically.
That's not the best example, because CP2077 has its own launcher (at least the Steam version does).
If you want to go the packaging route, you could use nixCats-nvim to build a fully hermetic Neovim installation with Nix, where you track the origin of all the dependencies (LSPs too) and plugins, with receipts, hashes, and all the other good stuff of a reproducible build system. The security industry likes reproducible builds because there's exactly one way to get from source to artifact.
Then you package that into e.g. a Docker container (which Nix can build for you, too) and ship it wherever you need it.
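The "receipts and hashes" part is the core idea: every input is pinned by a content hash, so a build either reproduces bit-for-bit or fails loudly. As a toy illustration of that check in Python (file name and digest are made up):

```python
# Toy version of content-addressed pinning: refuse an artifact whose hash
# doesn't match the recorded one. File name and digest are made up.
import hashlib
import sys

PINNED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

with open("nvim-bundle.tar", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

if digest != PINNED:
    sys.exit(f"hash mismatch: got {digest}")
print("artifact matches the pinned hash")
```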
One thing about Grafana, though, is that you get logs, metrics, and monitoring in the same package. You can use Loki as the actual log store, and it's easy to integrate with the likes of journald and Docker.
Yes, you will have to spend more time learning LogQL, but it can be very handy where you don’t have metrics (or don’t want to implement them) and still want some useful data from logs.
After all, text logs are just very raw, unstructured events in time. You may think you only look at them occasionally, when things break, and you'd be correct. But if you want to alert on them, that usually means going from raw logs to structured data. Loki's LogQL does exactly that, and it's still ten times easier to manage than the Elastic stack.
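To make the "raw logs to structured data" point concrete, here's a minimal sketch of running a LogQL metric query against Loki's HTTP API from Python; the endpoint and the job/container labels are assumptions, adjust for your setup:

```python
# Minimal sketch: turn raw log lines into per-container error rates with a
# LogQL metric query via Loki's query_range API. Labels are assumptions.
import time
import requests

LOKI = "http://localhost:3100"  # assumed local Loki instance
query = 'sum by (container) (rate({job="docker"} |= "error" [5m]))'

now = time.time()
resp = requests.get(
    f"{LOKI}/loki/api/v1/query_range",
    params={
        "query": query,
        "start": int((now - 3600) * 1e9),  # last hour, nanosecond epoch
        "end": int(now * 1e9),
        "step": "60s",
    },
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][-1])  # latest sample per series
```

The same query, dropped into a Grafana panel or a Loki alerting rule, is what gets you alerting on logs without shipping them anywhere else.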
VictoriaMetrics has its own logging product now too (VictoriaLogs), and while I haven’t tried it yet, VM for metrics is probably the best thing to happen since Prometheus. Especially for resource-constrained homelabs.
Storage Box networking can be hit and miss. It's OK for incremental uploads, but I went through hell and back to get the initial backup to finish, which makes me wonder what it would take to download it if I ever have to.
Scp breaks off once in a while, and WebDAV terminates the session. I didn’t try SMB, as it feels like a rather weird protocol to run over the public internet. In the end, I figured it's not the networking per se but something with the timeouts on the remote end, and I was able to finish the backup using a Hetzner-hosted server as a jumpbox.
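For anyone hitting the same wall, one blunt workaround (not what I ended up doing) is to let rsync resume the transfer in a retry loop; a rough sketch in Python, with host, port, and paths as placeholders:

```python
# Rough sketch: re-run rsync until the bulk upload finishes, resuming
# partially transferred files. Host, port, and paths are placeholders.
import subprocess
import time

CMD = [
    "rsync", "-a", "--partial", "--progress",
    "-e", "ssh -p 23",  # Storage Boxes expose SSH services on port 23
    "/srv/backup/",  # local source (placeholder)
    "u123456@u123456.your-storagebox.de:backup/",  # remote (placeholder)
]

while subprocess.run(CMD).returncode != 0:
    print("rsync dropped, retrying in 30s...")
    time.sleep(30)  # back off, then resume where it left off
print("initial backup finished")
```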
But it's cheap, yeah.
Voyager pulls /.well-known/nodeinfo now; if you don’t proxy that to your backend (I didn’t), it will fail.
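A quick way to check whether your reverse proxy actually exposes it (hostname is a placeholder):

```python
# Check that the NodeInfo discovery document is reachable through the
# reverse proxy. Hostname is a placeholder.
import requests

host = "lemmy.example.org"
resp = requests.get(f"https://{host}/.well-known/nodeinfo", timeout=5)
print(resp.status_code)
# On success this is a tiny JSON document pointing at the real nodeinfo URL.
for link in resp.json().get("links", []):
    print(link["rel"], "->", link["href"])
```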
Isn’t Kagi's point that they store very little about you, to the point that there's no search history, and that you have to pay for the service provided?
That's not exactly true; Synology doesn’t do anything you can’t access from an off-the-shelf Linux box (it's your usual mdraid and btrfs). But you’d better know what you’re doing if you go that route.
What's going to pay for the search part, then?
Conduit is in no way compact either. I tuned its caches down because two gigs of RAM seemed ridiculous for a single-user instance, but all I got for it was sync lag in the mobile client.
XMPP used to be so much nicer...
OP should have vibecoded the title; chatbots know how to use apostrophes.