Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub page here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
wget.
Doesn't work well for more complex sites.
wget is the most comprehensive site cloner there is. What exactly do you mean by complex? wget works for anything static and public. If you're trying to clone server-side source files, like PHP scripts, that obviously won't work: the server only ever sends you their rendered output, never the source. If that's what you mean by "complex", then just give up, because you can't.
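For reference, a typical static-site mirror looks something like this (example.com is a placeholder; the flags are standard GNU wget options):

```shell
# Mirror a static, public site for offline browsing (example.com is a placeholder).
# --mirror           : recursive download, infinite depth, with timestamping
# --page-requisites  : also fetch the CSS, images, and scripts the pages need
# --convert-links    : rewrite links so the local copy browses offline
# --adjust-extension : save HTML with a matching .html file extension
# --no-parent        : never ascend above the starting directory
wget --mirror --page-requisites --convert-links --adjust-extension --no-parent \
     https://example.com/
```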
There's a "philosopher" who the far-right techbro-oligarchs rely on, whose blog is grey-something-or-other..
I tried using wget, and there's a bug or something in the site, so it keeps inserting links to other sites into URIs, and you get bullshit like
grey-something-or-other.substack.com/e/b/a/http://en.wikipedia.org/wiki/etc..
The site apparently works for the people who browse it, but wget isn't succeeding in just cloning the thing.
I want the items the usable site is made of, not endless failed requests following recursive errors, forever..
Apparently one has to be ultra-competent at configuring all the excludes and command-line switches to get any particular site dealt with by wget.
Sure, on static sites it's magic, but on too many sites with dynamically constructed portions of themselves, it's a damn headache, at times..
_ /\ _
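If the problem is wget wandering off-host through those mangled URLs, restricting recursion to the one domain and rejecting URLs with a second scheme embedded in them usually contains it. A sketch, assuming GNU wget 1.14+ for --reject-regex; the hostname below is a placeholder, not the actual blog:

```shell
# Stay on the one host and skip mangled URLs that embed a second
# http(s):// inside their path.
# --domains      : only recurse within this host
# --reject-regex : drop any URL where http(s):// appears mid-string
wget --mirror --page-requisites --convert-links \
     --domains=example.substack.com \
     --reject-regex='.+https?://' \
     https://example.substack.com/
```

The regex only matches a scheme that is *not* at the start of the URL, so legitimate pages still download while the recursive garbage gets dropped.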
That's not a bug. You literally told wget to follow links, so it did.
There ought to be a "do not follow recursive links" switch for it, Hoomin..
_ /\ _
There is: it's the default. wget doesn't follow links recursively unless you tell it to. If it is, you're passing an option that turns recursion on, like -r or --mirror...
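To illustrate the difference (example.com is a placeholder):

```shell
# Default behaviour: fetch exactly one document, follow nothing.
wget https://example.com/page.html

# Recursion only happens when asked for:
#   -r / --recursive : follow links
#   -l 2             : but only two levels deep
#   --no-parent      : and never climb above the start directory
wget -r -l 2 --no-parent https://example.com/docs/
```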