this post was submitted on 25 Mar 2025
27 points (88.6% liked)

Selfhosted


For years I've looked, on and off, for web archiving software that can capture most sites, including "complex" ones that make heavy use of AJAX and require logins, like Reddit. Which ones have worked best for you?

Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I could also open the instance myself to log into sites and install addons like uBlock Origin. (By the way, archiveweb.page must be started manually.)
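For the "started programmatically" part, one common pattern (a sketch, not tied to any particular archiver; the profile path and port are placeholders) is to launch the browser with a persistent profile and a DevTools port, so logins and addons survive between runs and automation can attach:

```shell
# Persistent profile keeps logins and addons (e.g. uBlock Origin) between runs;
# the remote-debugging port lets an archiving tool attach via the DevTools protocol.
chromium --user-data-dir="$HOME/.archive-profile" --remote-debugging-port=9222
```

Both flags are standard Chromium command-line switches; which archiver (if any) attaches to the port is left open here.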

[–] Xanza@lemm.ee 3 points 1 week ago (5 children)

wget is the most comprehensive site cloner there is. What exactly do you mean by complex? Because wget works for anything static and public... If you're trying to clone server-side source files, like PHP scripts, that's obviously not going to work: the server only ever sends you the rendered output. If that's what you mean by "complex", then just give up, because you can't.
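For the static-and-public case this comment describes, a typical wget mirroring invocation looks like the following (the flags are from the wget manual; the URL is a placeholder):

```shell
# -m  (--mirror):          recursive, infinite depth, with timestamping
# -k  (--convert-links):   rewrite links so the copy browses locally
# -E  (--adjust-extension): save text/html pages with an .html extension
# -p  (--page-requisites): also grab CSS, images, etc. needed to render pages
# -np (--no-parent):       never ascend above the starting directory
wget -m -k -E -p -np --wait=1 --random-wait https://example.com/
```

The `--wait`/`--random-wait` pair just throttles requests to be polite; drop it for small sites.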

[–] Paragone@piefed.social 0 points 1 week ago (3 children)

There's a "philosopher" who the far-right techbro-oligarchs rely on, whose blog is grey-something-or-other..

I tried using wget, and there's a bug or something in the site, so it keeps inserting links to other sites into URLs, and you get bullshit like

grey-something-or-other.substack.com/e/b/a/http://en.wikipedia.org/wiki/etc..

The site apparently works for the people who browse it, but wget isn't succeeding at simply cloning the thing.

I want the items the usable site is made of, not endless failed requests chasing recursive errors, forever..

Apparently one has to be ultra-competent with all the excludes and other command-line switches to get any particular site handled properly by wget.
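Those excludes are wget's include/exclude switches. A hedged sketch of confining a crawl like the one described above (depth limit and reject pattern are illustrative, not a known fix for that site):

```shell
# -r -l 3:         recurse, but cap the depth at 3
# --domains:       never leave the listed domain(s)
# --reject-regex:  skip URLs whose path embeds another absolute URL,
#                  like the ".../e/b/a/http://en.wikipedia.org/..." garbage above
wget -r -l 3 \
  --domains=grey-something-or-other.substack.com \
  --reject-regex='/http://' \
  https://grey-something-or-other.substack.com/
```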

Sure, on static sites it's magic, but on too many sites with dynamically-constructed portions it's a damn headache at times..

_ /\ _

[–] Xanza@lemm.ee 1 points 1 week ago (2 children)

That's not a bug. You literally told wget to follow links, so it did.

[–] Paragone@piefed.social 1 points 6 days ago (1 children)

There ought to be a "do not follow recursive links" switch for it, Hoomin..

_ /\ _

[–] Xanza@lemm.ee 1 points 4 days ago

There is. wget doesn't follow links recursively by default. If it is, you're using an option that's telling it to...
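Concretely (a minimal illustration; the URL is a placeholder): without `-r`/`--recursive`, wget fetches only the one document, and even with recursion it stays on the starting host unless told otherwise:

```shell
# Fetches exactly one document; no links are followed.
wget https://example.com/page.html

# -r follows links, but only within example.com; crossing to other
# hosts additionally requires -H (--span-hosts), which is off by default.
wget -r -l 1 https://example.com/page.html
```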
