this post was submitted on 17 Aug 2025
89 points (91.6% liked)

Technology

[–] Pechente@feddit.org 125 points 2 weeks ago (3 children)

Wikipedia going down like that makes me sad, especially since AI crawlers have driven their traffic costs up significantly.

[–] clb92 45 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Why would anyone crawl Wikipedia when you can freely download the complete database dumps in one go, likely served from a CDN...

But sure, crawlers, go ahead and spend a week doing the same thing in a much more expensive, disruptive and error-prone way...
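
For context, the current-articles dump is a single file on Wikimedia's public dump server. Here's a minimal sketch of streaming it to disk; the URL follows the published enwiki/latest naming, but verify the exact filename at https://dumps.wikimedia.org/ before relying on it:

```python
# Minimal sketch: stream the current English Wikipedia articles dump
# to disk. Filename follows the published enwiki/latest pattern;
# confirm it against https://dumps.wikimedia.org/ before use.
import urllib.request

DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

def download_dump(dest="enwiki-latest-pages-articles.xml.bz2"):
    """Download in 1 MiB chunks to avoid holding the file in memory."""
    with urllib.request.urlopen(DUMP_URL) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):
            out.write(chunk)

if __name__ == "__main__":
    download_dump()
```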

[–] eager_eagle@lemmy.world 14 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

There are valid reasons for not wanting the whole database, e.g. storage constraints, compatibility with ETL pipelines, and incorporating article updates.

What bothers me is that they apparently crawl instead of just... using the API, like:

https://en.wikipedia.org/w/api.php?action=parse&format=json&page=Lemmy_%28social_network%29&formatversion=2

I'm guessing they just crawl the whole web and don't bother to add a special case to turn Wikipedia URLs into their API versions.
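
That special case is small. A sketch, using a hypothetical helper `article_url_to_api` that rewrites an article URL into the parse-API call linked above:

```python
# Sketch: rewrite a Wikipedia article URL into its api.php parse call
# and fetch the rendered article as JSON. `article_url_to_api` is a
# hypothetical helper, not anything the crawlers are known to use.
import json
import urllib.parse
import urllib.request

def article_url_to_api(article_url: str) -> str:
    """Turn https://en.wikipedia.org/wiki/<Title> into its api.php equivalent."""
    parts = urllib.parse.urlparse(article_url)
    title = urllib.parse.unquote(parts.path.removeprefix("/wiki/"))
    query = urllib.parse.urlencode({
        "action": "parse",
        "format": "json",
        "page": title,
        "formatversion": "2",
    })
    return f"{parts.scheme}://{parts.netloc}/w/api.php?{query}"

api_url = article_url_to_api("https://en.wikipedia.org/wiki/Lemmy_(social_network)")
# Wikimedia asks clients to send a descriptive User-Agent.
req = urllib.request.Request(api_url, headers={"User-Agent": "api-example/0.1"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)
print(data["parse"]["title"])  # "Lemmy (social network)"
```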

[–] clb92 10 points 2 weeks ago

> valid reasons for not wanting the whole database e.g. storage constraints

If you're training AI models, surely you have a couple TB to spare. It's not like Wikipedia takes up petabytes or anything.

[–] Pechente@feddit.org 1 points 2 weeks ago (2 children)

My comment was based on a podcast I listened to (Tech Won't Save Us, I think?). My guess is they also want to crawl all the edits, discussions, etc., which are usually not included in the complete dumps.
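
Even for edits, crawling rendered pages shouldn't be necessary: the MediaWiki API exposes edit history directly through its documented `prop=revisions` query module. A sketch of pulling recent revisions for one article:

```python
# Sketch: fetch recent edit history for one article via the standard
# MediaWiki query module (action=query, prop=revisions). Parameter
# names follow the documented API; tune rvlimit and add continuation
# handling for real use.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "action": "query",
    "format": "json",
    "prop": "revisions",
    "titles": "Lemmy (social network)",
    "rvprop": "ids|timestamp|user|comment",
    "rvlimit": "10",
    "formatversion": "2",
})
req = urllib.request.Request(
    f"https://en.wikipedia.org/w/api.php?{params}",
    headers={"User-Agent": "revision-example/0.1"},  # descriptive UA, per Wikimedia policy
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

for rev in data["query"]["pages"][0]["revisions"]:
    print(rev["timestamp"], rev["user"], rev.get("comment", ""))
```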

[–] clb92 3 points 2 weeks ago

Dumps with complete page edit history can be downloaded too, as far as I can see, so no need to crawl that.
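
For what it's worth, a quick sketch that scans the public dump index for the full-history files; the `pages-meta-history` naming is Wikimedia's published pattern, but verify it against the index before relying on it:

```python
# Sketch: list the full-history dump files from the public index page.
# Assumes the Apache-style listing and "pages-meta-history" filename
# pattern currently published at dumps.wikimedia.org.
import re
import urllib.request

INDEX = "https://dumps.wikimedia.org/enwiki/latest/"
with urllib.request.urlopen(INDEX) as resp:
    html = resp.read().decode("utf-8", errors="replace")

for name in sorted(set(re.findall(r'href="([^"]*pages-meta-history[^"]*)"', html))):
    print(INDEX + name)
```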

[–] ThePantser@sh.itjust.works 14 points 2 weeks ago (1 children)

Yes, they should really block crawlers or force them to pay. The only way I can think of is requiring an account to access content, but that goes against what they originally intended. But these are new times, and it's probably for the best. It wouldn't be hard to flag obvious AI scrapers.
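
A minimal sketch of that flagging idea, assuming a Python/Flask front end (not anything Wikipedia actually runs) and the self-identifying User-Agent strings that major AI crawlers publish (GPTBot, ClaudeBot, CCBot). A determined scraper can spoof the header, so this is only a first line of defense:

```python
# Sketch: refuse requests whose User-Agent matches self-identifying
# AI crawlers. Bot names are published ones; spoofed headers will
# slip through, so treat this as a filter, not a guarantee.
from flask import Flask, abort, request

app = Flask(__name__)
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")

@app.before_request
def block_ai_crawlers():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLERS):
        abort(403)  # or serve a registration/payment page instead

@app.route("/")
def index():
    return "Hello, human readers."
```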

[–] skvlp@lemmy.wtf 4 points 2 weeks ago

It seems there are ways to stop crawlers. Do a web search for "stop ai crawlers" or similar to learn more. I hope it doesn’t escalate into an arms race, but I realise I might be disappointed.

[–] SebaDC@discuss.tchncs.de 5 points 2 weeks ago

And click-through rate is dropping.