this post was submitted on 07 Jul 2025
558 points (98.1% liked)
Open Source
What advantage does this software provide over simply banning bots via robots.txt?
Robots.txt relies on the client respecting the rules, for instance by honestly identifying itself as a scraper.
AI scrapers don't respect that trust, so robots.txt is effectively meaningless to them.
TL;DR: You should have both due to the explicit breaking of the robots.txt contract by AI companies.
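To see how thin that contract is, here's a minimal Python sketch (hypothetical site and paths) of everything robots.txt actually does. A polite crawler runs a check like this before fetching; a bad scraper simply doesn't:

```python
# Minimal sketch of the robots.txt "contract" (hypothetical site/paths).
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# A well-behaved crawler checks before every request...
if rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page"):
    print("allowed, go ahead and fetch")
else:
    print("disallowed, skip it")

# ...but nothing enforces this. A scraper that wants the page
# just requests it anyway; compliance is entirely voluntary.
```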
AI generally doesn't obey robots.txt. That file just tells scrapers what they shouldn't scrape and relies entirely on their good faith. Many AI companies have explicitly chosen not to comply with robots.txt, breaking that contract, so this is a system that gets the non-compliant scrapers stuck in a black hole of junk and wastes their time. It's a countermeasure, not a solution, but it's way less costly than other options that just block these connections and then leave you getting pounded with retries. This way the scraper bot gets stuck for a while and you don't waste as many of your own resources blocking it over and over again.
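To make the "black hole of junk" idea concrete, here's a toy Python sketch of what a tarpit endpoint can look like. The port, paths, and junk generator are all invented for illustration, and real tarpits are far more elaborate; the point is just that the bot gets an endless maze of slow, worthless pages instead of an error it would retry.

```python
# Toy tarpit: serve non-compliant crawlers an endless maze of slow,
# generated junk pages that only link to more junk. Port, paths and
# the junk generator are invented for illustration.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["quantum", "synergy", "falcon", "lattice", "ember", "cobalt"]

def junk_page(depth: int) -> str:
    # A paragraph of nonsense plus links leading deeper into the maze.
    text = " ".join(random.choices(WORDS, k=80))
    links = "".join(
        f'<a href="/maze/{depth + 1}/{random.randint(0, 9999)}">more</a> '
        for _ in range(5)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        body = junk_page(self.path.count("/"))
        # Drip the response out slowly so the crawler's connection and
        # crawl budget stay tied up instead of hammering real content.
        for i in range(0, len(body), 64):
            self.wfile.write(body[i:i + 64].encode())
            time.sleep(0.5)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Every junk page only links to more junk pages, so a crawler that ignores robots.txt just digs itself deeper into nothing.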
Well, now that y'all put it that way, I think it was pretty naive of me to think that these companies, whose business model is basically theft, would honour a lousy robots.txt file...
The scrapers ignore robots.txt. It doesn't really ban them; it just asks them not to access things, but they are programmed by assholes.
The difference is: robots.txt politely asks bots to stay out and hopes they listen; this actively punishes the ones that don't.
The problem is that AI doesn't follow robots.txt, so Cloudflare and Anubis developed a solution.
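For context on Anubis specifically: as I understand it, it takes a different approach from a junk maze and makes every client solve a small proof-of-work challenge (in browser JavaScript) before it gets the real page. One human barely notices the delay; a scraper hitting millions of URLs pays it on every single one. Here's a rough Python sketch of the general idea, with an illustrative scheme and difficulty that aren't Anubis's actual protocol:

```python
# Rough sketch of a proof-of-work gate, the general idea behind tools
# like Anubis. The scheme and difficulty here are illustrative only,
# not Anubis's actual protocol.
import hashlib
import secrets

DIFFICULTY = 18  # required leading zero bits; kept low so pure Python finishes fast

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str) -> int:
    # What the client-side JavaScript would do: brute-force a nonce.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    # What the server does: a single hash to check the submitted answer.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY

challenge = secrets.token_hex(16)  # server issues a random challenge
nonce = solve(challenge)           # client burns CPU finding a valid nonce
assert verify(challenge, nonce)    # server verifies almost for free
```

The asymmetry is the whole trick: finding the nonce takes the client a few hundred thousand hashes on average at this difficulty, while checking it costs the server exactly one.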
I mean, you could have read the article before asking; it's literally in there...