this post was submitted on 24 May 2025
1154 points (99.0% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments
[–] stm@lemmy.dbzer0.com 6 points 1 hour ago

Such a stupid title, great software!

[–] gmtom@lemmy.world 2 points 1 hour ago (1 children)

Cool, but as with most of the anti-AI tricks, it's completely trivial to work around. So you might stop them for a week or two, but then they'll add like 3 lines of code to detect this and it'll become useless.

[–] JackbyDev@programming.dev 15 points 41 minutes ago

I hate this argument. All cyber security is an arms race. If this helps small site owners stop small bot scrapers, good. Solutions don't need to be perfect.

[–] Iambus@lemmy.world 5 points 2 hours ago

Typical bluesky post

[–] MonkderVierte@lemmy.ml 8 points 2 hours ago (2 children)

Btw, how about limiting clicks per second/minute as a defence against distributed scraping? A user who clicks more than 3 links per second is not a person, and neither is one who clicks 50 in a minute. And if they get blocked and switch to the next IP, the bandwidth they can occupy is still limited.
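A minimal sketch of that idea in Python (the thresholds and the `allow` helper are illustrative guesses, not from any real setup):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_PER_SECOND = 5    # hypothetical thresholds, per the numbers above
MAX_PER_MINUTE = 50

hits = defaultdict(deque)  # client id -> timestamps of recent requests

def allow(client_id: str) -> bool:
    """Return False once a client exceeds either rate limit."""
    now = time.monotonic()
    q = hits[client_id]
    q.append(now)
    # forget anything older than the one-minute window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    in_last_second = sum(1 for t in q if now - t <= 1.0)
    return in_last_second <= MAX_PER_SECOND and len(q) <= MAX_PER_MINUTE
```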

[–] letsgo@lemm.ee 3 points 2 hours ago (1 children)

I click links frequently and I'm not a web crawler. Example: get search results, open several likely-looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn't limited to one person's bias and/or mistakes. It's not just search results; I do this on Lemmy too, and when I'm shopping.

[–] MonkderVierte@lemmy.ml 2 points 1 hour ago

Ok, same, make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some webshops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.

[–] JadedBlueEyes@programming.dev 4 points 2 hours ago (1 children)

They make one request per IP. Rate limiting per IP does nothing.

[–] MonkderVierte@lemmy.ml 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

Ah, one request, then the next IP does one, and so on, rotating? I mean, they don't have unlimited addresses. Is there no way to group them together into an observable group and set quotas? For the purpose of defending against AI DDoS, I mean, not just for hurting them.
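One hedged sketch of that grouping: treat a whole subnet as one client and give it a shared quota. The prefix sizes and the quota here are made-up illustrations:

```python
import ipaddress
from collections import Counter

GROUP_QUOTA = 1000            # hypothetical per-group request budget
requests_per_group = Counter()

def group_of(addr: str):
    ip = ipaddress.ip_address(addr)
    # rotating within a /24 (v4) or /48 (v6) still lands in the same bucket
    prefix = 24 if ip.version == 4 else 48
    return ipaddress.ip_network(f"{addr}/{prefix}", strict=False)

def over_quota(addr: str) -> bool:
    group = group_of(addr)
    requests_per_group[group] += 1
    return requests_per_group[group] > GROUP_QUOTA
```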

[–] edinbruh@feddit.it 1 points 1 hour ago (1 children)

There's always Anubis 🤷

Anyway, what if they are backed by some big Chinese corporation with a /32 IPv6 block and a /16 IPv4 block? It's not that unreasonable.

[–] JackbyDev@programming.dev 1 points 30 minutes ago

No, I don't think blocking IP ranges will be effective (except in very specific scenarios). See this comment referencing a blog post about exactly this happening, where the traffic came from a variety of residential IP allocations: https://lemm.ee/comment/20684186

[–] ZeffSyde@lemmy.world 7 points 4 hours ago (1 children)

I'm imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes 'click here to prove you aren't a robot' and 'select all of the images that have a traffic light' seem like child's play.

[–] Tiger_Man_@lemmy.blahaj.zone -1 points 2 hours ago (1 children)

All you need to protect data from AI is to use a non-HTTP protocol, at least for now.

[–] Bourff@lemmy.world 7 points 2 hours ago

Easier said than done. I know of IPFS, but how widespread and easy to use is it?

[–] Tiger_Man_@lemmy.blahaj.zone 2 points 3 hours ago (1 children)

How can I make something like this?

[–] Zacryon@feddit.org 44 points 8 hours ago (4 children)

I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.

I don't really see an end to this madness. Such a huge waste of resources.

[–] arararagi@ani.social 3 points 1 hour ago

Well, the adblockers are still winning, even on Twitch, where the ads come from the same pipeline as the stream; people made solutions that still block them, since uBlock Origin couldn't do it by itself.

[–] enbiousenvy@lemmy.blahaj.zone 7 points 3 hours ago

The rise of LLM companies scraping the internet is also, I've noticed, the moment YouTube started cracking down harder on adblockers and third-party viewers.

Piped and Invidious instances that I used to use no longer work, and many other instances have gone the same way. NewPipe has been breaking more frequently. youtube-dl and yt-dlp sometimes cannot fetch higher-resolution video. And sometimes the main YouTube site is broken on Firefox with uBlock Origin.

Not just YouTube: z-library, and especially sci-hub & libgen, have also been harder to use at times.

[–] pyre@lemmy.world 16 points 6 hours ago

There is an end: you legislate it out of existence. Unfortunately, US politicians are instead trying to outlaw any regulation of AI. I'm sure it's not about the money.

[–] essteeyou@lemmy.world 43 points 11 hours ago (2 children)

This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number, then just drop all data from that site from the training data.

It's not like I can afford to compete with OpenAI on bandwidth, and they're already burning through money without a care.
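From the crawler's side, that check could be as simple as this sketch (the cutoff is an invented "insanely high number"; a real pipeline would also discard what it already collected from the host once the cutoff trips):

```python
from collections import Counter
from urllib.parse import urlparse

MAX_PAGES_PER_HOST = 100_000   # invented cutoff
pages_seen = Counter()

def keep_page(url: str) -> bool:
    host = urlparse(url).netloc
    pages_seen[host] += 1
    # a host that never runs out of unique pages looks like a tarpit
    return pages_seen[host] <= MAX_PAGES_PER_HOST
```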

[–] bane_killgrind@slrpnk.net 21 points 9 hours ago (2 children)

Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?

Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic on top it's going to be indistinguishable from real large data sets.
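To illustrate just how simple: a toy word-level Markov chain that babbles plausible-looking text from any seed corpus (the corpus and function names are placeholders):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words seen right after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, length: int = 30) -> str:
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # pick any observed successor
    return " ".join(out)

corpus = "the scraper reads the page and the page links to the next page"
print(babble(build_chain(corpus), "the"))
```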

[–] Valmond@lemmy.world 13 points 7 hours ago (1 children)

Imagine the staff meeting:

You: we didn't gather any data because it was poisoned

Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!

Boss: hmm who am I going to keep...

[–] yetAnotherUser@lemmy.ca 10 points 6 hours ago* (last edited 6 hours ago)

The boss fires both, "replaces" them with AI, and tries to sell the corposhill's dataset to companies that make AIs that write generic fantasy novels.

[–] Vari@lemm.ee 58 points 12 hours ago (1 children)

I'm so happy to see that AI poison is a thing.

[–] ricdeh@lemmy.world 9 points 6 hours ago (1 children)

Don't be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.

[–] arararagi@ani.social 2 points 1 hour ago

So we should just give up? Surely you don't mean that.

[–] RedSnt 119 points 14 hours ago (10 children)

It's so sad we're burning coal and oil to generate heat and electricity for dumb shit like this.

[–] endeavor@sopuli.xyz 6 points 6 hours ago (1 children)

I'm sad governments don't realize this and regulate it.

[–] Tja@programming.dev -3 points 2 hours ago (1 children)

Of all the things governments should regulate, this is probably the least important one, and regulation would be ineffective anyway.
