Admin review of account applications in Lemmy works fine. If you ask people to write a bit, it's quite easy to sort out the bots, as there are always giveaways. And if people use LLMs to write the responses, then that's on them 🤷
today, yes. it's a very simplistic worldview to assume that AI won't become less distinguishable from humans when writing applications in the future, and i don't expect it to hold true
You can just make the questions more location- or theme-specific. A bot is bound to slip up on stuff like that, and it doesn't need to be 100% foolproof either.
We get a lot of LLM bot applications on our instance, and even if it got 10x harder, they would still be really easy to spot.
what if we explicitly tell users to write something like "i am not a robot and i am creating this account" along with a bunch of slurs or some freaky stuff? an AI will never write that stuff
Have you heard of surveillance cameras and facial recognition? If a hostile actor knows in advance that members of a targeted online community will be physically present at a location at a given time, those people will be linked to the community. It doesn't take much from there to link specific persons to accounts.
Besides, libraries are having a hard enough time just existing in America. They don't need the burden of protecting the identities of dozens of people and fighting off lawyers and enforcers.
if the group of people registering their account names is big enough, facial recognition doesn't do much, as it can only link the person to one-of-a-hundred-or-thousand account names.
I don't think you fully comprehend just how many footprints people leave behind on the internet. Users would have to practice perfect opsec -- and I mean completely, absolutely perfect. One mistake, like using an e-mail address or an alias off-site, will link a person to the account. If that person cracks under legal threats, the entire operation is fucked. It's happened before.
Thinking you can solve the issue of privacy with a single idea is simply delusional.
Immediate flaws I can see:

- Cameras (or a human observer) undo any sense of anonymity. A bad actor could link a participant with an account.
- What's preventing a MitM attack, where the BBEL (Big Bad Evil Librarian) substitutes the participant addresses with bot addresses?
> Cameras (or a human observer) undo any sense of anonymity. A bad actor could link a participant with an account.
hence the mixing (shuffling) of the paper cards before they are registered. so there's no one-to-one mapping of humans to account names anymore.
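To make that unlinking step concrete, here's a minimal sketch in Python (the `register_cards` helper and the account names are hypothetical, purely for illustration): the cards are shuffled with a secure RNG before registration, so the order in which accounts get created carries no information about who dropped off which card.

```python
import secrets

def register_cards(cards: list[str]) -> list[str]:
    """Shuffle the submitted account-name cards, then hand them over for registration."""
    shuffled = cards.copy()
    # Fisher-Yates shuffle driven by a cryptographically secure RNG,
    # so the output order can't be predicted from the input order.
    for i in range(len(shuffled) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled

# Arrival order is observable (cameras, the librarian's memory)...
arrival_order = ["alice_fedi", "bob_fedi", "carol_fedi"]
# ...but the registration order no longer reveals who cast which card.
print(register_cards(arrival_order))
```

Of course, this only buys anonymity within a batch: with three cards, an observer still narrows each person down to one of three accounts, which is the one-of-a-hundred-or-thousand point above.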
How does the library confirm that the account name is connected to an actual person?
you have to go there in person to cast a piece of paper with your account name on it, similar to casting a ballot in an election. it's anonymous because the pieces of paper cannot be associated with a physical human, just like ballots are anonymous, but it still conveys the information that a human registered this account.
What's stopping someone making a new account every month this way or going to many different libraries and then just selling the account to bot farm operators?
it would be a lot of work
Would it? I would assume a "confirmed human" Fedi account could be worth $5-20. If you live close enough to the library, it's like 5 mins to pop in, drop off the piece of paper and go about your day. Double if you can sneak in two pieces of paper.
And what if you don't want to use crypto?
"crypto parties" often refers to key signing parties, i.e. parties where you exchange cryptographic keys. sorry i should have made that clearer
Ah, Ok!
Thanks for the clarification!