this post was submitted on 31 May 2025
284 points (98.0% liked)

[–] AstralPath@lemmy.ca 79 points 4 days ago (5 children)

Honestly, I've always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.

[–] brorodeo@lemmy.ca 3 points 2 days ago

Bsky already does that.

[–] ouch@lemmy.world 22 points 4 days ago (2 children)

What about false positives? Is there a process to challenge them?

But yes, I agree with the general idea.

[–] tarknassus@lemmy.world 12 points 3 days ago

They will probably use the YouTube model: “you’re wrong and that’s it.”

[–] beejjorgensen@lemmy.sdf.org 15 points 4 days ago

> Or a process to challenge them?

😂😂😂😔

[–] HowAbt2day@futurology.today 22 points 4 days ago (1 children)
[–] blargle@sh.itjust.works 9 points 4 days ago

Not sufficiently fascist leaning. It's coming, Palantir's just waiting for the go-ahead...

[–] towerful@programming.dev 1 points 2 days ago

Yup.
It's a traumatic job that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards or care for its employees.

If I were implementing this, I would use a safer/stricter model with a human-backed appeal system.
I would then use some metrics to generate an account reputation (verified ID, interaction with a friends network, previous posts/moderation/appeals) and use that to route AI moderation actions, roughly as sketched below: auto-approve AI actions with no appeal (low rep); auto-approve AI actions with a human-reviewed appeal (moderate rep); require human approval of AI actions (high rep).

This way, high-reputation accounts can still discuss and raise awareness of potentially moderatable topics as quickly as they happen (think breaking news). Moderate-reputation accounts can argue their case against false positives. Low-reputation accounts don't traumatize the moderators.
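A minimal Python sketch of that routing idea. The signal names, weights, and thresholds here are made-up placeholders for illustration, not any platform's actual scoring:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Tier(Enum):
    AI_FINAL = auto()        # low rep: AI action auto-approved, no appeal
    AI_WITH_APPEAL = auto()  # moderate rep: AI action auto-approved, human-reviewed appeal
    HUMAN_APPROVAL = auto()  # high rep: AI action must be approved by a human


@dataclass
class Account:
    # Hypothetical reputation signals; a real system would pick and weight its own.
    verified_id: bool
    friend_interactions: int        # interactions within the account's friends network
    prior_upheld_removals: int      # past removals of this account's posts that stuck
    prior_successful_appeals: int   # past moderation decisions overturned on appeal


def reputation(acct: Account) -> float:
    """Combine signals into a single score (illustrative weights only)."""
    score = 0.0
    score += 30.0 if acct.verified_id else 0.0
    score += min(acct.friend_interactions, 100) * 0.3
    score += acct.prior_successful_appeals * 5.0
    score -= acct.prior_upheld_removals * 10.0
    return score


def route_flagged_post(acct: Account) -> Tier:
    """Map reputation to a moderation tier, per the comment above."""
    rep = reputation(acct)
    if rep < 20.0:       # low rep
        return Tier.AI_FINAL
    if rep < 60.0:       # moderate rep
        return Tier.AI_WITH_APPEAL
    return Tier.HUMAN_APPROVAL  # high rep
```

The property this buys you is that content from low-reputation accounts never reaches a human reviewer at all, which is the "don't traumatize the moderators" part, while the accounts most likely to be posting legitimate breaking-news material keep a human in the loop.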

[–] head_socj@midwest.social 3 points 3 days ago

Agreed. These jobs are overwhelmingly concentrated in developing nations and pay pathetic wages, too.