
Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


I think you make good points, and I am also somewhat conflicted on this.

Overall, I'm probably against this, due to my experience as a disabled person. I (and many other disabled people) hate the way we are often depicted in media aimed at the wider world. It's often gross, whether it's the pitying "poor disabled people" stuff that's analogous to "poverty porn", or the superficially positive "inspiration porn"; both are equally dehumanising. Less exploitation of our lived experience would definitely be great.

However, the way disabled people are depicted (whether in advertising, other charitable publication material, or the wider media) is pretty inauthentic to our actual lived experience, and this drives the stigma that disabled people face. AI models will have been trained on this same media ecosystem, so they are likely to perpetuate harmful stereotypes by depicting disabilities in insensitive or inaccurate ways. Even before AI, the self-perpetuating cycle of bad disability representation in media was already causing real-world harm to disabled people, so I'm not optimistic that AI will do any better.

Ultimately, I think we would be better served by figuring out how to depict things like poverty and disability without being gross, exploitative, and dehumanising. It's not an easy task, but it's a worthwhile one, not least because doing it properly requires involving people from the marginalised group you're advocating for (I have overwhelmingly found that charitable organisations with disabled people working within them have significantly more nuanced and sensitive representations of disabled people in their advertising materials, for example).

More generally, I also worry that AI-generated images could desensitise people to the things depicted. When I'm browsing the web and I see writing or images that seem to be AI-generated, I often lose whatever interest I had and just skim past them. It's an almost reflexive response, and I feel bad knowing there will inevitably be false positives where I disregard something that wasn't actually made by AI. If the practice described in the OP becomes commonplace, I fear it could lead people to tune out real images of human suffering. Some might even use it as an excuse not to care; empathy is often uncomfortable, and the ignorance of "I don't need to feel emotionally affected by this, because it's not even real" could be an easy out even when the images are real.