this post was submitted on 21 Oct 2025
129 points (95.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] AllNewTypeFace@leminal.space 10 points 1 day ago (1 children)

Legitimate aid groups chase the taillights of the Facebook inspo-slop merchants who have been blessing your older relatives’ feeds with “photos” of sad-eyed African village children making luxury cars out of plastic bottles and war veterans angling for reposts on their 100th birthday.

[–] technocrit@lemmy.dbzer0.com 1 points 1 day ago* (last edited 1 day ago)

Yes it's sad that legitimate aid groups need to do this but that's the reality that capitalism and "AI" have imposed on us.

[–] thisbenzingring@lemmy.sdf.org 17 points 1 day ago (1 children)
[–] Hegar@fedia.io 19 points 1 day ago (3 children)

That's a fine general principle, but some places around the world have very little, and no one local who can give rich-westerner levels of resources.

Giving non-locally can make far more of a difference in more people's lives.

[–] LastYearsIrritant@sopuli.xyz 2 points 1 day ago

https://givingmultiplier.org/invite/OLOGIES

This organization lets you donate to a local cause and pairs your donation with a remote organization that provides far more benefit per dollar.

If you use specific codes, it adds a little extra as well. This one is the link through the Ologies podcast.

I agree with your broad point; however, giving locally can still be an effective means to distribute resources at a non-local level. A local community project that I've recently become a part of has links to other groups that operate on a national and international level. It's hard to know which groups are trustworthy across a wide range of issues and scales, but it's easier if there are some groups or campaigns that you already trust.

As an example, I was recently talking to someone from a group fundraising for humanitarian aid in Sudan. I don't remember the particular charity they were affiliated with, but I met them through a national event that involves a collaboration of many different progressive political groups and causes. At this event, there was also a lot of local level stuff going on (and I was there because I had learned of it through my aforementioned local group).

It's not perfect, but it seems better than the decision paralysis caused by feeling insufficiently well-informed to know which fundraising efforts are worthwhile.

[–] thisbenzingring@lemmy.sdf.org 2 points 1 day ago (2 children)

When your local community is well enough that it no longer needs a handout, you can then spread your giving further.

[–] rainwall@piefed.social 7 points 1 day ago* (last edited 1 day ago) (1 children)

What is "well enough?"

If $100 locally can help 3 people survive but $100 globally helps 30-300 people survive, when is it the right time to stop giving locally?

"Give locally" is quippy and has some truth, but there is nuance it excludes.

[–] shalafi@lemmy.world 2 points 1 day ago

Good point. $100 in the Philippines is a shitload of money. My wife's ex-husband took a large group (12 people I think?) of her friends and family out to a nice seafood restaurant. $50 tab, including a generous tip.

But there's something to be said about raising yourself up before you can help others.

[–] technocrit@lemmy.dbzer0.com 1 points 1 day ago* (last edited 1 day ago)

My community and the planet in general would be awesome if resources were distributed equitably.

No "charity" is ever going to fix that.

[–] Hegar@fedia.io 1 points 1 day ago (3 children)

I honestly don't know if I find it better or worse to use image generators for this.

Showcasing the effects of poverty without having to watch an actual human suffering doesn't seem like the worst thing.

Long term, maybe it becomes easier to dismiss these things if they're known not to be real photos? But maybe it's always visceral, or everyone knows that this is still what it really looks like.

Generating enough images to get something useful is cheaper than flying photographers out, and lower overheads for charitable groups should actually have benefits, unlike cost savings for corporations.

It feels extra disingenuous to use AI here, but I don't know that my feelings are what's most important in this particular example.

[–] AllNewTypeFace@leminal.space 14 points 1 day ago* (last edited 1 day ago) (1 children)

Once you start making fake images of social evils, even with the best of intentions, you are compromised. After all, why not look at your analytics and juice the prompts to make the campaign work better? And soon the images diverge from the real problems you’re supposedly trying to solve and into the realm of lurid fantasy, presenting an exaggerated and simplified vision of the problem that pushes emotional buttons harder than realistic reportage would. From then on, the die is cast. If things improve, surely it’s better to keep making images of them being worse than ever, as after all, it’s good to have cash reserves just in case. And given how good a job you’ve done with that, surely you deserve a bonus. The computer-generated sad-eyed shantytown urchins wouldn’t want it any other way.

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (1 children)

You gotta realize that the other side has absolutely no qualms about using these methods. Trump is literally releasing videos of himself pooping on protesters.

If we really want to fight poverty and injustice, then we shouldn't be tied to some imaginary "AI" morality. We should use every means necessary.

A phrase that I think about quite often is "the master's tools will never dismantle the master's house".

We definitely should be willing to play dirty, but we need to be mindful of how the means that we use affect the ends we achieve.

I think you make good points, and I am also somewhat conflicted on this.

Overall, I'm probably against this, due to my experience as a disabled person. I (and many other disabled people) hate the way that we are often depicted in media aimed at the wider world. It's often gross, whether it's the pitying "poor disabled people" stuff, which is analogous to what you see in "poverty porn", or the superficially positive "inspiration porn" — they're both equally dehumanising. Less exploitation of our lived experience would definitely be great.

However, depictions of disabled people (whether in advertising, other charitable publication material, or the wider media) are pretty inauthentic to our actual lived experience, and this drives the stigma that disabled people face. AI models will have been trained on this same media ecosystem, so they are likely to perpetuate harmful stereotypes by depicting things that are insensitive or inaccurate about disabilities. Even before AI, the self-perpetuating cycle of bad disability representation in media was already a problem that caused real-world harm to disabled people, so I'm not optimistic about AI improving matters.

Overall, I think that we would be better served in figuring out how to depict things like poverty and disability without it being gross, exploitative and dehumanising. It's not an easy task, but it's a worthwhile one — not least of all because doing it properly would require involving people of the marginalised group you're advocating for (I have overwhelmingly found that charitable organisations that have disabled people working within them have significantly more nuanced and sensitive representations of disabled people in their advertising materials, for example).

More generally, I also worry that AI generated images could desensitize people to the things depicted. When I'm browsing the web and I see writing or images that seem to be AI generated, I often lose any interest that I might've had and just skim over them. It's an almost reflexive response, and I feel bad knowing that there are inevitably going to be false positives, in which I disregard something that wasn't actually made by AI. If the practice described in the OP becomes commonplace, I fear that it could cause people to tune out real images of human suffering. Perhaps some people might even use that as an excuse not to care; empathy is often uncomfortable, and the ignorance of "I don't need to feel emotionally affected by this, because it's not even real" could be an easy out when viewing even real images.

[–] Catoblepas@piefed.blahaj.zone 6 points 1 day ago (1 children)

I normally beat the anti-AI drum anyway, but I don’t consider this an ethical use of AI. The way to avoid exploiting people by making poverty porn is to just not make the poverty porn to begin with, not to use image generators trained on all the poverty porn that’s been made in the past. Even without a specific victim, the underlying problem of stereotyped portrayal while removing the voices of the people actually affected remains unchanged.

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (1 children)

Maybe the problem isn't actually YouTube videos, but a global system of violence and control that profits directly from hungry, exploitable people.

Sure, but YouTube videos made by putting a bunch of poverty porn into a blender are inherently on the side of the system, not the exploited. There's no way to remove the voices of the oppressed from the conversation in a non-oppressive way. The idea that you're doing it 'for their own good' doesn't change what's actually happening: the use of poverty porn at the exclusion of the voices of those affected.