There are ways LLMs can be used to better one's life (apparently in some software dev circles they can be and are used to make workflows more efficient), and this could be one of them, because the part that sucks most about therapy (after the whole monetary thing) is finding the form of therapy that works for you, and finding a therapist you can work with. Every human is different, and that goes for both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.
Obviously I'm not saying "replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives", but I have witnessed people in my own life get some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress, and for immediate help, a well-developed, non-capitalist LLM might be invaluable; say, for a borderline person having such a bad anxiety attack that they can't calm themselves because they don't know how to break the vicious cycle of thought and emotional response. That goes especially when an actual human can't be reached, for example because (as in this case) the borderline person lives in a remote area and it's the middle of the night, which, as I can tell from personal experience, it very often is.
And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there's still a possibility of both being needed eventually. So a chatbot with access to the necessary general information (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though that raises further privacy concerns to be assessed), and the capability to parse and convey it all in a non-belittling way (since some doctors and nurses can be real fucking assholes at times) could possibly save lives.
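To make that concrete, here's a minimal sketch of the kind of guardrailed flow I mean: hard-coded safety responses and an escalation path that kick in before any model output is trusted. Everything in it (the phrase list, the exercise text, the routing) is an illustrative assumption, not a real product; only the 988 US Suicide & Crisis Lifeline number is real.

```python
# Hypothetical sketch of a guardrailed crisis-support chatbot flow.
# All names and phrases are illustrative assumptions, not a real API.

CRISIS_PHRASES = {"kill myself", "end it", "can't go on", "suicide"}

BREATHING_EXERCISE = (
    "Box breathing: inhale for 4 seconds, hold 4, exhale 4, hold 4. "
    "Repeat for a few minutes."
)

HOTLINE = "988 (US Suicide & Crisis Lifeline)"  # region-specific in practice


def respond(message: str) -> str:
    """Return a hard-coded safety response before any LLM is consulted."""
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        # Escalation path: never leave this branch to model output alone.
        return (
            f"You matter. Please consider calling {HOTLINE} right now. "
            f"While you wait, try this: {BREATHING_EXERCISE}"
        )
    # Non-crisis messages could be handed to an LLM here (omitted);
    # for distress without crisis signals, surface a self-soothing technique.
    return BREATHING_EXERCISE


if __name__ == "__main__":
    print(respond("I'm panicking and I can't calm down"))
```

The point of the sketch is that the life-saving parts (hotline, escalation) don't have to depend on the model at all; the LLM only adds the conversational, non-belittling delivery on top.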
So the problem here is capitalism, surprising no-one.
You're missing the most important point here; quoting:
Plus, an AI cannot really have your best interest at heart, and these sorts of things open up a whole slew of very dystopian scenarios.
OK, you said "capitalism" but that's way too broad.
Also, I find the example of a "mental health emergency" (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases (presuming broadband internet still works, and the person in question is savvy enough to use the chatbot) it might be better than nothing.
You don't actually know what you're talking about, but like many others in here you put this over-the-top anti-AI current-thing sentiment above everything, including simple awareness that you don't know anything. You clearly haven't interacted with many therapists and medical professionals as a non-patient if you think they're guaranteed to respect privacy. They're supposed to, but off the record and among friends plenty of them yap about everything. They're also often obligated to report patients in cases of self-harm etc., which can get those patients involuntarily sectioned, with repercussions that follow them for years: job loss, healthcare costs, homelessness, legal restrictions, stigma and so on.
There's nothing contrived or extremely rare about mental health emergencies, and they don't need to be "emergencies" the way you understand it, because many people are undiagnosed or misdiagnosed for years, with very high symptom severity, episodes lasting for months, and chronic barely-coping. Someone may be in a big city and it won't change a thing; hospitals and doctors don't have magic pills that automatically cure mental illness. That's assuming patients even have insight (not necessarily present during episodes of many disorders) or awareness that they have a mental illness and aren't just sad (because mental health awareness is in the gutter; example: your pretentious incredulity here). It's also assuming they have friends available, or that they even feel comfortable talking about what bothers them with people they're acquainted with.
Some LLM may actually end up convincing or informing them that they do have medical issues that need to be treated as such. Suicidal ideation may be present for years, but active suicidal intent (the state in which people actually do it) rarely lasts more than 30 minutes or a few hours at worst, and it's highly impulsive in nature. Wtf would you or "friends" do in that case? Do you know any techniques to calm people down during episodes? Even unspecialized LLMs have latent knowledge of these things, so there's a good chance such people end up getting life-saving advice, as opposed to just doing it, or interacting with humans who default to interpreting it as "attention seeking" and so becoming even more convinced that they should go ahead with it because nobody cares.
This holier-than-thou anti-AI bs had some point when it was about VLMs training on scraped art, but some of you echo chamber critters turned it into some imaginary high moral prerogative that even turns off your empathy for anyone using AI, including in use cases where it may save lives. It's some terminally online "morality" where supposedly "there is no excuse for the sin of using AI": echo-chamber-boosted reddit brainworms, and fully performative unless all of you use fully ethical cobalt-free smartphones (so you're not implicitly gaining convenience from the six million victims of the Congo cobalt wars so far), never use any services on AWS, and magically avoid all megadatacenters, etc. Touch grass jfc.
OK, you're angry. I'm just going to say this: I also have mental health issues and I also don't live in a city. Still, I just don't see how a chatbot could help me in an emergency. Sorry.
Yeah, I'm angry, because if my loved ones are in some moment of despair and nobody is available, or they don't want to talk to people, I'd rather they at the very least talk to a chatbot that will argue that "they matter" and give them a hotline or a site, instead of not even trying because of scripted, incoherent criticism (stolen art, slopslopslop, Elon Musk, techbro, privacy, blah blah) and ending up doing it because nothing delayed them or contradicted their suicidal intent.
It's not like you don't get this, but following the social media norm, monke-hear-monke-say all the way to zero-empathy levels, seems more important. That's way more dangerous, but we don't talk about humans doing that, or being vulnerable to that, I guess.
So, you're still angry.
I can only repeat: I just can't imagine myself giving in to this illusion, esp. when I'm at my lowest.
I don't care what else you project into my stance. I'm out.