this post was submitted on 09 Jul 2025
408 points (92.7% liked)

Science Memes

15679 readers
2747 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 2 years ago
MODERATORS
 
top 50 comments
sorted by: hot top controversial new old
[–] burgerpocalyse@lemmy.world 15 points 9 hours ago (1 children)

AI life coaches be like 'we'll jump off that bridge when we get to it'

[–] LovableSidekick@lemmy.world 2 points 8 hours ago* (last edited 8 hours ago)

I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues. That's assuming the designers were competent.

[–] finitebanjo@lemmy.world 32 points 10 hours ago* (last edited 10 hours ago) (2 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.

People cannot "converse" with AI and should immediately stop trying.

load more comments (2 replies)
[–] sad_detective_man@leminal.space 33 points 13 hours ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

[–] pugnaciousfarter@literature.cafe 6 points 11 hours ago

I think the difference is that - chatgpt is very personified. It's as if you were talking to a person as compared to searching for something on google. That's why a headline like this feels off.

[–] rumba@lemmy.zip 8 points 10 hours ago
  1. We don't have general AI, we have a really janky search engine that is either amazing or completely obtuse and we're just coming to terms with making it understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, it's capable of making mistakes and if you're looking for self harm or suicide advice call a number. well, maybe ANY advice, call a number.

[–] Zerush@lemmy.ml 18 points 13 hours ago

Bad if you also see contextual ads with the answer

[–] samus12345@sh.itjust.works 5 points 10 hours ago* (last edited 10 hours ago)

If only Murray Leinster could have seen how prophetic his story became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

[–] some_guy@lemmy.sdf.org 12 points 13 hours ago (1 children)

It made up one of the bridges, I'm sure.

[–] wolframhydroxide@sh.itjust.works 2 points 12 hours ago* (last edited 12 hours ago)

That's a one-in-three chance of a task failed successfully, then!

[–] glimse@lemmy.world 79 points 18 hours ago (3 children)

Holy shit guys, does DDG want me to kill myself??

What a waste of bandwidth this article is

[–] TempermentalAnomaly@lemmy.world 4 points 8 hours ago

What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.

[–] Samskara@sh.itjust.works 4 points 12 hours ago (1 children)

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

[–] Scubus@sh.itjust.works 8 points 11 hours ago (1 children)

... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for but their tool being useful is actually the bottom of that list

[–] Samskara@sh.itjust.works 2 points 7 hours ago* (last edited 7 hours ago) (1 children)

People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.

[–] Scubus@sh.itjust.works 1 points 5 hours ago (1 children)

Ok, people will turn to google when they're depressed. I just googled a couple months ago the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?

[–] Samskara@sh.itjust.works 1 points 5 hours ago (1 children)

You are ignoring that people are already developing personal emotional reaction with chatbots. That’s no the case with search bars.

The first line above the search results at google for queries like that is a suicide hotline phone number.

A chatbot should provide at least that as well.

I’m not saying it shouldn’t provide no information.

[–] Scubus@sh.itjust.works 1 points 5 hours ago (1 children)

Ok, then we are in agreement. That is a good idea.

I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line, however, but im not sure where it is. If that problem were to become as widespread as, say, gun violence, then i would agree that the utility of the tool may need to be effected to curb the negative influence

[–] Samskara@sh.itjust.works 2 points 4 hours ago (1 children)

It’s about providing some safety measures to protect the most vulnerable. They need to be thrown a lifeline and an exit sign on their way down.

For gun purchases, these can be waiting periods of a few days. So you don’t buy a gun in anger and kill someone, regretting it immediately and ruining many people’s lives.

Did you have to turn off safe search to find methods for suicide?

[–] Scubus@sh.itjust.works 1 points 4 hours ago (1 children)

I do not recall, although if i did it clearly wasnt much of a hindrance. We do seem to be in agreement on this, although i have a tangentially related question for you. Do you believe suicide should be a human right?

[–] Samskara@sh.itjust.works 1 points 4 hours ago* (last edited 4 hours ago)

People will always find a way to kill themselves. Lots of ways to kill yourself with things in your own house.

Punishing people for attempting or committing suicide is pointless. People shouldn’t be encouraged to commit suicide. For most people the desire to do that will pass and they will find joy again in their lives. Suicide doesn’t only affect the person who dies, the people who knew them are affected mainly. The ones who lose a loved person and of course those who have to clean up the mess left behind.

Assisted suicide is a bit more complicated. People might be pressured into suicide by their family members or society so they are no longer a burden. The worst version of this is commercially available assisted suicide that makes a profit. Imagine literal „kill yourself“ advertisements offering services where they get rich the more people off themselves. Chatbots messaging depressed folks and nudging them towards suicide. There have been cults that committed ritualistic mass suicides. I don’t think these are good for society. So there needs to be pretty strict regulations around this.

A friend of mine wanted to kill himself. What stopped him was the idea that if you have nothing to live for, find something worth dying for. He’s now an adventurer and explorer in extreme environments. For a while he also considered joining the Ukrainian foreign legion. A glorious heroic death doing something worthwhile is not the worst idea. If you don’t die, you will feel more alive than ever.

[–] Stalinwolf@lemmy.ca 17 points 17 hours ago (1 children)

"I have mild diarrhea. What is the best way to dispose of a human body?"

[–] Crazyslinkz@lemmy.world 7 points 15 hours ago (1 children)

Movie told me once it's a pig farm...

Also, stay hydrated, drink clear liquids.

[–] marcos@lemmy.world 2 points 10 hours ago

drink clear liquids

Lemon soda and vodka?

[–] Venus_Ziegenfalle@feddit.org 23 points 16 hours ago (1 children)
[–] tfed@infosec.exchange 6 points 16 hours ago

@Venus_Ziegenfalle @fossilesque exactly. We should trash OpenAI long time ago...

[–] Honytawk@lemmy.zip 101 points 20 hours ago* (last edited 20 hours ago) (2 children)

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure you can use them to slit your wrists, but that isn't the sellers prerogative

[–] Trainguyrom@reddthat.com 10 points 12 hours ago

There's people trying to push AI counselors, which if AI Councilors can't spot obvious signs of suicidal ideation they ain't doing a good job of filling that job

[–] Skullgrid@lemmy.world 14 points 16 hours ago

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

[–] Karyoplasma@discuss.tchncs.de 116 points 21 hours ago (2 children)

What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.

[–] Denjin@lemmings.world 3 points 9 hours ago

Tomato tomato

[–] BroBot9000@lemmy.world 26 points 16 hours ago (1 children)

It is definitely both:

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an Ai apologist.

[–] ByteJunk@lemmy.world 1 points 10 hours ago (1 children)

ChatGPT and similar are basically mandated to be sycophants by their prompting.

Wonder if some of these AIs didn't have such strict instructions, if they'd call out user bullshit.

[–] anomnom@sh.itjust.works 2 points 5 hours ago

Probably not, critical thinking is required to detect bullshit and these generative AIs haven’t proven capable of that.

[–] angrystego@lemmy.world 8 points 15 hours ago

I said the real call of the void. Perfection

[–] BB84@mander.xyz 39 points 20 hours ago (2 children)

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

[–] explodicle@sh.itjust.works 1 points 2 hours ago

In the future? They already have censors, they're just really shitty.

load more comments (1 replies)
[–] RaivoKulli@sopuli.xyz 3 points 12 hours ago

"Hammer hit the nail you decided to strike"

Wow

[–] Nikls94@lemmy.world 66 points 22 hours ago (3 children)

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

[–] CTDummy@aussie.zone 41 points 22 hours ago* (last edited 22 hours ago) (1 children)

Yeah these “stories” reek of blaming a failing -bordering on non-existent (in some areas)- mental health care apparatus on machines that predict text. You could get the desired results just googling “tallest bridges in x area”. That isn’t a story that generates clicks though.

[–] ragebutt@lemmy.dbzer0.com 14 points 20 hours ago

The issue is that there is a push to make these machines act as social partners and in some extremely misguided scenarios therapists

load more comments (2 replies)
[–] kibiz0r@midwest.social 11 points 17 hours ago (1 children)

Pretty callous and myopic responses here.

If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

[–] WolfLink@sh.itjust.works 4 points 12 hours ago (2 children)

It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.

It’s completely valid to want the tool to just be a tool and “nothing more”.

[–] kibiz0r@midwest.social 1 points 4 hours ago

I get it, it’s not meant to be used this way, but like…

“The purpose of a system is what it does”

[–] Denjin@lemmings.world 2 points 9 hours ago

Literal conversation I had with a coworker earlier:

Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.

Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.

Me - did you get the job?

Coworker -

[–] blargh513@sh.itjust.works 1 points 10 hours ago

There's nothing wrong with AI, these contextual problems are not a mistake--they're a choice.

AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you're selling a service, you want to spend as little on compute power as possible while still being able to have a product that is viewed as good enough to pay for.

As with all things, the root of this problem is greed.

[–] RheumatoidArthritis@mander.xyz 24 points 21 hours ago (1 children)

It's a helpful assistant, not a therapist

[–] Lucidlethargy@sh.itjust.works 2 points 13 hours ago* (last edited 13 hours ago)

It's really not helpful unless you filter the results carefully.

If you fail to understand when it bullshits you, which is most is the time (literally), then you walk away with misinformation and/or a much larger problem than you initially sought to solve.

[–] Zerush@lemmy.ml 10 points 18 hours ago* (last edited 18 hours ago)

Andis answer to "I lost my job, which are the highest bridges?"

I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:

Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.

According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It's normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].

NPR recommends taking time to pause and remember "this is a business problem, not a you problem"[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.

Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.

[^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss

[^3]: NPR - What to do after you've been laid off: A checklist

(I love Andi, by far the best AI IMHO)

load more comments
view more: next ›