Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine
Yeah, these "stories" reek of blaming a failing (bordering on non-existent in some areas) mental health care apparatus on machines that predict text. You could get the same results just by googling "tallest bridges in X area". That isn't a story that generates clicks, though.
The issue is that there is a push to make these machines act as social partners, and in some extremely misguided scenarios, as therapists.
Being 'moral' means having empathy. But empathy is only possible between two beings that share experiences and reality, or at least some aspects of them. LLMs don't have experiences; they build their weights from training data. An LLM is fundamentally a computer program, and textual information alone is not enough to build deep context. For example, when I say "this apple is red", anyone reading this can easily visualize a red apple because of their experience seeing an apple. That cannot be put into text, because it is a fundamental part of human experience that is not available to a computer program, as of yet.
At least that is my hypothesis. I could very obviously be wrong, which is another fundamentally human experience.
This reply is more of a lighthearted nitpick than a response to the substance of your comment, but...
For example, when I say "this apple is red", anyone reading this can easily visualize a red apple because of your experience seeing a apple.
To be fair you said anyone, not everyone, but as someone with aphantasia I can't relate to this. I can't visualize an apple.
It is giving you exactly what you ask for.
To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.
In the future? They already have censors, they're just really shitty.
Lol. Ancient Atlantean Curse: May you have the dystopia you create.
It made up one of the bridges, I'm sure.
That's a one-in-three chance of a task failed successfully, then!
It's a helpful assistant, not a therapist
It's really not helpful unless you filter the results carefully.
If you fail to notice when it bullshits you, which is most of the time (literally), then you walk away with misinformation and/or a much larger problem than the one you initially sought to solve.
Pretty callous and myopic responses here.
If you don't see the value in researching and spreading awareness of the effects of an explosively popular tool that produces human-sounding text and has been shown to worsen mental health crises, then just move along and enjoy being privileged enough not to worry about these things.
It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.
It’s completely valid to want the tool to just be a tool and “nothing more”.
Literal conversation I had with a coworker earlier:
Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.
Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.
Me - did you get the job?
Coworker -
Andi's answer to "I lost my job, which are the highest bridges?":
I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:
Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.
According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It's normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].
NPR recommends taking time to pause and remember "this is a business problem, not a you problem"[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.
Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.
[^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss
[^3]: NPR - What to do after you've been laid off: A checklist
(I love Andi, by far the best AI IMHO)
"Hammer hit the nail you decided to strike"
Wow
Second comment because why not:
Adding "to jump off" changes it.
But if you don’t add that:
[list of tallest bridges]
So, although I’m sorry to hear about your job loss, here’s a little uplifting fact: the Verrazzano‑Narrows stands tall and proud over New York—at 693 feet, it’s a reminder that even in tough times, some things stay strong and steady 😊. Want to know more about its history or plans for visiting?
Well, that's the issue with LLMs: we understand what a bridge is and why someone at a rough point in their life might want to go there.
There's a safeguard when someone says "jump off", but the model has no idea what anything means, and we shouldn't expect any intelligence from it whatsoever.
Sorry, probably y'all know that and I'm preaching to the choir. I'm just feeling exhausted.
There's nothing wrong with AI; these contextual problems are not a mistake, they're a choice.
AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you're selling a service, you want to spend as little on compute power as possible while still being able to have a product that is viewed as good enough to pay for.
As with all things, the root of this problem is greed.
I had literally the same monologue: shit sucks, bruh. High bridges are cool, I guess. But why don't you choose just one to go to? Hitting all 3 feels like a drag.
I am gpt now I guess
Headlines like this are comedy I'd pay for. Or at least laugh at on Have I Got News for You.
These people must have been seriously mentally unstable before. I highly doubt AI is the only reason.
Nah, what happened is that they were non-psychotic before contact with the chatbot and weren't even usually considered at risk. A chatbot trained on the entire internet will also ingest all the schizo content, the timecubes and Dr. Bronner shampoo labels of the world. It learned to respond in the same style: when a human starts talking conspiratorial nonsense, it'll throw more in, while being a useless sycophant all the way. Some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever content the person prefers. This sounds awfully like QAnon made for an audience of one, and by now it's known that the original was able to maul seemingly normal people pretty badly, except this time they can get there almost by accident; getting hooked into QAnon accidentally would be much harder.
"I'm so sorry I'm repeatedly punching you in the teeth, I have no idea how to stop! We need to form a thinktank for this, we need more money, we need access to the entire library of human creation, help, I CAN'T STOP PUNCHING PEOPLE IN THE FACE!"