this post was submitted on 27 Jul 2025
199 points (93.8% liked)

Technology

[–] jj4211@lemmy.world 6 points 4 hours ago* (last edited 4 hours ago)

They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the 'uncanny valley' of the mistakes is all the more striking, but they are just generating content with no concept of 'mistake' or 'success', or of the content being a model of something else rather than just a blend of stuff from the training data.

For example:

Me: Generate an image of a frog on a lilypad.
LLM: I'll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.

<includes a perfectly credible picture of a frog on a lilypad, request successfully processed>

Me (lying): That seems to have produced a frog under a lilypad instead of on top.
LLM: Thanks for pointing that out! I'm generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.

It didn't know anything about the picture; it just took the input at its word. A human would have stopped to say, "Uhh... what do you mean? The lilypad is on the water and the frog is on top of that." Or, if the human were really trying to fulfill the request without asking for clarification, they might have thought, "Maybe he wanted it from the perspective of a fish, with the frog underwater?" A human wouldn't have gone, "You're right, I made a mistake, here I've tried again," and then included almost exactly the same thing.

But the training data isn't predominantly people blatantly lying about such obvious things, or second-guessing work that was so obviously done correctly.

[–] melsaskca@lemmy.ca 4 points 3 hours ago (2 children)

If you don't know you are wrong even after you have been shown to be wrong, you are not intelligent. So A.I. has become "Adequate Intelligence".

[–] jol@discuss.tchncs.de 1 points 53 minutes ago

Like any modern computer system, LLMs are much better and smarter than us at certain tasks while terrible at others. You could say that having a good memory and communication skills is part of what defines an intelligent person. Not everyone has those abilities, but LLMs do.

My point is, there's nothing useful coming out of the arguments over the semantics of the word "intelligence".

[–] MonkderVierte@lemmy.zip 3 points 3 hours ago* (last edited 3 hours ago)

That definition seems a bit shaky. Trump & co. are mentally ill but they do have a minimum of intelligence.

[–] cley_faye@lemmy.world 7 points 5 hours ago

prompting concerns

Oh you.

[–] RoadTrain@lemdro.id 0 points 2 hours ago

About halfway through the article they quote a paper from 2023:

Similarly, another study from 2023 found LLMs “hallucinated,” or produced incorrect information, in 69 to 88 percent of legal queries.

The LLM space has been changing very quickly over the past few years. Yes, LLMs today still "hallucinate", but you're not doing anyone a service by reporting, in 2025, on the state of the field as it was more than two years earlier.

[–] CeeBee_Eh@lemmy.world 1 points 4 hours ago (1 children)

This happened to me the other day with Jippity. It outright lied to me:

"You're absolutely right. Although I don't have access to the earlier parts of the conversation".

So it said that I was right about a particular statement, but it didn't actually know what I had said. So I told it that it had just lied. It kept saying variations of:

"I didn't lie intentionally"

"I understand why it seems that way"

"I wasn't misleading you"

etc

It flat out lied and then tried to gaslight me into thinking I was in the wrong for taking it that way.

[–] greygore@lemmy.world 1 points 2 hours ago (1 children)

It didn't lie to you or gaslight you, because those are things that a person with agency does. Someone who lies to you makes a decision to deceive you for whatever reason they have. Someone who gaslights you makes a decision to act as though the truth as you know it is wrong, in order to discombobulate you and make you question your reality.

The only thing close to a decision that LLMs make is: what text can I generate that statistically looks similar to all the other text that I’ve been given. The only reason they answer questions is because in the training data they’ve been provided, questions are usually followed by answers.
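
To make that concrete, here's a toy sketch of what that "decision" amounts to. It's deliberately oversimplified: a hard-coded probability table stands in for the neural network, but the control flow is the point - pick a statistically likely next token, with no notion of truth anywhere in the loop.

```python
import random

# Toy stand-in for the model: a hard-coded distribution over next tokens.
# A real LLM computes these probabilities with a neural network.
def next_token_probs(context: str) -> dict[str, float]:
    return {"sorry,": 0.6, "sure,": 0.3, "no,": 0.1}

def generate(context: str, steps: int = 3) -> str:
    for _ in range(steps):
        probs = next_token_probs(context)
        # The only "decision": sample a token in proportion to its probability.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        context += " " + token
    return context

print(generate("I think you made a mistake."))
```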

It's not apologizing to you; it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere - it has no ability to be sincere because it doesn't have any thoughts.

There is no thinking. There are no decisions. The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision-making to them, the less we collectively understand what they are, and the more we fall for the AI marketers' line about how close we are to truly thinking machines.

[–] CeeBee_Eh@lemmy.world 1 points 11 minutes ago* (last edited 10 minutes ago)

The only thing close to a decision that LLMs make is

That's not true. An "if statement" is literally a decision tree.

The only reason they answer questions is because in the training data they’ve been provided

This is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.

it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere

It has a large number of system prompts that alter its default behaviour in certain situations, such as not giving the answer on how to make a bomb. I'm fairly certain there are also catches in place to keep it from being overly apologetic, to minimize reputation harm and reduce potential "liability" issues.

And in that scenario, yes, I'm being gaslit because a human told it to.

There is no thinking

Partially agree. There's no "thinking" in the sentient or sapient sense. But there is thinking in the academic/literal sense of the definition.

There are no decisions

Absolutely false. The entire neural network is billions upon billions of decision trees.

The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are

I promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.

But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.

[–] Perspectivist@feddit.uk 41 points 12 hours ago (7 children)

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

[–] fodor@lemmy.zip 12 points 9 hours ago

What a terrible headline. Self-aware? Really?

[–] kameecoding@lemmy.world 1 points 5 hours ago

Oh shit, they do behave like humans after all.

[–] CosmoNova@lemmy.world 7 points 9 hours ago

Is that a recycled piece from 2023? Because we already knew that.

[–] SnotFlickerman@lemmy.blahaj.zone 88 points 15 hours ago (7 children)

That's because they aren't "aware" of anything.

[–] Modern_medicine_isnt@lemmy.world 18 points 12 hours ago (3 children)

It's easy, just ask the AI "Are you sure?" until it stops changing its answer.

But seriously, LLMs are just advanced autocomplete.

[–] jj4211@lemmy.world 3 points 3 hours ago (1 children)

I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he fixed it by always including in his prompt "Do not hallucinate content and verify all data is actually correct."

[–] Passerby6497@lemmy.world 2 points 3 hours ago (1 children)

Ah, well then, if he tells the bot not to hallucinate and to validate its output, there's no reason not to trust the output. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.

[–] jj4211@lemmy.world 3 points 3 hours ago (1 children)

It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told multiple times, in all caps, not to.

People really really don't understand how these things work...

[–] Modern_medicine_isnt@lemmy.world 1 points 29 minutes ago

The people who make them don't really understand how they work either. They know how to train them and how the software works, but they don't really know how it comes up with the answers it comes up with. They just do a ton of trial and error. Correlation is all they really have. Which of course is how a lot of medical science works too. So they're in good company.

[–] cley_faye@lemmy.world 5 points 5 hours ago

Ah, the Monte Carlo approach to truth.

[–] Lfrith@lemmy.ca 9 points 10 hours ago (3 children)

They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages over a list of numbers I had given it.

[–] jj4211@lemmy.world 4 points 3 hours ago (1 children)

Fun thing: when it gets the answer right, tell it it was wrong, and then watch it apologize and "correct" itself to give the wrong answer.

[–] Modern_medicine_isnt@lemmy.world 1 points 27 minutes ago

In my experience it can, but it has been pretty uncommon. But I also don't usually ask questions with only one answer.

[–] saimen@feddit.org 2 points 5 hours ago

I once gave it a math problem of sorts (how to break down a certain amount of money into bills), and the LLM wrote a Python script for it, ran it, and thus gave me the correct answer. Kind of clever, really.

[–] GissaMittJobb@lemmy.ml 7 points 10 hours ago (1 children)

Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.
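
A rough sketch of that pattern (ask_llm here is a hypothetical placeholder for whatever chat API you actually use, and in practice you'd want to sandbox the generated script before running it):

```python
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: call your LLM provider of choice here.
    raise NotImplementedError

def solve_math_with_llm(question: str) -> str:
    prompt = (
        "Write a self-contained Python script that prints only the answer "
        f"to the following problem:\n{question}\nOutput only the code."
    )
    code = ask_llm(prompt)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script_path = f.name
    # The arithmetic is done by the Python interpreter, not by the LLM.
    result = subprocess.run(
        ["python", script_path], capture_output=True, text=True, timeout=30
    )
    return result.stdout.strip()
```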

[–] jj4211@lemmy.world 3 points 4 hours ago

Also, generally the best interfaces for LLMs will combine non-LLM facilities transparently. The LLM translates the prose into the format the math engine expects, then an intermediate layer recognizes a tag, submits the excerpt to the math engine, and substitutes the chunk with the math engine's output.
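
Something in that spirit, as a sketch (the <math>...</math> tag convention and the tiny evaluator are invented here purely for illustration; real systems define their own tool-call formats):

```python
import ast
import operator
import re

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _evaluate(node):
    # Tiny safe arithmetic evaluator standing in for a real math engine.
    if isinstance(node, ast.Expression):
        return _evaluate(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_evaluate(node.left), _evaluate(node.right))
    raise ValueError("unsupported expression")

def resolve_math_tags(llm_draft: str) -> str:
    # The intermediate layer: find tagged excerpts in the LLM's draft and
    # substitute each one with the math engine's output.
    return re.sub(
        r"<math>(.*?)</math>",
        lambda m: str(_evaluate(ast.parse(m.group(1), mode="eval"))),
        llm_draft,
    )

print(resolve_math_tags("The total comes to <math>17.5 * 0.2 + 3</math> dollars."))
```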

Even for servicing a request to generate an image, the text generation model runs independently of the image generation, and an intermediate layer combines them. This can cause fun disconnects, like the guy asking for a full glass of wine. The text generation half is completely oblivious to the image generation half. So it responds in the role of a graphic artist dutifully doing the work without ever 'seeing' the image, and it assumes the image is good because that's consistent with its training output. Then the user corrects it, and it goes about admitting that the picture (which it never 'looked' at) was wrong and retrying the image generator with the additional context, producing a similarly botched picture.

[–] rc__buggy@sh.itjust.works 23 points 14 hours ago

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations

This is what everyone with a fucking clue has been saying for the past 5, 6? years these stupid fucking chatbots have been around.

[–] Lodespawn@aussie.zone 15 points 14 hours ago* (last edited 10 hours ago) (1 children)

Why is a researcher with a PhD in the social sciences researching the accuracy confidence of predictive text? How has this person gotten to where they are without understanding that LLMs don't think? Surely that came up when he so much as started considering this brainfart of a research project?

[–] rc__buggy@sh.itjust.works 8 points 14 hours ago (1 children)

Someone has to prove it wrong before it's actually wrong. Maybe they set out to discredit the bots.

[–] Lodespawn@aussie.zone 7 points 14 hours ago (1 children)

I guess, but it's like proving your phone's predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they are attributing intelligence to a predictive model.

[–] FanciestPants@lemmy.world 2 points 11 hours ago (1 children)

I work in risk management, but I don't really have a strong understanding of LLM mechanics. "Confidence" is something that I quantify in my work, but different terms are associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I'm not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn't model any possible outcomes other than the answer it is providing, which does seem troubling.

[–] Lodespawn@aussie.zone 2 points 10 hours ago (1 children)

Nah, their definition is the classical "how confident are you that you got the answer right?" If you read the article, they asked a bunch of people and four LLMs a bunch of random questions, asked each respondent how confident they/it was that the answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but as they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease to mitigate the overconfidence.

But the study still assumes enough intelligence to review past results and adjust accordingly, while disregarding the fact that an AI isn't an intelligence; it's a word-prediction model built on a data set of written text tending to infinity. It's not assessing the validity of results, it's predicting what the answer is based on all previous inputs. The whole study is irrelevant.

[–] jj4211@lemmy.world 2 points 3 hours ago

Well, not irrelevant. Much of the world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break in that context.

So as weird as it may seem to study a statistical content-extrapolation engine in the context of social science, a great deal of reality and investment wants to treat its output as "person equivalent," so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered "weird."

[–] El_guapazo@lemmy.world 3 points 12 hours ago

AI has evolved its own form of the Dunning-Kruger effect.

[–] Etterra@discuss.online 2 points 11 hours ago

Confidently incorrect.
