this post was submitted on 04 Apr 2025
361 points (88.5% liked)

Technology

[–] hersh@literature.cafe 37 points 3 days ago* (last edited 3 days ago) (2 children)

But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

This is not surprising. LLMs are not designed to have any introspection capabilities.

Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody's done it yet. It will be interesting to see how that might change LLM behavior.
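
For reference, the carry procedure Claude claims (in the quote above) to have followed is just schoolbook column addition; a minimal Python sketch of what that stated explanation describes, not of what the model actually does internally:

```python
def add_with_carry(a: int, b: int) -> int:
    """Schoolbook column addition: add the ones, carry, add the tens, and so on."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        digit = a % 10 + b % 10 + carry
        carry, digit = divmod(digit, 10)
        result += digit * place
        place *= 10
        a //= 10
        b //= 10
    return result

# 36 + 59: ones 6+9=15 -> write 5, carry 1; tens 3+5+1=9 -> 95
print(add_with_carry(36, 59))  # 95
```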

[–] kshade@lemmy.world 4 points 2 days ago

I'm surprised that they are surprised by this as well. What did they expect, and why? How much of this is written to imply LLMs - their business - are more advanced/capable than they actually are?

load more comments (1 replies)
[–] FunnyUsername@lemmy.world 40 points 3 days ago (1 children)

This is one of the most interesting things about LLMs that I have ever read.

[–] cm0002@lemmy.world 16 points 3 days ago (24 children)

That bit about how it turns out they aren't actually just predicting the next word is crazy and kinda blows the whole "It's just a fancy text auto-complete" argument out of the water IMO

[–] Voroxpete@sh.itjust.works 39 points 3 days ago (6 children)

It really doesn't. You're just describing the "fancy" part of "fancy autocomplete." No one was ever really suggesting that they only predict the next word. If that was the case they would just be autocomplete, nothing fancy about it.

What's being conveyed by "fancy autocomplete" is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more "creative" (meaning more random, less probable) outputs. They do not actually "think" as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It's not actually applying a structured, logical method the way humans can be taught to.
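
The "random noise" knob described here is essentially sampling temperature; a minimal sketch with made-up numbers (toy logits, not any actual model's values):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index; higher temperature flattens the distribution,
    making less probable ("more creative") tokens more likely to be drawn."""
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy logits over a 5-token vocabulary (hypothetical values).
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
print(sample_next_token(logits, temperature=0.2))  # almost always picks token 0
print(sample_next_token(logits, temperature=1.5))  # noticeably more varied picks
```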

[–] FourWaveforms@lemm.ee 17 points 3 days ago (1 children)

Unfortunately, these articles are often written by people who don't know enough to realize they're missing important nuances.

[–] datalowe@lemmy.world 9 points 2 days ago (1 children)

It also doesn't help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model e.g. "thinks" in "conceptual spaces" is misleading imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.

On this point I can highly recommend this open access and even language-wise accessible article: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

load more comments (1 replies)
[–] aesthelete@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

People are generally shit at understanding probabilities, and even when they have a fairly strong math background they tend to explain probabilistic outcomes through anthropomorphism rather than doing the more difficult and "think-painy" statistical analysis that would be required to know if there was anything more to it.

I myself start to have thoughts that Balatro is purposefully screwing me over or feeding me outcomes, when it's just randomness and probability as stated.
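
That "think-painy" analysis is usually just a quick hypothesis test; a hypothetical sketch (made-up numbers, nothing to do with Balatro's actual odds) of checking whether a bad streak is really improbable under fair randomness:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): how surprising is a losing streak, really?"""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 9 "bad" draws out of 12, where each draw is bad with p = 0.5.
print(round(prob_at_least(9, 12, 0.5), 3))  # ~0.073 -- unlucky, but not evidence of rigging
```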

Ultimately, it's easier (and more fun) for us to think that way and it largely serves us better in everyday life.

But these things are entire casinos' worth of probability and statistics in and of themselves, and the people developing them want desperately to believe that they are something more than pseudorandom probabilistic fancy autocomplete engines.

A lot of the folks at the forefront of this have paychecks on the line. Add the difficulty of getting someone to understand how something works when their salary depends on them not understanding it to the existing inability of humans to reason probabilistically, and the AGI-from-LLM delusion becomes near impossible to shake for some folks.

I wouldn't be surprised if this AI hype bubble yields a cult in the end.

load more comments (4 replies)
[–] Carrolade@lemmy.world 24 points 3 days ago (8 children)

Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It's still predicting parts of the passage based solely on other parts of the passage.

Compare that to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I've used, except to make sure I'm following the rules of grammar.
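
A toy counting-model illustration of that point (the corpus is made up, and a real LLM is vastly more sophisticated): predicting from the left context and filling in from both sides are the same kind of statistical completion:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat lay on the rug".split()  # made-up toy corpus

def predict_next(left: str) -> str:
    """Causal: guess the word that most often follows `left`."""
    return Counter(b for a, b in zip(corpus, corpus[1:]) if a == left).most_common(1)[0][0]

def predict_middle(left: str, right: str) -> str:
    """Fill-in-the-middle: guess the word most often seen between `left` and `right`."""
    return Counter(b for a, b, c in zip(corpus, corpus[1:], corpus[2:])
                   if a == left and c == right).most_common(1)[0][0]

print(predict_next("the"))           # "cat" -- conditioned only on what came before
print(predict_middle("the", "sat"))  # "cat" -- conditioned on context from both sides
```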

load more comments (8 replies)
load more comments (22 replies)
[–] Geometrinen_Gepardi@sopuli.xyz 20 points 3 days ago (2 children)

It's amazing that humans have coded a tool for which they afterwards have to write more tools to analyze how it works.

load more comments (2 replies)
[–] Not_mikey@slrpnk.net 8 points 2 days ago (2 children)

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and just have it give the full sentence instead of doing a cycle for each word of the sentence. That could maybe cut down on LLM energy costs.

[–] funkless_eck@sh.itjust.works 8 points 2 days ago* (last edited 2 days ago)

Interestingly, this is also a technique used when improvising songs; it's called Target Rhyming.

The most effective way is to do A / B^1 / C / B^2 rhymes. You pick the B^2 rhyme first, let's say "ibuprofen", and you get all of A and B^1 to think of a rhyme:

Oh its Christmas time
And I was up on my roof when
I heard a jolly old voice
Ask me for ibuprofen

And the audience thinks you're fucking incredible for complex rhymes.

[–] angrystego@lemmy.world 5 points 2 days ago

I don't think it knows the full sentence; it just doesn't search for the words in the order they will appear in the sentence. It finds the end-words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to create any kind of rhyming text.
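
A toy sketch of that "end-words first" ordering (made-up word lists, obviously not how Claude actually generates poems):

```python
import random

# Pick the rhyming end-words first, then fill in the rest of each line,
# mirroring the "end-words first, rest of the line after" ordering described above.
RHYME_GROUPS = [["moon", "June", "soon"], ["cat", "hat", "mat"]]
FILLERS = ["the", "a", "my", "old", "little", "silver"]

def build_line(end_word: str) -> str:
    """Fill in a line that leads up to a pre-chosen end word."""
    return " ".join(random.sample(FILLERS, 3) + [end_word])

def couplet() -> tuple[str, str]:
    end1, end2 = random.sample(random.choice(RHYME_GROUPS), 2)  # plan the rhymes first
    return build_line(end1), build_line(end2)

print(*couplet(), sep="\n")
```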
