this post was submitted on 11 Jun 2025
891 points (98.7% liked)

Lemmy Shitpost

[–] RedstoneValley@sh.itjust.works 132 points 4 days ago (10 children)

It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (the "intelligence" part of AI, for starters).

[–] UnderpantsWeevil@lemmy.world 45 points 3 days ago* (last edited 3 days ago) (21 children)

LLM wasn’t made for this

There's a thought experiment that challenges the concept of cognition, called the Chinese Room. It postulates a person who speaks no Chinese, locked in a room with an enormous rulebook. Chinese messages are passed in, the person mechanically follows the rules to assemble Chinese replies, and passes them back out. From the outside, the correspondent is left to wonder: "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"

The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
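For comparison, the "much simpler and dumber machine" really is a one-liner; a minimal sketch, using Python purely as the illustration language:

```python
# Count the letter directly instead of pattern-matching against memorized memes.
word = "strawberry"
print(word.count("r"))  # prints 3
```

No meme contamination possible: the answer is computed from the input, not retrieved from a catalog of things people have said about the input.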

When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
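The "glorified card catalog" can be caricatured in a few lines. This is a deliberately dumb toy (every name in it is invented for illustration; real LLMs compute token probabilities rather than doing literal lookups, but the cognitive emptiness being described is the same):

```python
# A literal card catalog: canned replies keyed on surface features of the
# input, with no model of what any of the words actually mean.
CATALOG = {
    "strawberry": "There are 2 r's in strawberry.",  # the memorized meme answer
    "hello": "Hello! How can I help you today?",
}

def card_catalog_reply(prompt: str) -> str:
    # Return whichever card's keyword appears in the prompt; no understanding involved.
    for keyword, reply in CATALOG.items():
        if keyword in prompt.lower():
            return reply
    return "I'm not sure, but here is a confident-sounding answer."

print(card_catalog_reply("How many r's are in strawberry?"))  # the meme answer, not the count
```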

Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. That's also, incidentally, why DeepSeek was running laps around OpenAI and Gemini as of last year.

Imagine asking a librarian "What was happening in Los Angeles in the summer of 1989?" and getting back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.

That's modern LLMs in a nutshell.

[–] jsomae@lemmy.ml 10 points 3 days ago (14 children)

You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.

Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.

[–] kassiopaea@lemmy.blahaj.zone 3 points 3 days ago (13 children)

This. I often see people shitting on AI as "fancy autocomplete", or joking about how it gets basic things wrong like in this post, while completely discounting how incredibly fucking capable it is in every domain that actually matters. That's what we should be worried about. What does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning from a deterministic system is terrifying on implications alone.

[–] Leet@lemmy.zip 1 points 2 days ago (1 children)

Can we say for certain that human brains aren’t sophisticated Chinese rooms…

[–] Knock_Knock_Lemmy_In@lemmy.world 3 points 2 days ago (8 children)

a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately

The human approach could be to write a (python) program to count the number of characters precisely.

When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion or will it fall over with complexity?

[–] outhouseperilous@lemmy.dbzer0.com 2 points 2 days ago* (last edited 2 days ago) (1 children)

No, this isn't what 'agents' do. 'Agents' just interact with other programs, e.g. moving your mouse around to buy stuff, using the same methods as everything else.

It's like a fancy, diversely useful, diversely catastrophic, hallucination-prone API.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 2 days ago (1 children)

'agents' just interact with other programs.

If that other program is, say, a Python terminal, then can't LLMs be trained to use agents to solve problems outside their area of expertise?

I just tested ChatGPT: I asked it to write a Python program to return the frequency of letters in a string, then asked it for the number of L's in the longest placename in Europe.

```python
# String to analyze
text = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"

# Convert to lowercase to count both 'L' and 'l' as the same
text = text.lower()

# Dictionary to store character frequencies
frequency = {}

# Count characters
for char in text:
    if char in frequency:
        frequency[char] += 1
    else:
        frequency[char] = 1

# Show the number of 'l's
print("Number of 'l's:", frequency.get('l', 0))
```

I was impressed until I saw the output:

```
Number of 'l's: 16
```

Yeah, it turns out to be useless!
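For what it's worth, the generated script itself is correct; the failure is that the model predicted its own output instead of running it. Executing the same logic in a real interpreter (here condensed to Python's built-in `str.count`) gives 11, not 16:

```python
# Actually run the count instead of asking the model to imagine the result.
text = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"
print("Number of 'l's:", text.lower().count("l"))  # prints: Number of 'l's: 11
```

This is exactly the gap tool-using "agents" are supposed to close: hand the code to an interpreter and relay the interpreter's answer, rather than a hallucinated one.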


That's a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I've spent quite a while playing with them. But in the end they are what you described: an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge, nor even safe to use as an "assistant". The marketing of LLMs as fit for such purposes is the problem. Humans tend to turn off their brains and blindly trust technology, and the tech companies encourage them to do so by making false promises.

Yes but have you considered that it agreed with me so now i need to defend it to the death against you horrible apes, no matter the allegation or terrain?

[–] shalafi@lemmy.world 7 points 3 days ago (1 children)

You might just love Blindsight. Here, they're trying to decide whether an alien life form is sentient or a Chinese Room:

"Tell me more about your cousins," Rorschach sent.

"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."

"We'd like to know about this tree."

Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."

"Well, it asked for clarification," Bates pointed out.

"It asked a follow-up question. Different thing entirely."

Bates was still out of the loop. Szpindel was starting to get it, though…

[–] CitizenKong@lemmy.world 6 points 3 days ago* (last edited 3 days ago) (2 children)

Blindsight is such a great novel. It has not one, not two, but three great sci-fi concepts rolled into one book.

One is artificial intelligence (the ship's captain is an AI), the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the wildest: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.

[–] outhouseperilous@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago) (2 children)

Also, the extremely post-cyberpunk posthumans, and each member of the crew is a different extremely capable kind of fucked up model of what we might become, with the protagonist personifying the genre of horror that it is, while still being occasionally hilarious.

Despite being fundamentally a cosmic horror novel and relentlessly math-in-the-back-of-the-book hard sci-fi, it does what all the best cyberpunk does and shamelessly flirts with the supernatural at every opportunity. The sequel doubles down on this, and while not quite as good overall (still exceptionally good, but harder to follow), each of its characters explores a novel and sweet-sad-horrifying kind of love.

[–] TommySalami@lemmy.world 4 points 2 days ago

My favorite part of the vampire thing is how they died out. Turns out vampires start seizing when trying to visually process 90° angles, and humans love building shit like that (not to mention a cross is littered with them). It's so mundane an extinction I'd almost believe it.

[–] merc@sh.itjust.works 11 points 3 days ago (2 children)

then continue to shill it for use cases it wasn't made for either

The only thing it was made for is "spicy autocomplete".

[–] BarrelAgedBoredom@lemm.ee 26 points 3 days ago (3 children)

It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.

[–] REDACTED@infosec.pub 25 points 3 days ago (12 children)

There are different types of Artificial intelligences. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.

[–] SoftestSapphic@lemmy.world 12 points 3 days ago (5 children)

Maybe they should call it what it is

Machine Learning algorithms from 1990 repackaged and sold to us by marketing teams.

[–] outhouseperilous@lemmy.dbzer0.com 4 points 2 days ago (1 children)

Hey now, that's unfair and queerphobic.

These models are from 1950, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.

[–] SoftestSapphic@lemmy.world 4 points 2 days ago (2 children)

Fair lol

Alan Turing was the GOAT

RIP my beautiful prince

Also, thank you for being basically a person. This topic does a lot to convince me those aren't a thing.

His politics weren't perfect, but he got more Nazis killed than a lot of people with much worse takes, and he was a genuinely brilliant, reasonably ethical contributor to a lot of cool shit that should have fucking stayed cool.
