this post was submitted on 15 May 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] EatATaco@lemm.ee 0 points 11 months ago (3 children)

Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We've been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.

This is part of what makes AI so "scary": it can basically know so much.

[–] exanime@lemmy.today 0 points 11 months ago (1 children)

Because a machine that "forgets" stuff it reads seems rather useless... considering it was a multiple-choice-style exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost all the time.

[–] EatATaco@lemm.ee 0 points 11 months ago

ChatGPT had the book entirely memorized

I feel like this exposes a fundamental misunderstanding of how LLMs are trained.

[–] Soyweiser@awful.systems 0 points 11 months ago (1 children)

Don't anthropomorphise. There is quite a difference between a human and an advanced lookup table.

[–] phoenixz@lemmy.ca 0 points 11 months ago (1 children)

Well... I do agree with you, but human brains are basically big prediction engines that use lookup tables (experience) to navigate life. Obviously a huge simplification, and LLMs are nowhere near humans, but it is quite a step in that direction.

[–] pikesley@mastodon.me.uk 1 points 11 months ago

@phoenixz @Soyweiser "Let's redefine what it means to be human, so we can say the LLM is human" have you bumped your head?

[–] mawhrin@awful.systems 0 points 11 months ago (2 children)

LLMs know nothing. literally. they cannot.

[–] Amoeba_Girl@awful.systems 0 points 11 months ago (1 children)

Yeah, but neither did Socrates

[–] dgerard@awful.systems 1 points 11 months ago

but he at least was smug about it

[–] EatATaco@lemm.ee 0 points 11 months ago (2 children)

I guess it comes down to a philosophical question as to what "know" actually means.

But from my perspective, it certainly knows some things. It knows how to determine what I'm asking, and it clearly knows how to formulate a response by stitching together information. Is it perfect? No. But neither are humans; we mistakenly believe we know things all the time, and miscommunications are quite common.

But this is why I asked the follow-up question: what's the effective difference? Don't get me wrong, they clearly have a lot of flaws right now. But my 8-year-old has a lot of flaws too, and I assume both will get better with age.

[–] mawhrin@awful.systems 1 points 11 months ago

i guess it comes down to a philosophical question

no, it doesn't, and it's not a philosophical question (and neither is this a question of philosophy).

the software simply has no cognitive capabilities.

[–] Amoeba_Girl@awful.systems 0 points 11 months ago (1 children)

don't compare your child to a chatbot wtf

[–] Soyweiser@awful.systems 1 points 11 months ago

The dehumanization that happens just because people think LLMs are impressive (they are, just not that impressive) is insane.