this post was submitted on 06 Apr 2025

Futurology

[–] hendrik@palaver.p3x.de 4 points 1 day ago* (last edited 1 day ago)

I get what you say. I'm still not convinced. With something like SQL, my query is an exact instruction: fetch me these rows, do these operations on those fields, and return that. And it does exactly that. With LLMs, I put in human language, it translates that into some unknown internal representation and does autocomplete. Which I think is a different mechanism, and in consequence a different thing gets returned. I'm thinking of something like asking a database an exact question, like: count the number of users and tell me which servers have the most users. You get the answer to that question. If I query an AI instead, it also gives me an answer, and it may even be deterministic once I set the temperature to zero. But I've found LLMs tend to "augment" their answers with arbitrary "facts". Once it knows that Reddit, for example, is a big platform, it won't really look at the news article I gave it and the numbers in it. If it's a counter-intuitive finding, it'll rather base its answer on its background knowledge and disregard the article's numbers, leading to an incorrect answer. That tends to happen to me with more complex things. So I don't think it's the correct tool for things like summarization, or half the things databases are concerned with.
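To make the "exact question" example concrete, here's a minimal sketch with an in-memory SQLite database (the table and column names are hypothetical, just for illustration):

```python
import sqlite3

# Hypothetical users table: one row per account, with its home server
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, server TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "palaver.p3x.de"), ("bob", "lemmy.world"),
     ("carol", "lemmy.world"), ("dave", "futurology.today")],
)

# "Count the number of users" — an exact instruction, an exact answer
total = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(total)  # 4

# "Which servers have the most users" — grouped and sorted, largest first
rows = conn.execute(
    "SELECT server, COUNT(*) AS n FROM users "
    "GROUP BY server ORDER BY n DESC"
).fetchall()
print(rows[0])  # ('lemmy.world', 2)
```

The point being: the query is the complete specification of the answer, so nothing from outside the table can leak into the result.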

With simpler things, I'm completely on your side. It gets simple questions right almost every time, and it has an astounding pile of knowledge available. It seems able to connect information and apply it to other things. I'm always amazed by what it can do, and by its shortcomings, many of which aren't very obvious. I'm a bit curious whether we'll one day be able to improve LLMs to a state where we can steer them into being truthful (or creative) and control what they base their responses on...

I mean, we kind of want that. I frequently see some GitHub bot or help bot return incorrect answers. At the same time, we want things like Retrieval Augmented Generation and AI assistants that help workers be more efficient, or help doctors avoid mistreatment by looking through the patient's medical records... But I think these people often confuse AI with a database that gives a summary, and I don't think that's what it is. It will do for the normal case. But you really have to pay attention to what current AI actually is if you use it for critical applications, because it's knowledgeable but at the same time not super smart, and it tends to get weird with edge cases.
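Retrieval Augmented Generation, mentioned above, boils down to: fetch the relevant documents first, then have the model answer only from those. A toy sketch of the retrieval step — the keyword-overlap scoring is deliberately naive and the documents are made up, purely for illustration:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "Reddit reported 73 million daily active users last quarter.",
    "The patient's medical records list two prior treatments.",
    "Lemmy is a federated link aggregator.",
]

context = retrieve("how many daily active users does Reddit have", docs)
# The retrieved passages would then be prepended to the LLM prompt, the
# idea being that the model answers from the article's numbers rather
# than from its background knowledge.
print(context[0])
```

Whether the model actually sticks to the retrieved context, rather than falling back on what it "knows", is exactly the failure mode described above — retrieval narrows the input, it doesn't enforce faithfulness.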

And I think that's kind of the difference. "Traditional" computing handles edge cases just as well as the regular stuff: it'll look up information and either match the query or return nothing. But it can't answer a lot of questions unless you tell the computer exactly which steps to do.