[–] Blue_Morpho@lemmy.world 42 points 1 day ago (4 children)

He seems right about everything. It's weird, though, that you can't say LLMs are useful without being downvoted.

Like if the tech doesn't achieve fully conscious superhuman intelligence, it's useless.

[–] knightly@pawb.social 73 points 1 day ago (3 children)

Any real utility they might have had is wholly overshadowed by the massive capital overinvestment and the shoehorning of it into everything by folks grifting on that overpromise.

[–] PattyMcB@lemmy.world 19 points 1 day ago

Absolutely. AI in everything is counterproductive, especially if it's bad, or if it has nothing to do with the function of the system into which it's shoehorned.

[–] DScratch@sh.itjust.works 12 points 1 day ago

And the environmental impact of training and running LLMs, just so I can ask GPT why my code no work?!

[–] Blue_Morpho@lemmy.world 0 points 1 day ago

Capital overinvestment and then a crash is just how capitalism rolls. It's been that way forever.

The Internet had overinvestment and a crash in 2000. Game consoles and home computers before that. Decades ago, when I was looking into more office space for my ISP, the real estate agent talked about how the Internet was the latest in a long chain of tech bubbles he had seen. He talked about the minicomputer bubble of the late 1960s.

Even within the AI submarket there have been hypes and crashes, like neural nets 30 years ago. Today voice recognition and image-to-text are in everything, yet no one complains, "Why is AI shoehorned into my camera app?" That's because it's no longer seen as AI, just as a feature.

[–] atomicbocks@sh.itjust.works 14 points 1 day ago

Nobody had a problem with Watson. It's the tech bros and the theft of IP that are the problem.

[–] ogmios@sh.itjust.works 6 points 1 day ago (1 children)

Way back, people attempted to make automatons from cogs and gears. While that didn't work, the basic technology was still extremely useful for appropriate applications.

[–] veroxii@aussie.zone 1 points 1 day ago

I saw a quote the other day, and I'm paraphrasing: "AI is not going to replace you. You're going to be replaced by someone who knows how to properly use and leverage AI."

[–] SpaceNoodle@lemmy.world 6 points 1 day ago (4 children)

The irony being that LLMs are completely orthogonal to general AI

[–] Blue_Morpho@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

I'm talking without really knowing anything, but it seems like LLMs aren't orthogonal, just insufficient on their own. That is, just as our consciousness has a library of information to draw on, organized by cross-references, an LLM could be the library that another software component draws upon to do the actual reasoning.

That's part of what DeepSeek has been trying to do: they put a bunch of induction logic for different categories in front of the LLM.
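
To make that "library plus a separate reasoner" picture concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the query_llm stand-in, the Reasoner class, the stopping rule); it's an illustration of the idea, not any real system's design:

```python
# Sketch of "LLM as the library, separate component doing the reasoning".
# All names here are hypothetical; query_llm stands in for a real model call.

def query_llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns canned text so this runs."""
    return f"(retrieved material for: {prompt})"

class Reasoner:
    """The part that plans, checks, and decides; the LLM is just its lookup."""

    def solve(self, question: str, max_steps: int = 3) -> str:
        notes: list[str] = []
        for _ in range(max_steps):
            # Consult the "library" for relevant material, not a final answer.
            notes.append(query_llm(f"facts relevant to: {question}"))
            # The actual reasoning loop (deciding when enough is known, and
            # how to combine it) lives out here, outside the model.
            if self.enough(notes):
                break
        return self.synthesize(question, notes)

    def enough(self, notes: list[str]) -> bool:
        return len(notes) >= 2  # placeholder stopping rule

    def synthesize(self, question: str, notes: list[str]) -> str:
        return f"answer to {question!r}, drawing on {len(notes)} notes"

print(Reasoner().solve("why does my code not work?"))
```

The point of the split is that the model only ever answers "what's relevant here?" while the surrounding component owns the loop, the stopping criterion, and the final synthesis.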

[–] moonlight@fedia.io 5 points 1 day ago

I agree, although this seems like an unpopular opinion in this thread.

LLMs are really good at organizing and abstracting information, and it would make a lot of sense for an AGI to incorporate them for that purpose. It's just that there's no actual thought process happening, and in my opinion, "reasoning models" like DeepSeek's are entirely insufficient and a poor substitute for true reasoning capabilities.

[–] moonlight@fedia.io 0 points 1 day ago

I don't think so, or rather, we don't know yet. LLMs are not the full picture, but they might be part of it. I could envision a future AGI that has something similar to a modern LLM as the "language / visual centers of the brain". To continue that metaphor, the part that's going to be really difficult is the frontal lobe.

edit: Orthogonal to actual reasoning? Sure. But not to "general AI".

[–] MartianSands@sh.itjust.works 0 points 1 day ago

That's not obviously the case. I don't think anyone has a sufficient understanding of general AI, or of consciousness, to say with any confidence what is or is not relevant.

We can agree that LLMs are not going to be turned into general AI though

[–] electricyarn@lemmy.world -4 points 1 day ago* (last edited 1 day ago) (1 children)

Completely orthogonal, you say? How perfectly cromulent.

[–] SpaceNoodle@lemmy.world 6 points 1 day ago (1 children)
[–] electricyarn@lemmy.world 0 points 1 day ago (1 children)

I read plenty. My BS jargon detector is finely honed.

[–] SpaceNoodle@lemmy.world 1 points 1 day ago (1 children)
[–] electricyarn@lemmy.world 1 points 1 day ago (1 children)

Enlighten me! What does "orthogonal" mean in the context you used it in?

[–] SpaceNoodle@lemmy.world 1 points 1 day ago

Independent; not parallel.
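
In the linear-algebra sense the metaphor borrows from, two directions are orthogonal when their dot product is zero, so movement along one contributes nothing along the other. A two-line illustration:

```python
# Orthogonal directions: dot product is zero, so progress along one
# axis contributes nothing along the other. Not opposed, just unrelated.
x = (1, 0)
y = (0, 1)
print(sum(a * b for a, b in zip(x, y)))  # 0
```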