aio

joined 1 year ago
[–] aio@awful.systems 11 points 1 week ago

The actual pathfinding algorithm (which is surely just A* search or similar) works just fine; the problem is the LLM which uses it.
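(For reference, a minimal sketch of what that kind of pathfinder looks like - generic A* on a grid. The grid representation and the Manhattan heuristic here are my own illustrative assumptions, not whatever the project actually runs.)

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid. grid[y][x] is True for walkable cells.
    Manhattan distance is an admissible heuristic for unit-cost 4-connected moves."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    g_score = {start: 0}
    came_from = {}

    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            # Reconstruct the path by walking back through predecessors.
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > g_score.get(cur, float("inf")):
            continue  # stale heap entry, a better path was already found
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                ng = g + 1
                if ng < g_score.get((nx, ny), float("inf")):
                    g_score[(nx, ny)] = ng
                    came_from[(nx, ny)] = cur
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # no path exists
```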

[–] aio@awful.systems 14 points 1 week ago (2 children)

I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that's the only part of the whole thing that actually works.

[–] aio@awful.systems 5 points 1 week ago

levels of glazing previously unheard of

[–] aio@awful.systems 7 points 1 week ago (1 children)

The multiple-authors thing is certainly a joke; it's a reference to the theory, widely accepted among scholars, that the Torah was compiled from multiple sources by different authors.

[–] aio@awful.systems 6 points 1 week ago* (last edited 1 week ago) (1 children)

I'm not sure what you mean by your last sentence. All of the actual improvements to omega (the matrix multiplication exponent) were invented by humans; computers have still not contributed to this.

[–] aio@awful.systems 9 points 1 week ago* (last edited 1 week ago)

Yes - on the theoretical side, they do have an actual improvement, which is a non-asymptotic reduction in the number of multiplications required for the product of two 4x4 matrices over an arbitrary noncommutative ring. You are correct that the implied improvement to omega is moot since theoretical algorithms have long since reduced the exponent beyond that of Strassen's algorithm.

On the practical side, almost all applications use some version of the naive O(n^3) algorithm, since the asymptotically better ones tend to be slower in practice. However, Strassen's algorithm has occasionally been implemented and used - it is still reasonably simple, after all. There is possibly some practical value to the 48-multiplication result, then, in that it could replace uses of Strassen's algorithm.
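To make the comparison concrete, here is the classical Strassen step at the 2x2 level (seven multiplications instead of eight). This is just a sketch of the well-known 1969 algorithm, not the new 48-multiplication scheme, and the function name is mine:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the naive 8. Applied recursively to blocks this gives the
    O(n^2.807) algorithm; the entries may themselves be matrices, since
    the identities only need a (possibly noncommutative) ring."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Sanity check: strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]], matching the naive product.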

[–] aio@awful.systems 5 points 2 weeks ago

I think this theorem is worthless for practical purposes. They essentially define the "AI vs learning" problem in such general terms that I'm not clear on whether it's well-defined. In any case it is not a serious CS paper. I also really don't believe that NP-hardness is the right tool to measure the difficulty of machine learning problems.

[–] aio@awful.systems 3 points 3 weeks ago (1 children)

As technology advanced, humans grew accustomed to relying on the machines.

[–] aio@awful.systems 3 points 3 weeks ago

honestly the only important difference between them is that emacs's default keybindings can and will give you a repetitive stress injury (ask me how i know...)

[–] aio@awful.systems 10 points 4 weeks ago* (last edited 4 weeks ago) (4 children)

Apparently MIT is teaching a vibe coding class:

How will this year’s class differ from last year’s? There will be some major changes this year:

  • Units down from 18 to 15, to reflect reduced load
  • Grading that emphasizes mastery over volume
  • More emphasis on design creativity (and less on ethics)
  • Not just permission but encouragement to use LLMs
  • A framework for exploiting LLMs in code generation

[–] aio@awful.systems 6 points 1 month ago

I've been using Anki; it works great, but it requires you to supply the discipline and willingness to learn yourself, which might not be possible for kids.

[–] aio@awful.systems 10 points 2 months ago (3 children)

Writing "My Immortal" in 2006, when nothing quite like it had ever been written before, is a (possibly unintentional) stroke of genius. Writing "My Immortal" after it's already been written is worthless.
