scruiser

joined 2 years ago
[–] scruiser@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that doesn't actually have any academic publishing behind it, so jargon.

Instrumental convergence.... I think Bostrom coined that one?

The AI alignment forum has a claimed origin for it here. Is anyone in the article here from CFAR?

[–] scruiser@awful.systems 8 points 3 weeks ago* (last edited 3 weeks ago)

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: still host, present tense. I had misremembered news about some other rationality-adjacent institution shutting down as being about them; nope, they are still going strong, offering regular 4-day ~~brainwashing sessions~~ workshops.

[–] scruiser@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I can use bad analogies also!

  • If airplanes can fly, why can't they fly to the moon? It is a straightforward extension of existing flight technology, and plotting airplane max altitude from 1900-1920 shows exponential improvement in max altitude. People who are denying moon-plane potential just aren't looking at the hard quantitative numbers in the industry. In fact, with no atmosphere in the way, past a certain threshold airplanes should be able to get higher and higher and faster and faster without anything to slow them down.

I think Eliezer might have started the bad airplane analogies... let me see if I can find a link... and I found an analogy from the same author as the 2027 ~~fanfic~~ forecast: https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity

Eliezer used a tortured metaphor about rockets, so I still blame him for the tortured airplane metaphor: https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

[–] scruiser@awful.systems 18 points 3 weeks ago* (last edited 3 weeks ago)

This isn't debate club or men of science hour; this is a forum for making fun of idiocy around technology. If you don't like that, you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[–] scruiser@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago)

As to cryonics... neither the LLM doomers nor the accelerationists have any need for a frozen purgatory when the techno-rapture is just a few years around the corner.

As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

  • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose LSD

  • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they’re left with eugenics and GeneSmith’s insanity

  • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

  • no exocortex, just overpriced Google Glasses and a hallucinating LLM “assistant”

  • no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.

[–] scruiser@awful.systems 5 points 1 month ago

I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

They know just enough neuroscience to use it for bad comparisons and for hyping up their ML approaches, but not enough to actually draw any legitimate conclusions.

[–] scruiser@awful.systems 6 points 1 month ago (1 children)

Galaxy brain insane take (free to any lesswrong lurkers): they should develop the usage of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time), but biological science has plenty of terminology to steal and repurpose that they haven't touched yet.

[–] scruiser@awful.systems 6 points 1 month ago* (last edited 1 month ago)

Yeah, there might be something like that going on causing the "screaming". Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.

[–] scruiser@awful.systems 5 points 1 month ago

I agree. There is intent going into the prompt fondler's efforts to prompt the genAI; it's just not very well-developed intent, and it is using the laziest, shallowest method possible to express itself.

[–] scruiser@awful.systems 10 points 1 month ago

If you understood why the splattered paint was art, you would also understand why the AI generated images aren't art (or are, at best, the art of hacks). It seems like you understand neither.

[–] scruiser@awful.systems 12 points 1 month ago (16 children)

Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1

[–] scruiser@awful.systems 18 points 1 month ago (6 children)

Lol, Altman's AI-generated purple prose slop was so bad even Eliezer called it out (as opposed to making a doomer-hype point):

Perhaps you have found some merit in that obvious slop, but I didn't; there was entropy, cliche, and meaninglessness poured all over everything like shit over ice cream, and if there were cherries underneath I couldn't taste it for the slop.
