scruiser

joined 2 years ago
[–] scruiser@awful.systems 18 points 4 months ago* (last edited 4 months ago)

This isn't debate club or men of science hour; this is a forum for making fun of idiocy around technology. If you don't like that, you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked: we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising "deceptive" LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[–] scruiser@awful.systems 14 points 4 months ago* (last edited 4 months ago)

As to cryonics... both LLM doomers and accelerationists have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.

As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

  • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose on LSD

  • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they’re left with eugenics and GeneSmith’s insanity

  • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

  • no exocortex, just overpriced Google Glasses and a hallucinating LLM “assistant”

  • no neural jacks (or neural lace, or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.

[–] scruiser@awful.systems 5 points 4 months ago

I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

[–] scruiser@awful.systems 6 points 4 months ago (1 children)

Galaxy brain insane take (free to any lesswrong lurkers): they should adopt IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time), but biological science has plenty of terminology to steal and repurpose that they haven't touched yet.

[–] scruiser@awful.systems 6 points 4 months ago* (last edited 4 months ago)

Yeah, there might be something like that going on causing the "screaming". Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.

[–] scruiser@awful.systems 5 points 4 months ago

I agree. There is intent behind the prompt fondler's efforts to prompt the genAI; it's just not very well-developed intent, and it's expressed through the laziest, shallowest method possible.

[–] scruiser@awful.systems 10 points 4 months ago

If you understood why the splattered paint was art, you would also understand why the AI generated images aren't art (or are, at best, the art of hacks). It seems like you understand neither.

[–] scruiser@awful.systems 12 points 4 months ago (16 children)

Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1

[–] scruiser@awful.systems 19 points 4 months ago (6 children)

Lol, Altman's AI-generated purple prose slop was so bad even Eliezer called it out (as opposed to making a doomer-hype point):

Perhaps you have found some merit in that obvious slop, but I didn't; there was entropy, cliche, and meaninglessness poured all over everything like shit over ice cream, and if there were cherries underneath I couldn't taste it for the slop.

[–] scruiser@awful.systems 11 points 5 months ago

Is this water running over the land or water running over the barricade?

To engage with his metaphor: this water is dripping slowly through a purpose-dug canal by people who claim they are trying to show the danger of the dikes collapsing, but who are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.

Looking at the details of these "safety evaluations", it always comes down to them directly prompting the LLM and baby-stepping it through the desired outcome, with lots of interpretation needed to find even the faintest rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever other property they want to anthropomorphize into the LLM to make it seem more threatening.

[–] scruiser@awful.systems 13 points 5 months ago* (last edited 5 months ago) (9 children)

That was literally the inflection point on my path to sneerclub. I had started to break from lesswrong before then, but I hadn't reached the tipping point of saying it was all bs. And for SSC and Scott in particular, I had managed to overlook the real message buried in thousands of words of equivocating, bad analogies, and bad research in his earlier posts. But "you are still crying wolf" made me finally question what Scott's real intent was.

[–] scruiser@awful.systems 3 points 5 months ago

This is a good summary of half of the motive to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn't thought about the extent to which Altman's plan is "hey morons, hook my shit up to fucking everything and try to stumble across a use case that's good for something" (as opposed to the "we're building a genie, and when we're done we're going to ask it for three wishes" he hypes up); that makes more sense as a long-term plan...
