scruiser

joined 2 years ago
[–] scruiser@awful.systems 5 points 1 month ago

It is pretty good as a source for science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make fun fantasy sci-fi concepts. Like looking at their current front page... https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something cats deliberately latching on to humans as the laziest way of advancing their own values into the future seems like a solid point of fantasy worldbuilding...

[–] scruiser@awful.systems 5 points 1 month ago

To add to blakestacey's answer, his fictional worldbuilding concept, dath ilan (which he treats like rigorous academic work to the point of citing it in tweets), uses prediction markets in basically everything, from setting government policy to healthcare plans to deciding what restaurant to eat at.

[–] scruiser@awful.systems 5 points 1 month ago (2 children)

Every tweet in that thread is sneerable, either from failing to understand the current scientific process, vastly overestimating how easily cutting-edge research can be turned into cleanly resolvable predictions, or assuming prediction markets are magic.

[–] scruiser@awful.systems 9 points 1 month ago (1 children)

He's the one that used the phrase "silent gentle rape"? Yeah, he's at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist that he has a position in academia to lend him more legitimacy.

[–] scruiser@awful.systems 7 points 1 month ago* (last edited 1 month ago) (14 children)

He had me in the first half; I thought he was calling out the rationalists' problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money).

[–] scruiser@awful.systems 13 points 1 month ago

Chiming in to agree that your prediction write-ups aren't particularly good. Sure, they spark discussion, but the whole forecasting/prediction game is one we've seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes.

In general... I think your predictions are too specific and too optimistic...

[–] scruiser@awful.systems 11 points 1 month ago (1 children)

Every time I see a rationalist bring up the term "Moloch" I get a little angrier at Scott Alexander.

[–] scruiser@awful.systems 4 points 1 month ago

I use the term "inspiring" loosely.

[–] scruiser@awful.systems 15 points 1 month ago (4 children)

Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, multiple decades after Drexler's nanotech was thoroughly debunked (even as it slightly contributed to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

[–] scruiser@awful.systems 15 points 1 month ago (4 children)

A lesswronger notices that all of the rationalists' attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate

Notably, the author doesn't realize capitalism is the root problem in misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.

[–] scruiser@awful.systems 6 points 1 month ago (6 children)

It's a good post. A few minor quibbles:

The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers never really had a chance, culminating in the board trying but failing to fire Sam Altman and him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company is a decisive moment in popping the GenAI bubble, then at least it was good for something?

These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

I wish people didn't feel the need to add all these disclaimers, or would at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web search (web sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.

One of the things I liked and didn't know about before:

Ask Claude any basic question about biology and it will abort.

That is hilarious! Kind of overkill to be honest; I think they've really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.

One of the comments gets it:

Safety team/product team have conflicting goals

LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model)

[–] scruiser@awful.systems 6 points 1 month ago

Lots of woo and mysticism already have a veneer of stolen quantum terminology. It's too far from respectable to get the quasi-expert endorsement or easy VC money that LLM hype has gotten, but quantum hucksters fusing quantum computing nonsense with quantum mysticism can probably still con lots of people out of their money.
