scruiser

joined 2 years ago
[–] scruiser@awful.systems 9 points 1 day ago (1 children)

Should we give up on all altruistic causes because the AGI God is nearly here? The answer may surprise you!

tldr; actually you shouldn't give up, because the AGI God might not be quite omnipotent and thus would still benefit from your help, and maybe there will be multiple Gods, some used for Good and some for Evil, so your efforts are still needed. Shrimp are getting their eyeballs cut off right now!

[–] scruiser@awful.systems 14 points 2 days ago

So this blog post was framed positively towards LLMs and is too generous in accepting many of the claims around them, but even so, the end conclusions are pretty harsh on practical LLM agents: https://utkarshkanwat.com/writing/betting-against-agents/

Basically, the author has tried extensively, in multiple projects, to make LLM agents work in various useful ways, but in practice:

The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.

The author strips down, simplifies, and sanitizes everything going into the LLMs, and then implements both automated checks and human confirmation on everything they put out. At that point it makes you question what value you are even getting out of the LLM. (The real answer, which the author only indirectly acknowledges, is attracting idiotic VC funding and upper management approval.)

As critical as they are, the author still doesn't acknowledge some of the bigger problems. The API cost is a major expense and design constraint on the LLM agents they have made, but the author doesn't acknowledge that prices are likely to rise dramatically once VC subsidization runs out.

[–] scruiser@awful.systems 7 points 3 days ago

Is this “narrative” in the room with us right now?

I actually recall someone pro-LLM recently trying to push that sort of narrative (that it's only already mentally ill people being pushed over the edge by ChatGPT)...

Where did I see it... oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy

This has all the hallmarks of a moral panic. ChatGPT has 122 million daily active users according to Demand Sage, that is something like a third the population of the United States. At that scale it's pretty much inevitable that you're going to get some real loonies on the platform. In fact at that scale it's pretty much inevitable you're going to get people whose first psychotic break lines up with when they started using ChatGPT. But even just stylistically it's fairly obvious that journalists love this narrative. There's nothing Western readers love more than a spooky story about technology gone awry or corrupting people, it reliably rakes in the clicks.

The ~~call~~ narrative is coming from inside the ~~house~~ forum. Actually, this is even more of a deflection, not even trying to claim they were already on the edge but that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on lesswrong vibes are good enough).

[–] scruiser@awful.systems 3 points 1 week ago

He knows the connectionists have basically won (insofar as you can construe competing scientific theories and engineering paradigms as winning or losing... which is kind of a bad framing), so that is why he is pushing the "neurosymbolic" angle so hard.

(And I do think Gary Marcus is right that neurosymbolic approaches have been neglected by the big LLM companies because they are narrower and you can't "guarantee" success just by dumping a lot of compute on them; you need actual domain expertise to do the symbolic half.)

[–] scruiser@awful.systems 7 points 1 week ago

I can imagine it clearly... a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted line projecting when 3d printers would get down to nanotech scale. 3d printer companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3d printing technology). Everyone would be buying 3d printers for home use, and lots of shitty startups would be selling crappy 3d printed junk.

[–] scruiser@awful.systems 4 points 1 week ago (2 children)

Yeah, that metaphor fits my feeling. And to extend the metaphor, I thought Gary Marcus was, if not a member of the village, at least an ally, but he doesn't seem to actually realize where the battle lines are drawn. Like maybe to him hating on LLMs is just another way of pushing symbolic AI?

[–] scruiser@awful.systems 9 points 1 week ago

Those opening Peter Thiel quotes... Thiel talks about trans people (in a kind of dated and maybe a bit offensive way) to draw a comparison to transhumanists wanting to change themselves more extensively. The disgusting irony is that Thiel has empowered the right-wing ecosystem, which is deeply opposed to trans rights.

[–] scruiser@awful.systems 14 points 1 week ago (6 children)

So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread

Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology and using P(some event) based on numbers pulled out of his ass.

And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know by now that using their lingo and their crit-hype terminology strengthens them.

[–] scruiser@awful.systems 6 points 2 weeks ago

Here’s a LW site dev whining about the study, he was in it and i think he thinks it was unfair to AI

There's a complete lack of introspection. It seems like the obvious conclusion to draw from a study showing people's subjective estimates of their productivity with LLMs were the exact opposite of reality would be to question his subjectively felt intuitions and experience, but instead he doubles down and insists the study must be wrong and that surely, with the latest model and the best use of it, there would be a big improvement.

[–] scruiser@awful.systems 4 points 2 weeks ago

You’re welcome.

Given their assumptions, the doomers should be thanking us for delaying AGI doom!

[–] scruiser@awful.systems 4 points 2 weeks ago

Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn't count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.

[–] scruiser@awful.systems 9 points 2 weeks ago

They probably got fed up with a broken system giving up its last shreds of legitimacy in favor of LLM garbage and are trying to fight back? Getting through an editor and appeasing reviewers already often requires some compromises in quality and integrity; this probably just seemed like one more.

 

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there has been the large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them because what they want would result in some cyberpunk dystopian shit in the process of trying to reach it). However, even the accelerationists don't want Chinese AGI, because [insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI].

This is a long running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news, China actually has no chance at competing at AI (this was posted before DeepSeek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the threat to freedom and the authoritarian power (pay no attention to the techbro techno-fascists)

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on having an AGI race with China, but not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI, but because it will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or both lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly Why Should I Assume CCP AGI is Worse Than USG AGI?. Judging by upvoted comments, lesswrong orthodoxy of all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up on my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of getting at "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author themselves, deflects and blames the "left" elitists. (I put "left" in quote marks because the author apparently thinks establishment Democrats are actually leftists; I fucking wish.)

An illustrative quote (of Scott's that the author agrees with)

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the establishment Democrats bear a major piece of the blame, in that their status quo neoliberalism has been rejected by the public while the Democratic establishment refuses to consider genuinely leftist ideas, but that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point; if anything, the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tldr; author trying to blameshift on Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic about distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).

 

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
