I like how he doesn't even bother debunking it point by point; he just slams the very premise of it and moves on.
system memory
System memory is just the marketing label for "having an LLM summarize a bunch of old conversations and shoving the summary into a hidden prompt". I agree that using that term is sneer-worthy.
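In pseudocode it's basically this (a minimal sketch; `call_llm` and the prompt wording are made-up stand-ins, not any particular vendor's actual API):

```python
# Rough sketch of the "system memory" pattern: summarize old chats,
# then quietly prepend the summary to new conversations.
# call_llm() is a hypothetical stand-in for whatever chat-completion API is in use.

def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("stand-in for a real chat-completion call")

def build_memory(old_conversations: list[str]) -> str:
    # Ask the model to compress prior conversations into a short "memory" blob.
    joined = "\n\n".join(old_conversations)
    return call_llm([
        {"role": "system", "content": "Summarize key facts about the user from these chats."},
        {"role": "user", "content": joined},
    ])

def answer_with_memory(memory: str, user_message: str) -> str:
    # The summary gets injected as a hidden system prompt the user never sees.
    return call_llm([
        {"role": "system", "content": f"Background about this user (do not mention it): {memory}"},
        {"role": "user", "content": user_message},
    ])
```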
I was thinking this also; it's like the perfect parody of several lesswrong and EA memes: overly concerned with animal suffering/sapience, overly concerned with IQ stats, openly admitting to no expertise or even relevant domain knowledge but driven to pontificate anyway, and inspired by existing science fiction... I think the last one explains it and it isn't a parody. As cinnasverses points out, cetacean intelligence shows up occasionally in sci-fi. To add to the examples, I have three more sapient marine mammals:

- whales warning the team of an impending solar flare in Stargate Atlantis via echolocation-induced hallucinations
- the dolphins in The Hitchhiker's Guide to the Galaxy
- whales showing up to help in one book of Animorphs while the Animorphs are morphed into dolphins
I was trying to figure out why he hadn't turned this into an opportunity to lecture (or write a mini-fanfic) about giving the AGI more attack surface to manipulate you... I was stumped until I saw your comment. I think that's it: expressing his childhood distrust of authority trumps lecturing us on the AI-God's manipulations.
I have context that makes this even more cringe! "Lawfulness concerns" refers to like, Dungeons and Dragons lawfulness. Specifically the concept of lawfulness developed in the Pathfinder fanfiction we've previously discussed (the one with deliberately bad BDSM and eugenics). Like a proper Lawful Good Paladin of Iomedae wouldn't put you in a position where you had to trust they hadn't rigged the background prompt if you went to them for spiritual counseling. (Although a Lawful Evil cleric of Asmodeus totally would rig the prompt... Lawfulness as a measuring stick of ethics/morality is a terrible idea even accepting the premise of using Pathfinder fanfic to develop your sense of ethics.)
you can’t have an early version that you’ll lie about being a “big step towards General Quantum Computing” or whatever
So you might think that... but I recall some years ago an analog computer that was marketed as "quantum annealing" or something like that... oh wait, found the Wikipedia articles: https://en.wikipedia.org/wiki/Quantum_annealing and https://en.wikipedia.org/wiki/D-Wave_Systems . So to a naive listener it sounds like the same sort of thing as the quantum computers that are supposed to break cryptography and do even less plausible things, but actually it can only solve one very specific kind of problem (a toy sketch of what that one problem looks like is below).
I bet you could squeeze the "quantum" label onto a variety of analog computers well short of general quantum computing and have it technically not be fraud and still fool lots of idiot VCs!
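To be concrete about the "one very specific algorithm" point above: a quantum annealer is essentially a minimizer for one fixed objective, an Ising/QUBO energy function, not a general-purpose quantum computer. Here's a toy brute-force version of that problem, purely to show its shape (the example matrix is made up):

```python
# Toy illustration of the one problem a quantum annealer targets:
# QUBO (quadratic unconstrained binary optimization), i.e. find binary x
# minimizing sum_ij Q[i][j] * x[i] * x[j]. Brute force here, just for shape.
from itertools import product

def qubo_energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Tiny example: the objective rewards picking exactly one of the two variables.
Q = [[-1, 2],
     [0, -1]]
print(brute_force_qubo(Q))  # (0, 1), energy -1 (ties with (1, 0))
```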
It's a nice master post that gets all his responses and many useful articles linked into one place. It's all familiar if you've kept up with techtakes and Zitron's other posts and pivot-to-ai, but I found a few articles I had previously missed reading.
Related trend to all the "but achskhually"s that AI boosters like to throw out. Has everyone else noticed the trend where someone makes a claim about a rumor they heard of an LLM making a genuine discovery in some science, except it's always repeated secondhand so you can't really evaluate it, and in the rare cases they do have a link to the source, it's always much less impressive than they made it sound at first...
Even for the people that do get email notifications of Zitron's excellent content (like myself), I appreciate having a place here to discuss it.
Apparently Eliezer is actually against throwing around P(doom) numbers: https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom
The objections to using P(doom) are relatively reasonable by lesswrong standards... but this is in fact once again all Eliezer's fault. He started a community centered around 1) putting overconfident probability "estimates" on subjective, uncertain things and 2) the need to make a friendly AI-God, so he really shouldn't be surprised that people combine the two. Also, he has regularly expressed his certainty that we are all going to die to Skynet in terms of ridiculously overconfident probabilities, so he shouldn't be surprised that other people followed suit.
Guns don't kill people, people kill people.
I like how Zitron does a good job of distinguishing firm overall predictions from specific scenarios (his "chaos bets") which are plausible but far from certain. AI 2027 specifically conflated and confused those things in a way that gave its proponents more rhetorical room to hide and dodge.