"This Is What Yudkowsky Actually Believes" seems like a subtitle that would get heavy use in a future episode of South Park about Cartman dropping out after one semester at community college.
istewart
Just had a video labeled "auto-dubbed" pop up in my YouTube feed for the first time. Not sure if it was chosen by the author or not. Too bad, it looks like a fascinating problem to see explained, but I don't think I'm going to trust an AI feature that I just saw for the first time to explain it. (And perhaps more crucially, I'm a bit afraid of what anime fans will have to say about this.)
Notwithstanding the subject matter, I feel like I've always gotten limited value from these Oxford-style university debates. KQED used to run a series called Intelligence Squared US that crammed it into an hour, and I shudder to think what that's become in the era of Trump and AI. It seems like a format that was developed to be the intellectual equivalent of intramural sports, complete with a form of scoring. But that contrivance renders it devoid of nuance, and also means it can be used to platform and launder ugly bullshit, since each side has to be strictly pro- or anti-whatever.
Really, it strikes me as a forerunner of the false certainty and point-scoring inherent in Twitter-style short-form discourse. In some ways, the format was unconsciously pared down and plopped online, without any sort of inquiry into its weaknesses. I'd be interested to know if anyone feels any different.
There aren't really many other options besides Springer and self-publishing for a book like that, right? I've gotten some field-specific article compilations from CRC Press, but I guess that's just another Taylor & Francis imprint these days, same as Routledge.
Considering Tesla's well-documented issues with functional door handles, this may be more accurate than you think.
Marginally related, but I was just served a YouTube ad for chewing gum (yes, I'm too lazy to set up ad block).
"Respawn, by Razer. They didn't have gaming gum at Pompeii, just saying."
I think I felt part of my frontal lobe die from that incomprehensible sales pitch, so now you all must be exposed to it as well.
Yeah, Phoronix is somehow a tier below even HN. The Gamergate freaks of Linux enthusiasts. At least on Slashdot or HN, you might occasionally get someone with actual technical expertise posting. Phoronix just seems like a playground to push whatever the latest "I got this software for free and I hate it" grievance is, which is a profoundly pathetic thing indeed.
The "system prompt" phenomenon is one of the most flatly dopey things to come out of this whole mess. To put it politely, this seems like, uh, a very loosely causal way to set boundaries in high-dimensional latent spaces, if that's really what you're trying to do.
I put this prompt into my local Ollama instance, and suddenly Amazon is constantly delivering off-brand MOLLE vests and random stuff meant to attach to Picatinny rails, and I now have nineteen separate subscriptions to the Black Rifle Coffee Company brew-of-the-month club. Help?
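For anyone who hasn't poked at this locally: the "system prompt" is just another chunk of text stuffed at the front of the conversation and sent along with everything else. A rough sketch of what that looks like against a local Ollama instance's /api/chat endpoint (the model name and prompt text here are placeholders, not anything from the thread):

```python
import requests

# Minimal sketch: the "boundary-setting" is just the first message in the list,
# posted to a local Ollama instance (default port 11434). Model name is a placeholder.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            # The supposed behavioral guardrail, expressed as plain text.
            {"role": "system", "content": "You are a helpful assistant. Do not order tactical gear."},
            {"role": "user", "content": "What should I buy on Amazon?"},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```

Whether that plain-text preamble actually constrains anything downstream is, of course, the joke.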
But Star Trek says the smartest guys in the room don't have emotions
Another thread worth pulling is that biotechnology and synthetic biology have turned out to be substantially harder to master than anticipated, and they never seemed to be the primary area of expertise for a lot of these people anyway. I don't have a copy of any of Kurzweil's books at hand to check his predicted timelines for that stuff, but they're surely way off.
Faulty assumptions about the biological equivalence of digital neural network algorithms have done a lot of unexamined heavy lifting in driving the current AI bubble, and in keeping the harder stuff on the fringes of the conversation. That said, I don't doubt that a few refugees from the bubble-burst will attempt to inflate the next bubble on the back of speculative biotech, and I've seen a couple of signs of that already.