this post was submitted on 27 Oct 2025

TechTakes


Want to wade into the spooky surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Happy Halloween, everyone!)

top 50 comments
[–] gerikson@awful.systems 2 points 37 minutes ago* (last edited 37 minutes ago)

Nuclear wonk Cheryl Rofer talks a bit about the plutonium fire sale to Saltman in this piece, after some faff about Putin trolling Trump into resuming nuclear testing:

https://www.lawyersgunsmoneyblog.com/2025/10/things-that-go-bump-in-the-night

[–] gerikson@awful.systems 3 points 3 hours ago (1 children)
[–] o7___o7@awful.systems 3 points 1 hour ago* (last edited 1 hour ago)

It's the Circle of Jerks 🎶

[–] BlueMonday1984@awful.systems 2 points 6 hours ago

Checked back in on the ongoing Framework dumpster fire - Project Bluefin's quietly cut ties, and the DHH connection is the reason why.

[–] sinedpick@awful.systems 13 points 19 hours ago (4 children)

Ugh. Hank Green just posted a 1-hour interview with Nate Soares about That Book. I'm halfway through on 2x speed and so far zero skepticism of That Book's ridiculous premises. I know it's not his field but I still expected a bit more from Hank.

A YouTube comment says it better than I could:

Yudkowsky and his ilk are cranks.

I can understand being concerned about the problems with the technology that exist now, but hyper-fixating on an unfalsifiable existential threat is stupid as it often obfuscates from the real problems that exist and are harming people now.

[–] UltimateNoob@programming.dev 3 points 2 hours ago* (last edited 2 hours ago) (1 children)

There is now a video on SciShow about it too.

This perception of AI as a competent agent inching ever closer to godhood is honestly gaining way too much traction for my tastes. There's a guy in the comments of Hank's first video; I checked his channel and he has a video, "We Are Not Ready for Superintelligence", which got a whopping 8 million views! There's another channel I follow for sneers, and their video on Scott's AI 2027 paper has 3.7 million views, and a video about AI "attempted murder" has 8.5 million. Damn.

I wonder, when the market finally realises that AI is not actually smart and is not bringing any profits and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?

[–] BlueMonday1984@awful.systems 1 points 1 hour ago

I wonder, when the market finally realises that AI is not actually smart and is not bringing any profits and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?

Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble's burst, but the vast majority of people will respond to the question of "Can machines think?" with a resounding "no".

AI's usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI's coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.

[–] Architeuthis@awful.systems 6 points 10 hours ago

it often obfuscates from the real problems that exist and are harming people now.

I am firmly on the side of "it's possible to pay attention to more than one problem at a time", but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

Yudkowsky and his ilk are cranks.

That Yud is the Neil Breen of AI is the best thing ever written ~~about rationalism~~ in a youtube comment.

[–] mii@awful.systems 3 points 9 hours ago

I made it 30 minutes into this video before closing it.

What I like about Hank is that he usually reacts to community feedback and is willing to change his mind when confronted with new perspectives, so my hope is that enough people will tell him that Yud and friends are cranks and he'll do an update.

[–] blakestacey@awful.systems 9 points 16 hours ago (1 children)

"I can read HTML but not CSS" —Eliezer Yudkowsky, 2021 (and since apparently scrubbed from the Internet, to live only in the sneers of fond memory)

[–] swlabr@awful.systems 3 points 14 hours ago

It’s giving Japanese Mennonite reactionary coding

[–] nfultz@awful.systems 6 points 17 hours ago
[–] sc_griffith@awful.systems 18 points 1 day ago (3 children)
[–] nightsky@awful.systems 9 points 23 hours ago

That's like connecting a baking oven to a fridge and then marveling at the power of all the heat exchange

[–] fnix@awful.systems 6 points 1 day ago

AI was capitalism all along etc etc

Moar like power the butt.

[–] BlueMonday1984@awful.systems 6 points 1 day ago* (last edited 1 day ago)

A judge has given George RR Martin the green light to sue OpenAI for copyright infringement.

We are now one step closer to the courts declaring open season on the slop-bots. Unsurprisingly, there's jubilation on Bluesky.

[–] o7___o7@awful.systems 7 points 1 day ago (1 children)

How it feels to touch computers for a living in 2025:

https://youtube.com/watch?v=6TMOMTtAMBI

[–] saucerwizard@awful.systems 3 points 14 hours ago (1 children)
[–] o7___o7@awful.systems 1 points 3 hours ago

SAMHAIN FOREVER

[–] sailor_sega_saturn@awful.systems 12 points 1 day ago* (last edited 1 day ago) (1 children)

Grokipedia just dropped: https://grokipedia.com/

It's a bunch of LLM slop that someone encouraged to be right wing with varying degrees of success. I won't copy paste any slop here, but to give you an idea:

  • Grokipedia's article on Wikipedia uses the word "ideological" or "ideologically" 23 times (compared with Wikipedia, which uses it twice in its own article on itself).
  • Any articles about transgender topics tend to mix in lots of anti-transgender misinformation / slant, and use phrases like "rapid-onset gender dysphoria" or "biological males". The last paragraph of the article "The Wachowskis" is downright unhinged.
  • The articles tend to be long and meandering. I doubt even Grokipedia proponents will ultimately get much enjoyment out of it.

Also certain articles have this at the bottom:

The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.

[–] BlueMonday1984@awful.systems 6 points 1 day ago (1 children)

Decided to check the Grokipedia "article" on the Muskrat out of morbid curiosity.

I haven't seen anything this fawning since that one YouTube video which called him, and I quote its title directly, "The guy who is saving the world".

[–] nfultz@awful.systems 3 points 18 hours ago* (last edited 17 hours ago)

Interesting that for the musk article, it has the "see edits" button disabled. ha

E:

I peeked under the hood, "see edits" data is in page.fixedIssues on the api, ripe for scraping: https://grokipedia.com/api/page?slug=StarCraft_II&includeContent=false
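For anyone who wants to scrape it, a minimal sketch of pulling that field out of the API response. The page.fixedIssues path comes from the comment above; the exact JSON shape around it is an assumption, so treat the structure here as illustrative:

```python
import json

def extract_fixed_issues(api_response: str) -> list:
    """Pull the 'see edits' entries out of a Grokipedia API payload.

    Assumes the edit data lives under page.fixedIssues, per the
    comment above; returns an empty list if the path is missing.
    """
    data = json.loads(api_response)
    return data.get("page", {}).get("fixedIssues", [])

# Made-up payload mirroring the assumed shape (fetch the real one
# from the /api/page endpoint with any HTTP client):
sample = '{"page": {"slug": "StarCraft_II", "fixedIssues": [{"issue": "tone", "fixed": true}]}}'
print(extract_fixed_issues(sample))
```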

[–] rook@awful.systems 21 points 1 day ago (4 children)

KDE showing how it should be done:

https://mail.kde.org/pipermail/kde-www/2025-October/009275.html

Question:

I am curious why you do not have a link to your X social media on your website. I know you are just forwarding posts to X from your Mastodon server. However, I’m afraid that if you pushed for more marketing on X—like DHH and Ladybird do—the hype would be much greater. I think you need a separate social media manager for the X platform.

Response:

We stopped posting on X for several reasons:

  1. The owner is a nazi
  2. The owner censors non-nazis and promotes nazis and their messages
  3. (Hence) most people who remain on X either are clueless and have difficulty parsing written text (one would assume), or are nazis
  4. Most of the new followers we were getting were nazi-propaganda spewing bots (7 out of 10 on average) or just straight up nazis.

Our community is not made up of nazis and many of our friendly contributors would be the target of nazi harassment, so we were not sure what we were doing there and stopped posting and left.

We are happy with that decision and have no intention of reversing it.

[–] Soyweiser@awful.systems 6 points 1 day ago

Think some of the KDE people are old school punkers so might not be a big shock.

[–] BlueMonday1984@awful.systems 10 points 1 day ago

The follow-up's worth mentioning too:

It's interesting they're citing specifically DHH and Ladybird as examples to follow, considering:

https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html

[–] Reach_the_man@awful.systems 7 points 1 day ago* (last edited 1 day ago)

common KDE W

[–] Seminar2250@awful.systems 6 points 1 day ago

after fedora announced that ai contributions are cool, this is really refreshing

[–] mirrorwitch@awful.systems 18 points 2 days ago* (last edited 2 days ago)

back in ~my~ day cartel oligarchs would meet in secret to fix prices for products you cannot live without, then get a ton of profit and swim in money, while backstabbing one another at any opening with blackmail and assassins and whatnot. sometimes they'd fund a library or something to pretend they were philanthropists.

cartels these days make pretend products that nobody wants, then promise they're going to "invest" one quadrillion dollars in the other oligarch's company to create more virtual husbandos, and the other company in turn promises they're going to buy one quadrillion dollars of "compute" from the first company, so that both can report one quadrillion dollars of "growth" for doing absolutely nothing. like who are they even trying to impress here. then the oligarch hires people to pretend he can play Diablo. what happened to honest, salt-of-the-earth exploitation of the masses, huh. the boot stomping on my face is all cheap plastic nowadays. they gotta replace it every 3 years and the new model doesn't even fit my face anymore. they don't make cartels like they used to
