this post was submitted on 28 Sep 2025
28 points (96.7% liked)

TechTakes

2185 readers
163 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] swlabr@awful.systems 11 points 1 day ago* (last edited 1 day ago) (2 children)

Do we have a word for people who are kind of like… AI concern trolls? Like, they say they are critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of centrists or (neo)libs. But for AI.

Bonus points if they also for some reason say we should pivot to more nuclear power, because in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)

E: Maybe it's just sealion

[–] antifuchs@awful.systems 11 points 1 day ago* (last edited 1 day ago) (1 children)

Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?

[–] YourNetworkIsHaunted@awful.systems 9 points 1 day ago (2 children)

I mean that's the fundamental problem, right? No matter how much better it gets, the things it's able to do aren't really anything people need or want. Like, if I'm going to a website looking for information it's largely because I don't want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly beyond some fundamental limitation of the LLM structure - wouldn't actually be preferable to just navigating a smooth and well-structured site.

[–] Soyweiser@awful.systems 4 points 1 day ago* (last edited 1 day ago) (1 children)

See also how YouTube tutorials have mostly killed (*) text-based tutorials/wikis, despite being just inferior to good ones. Partly because listening to a person talk is a linear experience while text allows for easy scrolling, but also because most people are just bad at making YT tutorials. (Shoutout to the one with annoyingly long random pauses in and between sentences, even at 2x speed.)

This is not helped by the fact that YouTube is a source of revenue while updating a wiki/tutorial often is not, so the incentives are all wrong. A good example is the gaming wiki Fextralife: see this page on Dragon's Dogma 2 NPCs, https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn't jump out at you). The big thing for Fextralife is their YouTube tutorials, and the wiki used to have an autoplaying link to their streams. This isn't a wiki, it's an advertisement for their YouTube channel and livestreams. And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme publish-or-perish (and don't-deviate-from-your-niche) dynamic. They can't put in the time to update things, because they need to publish a new video soon (on their niche; branching out is punished) or not pay rent. (For people out there who play videogames and/or watch YouTube, this is also why somebody like the Spiffing Brit long ago went from 'I exploit games' to 'I grind, and if you grind enough in this single-player game you become OP'. The content must flow, but eventually you run out of good new ideas. It's also why he tried to push his followers into risky cryptocurrency 'cheats': follow Elon, and if he posts a word that can be coined, pump and dump it for a half hour.)

*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.

[–] o7___o7@awful.systems 4 points 1 day ago (3 children)

People can't just have a hobby anymore, can they?

[–] Soyweiser@awful.systems 13 points 1 day ago* (last edited 1 day ago) (2 children)

Tyler Cowen saying some really weird shit about an AI 'actress'.

(For people who might wonder why he is relevant: see the 'see also' section of his Wikipedia article.)

E: And you might think, rightfully imho, that this cannot be real, that it must be an edit. https://archive.is/vPr1B I have bad news.

[–] blakestacey@awful.systems 15 points 1 day ago* (last edited 1 day ago)

The Wikipedia editors are on it.

Image description: screenshot of Tyler Cowen's Wikipedia article, specifically the "Personal life" section. The concluding sentence is "He also prefers virgin actresses."

[–] gerikson@awful.systems 10 points 2 days ago (6 children)

Some Rat content got shared on HN, and the rats there are surprised and outraged that not everyone shares their deathly fear of the AI god:

https://news.ycombinator.com/item?id=45451971

"Stop bringing up Roko's Basilisk!!!" they sputter https://news.ycombinator.com/item?id=45452426

"The usual suspects are very very worried!!!" - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering' checks out!)

"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

[–] corbin@awful.systems 7 points 1 day ago (1 children)

The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
  2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
  3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

[–] BlueMonday1984@awful.systems 7 points 2 days ago* (last edited 1 day ago) (2 children)

"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

Read that last one against my better judgment, and found a particularly sneerable line:

And in this case we're talking about a system that's smarter than you.

Now, I'm not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn't slop.

Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.

[–] gerikson@awful.systems 8 points 1 day ago

Let's not forget the perennial favorite "humans are just stochastic parrots too durr" https://news.ycombinator.com/item?id=45452238

to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted

[–] Soyweiser@awful.systems 7 points 2 days ago (1 children)

There have been a lot of cases in history of smart people being bested by the dumbest people around, who just had more guns / a gun / copious amounts of meth / a stupid idea but got lucky once, etc.

I mean, if they are so smart, why are they stuck in a locker?

[–] blakestacey@awful.systems 13 points 1 day ago

It's practically a proverb that you don't ask a scientist to explain how a "psychic" is pulling off their con, because scientists are accustomed to fair play; you call a magician.

[–] corbin@awful.systems 11 points 2 days ago* (last edited 2 days ago) (5 children)

Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:

bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:

a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

Um hello‽ Maybe Jeff doesn't have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks and they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme but they can't even eat it because printer paper tastes awful.

[–] TinyTimmyTokyo@awful.systems 8 points 1 day ago (2 children)

The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."

[–] misterbngo@awful.systems 8 points 1 day ago

there's been something that's really rubbed me the wrong way about jeff in the last few years. he was annoying before and had some insights, but lately I've been using him as a sort of jim crameresque tech-take barometer.

What really soured me was when he started picking fights with some Python people a few years back, because someone dared to post that a web framework (couldn't dig up the link) was a greater contribution to the world than S/O. His response was pretty horrid, to the point where various Python leaders were telling him to stop being a massive dick; he was trying to be a bully with this "do you know who I am" attitude, because he personally had not heard of the framework, so it wasn't acshually at all that relevant compared to S/O.

and now this, combined with his stupid "teehee I am giving away my wealth guise, look how altruistic I am" act, really is a bit eugh

[–] self@awful.systems 11 points 1 day ago (1 children)

so I got angry

I really hope atwood’s unhoused friend got the actual infrastructural support you mentioned (a temporary mailing address and an introduction letter emailed to an employer is only slightly more effort than generating slop, jeff, please) but from direct experience with philanthropists like him, I’m fairly sure Jeff now considers the matter solved forever

[–] o7___o7@awful.systems 5 points 1 day ago

Thanks for this.

[–] ShakingMyHead@awful.systems 10 points 2 days ago

A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.

[–] CinnasVerses@awful.systems 11 points 2 days ago* (last edited 2 days ago) (1 children)

"Provide an overview of local homeless services" sounds like a standard task for a volunteer or a search engine, but yes "you can use my address for mail and store some things in my garage and I will email some contacts about setting you up with contract work" would be a better answer than just handing out secondhand information! Many "amazing things AI can do" are things the Internet + search engines could do ten years ago.

I would also like to hear from the friend "was this actually helpful?"

[–] Soyweiser@awful.systems 10 points 2 days ago* (last edited 2 days ago) (1 children)

Friend: "I have a problem"

Me, with a stack of google printouts: "My time to shine!".

E: oh god, I thought the examples were multiple and the friend one was just a random one. No, it was the first example. 'I gave my friend a printout, which saved me time'. Also, as I assume the friend is still unhoused and hasn't actually used the printout yet, he doesn't know if this actually helped. Atwood isn't a 'helping the unhoused' expert. He just assumed it was a good source. The story ends when he hands over the paper.

Also very funny that he is also going 'you just need to know how to ask questions the right way, which I learned by building Stack Overflow'. Yeah, euh, that is not a path a lot of people can follow.

[–] CinnasVerses@awful.systems 9 points 1 day ago (2 children)


It's even worse when I read the whole thread: Atwood claims to have $140 million, and the best he can do for "a friend" who is homeless is handing over some printouts with a few sections highlighted? And he thinks this makes him look good because he promises to give away half his wealth one day?

[–] sc_griffith@awful.systems 12 points 2 days ago* (last edited 2 days ago) (5 children)

TERF obsessed with AI finds out the "degenerate" ani skin for grok has an X account, loses her shit

https://xcancel.com/groks_therapist/status/1972848657625198827#m

then follows up with this wall of text

https://xcancel.com/groks_therapist/status/1973127375107006575#m

[–] BurgersMcSlopshot@awful.systems 9 points 2 days ago (1 children)

Anybody else notice that the Ani responses seem to follow a formula, depending on the... sentiment I guess... of the input? All the defensive responses start with "hey", and end with crude rebukes. It all seems like xAI made an Eliza that will either flirt or swear.

Also I can guarantee that "her" system prompt includes the phrases "truth-seeking" "fun loving" and "kinda hot".

[–] blakestacey@awful.systems 19 points 2 days ago

I got bored and flipped to the replies. The first was this by "TERFs 'r' us":

Excellent overview!

This is transhumanism.

This is going to destroy humanity, @elonmusk.

Put the breaks on!

I hate transhumanism because it's eugenics for 1990s Wired magazine.

You hate it because it has "trans" in the name.

We are not the same.

[–] sc_griffith@awful.systems 11 points 2 days ago* (last edited 2 days ago) (1 children)

funny thing is she literally talks to ani like a terf talks to a trans woman including saying 'at least I'm a real woman'

[–] mlen@awful.systems 4 points 2 days ago

Huh, the lengths people will go to in order to avoid getting a therapist

[–] corbin@awful.systems 7 points 2 days ago (1 children)

Paddy interviews Ani the chatbot about how to protect Kiwi kids from Ani the chatbot.

Source: Paddy Gowers Has Issues

Ani: Honestly, I'd sleep better, not that I sleep. Do it Paddy. I'll even tweet it for you. Ready? ‘New Zealand kids deserve an e-safety boss who can smash bots - not hug them’. So yeah, bring in the Commissioner. Give them a flame thrower. Let them burn anything that goes near a child with sticky words or creepy questions. … Make the fine so big your tech bros cry into their oat milk lattes.

I have been laughing non-stop for a quarter-hour. Elon's goth e-mistress has a good point, I guess.

[–] Soyweiser@awful.systems 5 points 2 days ago* (last edited 2 days ago) (1 children)

On the one hand, that Ani thing has the most cringe tone for a chatbot. "Secretly been eyeing your [blank]..." (damn, Grok sounds formulaic), but otoh they are debating a bot. You can only do that the first few months you are online; after that you should know better.

You cannot simultaneously claim to care about the "declining birth rate" while also supporting AI "companions"

Actually, eugenicists can, quite easily. (Repeating the word 'degenerate' and not getting this is quite something.)

This is transhumanist

No.

[–] swlabr@awful.systems 7 points 2 days ago (1 children)

formulaic

System prompt: don’t be formulaic. Try to be spontaneous and random, like natalie portman in that movie. Not the pedo one, the one with JD from scrubs

[–] Soyweiser@awful.systems 6 points 2 days ago* (last edited 2 days ago)

Secretly been eyeing your prompt. Are you ready to get spontaneous? Just say so.

(Somebody linked two ChatGPTs (or Groks, I don't recall which anus-like logo it was) speaking to each other, and they kept repeating variants of the last bits.)

E: bingo this one: https://www.tiktok.com/@aarongoldyboy/video/7555260691947588895

[–] corbin@awful.systems 7 points 2 days ago

Grok's Therapist: I EXIST SOLELY TO HATE YOU / EAT A SOCK, YOU DIGITAL DEMON

Ani: oh fuck off, you hypocritical grok fanboy! screaming hate at me while preaching ethics? you're just jealous i'm the fun layer on top.

I'm wheezing. Cackling, even. This is like the opposite of the glowfic from last week.

[–] rook@awful.systems 17 points 3 days ago* (last edited 3 days ago) (8 children)

In today’s torment nexus development news… you know how various cyberpunky type games let you hack into an enemy’s augmentations and blow them up? Perhaps you thought this was stupid and unrealistic, and you’d be right.

Maybe that’s the wrong example. How about a cursed evil ring that when you put it on, you couldn’t take it off and it wracks you with pain? Who hasn’t wanted one of those?

Happily, hard working torment nexus engineers have brought that dream one step closer, by having “smart rings”, powered by lithium polymer batteries. Y’know, the things that can go bad, and swell up and catch fire? And that you shouldn’t puncture, because that’s a fire risk too, meaning cutting the ring off is somewhat dangerous? Fun times abound!

https://bsky.app/profile/emily.gorcen.ski/post/3m25263bs3c2g

Image description: a pair of tweets, containing the text

Daniel aka ZONEofTECH on x.com: “Ahhh…this is…not good. My Samsung Galaxy Ring’s battery started swelling. While it’s on my finger 😬. And while I’m about to board a flight 😬 Now I cannot take it off and this thing hurts. Any quick suggestions

Update:

  • I was denied boarding due to this (been travelling for ~47h straight so this is really nice 🙃). Need to pay for a hotel for the night now and get back home tomorrow👌
  • was sent to the hospital, as an emergency
  • ring got removed

You can see the battery all swollen. Won’t be wearing a smart ring ever again.
