this post was submitted on 21 Sep 2025
21 points (95.7% liked)

TechTakes

2186 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] froztbyte@awful.systems 7 points 5 days ago

nice to see the blinders coming off more widely:

For instance, as Electrek reminds us, in 2016, Elon Musk made a promise. He promised that, by the end of 2017, a Tesla would be able to drive itself from coast to coast. We’re talking Los Angeles to New York, with no human intervention.

That was bulls**t. Listen to any tech CEO nowadays and you will hear nothing but an endless stream of wild proclamations about how so-and-so massive shift will occur within the next 10 years! Five years! One year! Next week!

[–] bigfondue@lemmy.world 10 points 5 days ago* (last edited 5 days ago)

The Nerd Reich has a post with criticisms of Thiel's Antichrist lectures, including some from European Catholic theologians.

[–] YourNetworkIsHaunted@awful.systems 3 points 4 days ago (1 children)
[–] fullsquare@awful.systems 3 points 4 days ago (1 children)

Sadly no. Though thank you for making my life immeasurably worse by sharing that.

[–] BlueMonday1984@awful.systems 7 points 5 days ago (1 children)

Quick PSA: There's an open letter calling for a fork of Rails, specifically to purge it of David Heil Hitler's influence.

[–] froztbyte@awful.systems 5 points 5 days ago* (last edited 5 days ago)

this you?

again, I ask you: please make better posts. you could start by not shooting from the hip about things you know little to nothing about. even better would be asking questions to learn.

[–] blakestacey@awful.systems 7 points 5 days ago

CHOTINER: Mr Yudkowsky, it says here 20% of your research budget went to “harry potter”. Care to explain?

https://bsky.app/profile/jpeg40k.bsky.social/post/3lzrvedbfe22o

[–] scruiser@awful.systems 8 points 5 days ago

This week's South Park makes fun of prediction markets! Hanson and the rationalists can be proud their idea has gone mainstream enough to be made fun of. The episode actually does a good job highlighting some of the issues with the whole concept: the twisted incentives, the insider trading, and the way it fails to actually create good predictions (as opposed to just generating vibes and degenerate gambling).

[–] corbin@awful.systems 7 points 6 days ago* (last edited 6 days ago) (2 children)

House Democrats have dripped more details from Epstein files and we have surprise guests! They released an un-OCR'd PDF; I'll transcribe the mentions of our favorite people:

Sat[urday] Dec[ember] 6, 2014 ZORRO … Reminder: Elon Musk to island Dec[ember] 6 (is this still happening?)

Zorro is a ranch in New Mexico that Epstein owned; Epstein was scheduled to be there from December 5-8, so that Musk and Epstein would not be at the island together. Combined with the parenthetical uncertainty about whether the visit was still happening, did Epstein perhaps want to grant Musk some plausible deniability by not being present?

Mon[day] Nov[ember] 27, 2017 NY … 12:00pm LUNCH w/ Peter Thiel [REDACTED]

From the rest of the schedule formatting, the redacted block following Thiel's name is probably not a topic; it might be a name. Lunch between two rich financiers is not especially interesting but lunch between a blackmail-gathering Mossad asset and an influencer-funding accelerationist could be.

Sat[urday] Feb[ruary] 16, 2019 NY-LSJ 7:00am BREAKFAST w/ Steve Bannon

Well now, this is the most interesting one to me. This isn't Epstein's only breakfast of the day; at 9 AM he meets with Reid Weingarten, one of his attorneys, about some redacted topic. Bannon's not exactly what I think of as a morning person or somebody who is ready to go at a moment's notice, so what could drag him out of bed so early? (Edit: This vexed me so I looked it up, and sunrise was 6:48 AM that morning at sea level. It would have been the crack of dawn!) Epstein's Friday evening had included two haircuts, too, with plenty of redacted info; was he worried about appearing nice for Bannon? (The haircuts might not have been for Epstein, given context.) This was a busy day for Epstein; he had a redacted lunch date, and he also had somebody flying in/out that morning via JFK connecting to Saint Thomas and staying in a hotel room there. He then flew out of Newark in the evening to visit the infamous island itself, Little Saint James. The redaction doesn't quite tell us who this guest is, but it can't be Bannon, because the Dems fucked up the redaction! I can see the edges of the descenders on the name, including a 'g' and 'j'/'q', but Bannon's name doesn't have any descenders.

Also Prince Andrew's in there, I guess?

[–] Soyweiser@awful.systems 2 points 5 days ago

Any of the big Rationalists get mentioned yet?

[–] CinnasVerses@awful.systems 8 points 6 days ago (2 children)

From RationalWiki: Yud claims that the only woman he gave orgasms to for completing math homework was his future wife. If he ever denied dating/playing with people from his foundation, or making people who wanted to play with him fill out an IQ test, I can't find it.

[–] Soyweiser@awful.systems 13 points 6 days ago* (last edited 6 days ago) (2 children)

Any mention of my name is now often met by a claim that I keep a harem of young submissive female mathematicians who submit to me and solve math problems for me, and that I call them my "math pets".

I see he did the whole 'making an accusation sound sillier to undermine it' thing here; nobody said anything about a whole harem of mathematicians who just solve math problems. Nice steelman. I expect nothing less of somebody who was the subject of seven Broadway plays.

(Amazing that he basically admits the story is true after that, but continues to debunk the strawman.)

[–] CinnasVerses@awful.systems 7 points 6 days ago (1 children)

I also don't understand why he objects to that story given that it gets people talking about him as weird but able to get what he wants? But the claim that he dated women at MIRI and wanted them to provide free labour attacks the narrative that MIRI is nothing like Leverage Research or the Zizians.

[–] o7___o7@awful.systems 7 points 6 days ago* (last edited 6 days ago)

Yeah, I thought that it read like a guy who constantly denies rumors about having an enormous wang.

[–] saucerwizard@awful.systems 4 points 6 days ago

Dude is a serious narcissist.

[–] scruiser@awful.systems 5 points 6 days ago (2 children)

and the person who made up the "math pets" allegation claimed no such source

I was about to point out that I think this is the second time he's claimed "math pets" had absolutely no basis in reality (and someone countered with a source that forced him to backtrack), but I double-checked the posting date and this is the example I was already thinking of. Also, we have supporting sources that didn't say as much directly but implied it heavily: https://www.reddit.com/r/SneerClub/comments/42iv09/a_yudkowsky_blast_from_the_past_his_okcupid/ or, like, the entire first two thirds of the plot of Planecrash!

[–] CinnasVerses@awful.systems 7 points 6 days ago* (last edited 6 days ago) (1 children)

TvTropes says that the Yudkowsky-insert protagonist of Project Lawful/Planecrash! is driven by a desire to have 144 children (and to prove his society wrong for not paying him to have 144), which sounds like Scott Aaronson? Did they know each other in those days?

I am glad that all I knew about Yud in 2022 was "wrote a Harry Potter fanfic that I did not finish, and runs a website where people pretend to be experts."

[–] blakestacey@awful.systems 6 points 5 days ago* (last edited 5 days ago)

🎶 I would sire a gross of kids / and I would sire a whole gross more / just to be the man who dropped full two gross kids in baskets at your door 🎶

This is the first time that I have viscerally rejected reading the epigrammatic quotes at the top of a TV Tropes page. Like, it's TV Tropes, and I just closed the tab. Dear sweet and crunchy lord.

[–] CinnasVerses@awful.systems 6 points 6 days ago* (last edited 6 days ago)

He does not admit "I was wrong" very often, does he? And if I were a kinky polyamorist, I would be much quicker to respond to "have you dated staff at the organization that funds your life?" than "did you play a specific scene?"

Planecrash seems to be the 1.8 million word Pathfinder fic with tumblr's UnitOfCaring

And how the eff does someone claim to love Pterry in his dating profile but see people as things? Greg Egan is basilisk-unfriendly too.

[–] BlueMonday1984@awful.systems 7 points 6 days ago

New premium column from Ed Zitron: OpenAI Needs A Trillion Dollars In The Next Four Years. Features Ed calling Google and Oracle out for failing to protect their investors from Saltman before the cutoff.

[–] o7___o7@awful.systems 7 points 6 days ago (2 children)
[–] YourNetworkIsHaunted@awful.systems 6 points 6 days ago (1 children)

I need you to understand how nearly I added myself to even more watch lists because of that analogy. Avalanche!

[–] Soyweiser@awful.systems 6 points 6 days ago

Developers of the 12-megawatt facility said it will bring jobs and investment to Nashville, but neighbors aren’t so sure.

The neighbors are right.

[–] veganes_hack@feddit.org 19 points 1 week ago* (last edited 1 week ago) (3 children)

personal vent: at my job yesterday i had to come up with a few fake book titles/author combinations for a project. a fun little task and opportunity to hide some cheeky easter eggs. so, i came up with a few and then asked my coworkers to share in the fun. one of them though just couldn't come up with anything at all, and eventually just resorted to "asking chat gpt".

mind you, i work a creative job, and so do my coworkers. this is a minor thing i guess, but it just made me very sad. how could you just outsource your creative joy to some mindless word salad machine?

[–] swlabr@awful.systems 13 points 1 week ago (6 children)

Man, knowing nothing else about your coworker, they sound like a completely joyless person. Coming up with fake titles for things is like, such a high fun-to-effort ratio. “Creativity and the essence of Human Experience” by Chat GPT. Boom, there’s one. “Cooking With Olive Oil” by Sam Altman. “IQ184” by Harukiezer Murakowsky. This is so fun and easy that it’s basically hack outside of situations where it is solicited.

[–] BlueMonday1984@awful.systems 17 points 1 week ago (7 children)

A full timeline on the RubyGems takeover has been put together - looks like the entire situation's been caused by pressure from Shopify.

[–] gerikson@awful.systems 17 points 1 week ago* (last edited 1 week ago) (3 children)
[–] ShakingMyHead@awful.systems 14 points 1 week ago* (last edited 1 week ago) (6 children)

“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life”

This quote is just... something.

Is the plan to literally create 8 billion podcasts in the near future? This company doesn't think that might be a tad excessive?

[–] nfultz@awful.systems 15 points 1 week ago

They put 'environmental impact of AI' on the front of the student newspaper (below the fold, but still), then you flip and see this

kinda feeling two steps forward, three steps back rn on top of all the other drama on campus

[–] gerikson@awful.systems 15 points 1 week ago

Harvard Business Review: AI-Generated “Workslop” Is Destroying Productivity

[...] Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

[–] smiletolerantly@awful.systems 14 points 1 week ago (9 children)

This scream into the void has been on my mind for a while: Apparently I work for an AI company now.

Kinda.

When I had the interviews with my now-employer at the beginning of the year, they were an open-source cybersecurity startup. Everything sounded great, we got along, signed the contract. I took a long vacation before starting the position, and when I got back, I was... amused? bewildered? to find that a), we are no longer open source; and b), we have pivoted, hard, towards AI.

Luckily, I still get to work 100% of the time on the core (cybersecurity) product (which is actually a really good and useful thing, sorry, not going to be more specific), it's just that part of the dev team, as well as all of marketing and sales, now work on building and selling an AI product built on top of that.

At least it's not a wrapper around ChatGPT, and does offer something kinda new and actually beneficial, but still, it's an LLM product.

Now, for the actual scream-into-the-void: Once a month, in a company-wide meeting, I have to observe how people praise LLMs to the moon, attribute nonsense or downright bugs to something akin to proto-sentience, and give absurd estimates of profitability based on the idea that AI will totally be used everywhere and by everyone, very soon now, you'll see. What finally prompted (pun intended) me to post this is the CEO yesterday unironically referencing AI 2027's "predictions".

Can't wait for the bubble to burst. I'm really curious to see if I'll keep my job through that. At the end of the day, the stuff I work on luckily has nothing to do with AI, and basically every other application of the product makes more sense; but now the entire company has shifted gears towards AI...
