
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance...I guess.)

[–] blakestacey@awful.systems 17 points 6 days ago (3 children)

Today in "I wish I didn't know who these people are", guess who is a source for the New York Times now.

[–] Soyweiser@awful.systems 5 points 5 days ago

Also dropped this in the other thread about this, but a family member of his (I think) is dropping some lols on the guy: https://bsky.app/profile/larkshead.bsky.social/post/3ljkqiag3u22z It gets less lol when you get to the "yeah, we worried he might become a school shooter" bit.

[–] V0ldek@awful.systems 9 points 6 days ago (1 children)

Ye it was a real "oh fuck I recognise this nick, this cannot mean anything good" moment

I had a straight-up "wait I thought he was back in his hole after being outed" moment. I hate that all the weird little dumbasses we know here keep becoming relevant.

[–] shapeofquanta@lemmy.vg 9 points 6 days ago

Rainbow, an Italian animation studio known for making Winx Club, is looking to hire a prompt engineer :-) Had I been Italian, I would consider applying, if only to stop them from trying to sell NFTs and whitewashing their characters.

[–] BlueMonday1984@awful.systems 9 points 6 days ago* (last edited 6 days ago)

Ed Zitron's planning a follow-up to "The Subprime AI Crisis":

(It's gonna be a premium column, BTW)

EDIT: Swapped the image for one that's easier to read

[–] zbyte64@awful.systems 30 points 1 week ago* (last edited 1 week ago) (11 children)

I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it is following what I am saying, like a real human does with "uh huh" or whatnot. It asked me if I ever did Docker and I answered that I transitioned a system to Docker. But I had done an awkward pause after the word "transition", so the AI bot congratulated me on my gender transition and moved on to the next question.

[–] jillL@theblower.au 14 points 1 week ago

@zbyte64 this is so disrespectful to applicants.

[–] antifuchs@awful.systems 13 points 1 week ago (1 children)

Now I’m curious what a protected-class-question% speedrun of one of these interviews would look like. Get the bot to ask you about your age, number of children, sexual orientation, etc.

[–] zbyte64@awful.systems 11 points 1 week ago* (last edited 1 week ago) (1 children)

Not sure how I would trigger a follow-up question like that. I think most of the questions seemed pre-programmed, but the transcription and the AI's response to the answer would "hallucinate". They really just wanted to make sure they were talking to someone real and not an AI candidate, because I talked to a real person next who asked much the same.

[–] shapeofquanta@lemmy.vg 18 points 1 week ago (3 children)

A bit of old news, but it's still upsetting to me.

My favorite artist, Kazuma Kaneko, known for doing the demon designs in the Megami Tensei franchise, sold his soul to make an AI gacha game. While I was massively disappointed that he was going the AI route, the model was supposed to be trained solely on his own art and thus I didn't have any ethical issues with it.

Fast-forward to shortly after release and the game's AI model has been pumping out Elsa and Superman.

[–] JFranek@awful.systems 16 points 1 week ago

the model was supposed to be trained solely on his own art

Much simpler models are practically impossible to train without an existing model to build upon. With GenAI, it's safe to assume that training the base model included large-scale scraping without consent.
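To make that concrete, here's a minimal sketch (PyTorch, toy model and data invented for illustration; nothing to do with the game's actual pipeline) of why "trained solely on his own art" still rests on a scraped base:

```python
import torch
import torch.nn as nn

class TinyImageModel(nn.Module):
    """Toy stand-in for an image generator."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU()
        )
        self.head = nn.Linear(256, 64 * 64 * 3)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyImageModel()
# In a real pipeline, this is where a base checkpoint gets loaded -- weights
# produced by large-scale scraping. Elsa and Superman live in here.
# model.load_state_dict(torch.load("base_checkpoint.pt"))

# "Training solely on his own art" is typically a fine-tune: freeze the base...
for p in model.backbone.parameters():
    p.requires_grad = False

artist_batch = torch.rand(8, 3, 64, 64)  # stand-in for the artist's images
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
loss = nn.functional.mse_loss(model(artist_batch), artist_batch.flatten(1))
loss.backward()
opt.step()  # ...and nudge only the head; the scraped base stays in the weights
```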

[–] blakestacey@awful.systems 13 points 1 week ago (2 children)

It's a bird! It's a plane! It's... Evangelion Unit 1 with a Superman logo and a Diabolik mask.

[–] o7___o7@awful.systems 17 points 1 week ago

Ed Zitron on bsky: https://bsky.app/profile/edzitron.com/post/3lsukqwhjvk26

Haven't seen a newsletter of mine hit the top 20 on Hackernews and then get flag banned faster, feels like it barely made it 20 minutes before it was descended upon by guys who would drink Sam Altman's bathwater

Also funny: the HN thread doesn't appear in their search.

https://news.ycombinator.com/item?id=44424456

[–] BigMuffN69@awful.systems 16 points 1 week ago* (last edited 1 week ago) (8 children)

Actually burst a blood vessel last weekend raging: Gary Marcus was bragging about his prediction record in 2024 being flawless.

Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book "I am God" when 2027 comes around and we are all still alive. Imo some of these are kind of vague and I wouldn't argue with someone who said reasoning models are a substantial advance, but my God, the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary is a fucking moron, a threat to humanity for underplaying the awesome power of super-duper intelligence, and a worse forecaster than the big-brain rationalists. To be clear, Habryka's objections are, overall, extremely fucking nitpicky, totally-missing-the-point dogshit in my pov (feel free to judge for yourself):

https://xcancel.com/ohabryka/status/1939017731799687518#m

But what really made me want to drive a drill into my brain was the LW brigade rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit, full stop. In fact, they are setting billions of VC money on fire?! (Strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data. Just how unaware are these people?) Oliver tries and fails to do Olympic-level mental gymnastics by saying TSMC and NVIDIA are making money, so therefore AI is extremely profitable. In the same way, I presume gambling is extremely profitable for degenerates like me, because the casino letting me play is making money. I rank the people of LW as minimally truth-seeking and big dumb out of 10. Also, weird fun little fact: in Daniel K's predictions from 2022, he said that by 2023 AI companies would be so incredibly profitable that they would easily be recouping their training costs. So I guess monopoly money that you can't see in any earnings report is the official party line now?
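The casino point in one gulp of toy arithmetic (numbers invented for illustration, not anyone's real financials):

```python
# Hypothetical figures in $bn: the "NVIDIA profits, therefore AI is
# profitable" move, spelled out.
lab_revenue = 4        # what the AI lab takes in
lab_compute_bill = 7   # what the lab pays for chips and cloud
vendor_margin = 3      # the chip vendor's profit on that bill

lab_profit = lab_revenue - lab_compute_bill
print(f"lab:    {lab_profit:+d}")     # -3: the lab burns money
print(f"vendor: {vendor_margin:+d}")  # +3: the vendor profits FROM the burn
# The vendor's profit is the lab's expense. It's evidence the labs are losing
# money, not making it -- the casino wins because the gambler loses.
```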

[–] V0ldek@awful.systems 12 points 1 week ago (2 children)

I wouldn’t argue with someone who said reasoning models are a substantial advance

Oh, I would.

I've seen people say stuff like "you can't disagree the models have rapidly advanced" and I'm just like, yes I can, here: no they didn't. If you're claiming they advanced in any way, please show me the metric by which you're judging it. Are they cheaper? Are they more efficient? Are they able to actually do anything? I want data, I want a chart, I want a proper experiment where the model didn't have access to the test data when it was being trained, and I want that published in a reputable venue. If the advances are so substantial, you should be able to give me like five papers that contain this stuff. Absent that, I cannot help but think that the claim here is "it vibes better".

If they're an AGI believer then the bar is even higher, since in their dictionary an advancement would mean the models getting closer to AGI, at which point I'd be fucked to see the metric by which they describe the distance of their current favourite model to AGI. They can't even properly define the latter in computer-scientific terms, only vibes.

I advocate for a strict approach: like a physicist dismissing any claim containing "quantum" but no maths, I will immediately dismiss any AI claim if you can't describe the metric you used to evaluate the model and isolate the changes between the old and new version to evaluate their efficacy. You know, the bog-standard shit you always put in the Experimental section of any CS systems paper.
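For what it's worth, the bar being asked for here is not exotic. A minimal sketch (the two "models" are hypothetical stand-in callables, not any real vendor's API):

```python
import random

random.seed(0)
# Held-out pairs the models never trained on -- by construction, since we
# just made them up.
held_out = [(f"q{i}", f"a{i}") for i in range(200)]

def old_model(q):  # stand-in: answers correctly 60% of the time
    return q.replace("q", "a") if random.random() < 0.60 else "wrong"

def new_model(q):  # stand-in: answers correctly 63% of the time
    return q.replace("q", "a") if random.random() < 0.63 else "wrong"

def accuracy(model, data):
    """One explicit metric, same data for both versions."""
    return sum(model(q) == a for q, a in data) / len(data)

acc_old = accuracy(old_model, held_out)
acc_new = accuracy(new_model, held_out)
print(f"old={acc_old:.1%}  new={acc_new:.1%}  delta={acc_new - acc_old:+.1%}")
# A "substantial advance" should show up as a large, reproducible delta on a
# metric like this, published somewhere reviewable -- not as better vibes.
```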

[–] BlueMonday1984@awful.systems 15 points 1 week ago (1 children)

New blogpost from Iris Meredith: Vulgar, horny and threatening, a how-to guide on opposing the tech industry

[–] e8d79@discuss.tchncs.de 14 points 1 week ago* (last edited 1 week ago)

Stop Killing Games has hit the orange site. Of course, someone is very distressed by the fact that democratic processes exist.

[–] BlueMonday1984@awful.systems 14 points 1 week ago

New thread from Ed Zitron, gonna focus on just the starter:

You want my opinion? Zitron's on the money: once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind the bubble, and at anyone who worked on or heavily used AI during the bubble.

For AI supporters specifically, I expect a triple whammy of mockery:

  • On one front, they're gonna be publicly mocked for believing tech billionaires' bullshit claims about AI, and publicly lambasted for actively assisting tech billionaires' attempts to destroy labour once and for all.

  • On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.

  • On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, "literally cannot tell good from bad".

[–] gerikson@awful.systems 13 points 1 week ago (2 children)

Managers: "AI will make employees more productive!"

WaPo: "AI note takers are flooding Zoom calls as workers opt to skip meetings" https://archive.ph/ejC53

Managers: "not like that!!!!"

[–] BurgersMcSlopshot@awful.systems 17 points 6 days ago

This meeting could have been a text document of plausible-sounding gibberish nobody needs to read.

[–] BlueMonday1984@awful.systems 13 points 1 week ago* (last edited 6 days ago) (1 children)

New thread from Baldur Bjarnason publicly sneering at his fellow programmers:

Anybody who has been around programmers for more than five minutes should not be surprised that many of them are enthusiastically adopting a tool that is harmful, destroying industries, sabotaging education, and hindering the energy transition because they feel it's giving them a moderate advantage

That they respond to those pointing some of this out with mockery ("nuts", "shove your concern up your ass") and that their peers see this mockery as reasonable discourse is also not surprising. Tech is entirely built on the backs of workers with no regard for externalities or second order effects

Tech is also extremely bad at software. We habitually make fragile, insecure, complex, and hard to maintain code that backs poor UIs. The best case scenario is that LLMs accelerate already broken software dev processes in an industry that is built around monopolies and billionaire extremists

But, sure, feeling discouraged by the state of the industry is "like quitting carpentry as a career thanks to the invention of the table saw"

Whatever

EDIT: Found out where Baldur got the "table saw" quote from - added it accordingly.

[–] scruiser@awful.systems 12 points 1 week ago* (last edited 1 week ago) (1 children)

So two weeks ago I linked titotal's detailed breakdown of what is wrong with AI 2027's "model" (tl;dr: even accepting the line-goes-up premise of the whole thing, AI 2027's math was so bad that the line always asymptotes to infinity in the near future regardless of inputs). Titotal went to pretty extreme lengths to meet the "charitability" norms of lesswrong: corresponding with one of the AI 2027 authors, carefully considering what they might have intended, responding to comments in detail and depth, and in general not simply mocking the entire exercise in intellectual masturbation and hype generation like it rightfully deserves.
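The asymptote problem is easy to see with a toy version of the curve. A minimal sketch, assuming "superexponential" means each capability doubling takes a constant fraction of the time the previous one did (my reading of the summary above, not AI 2027's exact functional form):

```python
def years_until_asymptote(first_doubling, shrink):
    # Doubling times d, d*r, d*r^2, ... sum to d / (1 - r): a geometric
    # series. The curve hits infinity in finite time for ANY d and r < 1.
    return first_doubling / (1 - shrink)

for d in (0.5, 1.0, 2.0, 4.0):   # wildly different starting assumptions
    for r in (0.8, 0.9):
        print(f"first doubling {d} yr, shrink {r}: "
              f"blow-up in {years_until_asymptote(d, r):.1f} yr")
# Every row is finite and near-term-ish (2.5 to 40 years here): the asymptote
# is baked into the functional form, so the inputs mostly just rescale the
# date rather than question whether it arrives.
```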

But even with all that effort, someone still decided to make an entire (long, obviously) post with a section dedicated to tone-policing titotal: https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027?open=false#%C2%A7the-headline-message-is-not-ideal (here is the lw link: https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts)

Oh, and looking back at the comments on titotal's post... his detailed elaboration of some pretty egregious errors in AI 2027 didn't really change anyone's mind, at most moving them back a year to 2028.

So, moral of the story: lesswrongers and rationalists are in fact not worth the effort to talk to, and we are right to mock them. The numbers they claim to use are pulled out of their asses to fit vibes they already feel.

And my choice for most sneerable line out of all the comments:

https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models?commentId=XbPCQkgPmKYGJ4WTb

And I therefore am left wondering what less shoddy toy models I should be basing my life decisions on.

[–] blakestacey@awful.systems 13 points 1 week ago (4 children)

Oh, and looking back at the comments on titotal’s post… his detailed elaboration of some pretty egregious errors in AI 2027 didn’t really change anyone’s mind, at most moving them back a year to 2028.

Huh, what's this I have open in another browser tab:

The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, which he called the Second Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" was cleansing the world from sin when Christ would come, and he and many others prepared. When Jesus did not appear by October 22, 1844, Miller and his followers were disappointed.

[–] nfultz@awful.systems 12 points 1 week ago* (last edited 1 week ago) (1 children)

Aella popped up on doomscroll - https://youtu.be/r7WL6kaTJnw

E: oh man the comments are great

E2:

1:08:02: There's a lot of discussion among the rationalist community about the uneven distribution of IQ and its correlation with race. Why is this a topic that people fixate on, if they're also convinced that this ultra-intelligence, an AGI that's smarter than every human on the planet, is coming? Why are these marginal differences so important to people?

[–] blakestacey@awful.systems 12 points 1 week ago* (last edited 1 week ago) (1 children)

Highlights from the comments: @wjpmitchell3 writes,

Actual psychology researcher: the problem with IQ is A) We don't really know what it's measuring, B) We don't really know how it's useful, C) We don't really know how context-specific it is, D) When people make arguments about IQ, it's often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it's a proxy measure of, and we don't have good evidence yet to say "This is a reliable and broadly-encompassing representation of intelligence" or whatever else, so if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is there are race differences in this proxy measure that we're still trying to understand. It's dangerous to use an unreliable and possibly inaccurate representation of a phenomenon to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we're entering sensitive ethical spaces, which is something that rationalists don't do well in, because their utilitarian calculus has difficulty capturing the intangibles.

@arnoldkotlyarevsky383 says,

Nothing wrong with being self educated but she comes across as being not as far along as you would want someone to be in their self-education before being given a platform.

@User123456767 observes,

You can kind of tell she grew up as a Calvinist, because she still seems to think she's part of the elect; she's just replaced an actual big-G God with some sort of AI God.

@jaredsarnie3712 begins,

I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

And from @Fruuuuuuuuuck:

Doomscroll gooner arc

[–] BigMuffN69@awful.systems 16 points 1 week ago* (last edited 1 week ago)

One thing I have wondered about: the rats always have that graphic where the gap between the IQ of Einstein and the village idiot is almost imperceptible compared to the IQ of the super robo god. If that's the case, why the hell do we only want our best and brightest doing "alignment research"? The village idiot should be almost just as good!

[–] gerikson@awful.systems 12 points 1 week ago (1 children)

This titbit by Molly White about how whales have captured Polymarket's "dispute resolution" mechanism had me chuckling

https://hachyderm.io/@molly0xfff/114779592623569008

[–] gerikson@awful.systems 12 points 1 week ago* (last edited 6 days ago) (1 children)

LWronger posts article entitled

"Authors Have a Responsibility to Communicate Clearly"

OK, title case, obviously serious.

The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads.

Nope, he's going for satire.

And ladies, he's available!

[–] Soyweiser@awful.systems 10 points 6 days ago (2 children)

I was slightly saddened to scroll through his dating profile and see that almost everything seemed to be related to AI, even his other activities. Also not sure how well a reference to a chad meme will go down in the current SV dating scene.

[–] BigMuffN69@awful.systems 8 points 5 days ago* (last edited 5 days ago) (1 children)

Bruh, there's a part where he laments that he had a hard time getting into meditation because he was paranoid that it was a form of wireheading. Beyond parody. The whole profile is 🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩

[–] Soyweiser@awful.systems 7 points 5 days ago

I now imagine a date going 'hey, what is wireheading?' before slowly backing out of the room.

[–] gerikson@awful.systems 7 points 6 days ago (1 children)

Maybe it's to hammer home the idea that time before DOOM is limited and you might as well get your rocks off with him before that happens.

All this technology and we still haven't gotten past Grease 2.
