this post was submitted on 22 Jun 2025

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] blakestacey@awful.systems 10 points 4 days ago

Thomas Claburn writes in The Register:

IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.

That implies something like 60 percent of agentic AI projects would be retained, which is actually remarkable given that the rate of successful task completion for AI agents, as measured by researchers at Carnegie Mellon University (CMU) and at Salesforce, is only about 30 to 35 percent for multi-step tasks.

[–] aio@awful.systems 14 points 6 days ago (3 children)
[–] swlabr@awful.systems 4 points 1 day ago

“It’s really hard to think about alignment. Maybe we need to redesign thinking” type shit

[–] bigfondue@lemmy.world 8 points 5 days ago

(setq alignment 'good)

[–] antifuchs@awful.systems 9 points 5 days ago

Fucking rude to drag lisp into this. How dare they.

[–] BlueMonday1984@awful.systems 17 points 6 days ago (3 children)

Minor bit of personal news: Newgrounds got hit with a wave of AI slop games recently.

I caught onto it back on Wednesday, but didn't get official confirmation until yesterday, when another user investigated the games and discovered the exact slop-generator used to shit them out - VIDEOGAME.ai.

[–] flaviat@awful.systems 12 points 6 days ago* (last edited 6 days ago) (1 children)

Thanks for the work you do on Newgrounds! This sentence stuck out to me

No more worrying about lack of content or fickle UGC creators

Oh, they're just publicly advertising their company as anti-union. Bold.

[–] BlueMonday1984@awful.systems 12 points 6 days ago

Thanks for the work you do on Newgrounds!

Appreciate it - keeping one of the last bastions of creativity free of slop is a thankless task.

This sentence stuck out to me

No more worrying about lack of content or fickle UGC creators

Oh, they’re just publicly advertising their company as anti-union. Bold.

What is AI if not a tool built to abuse the proletariat?

[–] YourNetworkIsHaunted@awful.systems 10 points 6 days ago (1 children)

That's fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.

Also, fully general bullshit detection question no.142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of doing it yourself? (Hint: it's because it doesn't actually work)

[–] BlueMonday1984@awful.systems 7 points 6 days ago

I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.

Considering that AI bros are

  1. utterly malicious scumbags who hate anything which doesn't let them, and them alone, make all the money ever

  2. exceedingly stupid and shameless dipshits with a complete inability to recognise or learn from mistakes

I can absolutely see them looking at someplace like NG and thinking "hey, this place which stands for everything we want wiped off the Internet will totally accept our fucking slop".

(Personal sidenote: Part of me says this story would probably make a good Pivot to AI.)

[–] shapeofquanta@lemmy.vg 8 points 6 days ago (3 children)

VIDEOGAME.ai

Gotta love how the website isn't even functional.

Sure maybe the "Showcase" link doesn't do anything, the "Watch video" link goes to a password protected file, and none of the "Learn More" buttons do anything; but at least they tell you who invested in them as the very first thing on the page!

Also haha I clicked on the blog and this was at the top of the first post:

Here's a Substack post draft that introduces videogame.ai with a compelling and engaging tone suitable for readers interested in games, tech, or the future of creative work:

New use case for AI found: extracting money from venture capital without actually doing any real work.

[–] BlueMonday1984@awful.systems 7 points 6 days ago

Gotta love how the website isn’t even functional.

It's probably been vibe-coded by the fucks behind the LLM; I'd be shocked if it were genuinely functional.

[–] o7___o7@awful.systems 11 points 6 days ago* (last edited 6 days ago) (2 children)

The bullshit engine has convinced my dirtbag sib-in-law that they can claim squatter's rights on (and take ownership of) the house that they aren't paying rent to live in.

They've been there a year.

They're gonna be homeless before this is over and I can't get them to see reason. I feel totally helpless, real big Cassandra vibes. LLMs are sooooo unhealthy for assholes.

[–] Soyweiser@awful.systems 9 points 5 days ago

I'd tell them to contact local squatters who have experience in this stuff over trusting LLMs myself. But those people will prob not tell them what they want to hear.

[–] fullsquare@awful.systems 9 points 5 days ago (1 children)

depends on jurisdiction of course, but where i live you can pull something like this. it takes something like 30 years of living in the same place at minimum tho

[–] o7___o7@awful.systems 10 points 5 days ago* (last edited 5 days ago) (1 children)

Yeah, it's nuts. They'd have to be resident, pay land taxes, and make improvements for 7 years here. They don't even mow the grass; the owner does.

These idiots made me feel sympathy for a landlord. I might never recover.

...

As an aside, it's fun to imagine the similar sort of brain damage a chatbot would cause Fox Mulder.

[–] blakestacey@awful.systems 7 points 4 days ago (1 children)

It's like when Scott Aaronson got me to sympathize with a cop. A sneersmas miracle.

[–] o7___o7@awful.systems 3 points 4 days ago

There should be a word for this!

[–] YourNetworkIsHaunted@awful.systems 10 points 6 days ago* (last edited 5 days ago)

The folks over at futurism are continuing to do their damnedest to spotlight the ongoing mental health crisis being spurred by chatbot sycophants.

I think the real problem this poses for OpenAI is that in order to address it they basically need to back out of their entire sales pitch. Like, these are basically people who fully believe the hype and it pretty clearly is part of sending them down a very bad road.

[–] Architeuthis@awful.systems 9 points 6 days ago* (last edited 6 days ago) (1 children)

Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?

Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.

[–] YourNetworkIsHaunted@awful.systems 11 points 6 days ago (1 children)

I also feel like while it's absolutely true that the whole "we'll make AGI and get a ton of money" narrative was always bullshit (whether or not anyone relevant believed it) it is also another kind of evil. Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves. Like, if they did believe their own hype and weren't grifting their hearts out then they're a whole different class of monster. From an ethical perspective, the grift narrative lets everyone involved be better people.

[–] Architeuthis@awful.systems 2 points 4 days ago

Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.

Big deal, we'll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.

Although I'd guess human level problem solving needn't imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.

[–] froztbyte@awful.systems 6 points 6 days ago

and next this one that’ll be making waves too

[–] corbin@awful.systems 21 points 1 week ago

Last Week Tonight's rant of the week is about AI slop. A Youtube video is available here. Their presentation is sufficiently down-to-earth to be sharable with parents and extended family, focusing on fake viral videos spreading via Facebook, Instagram, and Pinterest; and dissecting several examples of slop in order to help inoculate the audience.

[–] UltimateNoob@programming.dev 17 points 1 week ago (4 children)
[–] nightsky@awful.systems 8 points 6 days ago

I'm in therapy and much better than I used to be, but from my past before that, I am unfortunately quite experienced, over many years, in having existential worries and anxieties about extremely unlikely things.

And then I see this...

Cosmic rescue mission [...] These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms

...and damn, that's next-level thinking, even for me.

[–] antifuchs@awful.systems 13 points 1 week ago

According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios.

Guys I have found a way to phrase my anxiety in a way where every single word is extremely load-bearing

[–] BlueMonday1984@awful.systems 16 points 1 week ago

Dan McQuillian just dropped the text of a seminar he gave: The role of the University is to resist AI

[–] BlueMonday1984@awful.systems 15 points 1 week ago (4 children)

Starting this off with Baldur Bjarnason sneering at his fellow techies for their "reading" of Dante's Inferno:

Reading through my feed reader and seeing tech dilettantes “doing” Dante in a week and change, I’m reminded of the time in university when we spent half a semester discussing Dante’s Divine Comedy, followed by tracing its impact and influence over the centuries

I don’t think these assholes even bother to read their footnotes, and their writing all sounds like it comes from ChatGPT. Naturally so, because I believe them when they claim they don’t use it for writing. They’re just genuinely that dull

At least read the footnotes FFS

If they were reading Dante for pleasure, that’d be different—genuinely awesome, even. But all of this is framed as doing the entirety of “humanities” in the space of a few weeks.

[–] Architeuthis@awful.systems 13 points 1 week ago
[–] cornflake@awful.systems 12 points 1 week ago

This Thiel interview clip is amazing

Watch Ross Douthat realize for a moment in real time that he's spent a decade making ideological bedfellows with a techno-futurist, fascist Right that wants to see the birth of a "machine god" & is in no way enthusiastic about the survival of the human race in universal terms.

https://x.com/jasonwblakely/status/1938639600907612610

[–] wizardbeard@lemmy.dbzer0.com 12 points 1 week ago* (last edited 1 week ago) (3 children)

New Yorker put out an article on how AI use is homogenizing thought processes and writing ability.

Our friends on the orange site have clambered over each other to all make very similar counterarguments. Kind of proves the article, no?

I love this one:

All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.

Holy shit. Yes, TV has reduced the strength of accents. But "the death"? Tell me again how little you pay attention to the people you inevitably interact with day to day.

[–] nightsky@awful.systems 7 points 6 days ago (1 children)

I would also like to understand under what definition ChatGPT can be classified as "connection technology".

[–] BlueMonday1984@awful.systems 10 points 6 days ago

ChatGPT connects your brain to a quality '50s-era psychiatrist, who can then lobotomise you non-invasively and turn you into a perfect office worker for our billionaire overlords

[–] gerikson@awful.systems 11 points 1 week ago (7 children)

Following up on the thread that spawned from my comment yesterday:

https://awful.systems/comment/7777035

(I'm in vacation mode and forgot it was late on Sunday)

I wonder if Habryka, the LWer who posted both there and on Xhitter that "someone should do something about this troublesome page" realized that there would be less pushback if he'd simply coordinated in the background and got the edits in place without forewarning others. Was it intentional to try to pick a fight with Wikipedians?

[–] blakestacey@awful.systems 13 points 1 week ago

Or was it a consequence of the fact that capital-R Rationalists just don't shut up?

[–] scruiser@awful.systems 12 points 1 week ago (4 children)

The wikipedia talk page is some solid sneering material. It's like Habryka and HandofLixue can't imagine any legitimate reason why Wikipedia has the norms it does, and they can't imagine how a neutral Wikipedian could come to write that article about lesswrong.

Eigenbra accurately calling them out...

"I also didn't call for any particular edits". You literally pointed to two sentences that you wanted edited.

Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can't speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

As to your question:

Was it intentional to try to pick a fight with Wikipedians?

It seems to be ignorance on Habryka's part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia's reasonable policies, they seem to be doubling down.

[–] scruiser@awful.systems 12 points 1 week ago* (last edited 1 week ago) (3 children)

Following up because the talk page keeps providing good material...

Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven't seen people try to weaponize the rules to push their views many times before.

Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can't win with some people...

Looking back on the original lesswrong ~~brigade organizing~~ discussion of how to improve the Wikipedia article, someone tried explaining the rules to Habryka then, and they were dismissive.

I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

Yes, Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms...

Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for "access to ground truth". I guess even lesswrong knows that is bullshit.

[–] dgerard@awful.systems 11 points 1 week ago

Habryka doesn't really know how not to start fights

[–] blakestacey@awful.systems 11 points 1 week ago* (last edited 1 week ago) (3 children)

Wow, this is shit: https://en.wikipedia.org/wiki/Inner_alignment

Edit: I have been informed that the correct statement in line with Wikipedia's policies is WP:WOWTHISISSHIT

[–] YourNetworkIsHaunted@awful.systems 11 points 1 week ago (1 children)

We were joking about this last week if memory serves, but at least one person out there has started a rough aggregator of different sources of pre-AI internet dumps.

It's all gotta be in the models by now, but it's gonna be a cool resource for something, right?

[–] nfultz@awful.systems 11 points 1 week ago (5 children)

Another response to Ptacek. "Vibe coding as contempt for materiality" part is particularly good.

https://deadsimpletech.com/blog/material_girl
