this post was submitted on 24 Aug 2025
22 points (100.0% liked)

TechTakes

2235 readers
220 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] Soyweiser@awful.systems 10 points 1 month ago* (last edited 1 month ago) (9 children)
[–] limer@lemmy.ml 9 points 1 month ago (3 children)

It wasn't posted on Lemmy yet; I did search, and yours was the only thing I found. So I posted it in programming to rile people.

[–] BlueMonday1984@awful.systems 9 points 1 month ago
[–] dgerard@awful.systems 9 points 1 month ago (3 children)

TIL that "Aris Thorne" is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol

like the dumbass-ray version of Ballard calling multiple characters variants on "Traven"

what to do with this information

[–] YourNetworkIsHaunted@awful.systems 9 points 1 month ago (4 children)
[–] V0ldek@awful.systems 10 points 1 month ago

I'm sorry but this is zero percent surprising like ye of course he is, he was addicted to anime porn before AI generation probably

[–] blakestacey@awful.systems 8 points 1 month ago

sex weirdo (derogatory)

[–] BlueMonday1984@awful.systems 8 points 1 month ago (4 children)
[–] BigMuffN69@awful.systems 8 points 1 month ago (7 children)

https://www.argmin.net/p/the-banal-evil-of-ai-safety

Once again shilling another great Ben Recht post. This time calling out the fucking insane irresponsibility of "responsible" AI providers, who won't do the bare minimum to prevent people from having psychological breaks from reality.

"I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

[–] BlueMonday1984@awful.systems 8 points 1 month ago (1 children)

Found a couple articles about blunting AI's impact on education (got them off of Audrey Watters' blog, for the record).

The first is a New York Times guest essay by NYU vice provost Clay Shirky, which recommends "moving away from take-home assignments and essays and toward [...] assessments that call on students to demonstrate knowledge in real time."

The second is an article by Kate Manne calling for professors to prevent cheating via AI, which details her efforts in doing so:

Instead of take-home essays to write in their own time, I’ll have students complete in-class assignments that will be hand-written. I won’t allow electronic devices in my class, except for students who tell me they need them as a caregiver or first responder or due to a disability. Students who do need to use a laptop will have to complete the assignment using google docs, so I can see their revision history.

Manne does note the problems with this (outing disabled students, class time spent writing, and difficulties in editing, rewriting, and make-up work), but still believes "it is better, on balance, to take this approach rather than risk a significant proportion of students using AI to write their essays."

[–] Seminar2250@awful.systems 10 points 1 month ago* (last edited 1 month ago)

what worked for me teaching an undergrad course last year was to have

  • in-class exams weigh 90% of the total grade, but let them drop their lowest score
  • take-home work weigh 10% and be graded on completion (which i announced to the class, of course)
    • i was also diligent about posting solutions (sometimes before the due date; it's a completion grade after all) and i let students know that if they wanted direct feedback they could bring their solutions to office hours


it ended up working pretty well. an added benefit was that my TAs didn't have to deal with the nightmare of grading 120 very poorly written homeworks every four weeks. my students also stopped obsessing about the grades they would receive on their homeworks and instead focused on ~~learning~~ the grades they would receive on their exams

however, at the k-12 level, it feels like a much harder problem to tackle. parental involvement is the only solution i can think of, and that's already kind of a nightmare (at least here in the us)
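For anyone wanting to crib the scheme above, here's a minimal sketch of how the weighting works out. The function name, the 0-100 score scale, and the example numbers are illustrative assumptions; only the 90/10 split, the dropped lowest exam, and completion-based homework come from the comment.

```python
def final_grade(exam_scores, homework_completed, homework_total):
    """Grade per the scheme above: exams weigh 90% with the lowest
    score dropped; homework weighs 10%, graded on completion only.
    Scores assumed to be on a 0-100 scale (an assumption, not stated)."""
    # Drop the lowest exam score, unless there's only one exam.
    kept = sorted(exam_scores)[1:] if len(exam_scores) > 1 else exam_scores
    exam_avg = sum(kept) / len(kept)
    # Homework counts only for completion, scaled to 0-100.
    hw_score = 100 * homework_completed / homework_total
    return 0.9 * exam_avg + 0.1 * hw_score

# e.g. exams of 70, 85, 95 (70 dropped) and 8 of 10 homeworks turned in
print(final_grade([70, 85, 95], homework_completed=8, homework_total=10))
```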

[–] fnix@awful.systems 8 points 1 month ago* (last edited 1 month ago) (3 children)

Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to keep aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was indeed once the 4th most blocked. Perhaps he is just crying out to move up the ranks once more?

It’s really all about Bluesky employees being able to afford their healthcare for Mark you see.

And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment: as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.

[–] swlabr@awful.systems 7 points 1 month ago (5 children)

RE: (meta?)speculation about the quantum bubble. So the preceding two tech bubbles tried to worm their way into the arts: NFTs, and slop. How do we think the quantum hucksters are gonna try to co-opt creativity?

[–] Soyweiser@awful.systems 8 points 1 month ago

No idea, but the big problem area to look out for imho is when it gets matched with quantum woo again. And if people manage to link that to computers.
