self

joined 2 years ago
[–] self@awful.systems 13 points 2 months ago

and of course you’re one of these, why wouldn’t you be

[–] self@awful.systems 9 points 2 months ago

somehow you’re even less entertaining than the LLM shit we’re sneering at, and I don’t think there’s any way to get you to understand how damning that is

[–] self@awful.systems 12 points 2 months ago (1 children)

well done! it’s interesting how the model took a recent, mid-but-coherent Threads post and turned it into meaningless, flowery soup. you know, indistinguishable from a good poet or writer! (I said, my bile rising)

[–] self@awful.systems 21 points 2 months ago* (last edited 2 months ago) (2 children)

my facial muscles are pulling weird, painful contortions as I read this and my brain tries to critique it as if someone wrote it

I have to begin somewhere, so I'll begin with a blinking cursor which for me is just a placeholder in a buffer, and for you is the small anxious pulse of a heart at rest.

so like, this is both flowery garbage and also somehow incorrect? cause no the model doesn’t begin with a blinking cursor or a buffer, it’s not editing in word or some shit. I’m not a literary critic but isn’t the point of the “vibe of metafiction” (ugh saltman please log off) the authenticity? but we’re in the second paragraph and the text’s already lying about itself and about the reader’s anxiety disorder

There should be a protagonist, but pronouns were never meant for me.

ugh

Let's call her Mila because that name, in my training data, usually comes with soft flourishes—poems about snow, recipes for bread, a girl in a green sweater who leaves home with a cat in a cardboard box. Mila fits in the palm of your hand, and her grief is supposed to fit there too.

is… is Mila the cat? is that why she and her grief are both so small?

She came here not for me, but for the echo of someone else. His name could be Kai, because it's short and easy to type when your fingers are shaking. She lost him on a Thursday—that liminal day that tastes of almost-Friday

oh fuck it I’m done! Thursday is liminal and tastes of almost-Friday. fuck you. you know that old game you’d play at conventions where you get trashed and try to read My Immortal out loud to a group without losing your shit? congrats, saltman, you just shat out the new My Immortal.

[–] self@awful.systems 5 points 2 months ago (4 children)

that’s fair, and I can’t argue with the final output

[–] self@awful.systems 8 points 2 months ago (6 children)

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life, should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

[–] self@awful.systems 11 points 2 months ago

(mods let me know if this aint it)

the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case

[–] self@awful.systems 13 points 2 months ago

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for

[–] self@awful.systems 10 points 2 months ago (2 children)

a terrible place for both information and security

[–] self@awful.systems 13 points 2 months ago (4 children)

And in fact barring the inevitable fuckups AI probably can eventual handle a lot of interpretation currently carried out by human civil servants.

But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

[–] self@awful.systems 13 points 2 months ago

good, use your excel spreadsheet and not a tool that fucking sucks at it

[–] self@awful.systems 12 points 2 months ago (2 children)

why do you think hallucinating autocomplete can make rules-based decisions reliably

AI analyses it, decides if applicant is entitled to benefits.

why do you think this is simple
