this post was submitted on 31 Aug 2025
81 points (96.6% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] fasterandworse@awful.systems 12 points 2 days ago* (last edited 2 days ago) (1 children)

I'm quite sure part of my last job loss was due to my open refusal to use AI, which extended to my criticism of AI-generated code being added to a codebase I was expected to manage and maintain. Being careful is seen as friction now; careful is translated as slow. No matter how much time is spent fixing things broken by shitty generated code, fixing is seen as productivity more than careful production is.

[–] fullsquare@awful.systems 13 points 1 day ago (1 children)

slow is smooth, smooth is fast

the most productive way to do things is to do it deliberately and with good planning, at least in my field which is not coding related in any way

[–] BlueMonday1984@awful.systems 6 points 1 day ago (1 children)

the most productive way to do things is to do it deliberately and with good planning

Two things which coding is currently allergic to, as the rise of vibe coding has demonstrated

[–] fullsquare@awful.systems 3 points 1 day ago

get yourself into a career where not doing things carefully makes them either stop working or cause accidents, it'll usually stop managerial assholes from forcing you to do things the wrong way

[–] HedyL@awful.systems 37 points 2 days ago (3 children)

Refusing to use AI tools or output. Sabotage!

Definitely guilty of this. Refused to use AI generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

I work in the field of law/accounting/compliance, btw.

[–] HedyL@awful.systems 13 points 2 days ago* (last edited 2 days ago) (1 children)

Maybe it's also considered sabotage if people (like me) try prompting the AI with about 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. I guess you're expected to try and try again with different questions until one correct answer comes out, and then use that one to "evangelize" about the virtues of AI.

[–] Slatlun@lemmy.ml 8 points 2 days ago

This is how I tested too. It failed. Why would I believe it on anything else?

[–] ulterno@programming.dev 7 points 2 days ago (1 children)

Even better to do the correspondence in writing.
That way you can show others (and hope that someone cares) what you rejected and what they were trying to push.

[–] HedyL@awful.systems 7 points 2 days ago (1 children)

This may only be a problem if the people in charge don't understand why it's wrong. "But it sounds correct!" etc.

[–] ulterno@programming.dev 5 points 2 days ago

Not a problem.

If it manages to stay in history, hopefully someone after the next dark ages will read it and give you vindication.

[–] tazeycrazy@feddit.uk 2 points 2 days ago

You can definitely ask AI for more jargon and add information about irrelevant details to make it practically unreadable. Pass this through the LLM to add more vocabulary, deep fry it, and send it to management.

I can't help but feel like no matter how well-intentioned the actual recommendations (i.e. listen to your people when they tell you the AI is shit) this headline is going to be used to justify canning anyone who isn't sufficiently on board with wherever the C-suite wants to go. Even the generous (read: accurate) example of the historical luddites could be used to tar people as saboteurs and enemies of progress, which would give a callous executive license to do the things they want to do anyways to try and increase profits.

This bubble can't pop soon enough, before anyone is truly reliant on the base LLMs operated directly by OpenAI and other bottomless money pits.

[–] BlueMonday1984@awful.systems 20 points 2 days ago (1 children)

CIO even ends with talking up the Luddites — and how they smashed all those machines in rational self-defence.

I genuinely thought this wasn't true at first and went to check. It's completely true, a fucking business magazine's giving the Luddites their due:

Regardless of the fallout, fractional CMO Lars Nyman sees AI sabotage efforts as nothing new.

“This is luddite history revisited. In 1811, the Luddites smashed textile machines to keep their jobs. Today, it’s Slack sabotage and whispered prompt jailbreaking, etc. Human nature hasn’t changed, but the tools have,” Nyman says. “If your company tells people they’re your greatest asset and then replaces them with an LLM, well, don’t be shocked when they pull the plug or feed the model garbage data. If the AI transformation rollout comes with a whiff of callous ‘adapt or die’ arrogance from the C-suite, there will be rebellion.”

It may be in the context of warning capital not to anger labour too much, lest they inspire resistance, but it's still wild to see.

[–] dgerard@awful.systems 5 points 2 days ago* (last edited 2 days ago)

reviewing his history, i don't think this article was actually written by the sort of commie who ends up at finance papers then belatedly remembered to tone it down a bit, but you'd be forgiven for thinking so

[–] Soyweiser@awful.systems 16 points 2 days ago (1 children)

It is not sabotage if they are helping you avoid committing to a long-term strategy that is detrimental to the company.

[–] marcos@lemmy.world 8 points 2 days ago

Well, it is. It's not sabotaging the company, but it's absolutely sabotaging the initiative. (You can absolutely sabotage saboteurs.)

That said, it's not sabotage because none of those actions is actually sabotage. It's just people telling their managers AI is bad on the job, or failing to make it good.

[–] dgerard@awful.systems 8 points 2 days ago

pile of youtube comments quoting the Beastie Boys

[–] TootSweet@lemmy.world 7 points 2 days ago* (last edited 2 days ago) (1 children)

The Pivot to AI article says 31%, but the source says 41%. I think Pivot to AI just accidentally a digit a little bit.

Edit: oh, 31% of employees, but 41% of Millennial and Gen-Z.

[–] dgerard@awful.systems 6 points 2 days ago

the survey says all sorts of things, it's true

[–] LillyPip@lemmy.ca 4 points 2 days ago