I'm quite sure part of my last job loss was due to my open refusal to use AI, which extended to my criticism of AI-generated code being added to a codebase I was expected to manage and maintain. Being careful is seen as friction now; careful is translated as slow. No matter how much time is spent fixing things broken by shitty generated code, fixing is seen as productivity more than careful production is.
slow is smooth, smooth is fast
the most productive way to do things is to do them deliberately and with good planning, at least in my field, which is not related to coding in any way
the most productive way to do things is to do them deliberately and with good planning
Two things which coding is currently allergic to, as the rise of vibe coding has demonstrated
get yourself into a career where not doing things carefully makes them either stop working or cause accidents; it'll usually stop managerial assholes from forcing you to do things the wrong way
Refusing to use AI tools or output. Sabotage!
Definitely guilty of this. Refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).
I work in the field of law/accounting/compliance, btw.
Maybe it's also considered sabotage if people (like me) try prompting the AI with about 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. I guess you're expected to try and try again with different questions until one correct answer comes out and then use that one to "evangelize" about the virtues of AI.
This is how I tested too. It failed. Why would I believe it on anything else?
Even better to do the correspondence in writing.
That way you can show others (and hope that someone cares) what you rejected and what they were trying to push.
This may only be a problem if the people in charge don't understand why it's wrong. "But it sounds correct!" etc.
Not a problem.
If it manages to stay in history, hopefully someone after the next dark ages will read it and give you vindication.
You can definitely ask the AI for more jargon and add irrelevant details to make it practically unreadable. Pass this through the LLM to add more vocabulary, deep fry it, and send it to management.
I can't help but feel like, no matter how well-intentioned the actual recommendations (i.e. listen to your people when they tell you the AI is shit), this headline is going to be used to justify canning anyone who isn't sufficiently on board with wherever the C-suite wants to go. Even the generous (read: accurate) example of the historical Luddites could be used to tar people as saboteurs and enemies of progress, which would give a callous executive license to do the things they want to do anyway to try and increase profits.
This bubble can't pop soon enough, before anyone is truly reliant on the base LLMs operated directly by OpenAI and other bottomless money pits.
CIO even ends with talking up the Luddites — and how they smashed all those machines in rational self-defence.
I genuinely thought this wasn't true at first and went to check. It's completely true, a fucking business magazine's giving the Luddites their due:
Regardless of the fallout, fractional CMO Lars Nyman sees AI sabotage efforts as nothing new.
“This is luddite history revisited. In 1811, the Luddites smashed textile machines to keep their jobs. Today, it’s Slack sabotage and whispered prompt jailbreaking, etc. Human nature hasn’t changed, but the tools have,” Nyman says. “If your company tells people they’re your greatest asset and then replaces them with an LLM, well, don’t be shocked when they pull the plug or feed the model garbage data. If the AI transformation rollout comes with a whiff of callous ‘adapt or die’ arrogance from the C-suite, there will be rebellion.”
It may be in the context of warning capital not to anger labour too much, lest they inspire resistance, but it's still wild to see.
Reviewing his history, I don't think this article was actually written by the sort of commie who ends up at a finance paper and then belatedly remembers to tone it down a bit, but you'd be forgiven for thinking so.
It is not sabotage if they are helping you not commit to a long term strategy that is detrimental to the company.
Well, it is. It's not sabotaging the company, but it's absolutely sabotaging the initiative. (You can absolutely sabotage saboteurs.)
That said, it's not sabotage because no action of those is actually sabotage. It's just people telling their managers AI is bad on the job, or failing to make it good.
The Pivot to AI article says 31%, but the source says 41%. I think Pivot to AI just accidentally a digit a little bit.
Edit: oh, 31% of employees, but 41% of Millennial and Gen-Z.
the survey says all sorts of things, it's true
#Resist