this post was submitted on 07 Oct 2025
694 points (98.6% liked)

Technology

top 50 comments
[–] latenightnoir@lemmy.blahaj.zone 98 points 1 week ago* (last edited 1 week ago) (3 children)

This is how AI will take over... not through wars or competence, but by being better at bureaucratic forgeries...

Edit: well, I guess the apple never falls far from the tree, as it were! Wa-hey! We wanted to create the ultimate worker, but we've managed to create the ultimate politician instead=))

[–] Gullible@sh.itjust.works 23 points 1 week ago (2 children)

AI politicians might be the move after next.

Corporate personhood (you are here) ->
Corporations self-advocate ->
Corporations run for office

I don’t like this future. I’d like to go back.

[–] zqwzzle@lemmy.ca 12 points 1 week ago (1 children)
[–] Gullible@sh.itjust.works 15 points 1 week ago

I said I DON’T LIKE THIS FUTURE. I’D LIKE TO GO BACK

[–] Lucidlethargy@sh.itjust.works 6 points 1 week ago

It's easy when the first line of every reply is "oh, you're so goddamn smart. Holy shit, are you the smartest person in the world for asking that question?..."

https://youtu.be/TuEKb9Ktqhc

[–] Buffalox@lemmy.world 72 points 1 week ago* (last edited 1 week ago) (9 children)

I think the fad will die down a bit when companies figure out that AI is more likely than humans to make very expensive mistakes the company has to compensate for, and that saying it was the AI is not a valid cop-out.
I foresee companies going bankrupt on that account.

It doesn't help to save $100k by cutting an employee if the AI causes damages worth 10 or 100 times that amount.

[–] shalafi@lemmy.world 23 points 1 week ago (1 children)

When the bubble bursts, whoever is left standing is going to have to jack prices through the roof to put so much as a dent in their outlay so far. Can't see many companies hanging in there at that point.

[–] BanMe@lemmy.world 9 points 1 week ago (1 children)

Not if the IP is purchased by another company, leaving the original saddled with the debt; or spun off so the parent company can buy it back later; or the government bails them out, or buys it to be the State AI too; or a bunch of other scenarios in this dark new world ahead.

[–] a4ng3l@lemmy.world 8 points 1 week ago

I'm putting my money on the AI Act here in Europe and the willingness of local authorities to make a few examples. That would help bring some accountability here and there and stir the pot a bit. Eventually, as AI becomes a commodity, it will be less in the spotlight. That will also help.

[–] jaybone@lemmy.zip 40 points 1 week ago (2 children)

LOLLLLLLLL that’s like a third of the US population. Probably half of the number currently employed. There’s no way in hell this useless garbage will take 1/3 to 1/2 of all jobs. Companies that do this will go out of business fast.

[–] skisnow@lemmy.ca 18 points 1 week ago (1 children)

You can tell how competent someone is at something by how good they think AI is at that thing.

[–] pinball_wizard@lemmy.zip 4 points 1 week ago (3 children)

You can tell how competent someone is at something by how good they think AI is at that thing.

This is so true.

I recently had a colleague - ignorant of this perspective - give a training presentation on using AI to update a useless, bullshit-job kind of document.

Dozens of peers attended their presentation. They spent 40 minutes demonstrating relatively mindless prompt inputting.

I keep remembering just how many people they shared their AI enthusiasm with.

I think they may honestly believe that AI has democratized the workplace, and that they will vibe code their way to successful startup CEO-ship in a year.

[–] Zephorah@discuss.online 34 points 1 week ago (2 children)

Thus demonstrating the crux of the issue.

I was just looking for the name of a historical figure associated with the Declaration of Independence but not involved in the writing of it: Elizabeth Powel. Once I knew the name, I went through the AIs to see how fast they'd get it. Duck.ai confidently gave me 9 different names, including people who were born in 1776 or soon thereafter and could not have been historically involved in any of it. I even said not married to any of the writers and kept getting Abigail Adams and the journalist, Goddard. It was continually distracted by "prominent woman" and would give Elizabeth Cady Stanton instead. Twice.

Finally, I gave the AI a portrait. It took three tries to get the name from the portrait, and the portrait is the most used one under the images tab.

It was very sad. I strongly encourage everyone to test the AI. Easy-to-grab wikis that would be at the top of the search anyway are making the AI look good.

[–] merc@sh.itjust.works 22 points 1 week ago

If you understand how LLMs work, that's not surprising.

LLMs generate a sequence of words that makes sense in that context. It's trained on trillions(?) of words from books, Wikipedia, etc. In most of the training material, when someone asks "what's the name of the person who did X?" there's an answer, and that answer isn't "I have no fucking clue".

Now, if it were trained on a whole new corpus of data that had "I have no fucking clue" a lot more often, it would see that as a reasonable thing to print sometimes so you'd get that answer a lot more often. However, it doesn't actually understand anything. It just generates sequences of believable words. So, it wouldn't generate "I have no fucking clue" when it doesn't know, it would just generate it occasionally when it seemed like it was an appropriate time. So, you'd ask "Who was the first president of the USA?" and it would sometimes say "I have no fucking clue" because that's sometimes what the training data says a response might look like when someone asks a question of that form.
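
If you want to see the mechanics for yourself, here's a minimal toy sketch in Python (a crude bigram counter, nowhere near a real LLM; the tiny corpus and the generate() helper are made up purely for illustration). The point is that an answer-shaped continuation always comes out, whether or not there's anything behind it:

```python
import random
from collections import defaultdict, Counter

# Toy "training data": question/answer-shaped text, never "I don't know".
corpus = (
    "who wrote the declaration ? thomas jefferson wrote the declaration . "
    "who was the first president ? george washington was the first president ."
).split()

# Count which word follows which word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word, length=10):
    # Sample a statistically plausible continuation, word by word. There is
    # no notion of "I have no clue": the training text never says it, so
    # something confident-sounding always comes out instead.
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("who"))  # prints a confident, answer-shaped string, right or wrong
```

Real models do this with billions of parameters instead of a word-pair table, but the failure mode in question (never saying "I don't know" unless the training data said it) is the same shape.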

[–] Buffalox@lemmy.world 6 points 1 week ago (1 children)

LOL Maybe AI will be the next big job creator. The AI solves a task super fast, but then a human has to sort out the mistakes, and spends twice as long doing that as it would have taken to just do it themselves.

[–] DarkDarkHouse@lemmy.sdf.org 4 points 1 week ago

This is what's happening in computer programming. The booming subfield is apparently slop cleanup.

[–] Grandwolf319@sh.itjust.works 32 points 1 week ago (2 children)

If you have a job where you can be confidently wrong without any self-awareness after the fact, then yeah, I guess.

But I can’t think of many jobs like that except something that is mostly just politics.

[–] Blackfeathr@lemmy.world 25 points 1 week ago (1 children)

Don't forget the vast majority of CEOs.

[–] thisbenzingring@lemmy.sdf.org 7 points 1 week ago

IMO AI would probably do the job of CEO better than a human. It wouldn't be as greedy, and it would be happy with any growth while being humble enough to make decisions that might be personally embarrassing.

[–] Buffalox@lemmy.world 26 points 1 week ago (1 children)

And over the next 50 years it will take 485 million jobs, and the unemployment rate will be 235%.

[–] architect@thelemmy.club 4 points 1 week ago (1 children)
[–] popekingjoe@lemmy.world 5 points 1 week ago

Here's hoping!

[–] thisbenzingring@lemmy.sdf.org 24 points 1 week ago (3 children)

Funny... I expected IT workers to be on that list, but we're not. AI couldn't do my job, but it could be my boss, and that frightens me.

[–] BanMe@lemmy.world 9 points 1 week ago

I drove Amazon Flex during Covid; having an AI as your boss is deeply and perpetually unsettling but ultimately doable! Just do what the push notification tells you to do. If you want to say something to your boss, use the feedback form on the corporate website. So simple.

[–] MonkderVierte@lemmy.zip 23 points 1 week ago* (last edited 1 week ago) (2 children)

Stop calling LLMs AI. It creates false expectations.

[–] innermachine@lemmy.world 13 points 1 week ago (2 children)

The fact that "AI" trained on other LLM slop produces worse and worse results is proof there's no "intelligence" going on, just clever parroting.
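
You can watch that degradation in a crude toy simulation (this is only an illustrative stand-in, nothing like real LLM training: the Gaussian "data" and the keep-only-the-typical-half step are assumptions standing in for sampling that favours high-probability text):

```python
import random
import statistics

# Generation 0: the original "human-written" data.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for generation in range(1, 6):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # The next "model" learns only from its predecessor's output...
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    # ...and only the most typical outputs survive into the next round of
    # training data, the way sampling favours the most probable text.
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[: len(samples) // 2]
    print(f"generation {generation}: diversity = {statistics.pstdev(data):.3f}")

# Diversity collapses round after round: rare or novel content vanishes,
# because each "model" can only parrot the most common output of the last one.
```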

[–] 0x0@lemmy.dbzer0.com 4 points 1 week ago

It's not a lie if you can obtain a trademark... our LLM is AI™. Just like how Teslas are Fully Self Driving™.

[–] phutatorius@lemmy.zip 20 points 1 week ago

Just look at who's in charge of the Senate, and ask yourself if they are to be trusted to do anything but lie, steal and carry out witch hunts.

As for LLMs, unless driving contact-centre customer satisfaction scores even further through the floor counts as an achievement, so far there has been a vast volume of hype and wasted energy, and very little to show for it, except for some highly constrained point solutions which aren't significant enough to make an economic impact. Even then, the ROI is questionable.

[–] tidderuuf@lemmy.world 19 points 1 week ago (1 children)

Knowing the way our country is going, I would expect that in the end workers will have to pay an AI tax on their income, and most will start working 50 hours a week.

[–] ms_lane@lemmy.world 15 points 1 week ago (2 children)

It hasn't taken any jobs, but this will keep being repeated so it can be used as a bludgeon against pay rises and keeping up with inflation.

'you're lucky to have a job'

[–] UnderpantsWeevil@lemmy.world 10 points 1 week ago

It hasn’t taken any jobs

Microsoft to cut up to 9,000 more jobs as it invests in AI

Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions

Tesla’s layoffs hit Autopilot team as AI develops

A lot of these bozos are drinking their own Kool-Aid. They're laying off internal teams in droves and pivoting to "Vibes Coding" as a presumably more efficient method of internal development.

[–] ipkpjersi@lemmy.ml 8 points 1 week ago* (last edited 1 week ago) (2 children)

I disagree. I have literally heard of people being laid off because managers think that AI can and will replace actual workers; I have literally seen it too. It's already happening.

[–] quetzaldilla@lemmy.world 10 points 1 week ago* (last edited 1 week ago)

Corporations are firing and laying off labor, but that labor is not being done by AI-- it's simply falling on those who are still employed or not getting done at all.

I resigned from an international public accounting firm due to having AI forced on very sensitive and delicate projects in order to lower costs. As a professional, every alarm bell went off and I left because I could be held liable for their terrible managerial decisions.

They told me they were sad to see me go, but that AI is the future and they hoped I'd change my mind-- this was all back in April.

Not only did AI fail to do a fraction of the work we were told it was going to do, it caused over $2MM in client damages, which the firm then used to justify firing the remaining members of the project's team for failing to properly supervise the AI, even though every manager struggles to open a PDF.

AI is not the future because it is literally only capable of looking backwards.

AI is a performative regurgitation of information that real people put the time and energy into gathering, distilling, refining, and presenting to others to evaluate and contribute to.

Even worse, AI demonstrably makes its users dependent and intellectually lazy. If you think about it, the more prevalent AI usage becomes, the fewer capable people will be left to maintain it. And to all the fools crying out that AI will take care of itself, or that robots will, I say:

All LLMs are hallucinating and going psychotic, and that is not something that can be fixed due to the very nature of how LLMs work.

AI is not intelligent. And while it could be, it would take far too much energy and resources to make cost-effective machines with as many neural connections as are present in the brain of an average MAGA voter-- and that is already a super low bar for most of us to clear.

[–] Smoogs@lemmy.world 15 points 1 week ago

Yes, but god forbid those jobs be stolen by another country. Can't have that.

[–] ZombieMantis@lemmy.world 15 points 1 week ago

This shit's so embarrassing

[–] _stranger_@lemmy.world 12 points 1 week ago* (last edited 1 week ago)

So they want to keep them terrified of losing their shitty, barely functioning status quo.

The reality is that these are the numbers the Republicans want, because they're the numbers their billionaire owners want. ChatGPT is just accidentally letting us know how they've poisoned the models.

[–] finitebanjo@lemmy.world 6 points 1 week ago* (last edited 1 week ago) (1 children)

My prediction is that AI, as in LLMs, will be responsible for 0 net jobs lost but simultaneously responsible for many companies going under.

People will lose their jobs to AI in the same way that lumberjacks lose their job to forest fires.

[–] normalexit@lemmy.world 6 points 1 week ago (1 children)

If we can make it through to midterm elections I will worry then.

[–] tal@olio.cafe 4 points 1 week ago* (last edited 1 week ago)

I wouldn't put it entirely outside the realm of possibility, but I think that that's probably unlikely.

The entire US only has about 161 million people working at the moment. In order for a 97 million shift to happen, you'd have to manage to transition most human-done work in the US to machines, using one particular technology, in 10 years.

Is that technically possible? I mean, theoretically.

I'm pretty sure that to do something like that, you'd need AGI. Then you'd need to build systems that leveraged it. Then you'd need to get it deployed.

What we have today is most certainly not AGI. And I suspect that we're still some ways from developing AGI. So we aren't even at Step 1 of that three-part process, and I would not at all be surprised if AGI turns out to be a gradual development process rather than a "Eureka" moment.

[–] melfie@lemy.lol 4 points 1 week ago

AI isn’t taking the jobs, dipshit rich assholes are cutting the jobs. Taking a job implies doing the job, and from that perspective, the remaining people who weren’t laid off are taking the jobs, not AI.

[–] humanspiral@lemmy.ca 4 points 1 week ago (3 children)

robot tax

We need to stop with stupid gimmicks from Bernie. Higher personal, corporate, and investment taxes should fund UBI. Welcome robots/automation to free us from useless work instead of looking at cannibalistic solutions of "pick me" for the one job there is.

Robot taxes are wrongheaded, because automation is hard to define. Tax pipes and wires and you'd get full employment: everyone fetching their energy and water from the river with buckets and chopping down all the trees. Even if we strained to define narrow robot/automation categories, it would encourage more foreign production and no local robot-production economy. Why would those selling yachts to the robot owners not be taxed?

[–] IronBird@lemmy.world 6 points 1 week ago (9 children)

We don't even have universal healthcare or functional public transit; UBI is a pipe dream...

[–] Kyle_The_G@lemmy.world 4 points 1 week ago

and then 115 million will be needed to unwind the half-assed implementation and inevitable damage.
