BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 7 points 2 weeks ago

New post from Matthew Hughes: People Are The Point, effectively a manifesto against gen-AI as a concept.

[–] BlueMonday1984@awful.systems 6 points 2 weeks ago (5 children)

The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.

[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (1 children)

The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS' capacity to find and fix bugs.

Baldur Bjarnason has predicted that FOSS is at risk of being hit by "a vicious cycle leading to collapse", and security is a major part of his hypothesised cycle:

  1. Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.

  2. Many of these projects either bitrot serious bugs or get taken over by malicious actors who are highly motivated because they can’t rely on pervasive memory bugs anymore for exploits.

  3. OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.

  4. That decline in users leads to even more maintainers stepping back.

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago (5 children)

Potential hot take: AI is gonna kill open source

Between sucking up a lot of funding that would otherwise go to FOSS projects, DDoSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I'm not sure it can recover from.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago (1 children)

Reading through some of the examples at the end of the article, it’s infuriating how these slop reports get opened and, when the patient curl developers try to give them the benefit of the doubt, the reporter replies with “you have a vulnerability and I cannot explain further since I’m not an expert”.

At that point, I feel the team would be justified in telling these slop-porters to go fuck themselves and closing the report - they've made it crystal clear they're beyond saving.

(And on a wider note, I suspect the security team is gonna be a lot less willing to give benefit of the doubt going forward, considering the slop-porters are actively punishing them for doing so)

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago

This is pure speculation, but I suspect machine learning as a field is going to tank in funding and get its name dragged through the mud by the popping of the bubble, chiefly due to its (current) near-inability to separate itself from AI as a concept.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago

Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

Considering how many are Trump bros, they probably consider getting consent to be Cuck Shit^tm^ and treat hearing anything but English as sufficient grounds to call ICE.

[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (1 children)

Found an unironic AI bro in the wild on Bluesky:

If you want my unsolicited thoughts on the line between man and machine: I feel this bubble has done more to clarify that line than to blur it, both by showcasing the flaws and limitations inherent to artificial intelligence, and by highlighting the aspects of human minds which cannot be replicated.

[–] BlueMonday1984@awful.systems 16 points 2 weeks ago (3 children)

The curl Bug Bounty is getting flooded with slop, and the security team is prepared to do something drastic to stop it. Going by this specific quote, reporters falling for the hype is a major issue:

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago

Shot-in-the-dark prediction here - the Xbox graphics team probably won't be filling those positions any time soon.

As a sidenote, part of me expects more such cases to crop up in the following months, simply because the widespread layoffs and enshittification of the entire tech industry are gonna wipe out everyone who cares about quality.

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago

New high-strength sneer from Matthew Hughes: The Biggest Insult, targeting "The Unspeakable Contempt At The Heart of Generative AI"

[–] BlueMonday1984@awful.systems 9 points 3 weeks ago (1 children)

It would be really funny if Devin caused a financial crash this way

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

 

None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I've said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can't blame Zitron for being pretty downbeat in this - given the AI bubble's size and side-effects, it's easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming

 

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.
