BlueMonday1984

[–] BlueMonday1984@awful.systems 9 points 1 month ago

New post from tante: The “Data” Narrative eats itself, using the latest Pivot to AI as a jumping-off point to talk about synthetic data.

[–] BlueMonday1984@awful.systems 3 points 1 month ago

Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be taken sparingly, used carefully, and deleted when necessary, but as Objective Reality Units^tm^ which are theirs to steal and theirs alone.

[–] BlueMonday1984@awful.systems 14 points 1 month ago (8 children)

Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:

[–] BlueMonday1984@awful.systems 6 points 1 month ago

If AI slop is an insult to life itself, then this shit is an insult to knowledge. Any paper that actually uses "synthetic data" should be immediately retracted (and ideally destroyed altogether), but it'll probably take years before the poison is purged from the scientific record.

Artificial intelligence is the destruction of knowledge for profit. It has no place in any scientific endeavor. (How you managed to maintain a calm, detached tone when talking about this shit, I will never know.)

[–] BlueMonday1984@awful.systems 5 points 1 month ago (1 children)

Saw an AI-extruded "art" "timelapse" in the wild recently - the "timelapse" in question isn't gonna fool anyone who actually cares about art, but it's Good Enough^tm^ to pass muster with someone mindlessly scrolling, and its creation serves only to attack artists' ability to prove their work was human-made.

This isn't the first time AI bros have pulled this shit (Exhibit A, Exhibit B), by the way.

[–] BlueMonday1984@awful.systems 6 points 1 month ago

Burke and Goodnough are working to rectify the report. That sounds like removing the fake stuff but not the conclusions based on it. Those were determined well ahead of time.

In a better world, those conclusions would've been immediately thrown out as lies and Burke and Goodnough would've been immediately fired. We do not live in a better timeline, but a man can dream.

[–] BlueMonday1984@awful.systems 5 points 1 month ago

This isn't the first time I've heard about this - Baldur Bjarnason's talked before about how text extruders can be poisoned to alter their outputs, noting the technique's potential for manipulating search results and/or serving propaganda.

Funnily enough, calling a poisoned LLM a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that its word-prediction attempts are being manipulated to produce specific outputs. That's still treating these spicy autocompletes with more sentience than they actually have, though.

[–] BlueMonday1984@awful.systems 11 points 1 month ago

Not to mention, Cursor's going to be training on a lot of highly sensitive material (confidential data, copyrighted code, potential trade secrets) - the moment that shit starts to leak, all hell's gonna break loose on the legal front.

[–] BlueMonday1984@awful.systems 7 points 1 month ago

With AI, of course

[–] BlueMonday1984@awful.systems 10 points 1 month ago

Now, you might object: Anysphere wouldn’t be abusing just their customers’ data. Their customers’ customers’ data may have non-disclosure agreements with teeth. Then there’s personal data covered by the GDPR and so on.

If we're lucky, this will spook customers into running for the hills and hasten its demise. Whatever magical performance benefits Cursor's promising aren't gonna be worth getting blamed for a data breach.

[–] BlueMonday1984@awful.systems 6 points 1 month ago

The report claims it's about ethical AI use, but all I see is evidence that AI is inherently unethical, and an argument for banning AI from education forever.

[–] BlueMonday1984@awful.systems 6 points 1 month ago (1 children)

OpenAI’s choices don’t make any long-term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment.

Another possibility is that Altman's bought into his own hype, and genuinely believes OpenAI will achieve AGI before the money runs out. Considering the tech press has been uncritically hyping up AI in general, and Sammy Boy himself has publicly fawned over "metafiction" "written" by an in-house text extruder, it's a possibility I'm not gonna discount.
