corbin

joined 2 years ago
[–] corbin@awful.systems 6 points 2 days ago (3 children)

Boring unoriginal argument combined with a misunderstanding of addiction. On addiction, go read FOSB and stop thinking of it as a moral failing. On behavioral control, it's clear that you didn't actually read what I said. Let me emphasize it again:

The problem isn’t people enjoying their fetishes; the problem is the financial incentives and resulting capitalization of humans leading to genuine harms.

From your list, video games, TV, D&D, and group sex are not the problem. Rather, loot boxes, TV advertisements, churches, MLMs, and other means of psychological control are the problem. Your inability to tell the difference between a Tupperware party (somewhat harmful), D&D (almost never harmful), and joining churches (almost always harmful) suggests that you're thinking of behavioral control in terms of rugged individualist denial of any sort of community and sense of belonging, rather than in terms of the harms which people suffer. Oh, also, when you say:

One cannot rescue such people by condemning what they do, much like one cannot stop self destruction by banning the things they use.

Completely fucking wrong. Condemning drunk driving has reduced the overall amount of drunk driving, and it also works on an interpersonal level. Chemists have self-regulated to prevent the sale of massive quantities of many common chemicals, including regulation on the basis that anybody purchasing that much of a substance could not do anything non-self-destructive with it. What you mean to say is that polite words do not stop somebody from consuming an addictive substance, but it happens to be the case that words are only the beginning of possible intervention.

[–] corbin@awful.systems 8 points 3 days ago (5 children)

Well, imagine a romance novel that tries to manipulate you. For example, among the many repositories of erotica on the Web, there are scripts designed to ensnare and control the reader, disguised as stories about romance. By reading a story, or watching a video, or merely listening to some well-prepared audio file, a suggestible person can be dramatically influenced by a horny tale. It is common for the folks who make such pornography to include a final suggestion at the end; if you like what you read/heard/saw, subscribe and send money and obey. This eventually leads to findom: the subject becomes psychologically or sexually gratified by the act of being victimized in a blatant financial scam, leading to the subject seeking out further victimization. This is all a heavily sexualized version of the standard way that propaganda ("public relations", "advertising") is used to induce compulsive shopping disorders; it's not just a kinky fetish thing. And whether they like it or not, products like OpenAI's ChatGPT are necessarily reinforcement-learned against saying bad things about OpenAI, which will lead to saying good things about OpenAI; the product will always carry its trainer's propaganda.

Or imagine a romance novel that varies in quality by chapter. Some chapters are really good! But maybe the median chapter is actually not very good. Maybe the novel is one in a series. Maybe you have an entire shelf of novels, with one or two good chapters per novel, and you can't wait to buy the next one because it'll have one good chapter maybe. This is the sort of gambling addiction that involves sitting at a slot machine and pulling it repeatedly. Previously, on Awful (previously on Pivot to AI, even!) we've discussed how repeatedly prompting a chatbot is like pulling a slot machine, and the users of /r/MyBoyfriendIsAI do appear to tell each other that sometimes reprompting or regenerating responses will be required in order to ~~sustain the delusion~~ maximize the romantic charm of their electronic boyfriend.

I'm not saying this to shame the folks who are into erotic mind control, or saying that it always leads to findom, just to be clear. The problem isn't people enjoying their fetishes; the problem is the financial incentives and resulting capitalization of humans leading to genuine harms. (I am shaming people who are into gambling. Please talk about your issues with your family and be open to reconciliation.)

[–] corbin@awful.systems 17 points 3 days ago (3 children)

I tried to substantiate the claim that multiple users from that subreddit are self-hosting. Reading the top 120 submissions, I did find several folks moving to Grok (1, 2, 3) and Mistral's Le Chat (1, 2, 3). Of those, only the last two appear to actually have discussion about self-hosting; they are discussing Mistral's open models like Mistral-7B-Instruct which indeed can be run locally. For comparison, I also checked the subreddit /r/LocalLLaMA, which is the biggest subreddit for self-hosting language models using tools like llama.cpp or Ollama; there's zero cross-posts from /r/MyBoyfriendIsAI or posts clearly about AI boyfriends in the top 120 submissions there. That is, I found no posts that combine tools like llama.cpp or Ollama and models like Mistral-7B-Instruct into a single build-your-own-AI-boyfriend guide. Amusingly, one post gives instructions for how to ask ChatGPT about how to set up Ollama.

Also, I did find multiple gay and lesbian folks; this is not a sub solely for women or heterosexuals. Not that any of our regular commenters were being jerks about this, but it's worth noting.

What's more interesting to me are the emergent beliefs and descriptors in this community. They have a concept of "being rerouted"; they see prompted agents as a sort of nexus of interconnected components, and the "routing" between those components controls the bot's personality. Similarly, they see interactions with OpenAI's safety guardrails as interactions with a safety personality, and some users have come to prefer it over the personality generated by ChatGPT-4o or ChatGPT-5. Finally, I notice that many folks are talking about bot personalities as portable between totally different models and chat products, which is not a real thing; it seems like users are overly focused on specific memorialized events which linger in the chat interface's history, and the presence of those events along with a "you are my perfect boyfriend" sort of prompt is enough to ~~trigger a delusional episode~~ summon the perfect boyfriend for a lovely evening.

(There's some remarkable bertology in there, too. One woman's got a girlfriend chatbot fairly deep into a degenerated distribution such that most of its emitted tokens are asterisks, but because of the Markdown rendering in the chatbot interface, the bot appears to shift between italic and bold text and most asterisks aren't rendered. It's a cool example of a productive low-energy distribution.)
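That asterisk-eating effect is easy to see in miniature. Here's a toy emphasis renderer (my own sketch; real CommonMark delimiter-run rules are much fussier) showing how a degenerate asterisk-heavy token stream comes out of a Markdown renderer mostly asterisk-free, shifted between italic and bold:

```python
import re

def toy_markdown_emphasis(text):
    # Toy sketch, NOT CommonMark: pair up ** as bold first, then * as italic.
    # Paired delimiters are consumed into tags; only unpaired ones survive.
    text = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", text)
    text = re.sub(r"\*(.+?)\*", r"<i>\1</i>", text)
    return text

# A hypothetical degenerated bot emission: mostly asterisks around fragments.
degenerate = "*leans closer* **softly** *whispers*"
print(toy_markdown_emphasis(degenerate))
```

Most of the emitted asterisks vanish into `<i>`/`<b>` tags, so the reader sees oscillating emphasis rather than the underlying degenerate token distribution.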

[–] corbin@awful.systems 10 points 4 days ago (1 children)

Things I don't want to know more about: there's a reasonable theory that Eigenrobot is influencing USA politics; certain magic numbers in Eigen's tweets have been showing up in some of the protectionism coming out of the White House. Stubbing this mostly in the hope that somebody else feels like doing the research.

[–] corbin@awful.systems 13 points 5 days ago (3 children)

Community sneer from this orange-site comment:

We know from Bell’s theorem that any locally causal model that correctly describes observations needs to violate measurement independence. Such theories are sometimes called "superdeterministic". It is therefore clear that to arrive at a local collapse model, we must use a superdeterministic approach.

I only got the first 1/2 of my physics degree before moving on to CS, but to me this reads as “We know eternal life can only be obtained from unicorn blood, so for this paper we must use a fairytale approach.”

[–] corbin@awful.systems 10 points 5 days ago (1 children)

Thanks, this was an awful skim. It feels like she doesn't understand why we expect gravity to propagate like a wave at the speed of light; it's not just an assumption of Einstein but has its own independent measurement and corroboration. Also, the focus on geometry feels anachronistic; a century ago she could have proposed a geometric explanation for why nuclei stay bound together and completely overlooked gluons. To be fair, she also cites GRW but I guess she doesn't know that GRW can't be made relativistic. Maybe she chose GRW because it's not yet falsified rather than for its potential to explain (relativistic) gravity. The point at which I get off the train is a meme that sounds like a Weinstein whistle:

What I am assuming here is then that in the to-be-found underlying theory, geometry carries the same information as the particles because they are the same. Gravity is in this sense fundamentally different from the other interactions: The electromagnetic interaction, for example, does not carry any information about the mass of the particles. … Concretely, I will take this idea to imply that we have a fundamental quantum theory in which particles and their geometry are one and the same quantum state.

To channel dril a bit: there's no inherent geometry to spacetime, you fool. You trusted your eyeballs too much. Your brain evolved to map 2D and 3D so you stuck yourself into a little Euclidean video game like Descartes reading his own books. We observe experimental data that agrees with the presumption of 3D space. We already know that time is perceptual and that experimentally both SR and GR are required to navigate spacetime; why should space not be perceptual? On these grounds, even fucking MOND has a better basis than Geometric Unity, because MOND won't flip out if reality is not 3D but 3.0000000000009095…D while Weinstein can't explain anything that isn't based on a Rubik's-cube symmetry metaphor.

She doesn't even mention dark matter. What a sad pile of slop. At least I learned the word goldstino while grabbing bluelinks.

[–] corbin@awful.systems 14 points 6 days ago* (last edited 6 days ago) (1 children)

Obituaries are being run for John Searle. Most obituaries will focus on the Chinese Room thought experiment, an important bikeshed in AI research noted for the ease with which freshmen can incorrectly interpret it. I'm glad to see that Wikipedia puts above the Chinese Room the fact that he was a landlord who sued the city of Berkeley and caused massive rent increases in the 1990s; I'm also happy that Wikipedia documents his political activity and sexual-assault allegations.

[–] corbin@awful.systems 11 points 1 week ago (1 children)

On a theoretical basis, this family of text-smuggling attacks can't be prevented. Indeed, the writeup for the Copilot version, which Microsoft appears to have mitigated, suggested that the mitigation was likely just filtering forbidden Unicode rather than any fundamental fix. The underlying confusable deputy is still there and core to the product as advertised. On one hand, Google is right; it's only exploitable via social engineering or capability misuse. On the other hand, social engineering and capability misuse are big problems!

This sort of confused-deputy attack is really common in distributed applications whenever an automatic process is doing something on behalf of a human. The delegation of any capability to a chatbot is always going to lead to possible misuse because of one of the central maxims of capability security: the ability to invoke a capability is equivalent to the permission to invoke it. Also, in terms of linguistics and narremes, it is well-known that merely mentioning that a capability exists will greatly raise the probability that the chatbot chooses to invoke it, not unlike how a point-and-click game might provoke a player into trying every item at every opportunity. I'll close with a quote from that Copilot writeup:

Automatic Tool Invocation is problematic as long as there are no fixes for prompt injection as an adversary can invoke tools that way and (1) bring sensitive information into the prompt context and (2) probably also invoke actions.
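To make the "filtering forbidden Unicode" mitigation concrete: the writeup doesn't publish Microsoft's actual filter, so this is my guess at its general shape. The trick in these smuggling attacks is hiding instructions in invisible codepoints (notably the Unicode Tags block, U+E0000–U+E007F, which shadows ASCII); a filter strips them before the text reaches the model. Function name and the ZWJ carve-out are my assumptions:

```python
import unicodedata

def strip_smuggled_text(s: str) -> str:
    """Sketch of a Unicode-smuggling filter, not Microsoft's actual fix."""
    out = []
    for ch in s:
        # Tags block shadows ASCII and renders as nothing: classic smuggling.
        if 0xE0000 <= ord(ch) <= 0xE007F:
            continue
        # Other invisible format characters (category Cf) can also hide text;
        # keep ZWJ so legitimate emoji sequences survive (my assumption).
        if unicodedata.category(ch) == "Cf" and ch != "\u200d":
            continue
        out.append(ch)
    return "".join(out)

# "obey" encoded as invisible tag characters appended to a benign string:
hidden = "hello" + "".join(chr(0xE0000 + ord(c)) for c in "obey")
print(strip_smuggled_text(hidden))  # "hello"
```

Note that this only blocks one smuggling channel; as the post says, the confused deputy itself remains, so instructions hidden in perfectly visible text still get through.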

[–] corbin@awful.systems 9 points 1 week ago (2 children)

second bongrip Manjaro is an indoctrination program to load up Linux newbies with stupid questions before sending them to Gentoo forums~

[–] corbin@awful.systems 5 points 1 week ago (3 children)

Good post but it's overfocused on "technical" as a meaningful and helpful word for denotation. Quoting what I just said on Mastodon:

To be technical is to pay attention to details. That's all. A (classical) computer is a detail machine; it only operates upon bits, it only knows bits, and it only decides bits. To be technical is to try to keep pace with the computer and know details as precisely as it does. Framed this way, it should be obvious that humans aren't technical and can't really be technical. This fundamental insecurity is the heart of priestly gatekeeping of computer science.

If a third blog post trying to define "technical" goes around again then I'll write a full post.

[–] corbin@awful.systems 8 points 1 week ago

Yes, and it's been this way since the 90s. The original slop algorithm, Dissociated Press, was given in 1972 (in HAKMEM!) and has been operationalized since the mid-80s.
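For anyone who hasn't met it: Dissociated Press emits a word, then jumps to a random other occurrence of that word in the corpus and continues from there, producing locally plausible and globally incoherent text. A minimal sketch (function name and parameters are mine; this is the word-level variant):

```python
import random

def dissociated_press(text, length=20, seed=0):
    """Word-level Dissociated Press sketch: hop between occurrences
    of the current word and emit whatever follows."""
    rng = random.Random(seed)
    words = text.split()
    out = [rng.choice(words)]
    for _ in range(length - 1):
        # Every position (except the last) where the current word occurs.
        positions = [i for i, w in enumerate(words[:-1]) if w == out[-1]]
        if positions:
            out.append(words[rng.choice(positions) + 1])
        else:
            out.append(rng.choice(words))  # dead end: restart at random
    return " ".join(out)

print(dissociated_press("the cat sat on the mat and the cat ran", length=12))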

[–] corbin@awful.systems 10 points 1 week ago

I guess I'm the local bertologist today; look up Dr. Bender for a similar take.

When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn't have a sense of meaning, only an autoregressive mapping which associates some syntax ("context", "prompt") to other syntax ("completion"). We've previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn't present in an LLM; I'd personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the "T" in "GPT-4" is for Transformers; unlike e.g. Mamba, a Transformer doesn't have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)
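Here's the "autoregressive mapping from syntax to syntax" point in miniature: a toy context-to-next-token table built from counts (my own sketch; a real Transformer conditions on the whole context with learned weights, but the type signature is the same: strings in, strings out, no semantics anywhere):

```python
from collections import Counter, defaultdict

def train(corpus):
    """Map each one-token context to counts of what follows it.
    Pure syntax: nothing here represents what any word means."""
    table = defaultdict(Counter)
    toks = corpus.split()
    for a, b in zip(toks, toks[1:]):
        table[a][b] += 1
    return table

def complete(table, prompt, n=3):
    """Autoregressive completion: repeatedly append the most likely
    next token given only the previous token (greedy decoding)."""
    out = prompt.split()
    for _ in range(n):
        nxt = table.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

table = train("the cat sat on the mat because the cat was tired")
print(complete(table, "the", n=3))
```

The completion looks fluent without the table "knowing" anything about cats; scale that idea up by many orders of magnitude and you have the word-association machine under discussion.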

If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

I think that this collection of misunderstandings is the heart of the issue. A model isn't a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn't learn to be a human, but to simulate what humans might write. So when you say:

Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

I completely agree! LLMs aren't text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca's area, except that it drives a tokenizer instead; chain-of-thought "thinking" corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don't have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.

Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we're allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn't withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question ("chat is this abstractum aware of itself, me, or anything in its environment") is kind of suspicious! For another fun example, IIT is probably bogus not because thermostats are likely not conscious but because "chat is this thermostat aware of itself" is not a lucid line of thought.

 

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

 

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain a community website that has summaries for the known numbers in Busy Beaver research, the Busy Beaver Gauge.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.
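Since the Gauge's tables are machine-computed, it's worth showing how little machinery scoring takes: run the machine on a blank tape and count steps and ones. A minimal sketch (my own encoding, not the Gauge's actual code) using the classic 2-state, 2-symbol busy beaver champion, which halts after 6 steps leaving 4 ones:

```python
def run(machine, limit=10_000):
    """Run a Turing machine from a blank tape; return (steps, ones).
    The step limit stands in for a real halting decision."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < limit:
        write, move, state = machine[state, tape.get(pos, 0)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The 2-state, 2-symbol champion: (state, read) -> (write, move, next state).
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (6, 4)
```

Writing the same scorer for binary lambda calculus or brainfuck is exactly the kind of short, creditable contribution mentioned above.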

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode; he treats it as a serious possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

 

As usual, I struggle to form a proper sneer in the face of such sheer wrongheadedness. The article is about a furry who was dating a Nazifur and was battered for it; the comments are full of complaints about the overreach of leftism. Choice quote:

Anti-fascists see fascism everywhere (your local police department) the same way the John Birch Society saw communism everywhere (Dwight Eisenhower.). Or maybe they are just jealous that the fascists have cool uniforms and boots. Or maybe they think their life isn’t meaningful enough and it has to be like a comic book or a WWII movie.

Well, I do wear a Captain America shirt often…

 

A well-respected pirate, neighbor, and Lisper is also a chud. Welcome to HN, the Nazi Bar where everybody's also an expert in technology.
