gerikson

joined 2 years ago
[–] gerikson@awful.systems 6 points 2 months ago (2 children)

Wasn't the original designation of Boers (as in the Boer war) a denigrating term?

[–] gerikson@awful.systems 11 points 2 months ago (2 children)

Explains his gushing over Scott in the intro.

I still think he makes a lot of good points, in that promptfondlers are losing their shit because people aren't buying the swill they're selling.

In a similar vein, check out this comment on LW.

[on "starting an independent org to research/verify the claims of embryo selection companies"] I see how it "feels" worth doing, but I don't think that intuition survives analysis.

Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation's capabilities are irrelevant, and if we get it right, they're still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.

https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=25HfwcGxC3Gxy9sHi

So believing in the inevitable coming of the robot god is dogma on LW now. This is a cult.

[–] gerikson@awful.systems 6 points 2 months ago (1 children)

These are slightly different axes, though. The tax money doesn't directly go towards alleviating the suffering of family members of alcoholics, nor does it directly lower the effects of drunk driving. The income is a nice-to-have, for sure, but the stated aim is to be a "sin tax" which makes the bad thing less affordable.

[–] gerikson@awful.systems 5 points 2 months ago (3 children)

Good news everyone, we will be living with Big Yud until the literal end of time (see comments)

https://www.lesswrong.com/posts/owZt48g3GjJHDn5EE/kvmanthinking-s-shortform?commentId=YisxiwcWZq7eMPxSX

[–] gerikson@awful.systems 15 points 2 months ago (3 children)

OK now there's another comment

I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.

Excepting cases like the legal restriction of alcohol sales in many many areas (Nordics, NSW in Aus, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.

JFC, these people imagine they can regulate how "AGI" is constructed, but faced with a problem that's been staring humanity in the face since the first monk brewed the first beer, they just say "whelp, nothing can be done, except become a teetotaller yourself".

[–] gerikson@awful.systems 8 points 2 months ago

To be scrupulously fair, it is a repost from another substack[1]. Amusingly, both places have a comment with the gist of "well, alcohol gets people laid, so what's the problem". This is of course a reflection of the fact that most LWers cannot get a girl into bed without slipping her a roofie.


[1] is that even OK? I know the LW software has a "mirroring" functionality b/c a lot of content is originally on the members' SSes; maybe you can point it at any SS entry and get it onto LW.

[–] gerikson@awful.systems 12 points 2 months ago (9 children)

Nothing expresses the inherent atomism and libertarian nature of the rat community like this

https://www.lesswrong.com/posts/HAzoPABejzKucwiow/alcohol-is-so-bad-for-society-that-you-should-probably-stop

A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.

No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless "action".

Oh well, AGI will solve it (or the robot god will be a raging alcoholic)

[–] gerikson@awful.systems 9 points 2 months ago (9 children)

Oh FFS, that couple have managed to break into Sweden's public broadcasting site

https://www.svt.se/nyheter/utrikes/har-ar-familjen-som-vill-foda-elitbarn-for-att-radda-manskligheten

[–] gerikson@awful.systems 19 points 2 months ago (5 children)

Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

My Empathy Is Rarely Kind

I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

"why do people keep saying we sound like fascists? I don't get it!"

[–] gerikson@awful.systems 6 points 2 months ago

The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.

[–] gerikson@awful.systems 12 points 2 months ago

Oh, but LW has the comeback for you in the very first paragraph

Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.

Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!

[–] gerikson@awful.systems 16 points 2 months ago (10 children)

LessWronger discovers the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a GenZ hit), and "Slaughterbots", which I've never heard of.

Listen to the plot summary

  • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
  • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
  • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
  • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

OK so what should be shown in the film?

compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

Again, truly the core components of every blockbuster. I can't wait to see "Avengers vs the AI", where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.
