Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 18 points 2 months ago* (last edited 2 months ago) (13 children)

Saltman has a new blogpost out he calls 'Three Observations', which I feel too tired to sneer at properly but am sure will be featured in pivot-to-ai pretty soon.

Of note is that he seems to admit chatbot abilities have plateaued under the current technological paradigm, by way of offering the "observation" that model intelligence depends logarithmically on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on.
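(Taken at face value that's textbook diminishing returns: if i = log₂(r), doubling the resources only ever adds 1 to the intelligence score, and a thousandfold increase buys you about ten more units of whatever i is supposed to be measured in.)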

Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.

Third observation is that

The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

which is hilarious.

The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents, which I didn't read too closely.

[–] Architeuthis@awful.systems 17 points 2 months ago (1 children)

Penny Arcade weighs in on DeepSeek distilling ChatGPT (or whatever the deal actually is):

[–] Architeuthis@awful.systems 6 points 3 months ago

You misunderstand: they escalate to the max to keep themselves (including their selves in parallel dimensions or far-future simulations) from being blackmailed by future superintelligent beings, not to survive shootouts with border patrol agents.

I am fairly certain Yud has said something very close to that effect in reference to preventing blackmail from the basilisk, even though he tries to no-true-Scotsman the Zizians wrt his functional decision 'theory' these days.

[–] Architeuthis@awful.systems 6 points 3 months ago* (last edited 3 months ago)

Distilling is supposed to be a shortcut to creating a quality training dataset by using the output of an established model as labels, i.e. desired answers.

The expected end result, a new model that inherits the reference model's biases, should still hold, but using the very model you're distilling from as your base model would seem to be completely pointless.
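For a sense of the mechanism, here's a minimal soft-label distillation step in PyTorch; the tiny linear "models", the temperature, and the learning rate are all stand-ins for illustration, not anything DeepSeek or OpenAI have disclosed:

```python
import torch
import torch.nn.functional as F

# Stand-ins: 'teacher' plays the established model whose outputs become
# the labels, 'student' is the new model being trained on them.
teacher = torch.nn.Linear(128, 1000)
student = torch.nn.Linear(128, 1000)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 2.0  # temperature: softens the teacher's output distribution

def distill_step(batch: torch.Tensor) -> float:
    with torch.no_grad():
        teacher_logits = teacher(batch)  # established model's answers = labels
    student_logits = student(batch)
    # Classic distillation loss: KL divergence between the softened
    # student and teacher distributions, scaled back by T^2.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(distill_step(torch.randn(32, 128)))  # one step on a random batch
```

The student ends up mimicking whatever the teacher outputs, biases included, which is also why pointing this at the same model you started from gets you nowhere.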

[–] Architeuthis@awful.systems 3 points 3 months ago (2 children)

The 671B model, although 'open sourced', is a 400+GB download and definitely not runnable on household hardware.
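(Back-of-the-envelope: 671B parameters at one byte each in FP8 is about 671 GB of weights before you even count the KV cache; even an aggressive 4-bit quantization is still roughly 335 GB, an order of magnitude past any consumer GPU.)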

[–] Architeuthis@awful.systems 14 points 3 months ago* (last edited 3 months ago)

Taylor said the group believes in timeless decision theory, a Rationalist belief suggesting that human decisions and their effects are mathematically quantifiable.

Seems like they gave up early if they don't bring up how it was developed specifically for deals with the (acausal, robotic) devil, and also awfully nice of them to keep Yud's name out of it.

edit: Also, in lieu of an explanation they link to the Wikipedia page on rationalism as a philosophical movement, which of course has fuck-all to do with the Bay Area Bayes cargo cult; it does get a small mention there, but most of the Talk: page is about how it really shouldn't.

[–] Architeuthis@awful.systems 8 points 3 months ago (1 children)

NYT and WaPo are his specific examples. He also wants a connection to "a policy/defense/intelligence/foreign affairs journal/magazine" if possible.

[–] Architeuthis@awful.systems 11 points 3 months ago* (last edited 3 months ago) (4 children)

Today on highlighting random rat posts from ACX:

poster thinks the future of llm training is contingent on focusing early on philosophical and theological text because they match the causality of human experience

(Current first post on today's SSC open thread)

In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone at a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher, to be ghost-written by him (meaning Siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump's term. Be on the lookout for that when it materializes, I guess.

[–] Architeuthis@awful.systems 2 points 3 months ago* (last edited 3 months ago)

wrong thread :(

[–] Architeuthis@awful.systems 8 points 3 months ago (3 children)

The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can't be acausally blackmailed.

[–] Architeuthis@awful.systems 10 points 3 months ago (3 children)

It's another one of those things that the further you read the worse it gets, isn't it?

[–] Architeuthis@awful.systems 9 points 3 months ago* (last edited 3 months ago) (9 children)

Does anyone know who or what Ziz is in this context? Google says it's a Jewish mythological beast.

edit: found this:

The Zizians were a cult that focused on relatively extreme animal welfare, even by EA standards, and used a Timeless/Updateless decision theory, where being aggressive and escalatory was helpful as long as it helped other world branches/acausally traded with other worlds to solve the animal welfare crisis.

They apparently made a new personality called Maia in Pasek, and this resulted in Pasek's suicide.

They also used violence or the threat of violence a lot to achieve their goal.

This caused many problems for Ziz, and she now is in police custody.
