scruiser

joined 2 years ago
[–] scruiser@awful.systems 5 points 8 hours ago* (last edited 8 hours ago)

I just now heard about this here. Reading about it on Wikipedia... they had a mathematical model that said their design shouldn't generate a sonic boom audible from ground level, but it was possible the model wasn't completely correct, so building a 1/3 scale prototype (apparently) validated it? The model might still be wrong about the full-scale design, but if it was right about the 1/3 scale, that's decent evidence it will hold up? idk, ~~I'm not seeing much that is sneerable here~~, it seems kind of neat. Surely they wouldn't spend the money on a 1/3 scale prototype unless they actually needed the data (as opposed to it being a marketing ploy, or worse yet a ploy for more VC funds)... surely they wouldn't?

iirc about the Concorde (one of only two supersonic passenger planes ever), it isn't so much that supersonic passenger planes aren't technologically viable; it's more a question of economics (with some additional issues around noise pollution and other environmental concerns). Limits on flight paths because of the sonic booms were one of the Concorde's problems, so at least Boom won't have that one. As for the other questions... Boom Supersonic's webpage addresses them directly, though not in any detail, but at least they address them...

Looking for some more skeptical sources... this website seems interesting: https://www.construction-physics.com/p/will-boom-successfully-build-a-supersonic . They point out some big problems with Boom's approach. Boom is designing both its own engine and its own plane, and the costs are likely to run into the limits of their VC funding even assuming nothing goes wrong. And even if they get a working plane and engine, they might not hit the safety, cost, and reliability targets a viable supersonic passenger plane needs. And... the XB-1 never actually reached Mach 2.2 and was retired after only a few flights. Maybe it was a desperate ploy for more VC funding? Or maybe it had some unannounced issues? Okay... I'm seeing why this is potentially sneerable. There is a decent chance they entirely fail to deliver a plane on the VC funding they have, and even if they get that far it is likely to fail as a commercially viable passenger plane. Still, there is some possibility they deliver something... so eh, wait and see?

[–] scruiser@awful.systems 9 points 12 hours ago

As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, using far less compute, and without any bullshit or hallucinations in the way. The LLM isn't actually a value add compared to existing tools.
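
To make "automated search" concrete, here is a minimal sketch of the kind of deterministic, non-LLM scan I mean. The bug pattern (unbounded `strcpy` calls in C sources) is my own stand-in example, not the actual bug class from the story:

```python
import re
import sys
from pathlib import Path

# Stand-in bug pattern: unbounded strcpy() calls, a classic greppable category.
# A real tool would use proper parsing/dataflow, but the point stands: no LLM needed.
PATTERN = re.compile(r"\bstrcpy\s*\(")

def scan(root: str) -> int:
    hits = 0
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
                hits += 1
    return hits

if __name__ == "__main__":
    total = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(f"{total} candidate sites, found deterministically in one pass")
```

Every flagged site is a real occurrence of the pattern; the only remaining work is triage, not figuring out whether the tool hallucinated the code it claims to have read.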

[–] scruiser@awful.systems 38 points 23 hours ago (2 children)

Of course, part of that wiring will be figuring out how to deal with the signal to noise ratio of ~1:50 in this case, but that’s something we are already making progress at.

This line annoys me... LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and investigating false negatives further) is very difficult. It probably requires that you have some sort of actually reliable verifier, and if you have that, why bother with LLMs in the first place instead of just using that verifier directly?
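
To put rough numbers on why that ratio is so punishing (these figures are hypothetical, not from the post): suppose 1 report in 50 is a real bug, and your triage step catches 90% of real bugs while wrongly passing 10% of the bogus ones. A quick Bayes calculation shows most of what survives triage is still noise:

```python
# Hypothetical figures illustrating why a ~1:50 signal-to-noise ratio hurts.
base_rate = 1 / 50            # P(report is a real bug)
sensitivity = 0.90            # P(triage flags it | real bug)
false_positive_rate = 0.10    # P(triage flags it | bogus report)

p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
precision = sensitivity * base_rate / p_flagged   # P(real bug | flagged)

print(f"P(real bug | flagged) = {precision:.2f}")  # ~0.16: ~84% of flagged reports are still noise
```

You only escape this regime when the triage step's false positive rate is far below the noise base rate, i.e. when you have the "actually reliable verifier" above.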

[–] scruiser@awful.systems 6 points 23 hours ago

Loose Mission Impossible Spoilers

The latest Mission Impossible movie features a rogue AI as one of the main antagonists. But the AI's main powers are lies, fake news, and manipulation; it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do a lot of its work. So in terms of promoting the doomer narrative, I think the movie could actually be read as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.

Adding to the post-LLM-hype predictions: I think after the LLM bubble pops, "Terminator" style rogue AI movie plots won't go away, but will take on a different spin. Rogue AIs' strengths will be narrower, their weaknesses will get more comical and absurd, and idiotic human actions will be more of a factor. For weaknesses it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".

[–] scruiser@awful.systems 12 points 5 days ago

This connection hadn't occurred to me before, but the Starship Troopers scenes (in the book) where they claim to have mathematically rigorous proofs about various moral statements, actions, and societal constructs remind me of how Eliezer has a decision theory in mind with all sorts of counterintuitive claims (it's mathematically valid to never, ever give in to any blackmail or threats or anything adjacent to them), but hasn't actually written that decision theory out in rigorous, well-defined terms that could pass peer review or be used to figure out anything beyond some pre-selected toy problems.

[–] scruiser@awful.systems 10 points 6 days ago

There are parts of the field with major problems, like the sorts of studies that get done on 20 student volunteers and then turned into a pop-psychology factoid that gets tossed around and over-generalized while the original study fails to replicate, but there are also parts that are actually good science.

[–] scruiser@awful.systems 7 points 6 days ago (1 children)

I wouldn't say even that part works so well, given how Mt. Moon is such a major challenge even with all those features.

[–] scruiser@awful.systems 15 points 6 days ago

Every AI winter, the label "AI" becomes unwanted and people switch to other terms (expert systems, machine learning, etc.)... and I've come around to thinking this is a good thing, as it forces people to specify what they actually mean, instead of using a nebulous label, loaded with science fiction connotations, that lumps decent approaches and paradigms together with complete garbage and everything in between.

[–] scruiser@awful.systems 13 points 6 days ago (1 children)

No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both-sides framing that gives lay readers the (false) impression that the two sides are equal. This sort of journalism may appear balanced, but it has contributed to all kinds of controversies (from global warming to Intelligent Design to medical pseudoscience) in which the viewpoints of cranks, uninformed busybodies, autodidacts of questionable ability, and deliberate fraudsters get presented as equal to actually educated and researched viewpoints.

[–] scruiser@awful.systems 21 points 6 days ago (3 children)

A new LLM Plays Pokémon run has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up/seeing things that aren't on the screen (items in Viridian Forest), confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner), repeating its own mistakes (both the item and navigation issues I mentioned), confusing details from other generations of Pokémon (Nidoran learns Double Kick at level 12 in FireRed/LeafGreen, but not in the original Blue/Yellow), and showing signs of being prone to completely batshit tangents (it briefly got derailed trying to sneak through the trees in Viridian Forest, i.e. moving through completely impassable tiles).

I don't know how anyone can watch any of these attempts at LLMs playing Pokémon and think (viable) LLM agents are just around the corner... well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-plays-Pokémon thing is turning into less of a test of LLMs and more entertainment and advertising for the models, and the scaffolds are extensive enough, and different enough from each other, that they really aren't showing the models' raw capabilities (which are even worse than I complained about) or comparing them meaningfully.

[–] scruiser@awful.systems 8 points 1 week ago (1 children)

Is that supposed to be an advertisement in favor of AI? (As opposed to stealth satire?) Seeing it makes me want to get off my computer and touch grass.

[–] scruiser@awful.systems 3 points 1 week ago

Wow, that is some skilled modeling. You should become a superforecaster and write ~~prophecies~~ AI timelines; they are quite popular on lesswrong.

 

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there is a large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them, because what they want would produce some cyberpunk dystopian shit in the process of trying to reach it). Yet even the accelerationists don't want Chinese AGI, because (insert standard sinophobic rhetoric) the Chinese hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI.

This is a long running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news, China actually has no chance of competing at AI (this was posted before DeepSeek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because that isn't possible in the first place

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the authoritarian power and the threat to freedom (pay no attention to the techbro techno-fascists at home)

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on an AGI race with China, not because they correct the sinophobia or the delusion that LLMs are a path to AGI, but because a race will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US wins via an AGI slowdown (with an evil AGI puppeting China) or both sides lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI? Judging by the upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up in my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of going after "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and himself, the author deflects and blames the "left" elitists. (I put "left" in quote marks because the author apparently thinks establishment Democrats are actually leftist; I fucking wish.)

An illustrative quote (of Scott's that the author agrees with)

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the Democratic establishment deserves a major share of blame: their status quo neoliberalism has been rejected by the public, yet they refuse to consider genuinely leftist ideas. But that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point, if anything the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tl;dr: the author is trying to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk framing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).

 

This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new refrains of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and the use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
