While most bullish outlooks are premised on economic reacceleration, it’s difficult to ignore the market’s reliance on AI capex. In market-pricing terms, we believe we’re closer to the seventh inning than the first, and several developments indicate we may be entering the later phases of the boom. First, AI hyperscaler free-cash-flow growth has turned negative. Second, price competition in the “monopoly-feeder businesses” seems to be accelerating. Finally, recent deal-making smacks of speculation and vendor-financing strategies of old.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
“You know, I never defrauded anyone,” says Sam Bankman-Fried
“You know, I never sent the boys across the Isonzo without believing we could win,” said Luigi Cadorna
I predict Sam will lose big on the S&C claims. They have made fuckin bank on this bankruptcy, but their fees have already been ruled entirely reasonable given the shitshow in question, and the near-complete recovery for creditors will make them look even better.
Guys according to LW you’re reading Omelas all wrong (just like LeGuin was wrong)
https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread
Choice sneer from the comments:
Omelas: how we talk about utopia [by Big Joel, a patient and straightforward Youtube humanist,] [has a] pretty much identical thesis, does this count?
Another solid one which aligns with my local knowledge:
It's also about literal child molesters living in Salem Oregon.
The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It's not intended for libertarian gotchas because it wasn't written in a philosophical style; it's a narrative that conveys a mood and an ethical framing.
One of the many annoying traits of rationalists is their tendency to backproject classic pieces of literature onto their chosen worldview.
LLMs are the Dippin Dots of technology.
That's an unfair comparison, Dippin Dots don't slowly ruin the world by existing (also, they're delicious)
It will always be the ice cream of the future, never the ice cream of today
So, a bit of a counter to our usual stuff. A migrant worker here won a case against his employer, who had linked his living space to his employment contract (which is forbidden), using chatgpt as an aid (how much it helped is not told). So there actually was a case where it helped.
Interesting note on it: these sorts of cases have no jurisprudence yet, so that might have been a factor. No good links for it sadly, as it was all in Dutch. (Can't even find a proper writeup on a bigger news site, as a foreigner defending their rights against abuse is less interesting than some other country getting a new bishop). Skeets congratulating the guy here https://bsky.app/profile/isgoedhoor.bsky.social/post/3m27aqkyjjk2c (in Dutch). Nothing much about the genAI usage.
But this does fit a pattern: like with blind/low-vision people, these tools are being used by people who have no other recourse because we refuse to help them (this is bad tbh; I'm happy they are getting more help, don't get me wrong, but it shouldn't be this substandard).
The Guardian shat out its latest piece of AI hype, violating Betteridge's Law of headlines by asking "Parents are letting little kids play with AI. Are they wrong?"
Do we have a word for people that are kind of like… AI concern trolls? Like they say they are critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of centrists or (neo) libs. But for AI.
Bonus points if they also for some reason say we should pivot to more nuclear power, because in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)
E: Maybe it's just sealion
sealAIon
Sealions is a bit more specific, as they do not stop, and demand way more evidence than is normal. Scott had a term for this, forgot it already (one of those more useful Rationalist ideas, which they only employ asymmetrically themselves). Noticed it recently on reddit: some person was mad I didn't properly counter Yud's arguments, while misrepresenting my position (which wasn't that strong tbh, I just quickly typed it up before I had other things to do). But it is very important to take Yud's arguments seriously for some reason; reminds me of creationists.
Think just calling them AI concern trolls works.
Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?
I mean that's the fundamental problem, right? No matter how much better it gets, the things it's able to do aren't really anything people need or want. Like, if I'm going to a website looking for information it's largely because I don't want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly beyond some fundamental limitation of the LLM structure - wouldn't actually be preferable to just navigating a smooth and well-structured site.
Yup, exactly. The chatbot, no matter how helpful, exists in a context of dark patterned web design, incredibly bad resource usage and theft. Its purpose is to make the customer’s question go away, so not even the fanatics are interested.
See also how youtube tutorials have mostly killed (*) text-based tutorials/wikis, and are just inferior to good wikis/text-based ones. Partly because listening to a person talk is a linear experience while text allows for easy scrolling, but also because most people are just bad at yt tutorials. (shoutout to the one which had annoyingly long random pauses in/between sentences even at 2x speed).
This is not helped because now youtube is a source of revenue, and updating a wiki/tutorial often is not, so the incentives are all wrong. A good example of this is the gaming wiki fextralife: see this page on dragons dogma 2 npcs, https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn't jump out at you). But the big thing for fextralife is their youtube tutorials, and it used to have an autoplaying link to their streams. This isn't a wiki, it is an advertisement for their youtube and livestreams.

And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme publish-or-perish dynamic: do not deviate from your niche. They can't put in the time to update things, because they need to publish a new video (on their niche; branching out is punished) soon or not pay rent. (For people out there who play videogames and/or watch youtube, this is also why somebody like the spiffing brit long ago went from 'I exploit games' to 'I grind, and if you grind enough in this single player game you become op'. The content must flow, but eventually you will run out of good new ideas. It's also why he tried to push his followers into doing risky cryptocurrency-related 'cheats': follow Elon, and if he posts a word that can be cryptocoined, pump and dump it for a half hour.)
*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.
People can't just have a hobby anymore, can they?
Nope. And tbh, did some dd2 recently, and for a very short while I was tempted to push the edit button, but then I remembered that fextralife just tries to profit off my wiki editing labor. (I still like the idea of wikis, but do not have the fortitude and social calm to edit a mainstream one like wikipedia). (I did a quick check, and yeah I also really hate the license fextralife/valnet uses "All contributions to any Fextralife.com Wiki fall under the below Contribution Agreement and become the exclusive copyrighted property of Valnet.", and their editor sucks ass (show me the actual code not this wysiwyg shit)).
Gross :/ how come people have to be like that?
It wouldn't be so bad if they just didn't care and stopped maintaining, but their site is one of the first ones you get. Which is a regular problem with these things.
Tyler Cowen saying some really weird shit about an AI 'actress'.
(For people who might wonder why he is relevant. See his 'see also' section on wikipedia)
E: And you might think, rightfully imho, that this cannot be real, that this must be an edit. https://archive.is/vPr1B I have bad news.
The Wikipedia editors are on it.
image description
screenshot of Tyler Cowen's Wikipedia article, specifically the "Personal life" section. The concluding sentence is "He also prefers virgin actresses."
Some Rat content got shared on HN, and the rats there are surprised and outraged not everyone shares their deathly fear of the AI god:
https://news.ycombinator.com/item?id=45451971
"Stop bringing up Roko's Basilisk!!!" they sputter https://news.ycombinator.com/item?id=45452426
"The usual suspects are very very worried!!!" - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering' checks out!)
"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
https://news.ycombinator.com/item?id=45453386
nobody mentioned this particular incident, dude just threw it into the discussion himself
incredible how he rushes to assure us that this was "a really hot 17 year old"
Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.
Always fun trawling thru comments
Government banning GPUs: absolutely necessary: https://news.ycombinator.com/item?id=45452400
Government banning ICE vehicles: eh, a step too far: https://news.ycombinator.com/item?id=45440664
The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:
If you claim that there is no AI-risk, then which of the following bullets do you want to bite?
- If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
- There’s no way that AI with an IQ of 300 will arrive within the next few decades.
- We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.
Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.
Frankly I wish that they'd understand that capabilities matter more than theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.
If a race of aliens with an IQ of 300 came to Earth
Oh noes, the aliens scored a meaningless number on the eugenicist bullshit scale, whatever shall we do
Next you'll be telling me that the aliens can file their TPS reports in under 12 parsecs
"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
Read that last one against my better judgment, and found a particularly sneerable line:
And in this case we're talking about a system that's smarter than you.
Now, I'm not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn't slop.
Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.
Let's not forget the perennial favorite "humans are just stochastic parrots too durr" https://news.ycombinator.com/item?id=45452238
to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted
There have been a lot of cases in history of smart people being bested by the dumbest people around, who just had more guns/a gun/copious amounts of meth/a stupid idea that got lucky once, etc.
I mean, if they are so smart, why are they stuck in a locker?
It's practically a proverb that you don't ask a scientist to explain how a "psychic" is pulling off their con, because scientists are accustomed to fair play; you call a magician.
Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:
bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.
T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:
a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Um hello‽ Maybe Jeff doesn't have a spare room or a room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks, and they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme, except they can't even eat it because printer paper tastes awful.
The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."
jeff’s follow-up after the backlash clarifies: you wouldn’t know her because he donated right under the limit to incur a taxable event and didn’t establish a trust like a normal millionaire and also the LLM printout only came pointlessly after months of research and financially supporting the unhoused friend and also you’re no longer allowed to ask publicly about the person he brought up in public, take it to email
Alex, I'll take "Things that never happened" for $1000.
there's been something that's really been rubbing me the wrong way about jeff in the last few years. he was annoying before and had some insights, but lately I've been using him as a sort of jim crameresque tech-take-barometer.
What really soured me was when he started picking fights with some python people a few years back, because someone dared post that a web framework? (couldn't dig up the link) was a greater contribution to the world than S/O. His response was pretty horrid, to the point where various python leaders were telling him to stop being a massive dick, because he was trying to be a bully with this "do you know who I am" attitude: he personally had not heard of the framework, so it wasn't acshually at all that relevant compared to S/O.
and now this combined with his stupid teehee I am giving away my wealth guise look how altruistic I am really is a bit eugh
I really hope atwood’s unhoused friend got the actual infrastructural support you mentioned (a temporary mailing address and an introduction letter emailed to an employer is only slightly more effort than generating slop, jeff, please) but from direct experience with philanthropists like him, I’m fairly sure Jeff now considers the matter solved forever
Thanks for this.
A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.
"Provide an overview of local homeless services" sounds like a standard task for a volunteer or a search engine, but yes "you can use my address for mail and store some things in my garage and I will email some contacts about setting you up with contract work" would be a better answer than just handing out secondhand information! Many "amazing things AI can do" are things the Internet + search engines could do ten years ago.
I would also like to hear from the friend "was this actually helpful?"
Friend: "I have a problem"
Me, with a stack of google printouts: "My time to shine!".
E: oh god, I thought the examples were multiple and the friend one was just a random one. No, it was the first example. 'I gave my friend a printout, which saved me time'. Also, as I assume the friend is still unhoused and hasn't actually used the printout yet, he doesn't know if this actually helped. Atwood isn't a 'helping the unhoused' expert; he just assumed it was a good source. The story ends when he hands over the paper.
Also very funny that he is also going 'you just need to know how to ask questions the right way, which I learned by building stackoverflow'. Yeah, uh, that is not a path a lot of people can follow.
It's even worse when I read the whole thread: Atwood claims to have $140 million, and the best he can do for "a friend" who is homeless is handing out some printouts with a few sections highlighted? And he thinks this makes him look good because he promises to give away half his wealth one day?
Like Clinton starting a go fund me for a coworker with cancer, the rich and their money are not voluntarily parted.
This also shows the problems with the "effective altruist" approach. Donating to the local theater or "to raise awareness of $badThing" might not be the best way of using funds, but when a friend needs help now, and you have the resources to help them, and you say "no, that might not be as efficient as creating a giant charity to help strangers one day", something is wrong.