I don't know about Ed, but I've had scenes from Network stuck in my head for months, particularly the scene where the corporate hatchet man Hackett is explaining that a Saudi conglomerate is about to buy out a failing TV network. He says, "We need that Saudi money bad."
corbin
It's the cost of the electricity, not the cost of the GPU!
Empirically, a single training-capable GPU can pull nearly 1 kilowatt; an H100 board is rated at 700W TDP on its own, and it pulls more than that when memory is active. I happen to live in the Pacific Northwest near plentiful wind, hydro, and solar power, so electricity is barely 18 cents/kilowatt-hour, and even so it costs at least a dollar to run such a GPU at full load for 6hrs. I also estimate that the GPU market currently offers about a 50% discount on refurbished/like-new GPUs with around 5yrs of service; since the H100 is about $25k new, it depreciates at around $2500/yr. Finally, I picked the H100 because it's near the peak of efficiency for this particular AI season; local inference is going to be more expensive once we compare apples-to-apples units like tokens per watt.
In short, with bad napkin arithmetic, an H100 costs at least $4/day to operate (about $4.32 at my local rates) while depreciating only about $6.85/day; operating costs are the same order of magnitude as depreciation, and exceed it wherever power is pricier. This leads to a hot-potato market where reselling the asset is worth more than operating it. In the limit, assets whose depreciation is negligible relative to opex are treated like securities, and we're already seeing multiple groups squatting like dragons upon piles of nVidia products while the cost of renting cloudy H100s has jumped from like $2/hr to $9/hr over the past year. VCs are withdrawing, yes, and they're no longer paying the power bills.
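For the curious, here's the napkin arithmetic as a tiny script. Every constant is my own assumption from above, not a measured datum:

```python
# Napkin math: opex vs. depreciation for one H100 at full load.
# All figures are assumptions from the comment above, not measurements.
POWER_KW = 1.0          # assumed wall draw of a training-capable GPU under load
PRICE_PER_KWH = 0.18    # Pacific Northwest retail electricity, USD
NEW_PRICE = 25_000.0    # rough H100 price when new, USD
RESALE_FRACTION = 0.50  # assumed resale value after five years of service
SERVICE_YEARS = 5

opex_per_day = POWER_KW * 24 * PRICE_PER_KWH
depreciation_per_day = NEW_PRICE * (1 - RESALE_FRACTION) / (SERVICE_YEARS * 365)

print(f"opex:         ${opex_per_day:.2f}/day")          # $4.32/day
print(f"depreciation: ${depreciation_per_day:.2f}/day")  # $6.85/day
```

Plug in a mainland-Europe power price (say 30-40 cents/kWh) and opex alone overtakes depreciation, which is the hot-potato dynamic in one line of arithmetic.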
I went into this with negative expectations; I recall being offended in high school that The Flashbulb was artificially sped up, unlike my heroes of neoclassical guitar and progressive-rock keyboards, and I've felt that their recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I'm glad you shared it.
Ears and eyes are different. We deconvolve visual data in the brain, but our ears actually perform a Fourier decomposition with physical hardware. As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3 encoding, and it limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile, I'm always worried that these adversarial groups will accidentally propagate something like the McCollough effect, a genuine cognitohazard in which a few minutes of exposure to colored stripes causes edges to become color-coded in the visual cortex for up to months afterward; it's a kind of harm that by its nature defies automatic classification.
HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to get properly published; again we're seeing an industry-capture research group taking and not giving back to the Free Software community. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process which I estimate is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I'm not going to cry for Nashville.
I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I'm concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can't tell spectrographically whether that's also done above hearing range or not. I'm reluctant to call for attacks on home assistants, but they're great targets.
Fundamentally this is a video that doesn't want to talk about how musicians actually rip each other off. The "tones and rhythms" that he keeps showing with nice visualizations have been machine-learnable for decades, ranging from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.
It's well-known folklore that reinforcement learning with human feedback (RLHF), the standard post-training paradigm, reduces "alignment," the degree to which a pre-trained model has learned features of reality as it actually exists. Quoting from the abstract of the 2024 paper, Mitigating the Alignment Tax of RLHF (alternate link):
LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment tax.
In practice, the behaviors that the chatbots learn in post-training are FUD and weasel-wording; they appear not to unlearn facts, but to learn so much additional nuance that the facts are buried. The bots perform worse on various standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.
The problem is mostly that the uninformed public will think that the chatbot is knowledgeable and well-spoken because it rattles off the same weak-worded hedges as right-wing pundits; the remedy is the same improvement in education that countering those pundits requires.
Answering your question directly: no, slop machines can't be countered with more slop machines without drowning us all in slop. A more direct approach will be required.
Yes, but the article's not actually about that. It's about Microsoft returning to the same datacenter-building schedule it had a decade ago. Datacenters have a build lag of about 3-5yrs depending on what's inside them and where they're located, so what we're actually seeing is Microsoft projecting a relative reduction in overall usage. Note that, among all the canceled leases and letters of intent, Microsoft isn't walking back their two-decade nuclear-power deal with Constellation; they're not destroying or reducing any existing capacity, just planning to build less. At risk of quoting Bloomberg:
After a frantic expansion to support OpenAI and other artificial intelligence projects, [Microsoft] expects spending to shift from new construction to fitting out data centers with servers and other equipment.
To the extent that the bubble is popping, Microsoft and other datacenter owners have to guess half a decade in advance when it will pop. If you take them at their word — that is, if we assume that they canceled these contracts with perfect foresight — then the bubble must have already popped in 2023-2024, and the market is experiencing coyote time because…? More likely, this is fallout from their ongoing breakup with OpenAI, who almost certainly begged Microsoft for so much compute (and definitely begged for too many nVidia GPUs!) that Microsoft had to adjust their datacenter plans. The bubble's not done until OpenAI has exhausted all possible funding, say in late 2025 or early 2026, when SoftBank and the Saudis realize that they've made a hilarious mistake.
We've discussed this previously on awful.systems, both the value of nuclear-energy contracts and Microsoft's retraction of intents.
Like any reality-show writing room, they only plan one episode in advance and only have a week's worth of photography in mind.
As the classic film Network points out, the Saudi money is the end of the road; there aren't any richer or more gullible large wealth funds who will provide further cash. So OpenAI could be genuinely out of greater ~~fools~~ financing after another year of wasting Somebody Else's Money. This crash has removed "large" from the front of any other wealth fund that might have considered bailing them out. The Stargate gamble could still work out, but so far I think it's only transferred bag-holding responsibilities from Microsoft to Oracle.
Another path is to deflate nVidia's market cap. At first blush, this seems impossible to me; nVidia's business behavior is so much worse than that of competitors like Intel or Imagination, yet they have never lost the faith of their core gaming laity, and as long as nVidia holds 20-30% of the gaming-GPU market they will always have a boutique niche with a cap at least comparable to e.g. their competitor AMD's. But GPUs have been treated as currency among datacenter owners, and a market crash could devalue the piles of nVidia GPUs which some datacenter owners have been using as collateral for purchasing land, warehouses, machines, more GPUs, etc. nVidia isn't the only bag-holder here, though, and since they don't really want to play loan-shark and repossess a datacenter for dereliction, odds are good that they'll survive even if they're no longer king of the hill. The gold rush didn't work out? Too bad, no returns allowed on shovels or snow gear.
Side note: If folks just wanted to know whether tech in general is hurt by this, then yes, look at Tesla's valuation. Tesla is such a cross-cutting big-ticket component of so many ETFs that basically every retirement scheme took a hit from Tesla taking a hit. The same thing will happen with nVidia and frankly retirement-fund managers should feel bad for purchasing so much of what any long-term investor would consider to be meme stocks. (I don't hold either TSLA or NVDA stocks.)
I hope this makes sense. I don't post with this candor when I'm well-rested and sober.
Australian chemist and videographer Explosions & Fire argues convincingly that the ongoing radioactive-boy-scout scandal should not result in prosecution. For context: a 24-year-old man ordered small samples of radioactive isotopes from the USA, Australia failed to intercept them at the border, and prosecutors are now pursuing him in order to avoid embarrassment over that incompetence. I don't have a choice sneer; E&F is unwaveringly energized over the topic of radioactive isotopes and injustice, and the whole thing is worth watching.
Today on the orange site, an AI bro is trying to reason through why people think he's weird for not disclosing his politics to people he's trying to be friendly with. Previously, he published a short guide on how to talk about politics, which — again, very weird, no possible explanation for this — nobody has adopted. Don't worry, he's well-read:
So far I've only read Harry Potter and The Methods of Rationality, but can say it is an excellent place to start.
The thread is mostly centered around one or two pearl-clutching conservatives who don't want their beliefs examined:
I find it astonishing that anyone would ask, ["who did you vote for?"] … In my social circle, anyway, the taboo on this question is very strong.
To which the top reply is my choice sneer:
In my friend group it's clear as day: either you voted to kill and deport other people in the friend group or you didn't. Pretty obvious the group would like to know if you're secretly interested in their demise.
Angela Collier has a wonderfully grumpy video up, why functioning governments fund scientific research. Choice sneer at around 32:30:
But what do I know? I'm not a medical doctor but neither is this chucklefuck, and people are listening to him. I don't know. I feel like this is [sighs, laughs] I always get comments that tell me, "you're being a little condescending," and [scoffs] yeah. I mean, we can check the dictionary definition of "condescending," and I think I would fit into that category. [Vaccine deniers] have failed their children. They are bad parents. One in four unvaccinated kids who get measles will die. They are playing Russian roulette with their child's life. But sure, the problem is I'm being, like, a little condescending.
I encourage NYC neighbors to spread the idea of deranking. It worked in Portland. We had an exceptionally shitty candidate:
We pushed to derank him. And the result: