scruiser

joined 2 years ago
[–] scruiser@awful.systems 2 points 5 hours ago (1 children)

Oh, I had misunderstood their role in this. So they are more like someone who was already in place for other (scammy) reasons than anyone's preferred partner or middleman? And they are critical enough to be a weak link that breaks first and brings everyone else down?

[–] scruiser@awful.systems 1 points 5 hours ago

Ultra ultra high end gaming? Okay, looking at the link, 94 GB of GPU memory is probably excessive even for eccentrics cranking the graphics settings all the way up. Hobbyists with way too much money trying to screw around with open weight models even after the bubble bursts? That would presume LLMs or something similar continue to capture hobbyists' interest and that smaller models can't satisfy it. Crypto mining with algorithms compatible with GPUs? And crypto is its own scam ecosystem, but one that seems to refuse to die permanently.

I think the ultra high end gaming is the closest to a workable market, and even that would require a substantial discount.

[–] scruiser@awful.systems 5 points 7 hours ago (3 children)

Isn't being the fall guy the point of CoreWeave for Microsoft, NVIDIA, and everyone else using them as a middleman? They all theoretically have the ability to do the things CoreWeave does in-house, but that would expose them to more risk if the bubble pops, so they have CoreWeave take on the biggest part of the risk and draw in outside investor money?

[–] scruiser@awful.systems 10 points 7 hours ago

It's really the perfect opportunity for integration! They can steal the data and content of their own users, instead of other people's users, and then they can serve their slop directly to their own users instead of users having to generate and export their slop to other people's social media sites. And both of these applications can distract from the fact that AGI isn't happening and even more modest LLM agents aren't practically useful. And since Altman already built up a user base on ChatGPT, he'll have a head start on getting a critical mass of users!

Thinking about it... something like this is probably Altman's best bet for making OpenAI's financials work out, because as David Gerard and Ed Zitron and others have all pointed out, they are losing money per LLM user, so they really do need a way to convert a huge user base into money that doesn't involve LLMs.

[–] scruiser@awful.systems 7 points 7 hours ago* (last edited 7 hours ago)

That feels like a fitting ironic fate: a company selling AI slopcode generation loses a bunch of users by believing its own bullshit and using an LLM as customer support. Hopefully that story gets repeated a few dozen times across other businesses and the business majors stop pushing LLM usage.

Edit... looking at the orange site comments... some commenters unironically cited Anthropic ~~research~~ marketing hype, which (correctly) shows "Chain-of-Thought" is often bullshit unrelated to the final answer (but it's Anthropic, so they label it as deception and unfaithfulness instead of the entire approach being bullshit in general).

[–] scruiser@awful.systems 4 points 7 hours ago

Linking this recent comment on an older thread because it was so relevant: https://awful.systems/comment/6966312

TL;DR: GPUs cost as much to operate as they normally depreciate over time, so even if the bubble pops, people might be sitting on piles of GPUs without reselling or using them.
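A back-of-envelope sketch of that comparison, with purely illustrative numbers of my own (not figures from the linked comment):

```python
# Back-of-envelope comparison of annual GPU depreciation vs. operating cost.
# All parameter values below are illustrative assumptions, not real figures.

def annual_depreciation(purchase_price: float, useful_life_years: float) -> float:
    """Straight-line depreciation per year."""
    return purchase_price / useful_life_years

def annual_operating_cost(power_kw: float, hours_per_year: float,
                          electricity_per_kwh: float, overhead_multiplier: float) -> float:
    """Electricity cost scaled by a datacenter overhead multiplier
    (cooling, networking, hosting, staffing)."""
    return power_kw * hours_per_year * electricity_per_kwh * overhead_multiplier

# Hypothetical datacenter-class GPU:
depreciation = annual_depreciation(purchase_price=30_000, useful_life_years=5)
operating = annual_operating_cost(power_kw=0.7, hours_per_year=8_760,
                                  electricity_per_kwh=0.10, overhead_multiplier=5.0)

print(f"depreciation/yr: ${depreciation:,.0f}, operating/yr: ${operating:,.0f}")
# If operating cost per year is on the same order as depreciation per year,
# running the GPUs bleeds money roughly as fast as letting them sit idle,
# which is the point of the linked comment.
```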

[–] scruiser@awful.systems 2 points 7 hours ago (2 children)

That is substantially worse than I realized. So people could possibly sit on GPUs for years after the bubble pops instead of selling or using them? (Particularly if the crash means NVIDIA decides to slow how fast they push the bleeding edge on GPU specs, so newer ones don't as radically outperform older ones?)

[–] scruiser@awful.systems 10 points 2 days ago* (last edited 2 days ago)

These people need to sit through a college level class on linguistics or something like that. This is a demonstration of why STEM majors need general higher education.

[–] scruiser@awful.systems 8 points 2 days ago

Yeah, I also worry the slop and spam are here to stay: they're easy enough to make, of passable quality for the garbage uses people want them for, and, if GPUs/compute come down in price, affordable enough for the spammers and account boosters and karma farmers and such to keep using.

[–] scruiser@awful.systems 12 points 2 days ago

I think you are much more optimistic than me about the general public's ability to intellectually understand fascism or think about copyright or give artists their appropriate credit. To most people that know about image gen, it's a fun toy: throw in some words and rapidly get pictures. The most I hope for is that AI image generation becomes unacceptable to use in professional or serious settings and it is relegated to a similar status as clip art.

[–] scruiser@awful.systems 6 points 2 days ago (2 children)

I don’t think they’d try that hard.

Wow lol... 2) was my guess at an easy/lazy/fast solution, and you think they are too lazy for even that? (I think a "proper" solution would involve substantial modifications/extensions to the standard LLM architecture, and I've seen academic papers with potential approaches, but none of the modelfarmers are actually seriously trying anything along those lines.)

[–] scruiser@awful.systems 9 points 2 days ago (8 children)

Serious question: what are people's specific predictions for the coming VC bubble popping/crash/AI winter? (I've seen that prediction here before, and overall I agree, but I'm not sure about specifics...)

For example... I've seen speculation that giving up on the massive training runs could free up compute and cause costs to drop, which the more streamlined and pragmatic GenAI companies could use to pivot to providing their "services" at sustainable rates (and the price of GPUs would drop, to the relief of gamers everywhere). Alternatively, maybe the bubble bursting screws up the GPU producers and cloud service providers as well, and the costs of compute and GPUs don't actually drop that much, if at all?

Maybe the bubble bursting makes management stop pushing stuff like vibe coding... but maybe enough programmers have gotten into the habit of using LLMs for boilerplate that it doesn't go away, and LLM tools and plugins persist to make code shittier.

 

So despite their nitpicking of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post doesn't even count Scott Alexander as "racist" (although it does at least note his HBD sympathies) and still identifies a full 8 racists. It mentions a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for that simplistic framing). The post's author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).

 

This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
