I started reading the post about wealth bias and was immediately distracted by the fact that they're trying to call a government based on prediction markets a "futarchy," which speaks to these people being entirely the wrong kind of terminally online.
I guess UNESCO, like all right-thinking people, really likes the anime animal-girl mascots and gives preference to any product that has one.
Isn't it more grammatically correct to say "Jeffreys Epstein"?
No, I'm sure this time we can identify the person or people who are divinely anointed to exercise absolute power over everyone.
I've got to acknowledge the sheer guts it takes to look at arguably the most predictable consequence of the cyberpunk dystopia you're building and say "nah that won't happen because reasons."
In each case, existing social and communication-oriented tasks tended to be displaced by new tasks that involved more interaction with the robots than with the residents. Instead of saving time for staff to do more of the human labor of social and emotional care, the robots actually reduced the scope for such work.
That's legitimately chilling. I guess, just like quality of art and writing is too hard to quantify against "efficiency" and "productivity," so is quality of care. The slow AIs are literally optimizing humans out of the economy before our eyes, and the people who were most afraid of being turned into paperclips are the ones leading the goddamn charge.
I'm not familiar with the cannibals-and-missionaries framing of the puzzle, but reading through it, the increasingly simplified notation reads almost like a comp sci textbook trying to find or outline an algorithm, except for an incredibly simple problem. We also see it once again explicitly acknowledge and then implicitly discard part of the problem; in this case it opens by acknowledging that each boat can carry up to 6 people and needs at least one person aboard, but somehow gets stuck on the pattern that trips have to alternate left and right and each trip can only use one boat. It's still pattern matching rather than reasoning, even if the matching gets more sophisticated.
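For contrast, here's how small the actual search is. This is a minimal sketch assuming the standard missionaries-and-cannibals rules (cannibals can't outnumber missionaries on either bank), with the boat capacity as a parameter; the variant in the post may differ, but the point stands: a dumb breadth-first search over bank states needs no trip-alternation pattern at all, and with a capacity of 6 the "hard" puzzle collapses to a single crossing.

```python
from collections import deque

def solve(total_m=3, total_c=3, capacity=2):
    """BFS over (missionaries_left, cannibals_left, boat_on_left) states."""
    def safe(m, c):
        return m == 0 or m >= c  # a bank is safe if missionaries aren't outnumbered

    start = (total_m, total_c, True)   # everyone (and the boat) on the left bank
    goal = (0, 0, False)               # everyone on the right bank
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []                  # walk the parent links back to the start
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        m_left, c_left, boat_left = state
        # People available to row from the boat's current bank.
        avail_m = m_left if boat_left else total_m - m_left
        avail_c = c_left if boat_left else total_c - c_left
        for dm in range(avail_m + 1):
            for dc in range(avail_c + 1):
                if not 1 <= dm + dc <= capacity:
                    continue           # at least one rower, at most `capacity`
                sign = -1 if boat_left else 1
                nm, nc = m_left + sign * dm, c_left + sign * dc
                nxt = (nm, nc, not boat_left)
                if nxt not in parents and safe(nm, nc) and safe(total_m - nm, total_c - nc):
                    parents[nxt] = state
                    queue.append(nxt)
    return None

print(len(solve(capacity=2)) - 1)  # classic rules: 11 crossings
print(len(solve(capacity=6)) - 1)  # capacity 6: one crossing, everyone at once
```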
It's pretty sobering to see the financials laid out like this, and Ed even highlights some areas of uncertainty as though begging someone from OpenAI or Microsoft to provide the information to rebut his conclusions.
Orange site really is out here reinventing hard behaviorism.
"We can't directly observe internal states beyond our own subjectivity" -> Let's try to ignore them and see what we get" -> "We've developed a model that doesn't feature internal states as a meaningful element of cognition" -> "there are no internal states" -> "I know I'm a stochastic parrot but what are you?"
I think we're going to see an ongoing level of AI-enabled crapification for coding, and especially for spam. I'm guessing there's going to be enough money from the spam markets to support enough continued development to keep up with new languages and whatever paradigms are in vogue, so vibe coding is probably going to stick around on some level, but I doubt we're going to see major pushes forward.
One thing that this has shown is how much of internet content "creation" and "communication" is done entirely for its own sake, or to satisfy some kind of algorithm or metric. If nobody cares whether it actually gets read, then it makes economic sense to automate the writing as much as possible, and apparently LLMs are "good enough" at that to provide plausible deniability and stave off existential dread in the email mines.
I don't know, I think we're just talking about using AI to make the government more efficient, which is basically just the stated policy goal at this point.
I think the other big objection is that the value of the information you can get from a prediction market only approaches usability as the time to market close approaches zero. If you're trying to predict whether an event is actually going to happen, you usually want to know with enough lead time to actually do something about it, but that "do something about it" is going to impact the event being predicted and get "priced in."
It's that old business aphorism about making a metric into a target. Even if prediction markets were unambiguously useful as informational tools and didn't have any of the incredibly obvious perverse incentives and power imbalances that they do, as soon as you try to actually use that information to do anything, the market will start to change based on the perception of the market itself. Like, if there's a market on someone being assassinated, you need to factor in not only the chances of it happening on its own, but also the chances of it happening given that a high likelihood from the prediction market will result in additional safety measures being deployed, or that a low likelihood from the market may cause the target to take on riskier public appearances or otherwise create more opportunities. If you don't actually use the information for anything then it might be capturing something, but that something becomes wildly self-referential if the information is actually used in any way.
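To make the self-reference concrete, here's a toy sketch. The linear response function and every number in it are invented for illustration; the point is just that once the price feeds back into the event, the only consistent price is a fixed point of that loop rather than an estimate of anything external:

```python
# Toy model of the self-reference problem: the market price p is
# supposed to estimate the probability of an event, but any action
# taken on the price changes that probability. All numbers invented.

def true_probability(price, base=0.30):
    """Event probability *given* that the market shows `price`.

    A high price triggers countermeasures (event less likely); a low
    price breeds complacency (event more likely). Both responses are
    modeled as simple linear adjustments to the baseline.
    """
    suppression = 0.5 * price        # at price=1, half the risk is removed
    complacency = 0.2 * (1 - price)  # at price=0, risk grows by 20%
    p = base * (1 - suppression + complacency)
    return max(0.0, min(1.0, p))

# An "efficient" market should settle where the price equals the event
# probability given that price, i.e. a fixed point p = f(p).
p = 0.30  # start from the no-intervention baseline
for _ in range(100):
    p = true_probability(p)

print("no-intervention baseline: 0.300")
print(f"price the market settles on: {p:.3f}")
```

The number it settles on is only "correct" conditional on everyone continuing to act on it; act on any other belief and the probability it was supposedly measuring moves out from under you.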