Isn't it more grammatically correct to say "Jeffreys Epstein"?
YourNetworkIsHaunted
In each case, existing social and communication-oriented tasks tended to be displaced by new tasks that involved more interaction with the robots than with the residents. Instead of saving time for staff to do more of the human labor of social and emotional care, the robots actually reduced the scope for such work.
That's legitimately chilling. I guess that, just as the quality of art and writing is too hard to quantify against "efficiency" and "productivity," so is the quality of care. The slow AIs are literally optimizing humans out of the economy before our eyes, and the people who were most afraid of being turned into paperclips are the ones leading the goddamn charge.
I'm not familiar with the cannibals-and-missionaries framing of the puzzle, but reading through it, the increasingly simplified notation reads almost like a comp sci textbook trying to find or outline an algorithm, but for an incredibly simple problem. We also see it once again explicitly acknowledge and then implicitly discard part of the problem: in this case it opens by acknowledging that each boat can carry up to 6 people and that each boat needs at least one person, but somehow gets stuck on the pattern that trips must alternate left and right and that each trip can only use one boat. It's still pattern matching rather than reasoning, even if the matching gets more sophisticated.
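For reference, the standard version of this puzzle is trivial to brute-force; a few lines of breadth-first search over bank states solves it with no "reasoning" at all. This is a sketch of the classic 3-missionaries/3-cannibals, boat-capacity-2 variant (an assumption on my part; the version in the thread had bigger boats and more of them), not whatever the model was doing:

```python
from collections import deque

def safe(m, c):
    # A bank is safe if it holds no missionaries, or at least as
    # many missionaries as cannibals.
    return m == 0 or m >= c

def solve(total=3, capacity=2):
    # State = (missionaries on left, cannibals on left, boat on left).
    start = (total, total, True)
    goal = (0, 0, False)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (m, c, left), path = frontier.popleft()
        if (m, c, left) == goal:
            return path  # list of (missionaries, cannibals) boatloads
        # Try every boatload of 1..capacity people.
        for dm in range(capacity + 1):
            for dc in range(capacity + 1 - dm):
                if dm + dc == 0:
                    continue
                nm, nc = (m - dm, c - dc) if left else (m + dm, c + dc)
                if not (0 <= nm <= total and 0 <= nc <= total):
                    continue
                if not (safe(nm, nc) and safe(total - nm, total - nc)):
                    continue
                state = (nm, nc, not left)
                if state not in seen:
                    seen.add(state)
                    frontier.append((state, path + [(dm, dc)]))
    return None

print(len(solve()))  # the classic variant takes 11 crossings
```

BFS by construction finds the shortest sequence of crossings, which is the whole point: there's no cleverness here, just exhaustive enumeration of a tiny state space.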
Orange site really is out here reinventing hard behaviorism.
"We can't directly observe internal states beyond our own subjectivity" -> "Let's try to ignore them and see what we get" -> "We've developed a model that doesn't feature internal states as a meaningful element of cognition" -> "There are no internal states" -> "I know I'm a stochastic parrot but what are you?"
I think we're going to see an ongoing level of AI-enabled crapification for coding and especially for spam. I'm guessing there's going to be enough money from the spam markets to support a level of continued development to keep up to date with new languages and whatever paradigms are in vogue, so vibe coding is probably going to stick around on some level, but I doubt we're going to see major pushes.
One thing that this has shown is how much of internet content "creation" and "communication" is done entirely for its own sake or to satisfy some kind of algorithm or metric. If nobody cares whether it actually gets read then it makes economic sense to automate the writing as much as possible, and apparently LLMs represent a "good enough" ability to do that for plausible deniability and staving off existential dread in the email mines.
The fact that it appears to be trying to create a symbolic representation of the problem is interesting, since that's the closest I've ever seen this come to actually trying to model something rather than just spewing raw text, but the model itself looks nonsensical, especially for such a simple problem.
Did you use any of that kind of notation in the prompt? Or did some poor squadron of task workers write out a few thousand examples of this notation for river crossing problems in an attempt to give it an internal structure?
I guess UNESCO, like all right-thinking people, really likes the anime animal-girl mascots and gives preference to any product that has one.