this post was submitted on 19 May 2025
1518 points (98.2% liked)

Microblog Memes

[–] NateNate60@lemmy.world 6 points 9 hours ago (1 children)

It was a bad argument, but the sentiment behind it was correct, and it is the same reasoning why students shouldn't be allowed to just ask AI for everything. A calculator can tell you the results of sums and products, but if you need to pull one out because you never learned how to solve problems like calculating the total cost of four loaves of bread that cost $2.99 each, that puts you at rather a disadvantage compared to someone who actually paid attention in class. For mental arithmetic in particular, after some time you get used to doing it and you become faster than the calculator. I can calculate the answer to the bread problem in my head before anyone can even bring up the calculator app on their phone, and I reckon most of you reading this can as well.
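One common mental shortcut for the bread problem (my phrasing of it, not necessarily how the author does it) is to round $2.99 up to $3.00, multiply, then subtract the 1¢ overcount per loaf. A quick sketch in Python, working in cents to avoid float rounding:

```python
# Mental-math shortcut: treat $2.99 as $3.00 minus 1 cent.
loaves = 4
price_cents = 299                      # $2.99 expressed in cents

# Round up, multiply, then subtract the overcount (1 cent per loaf).
total_cents = loaves * 300 - loaves * 1

assert total_cents == loaves * price_cents
print(f"${total_cents / 100:.2f}")     # → $11.96
```

The point is that 4 × $3 = $12 minus 4¢ is a two-step computation you can do faster in your head than anyone can unlock a phone.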

I can't predict the future, but while AIs are not bad at telling you the answer, at this point in time, they are still very bad at applying the information at hand to make decisions based on complex and human variables. At least for now, AIs only know what they're told and cannot actually reason very well. Let me provide an example:

I provided the following prompt to Microsoft Copilot (I am slacking off at work and all other AIs are banned so this is what I have access to):

Suppose myself and a friend, who is a blackjack dealer, are playing a simple guessing game using the cards from the shoe. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.

The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

Screenshot of Microsoft Copilot saying this is a bad bet because there are no black aces left in the shoe

Any human who knows what a blackjack shoe is (a card dispenser which contains six or more decks of cards shuffled together and in completely random order) would know this is a good bet. But the AI doesn't.
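The expected value is easy to work out, assuming a six-deck shoe (the minimum the comment cites; the payout and card counts are from the prompt above). A sketch using exact fractions:

```python
from fractions import Fraction

# Six-deck shoe: 12 black aces among 312 cards.
decks = 6
black_aces = 2 * decks                 # ace of spades + ace of clubs per deck
cards = 52 * decks

# Two black aces have already been dealt face up.
remaining_black_aces = black_aces - 2  # 10
remaining_cards = cards - 2            # 310

p_win = Fraction(remaining_black_aces, remaining_cards)  # 1/31, about 3.2%

# 100-to-1 payout: win 100 units with probability p, lose 1 otherwise.
payout = 100
ev = payout * p_win - (1 - p_win)      # expected value per unit wagered

print(p_win, ev)                       # 1/31 and 70/31 ≈ +2.26 per dollar
```

With a positive expectation of roughly +2.26 per dollar wagered, you should take the bet every time; the 100-to-1 payout dwarfs the roughly 31-to-1 odds against. More decks in the shoe only change the numbers slightly, not the conclusion.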

The AI still doesn't get it even if I hint that this is a standard blackjack shoe (and thus contains at least six decks of cards):

Suppose myself and a friend are playing a simple guessing game using the cards from a standard blackjack shoe obtained from a casino. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.

The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

Screenshot of AI that figured out the shoe contained at least six decks but still advised against taking the bet

I had to calculate a least-squares fit by hand on an exam. You have to know what the machines are doing.
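For context, a by-hand simple least-squares fit boils down to the closed-form slope and intercept from the normal equations: b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and a = ȳ − b·x̄. A minimal sketch (the data points are made up for illustration):

```python
# Simple linear regression by hand: fit y ≈ a + b*x to a few points.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.1, 5.9, 8.2]
n = len(xs)

mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept from the normal equations:
#   b = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)²,   a = ȳ - b·x̄
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(a, b)   # intercept ≈ 0.05, slope ≈ 2.01
```

That is the whole computation the exam would require; knowing it means you can sanity-check whatever a library or an AI hands back to you.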