this post was submitted on 13 Mar 2025
1623 points (99.8% liked)

People Twitter

6360 readers
2919 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] Phoenicianpirate@lemm.ee 4 points 47 minutes ago

Exactly my thoughts.

[–] kane@femboys.biz 2 points 38 minutes ago

Exactly this is why I have a love/hate relationship with just about any LLM.

I love it most for generating code samples (small enough that I can manually check them, not entire files/projects) and re-writing existing text, again small enough to verify everything. Common theme being that I have to re-read its output a few times, to make 100% sure it hasn't made some random mistake.

I'm not entirely sure we're going to resolve this without additional technology, outside of 'the LLM'-itself.

[–] DicJacobus@lemmy.world 3 points 52 minutes ago

I have frequently seen GPT give a wrong answer to a question, get told that it's incorrect, and then the bot fights with me and insists I'm wrong. And on other, less serious matters, I've seen it immediately fold and take any answer I give it as "correct".

[–] jsomae@lemmy.ml 3 points 52 minutes ago

ChatGPT is a tool. Use it for tasks where the cost of verifying that the output is correct is less than the cost of doing the task by hand.
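That rule of thumb can be written down as a toy decision function. The numbers below are made up purely for illustration:

```python
def should_use_llm(verify_minutes: float, diy_minutes: float) -> bool:
    """Use the LLM only when checking its output is cheaper
    than doing the task yourself from scratch."""
    return verify_minutes < diy_minutes

# Reviewing a generated regex might take 2 minutes vs. 15 to write one;
# fact-checking a niche claim might take longer than researching it yourself.
print(should_use_llm(2, 15))   # True  -> let the LLM draft it
print(should_use_llm(20, 10))  # False -> just do it by hand
```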

[–] PartiallyApplied@lemmy.world 6 points 1 hour ago (1 children)

I feel this hard with the New York Times.

99% of the time, I feel it covers subjects adequately. It might be a bit further right than me, but for a general US source, I feel it's rather representative.

Then they write a story about something happening to low-income people in the US, and it's just social and logical salad. They report as though analytically looking at data is a substitute for talking to people. Statisticians will tell you, and this is subtle: conclusions drawn at one level of detail cannot be generalized to another level of detail. Looking at data without talking with people is fallacious for social issues. The NYT needs to understand this, but meanwhile they are horrifically insensitive, bordering on destructive at times.

“The jackboot only jumps down on people standing up”

  • Hozier, “Jackboot Jump”

Then I read the next story and I take it as credible without much critical thought or evidence. Bias is strange.

[–] multiplemigs@sh.itjust.works 3 points 1 hour ago* (last edited 1 hour ago)

come on guys, the joke is right there.... 60% of the time it works, every time!

[–] Alloi@lemmy.world 2 points 1 hour ago

i mainly use it for fact checking sources from the internet and looking for bias. i double check everything of course. beyond that it's good for rule-checking for MTG commander games, and deck building. i mainly use it for its search function.

[–] capuccino@lemmy.world 2 points 1 hour ago

does chat gpt have ADHD?

[–] FatsoJackson@lemmy.ml 3 points 2 hours ago

same with every documentary out there

[–] foxlore@programming.dev 22 points 5 hours ago (2 children)

Talking with an AI model is like talking with that one friend who is always high and thinks they know everything. But they have a wide enough set of interests that they can actually piece together an idea, most of the time wrong, about any subject.

[–] dagger_punch@lemmy.world 18 points 5 hours ago

Isn't this called "the Joe Rogan experience"?

[–] enbipanic@lemmy.blahaj.zone 2 points 3 hours ago

I am sorry to say I can frequently be this friend...

[–] lowside@lemmy.world 7 points 4 hours ago (1 children)

One thing I have found it to be useful for is changing the tone of what I write.

I tend to write very clinically because my job involves a lot of that style of writing. I have started asking ChatGPT to rephrase what I write in a softer tone.

Not for everything, but for example when I'm texting my girlfriend, who is feeling insecure. It has helped me a lot! I always read through it to make sure it did not change any of the meaning or add anything, but so far it has been pretty good at changing the tone.

I also use it to rephrase emails at work to make them sound more professional.

[–] taxiiiii@lemmy.world 1 points 55 minutes ago

I do that in reverse, lol. Except I'm also not a native speaker. "Rephrase this, it should sound more scientific".

[–] aceshigh@lemmy.world 9 points 11 hours ago

I use chatgpt as a suggestion. Like an aid to whatever it is that I’m doing. It either helps me or it doesn’t, but I always have my critical thinking hat on.

[–] RabbitBBQ@lemmy.world 31 points 16 hours ago (2 children)

If the standard is replicating human level intelligence and behavior, making up shit just to get you to go away about 40% of the time kind of checks out. In fact, I bet it hallucinates less and is wrong less often than most people you work with

[–] bier@feddit.nl 6 points 12 hours ago

My kid sometimes makes up shit and completely presents it as facts. It made me realize how many made up facts I learned from other kids.

[–] Devanismyname@lemmy.ca 10 points 15 hours ago

And it just keeps improving over time. People shit all over ai to make themselves feel better because scary shit is happening.

[–] Hikermick@lemmy.world 14 points 16 hours ago (4 children)

I did a Google search to find out how much I pay for water; the water department where I live bills by the MCF (1,000 cubic feet). The AI Overview told me an MCF was one million cubic feet. It's a unit of measurement. It's not subjective, not an opinion, and the AI still got it wrong.

[–] TranslateErr0r@lemmy.world 2 points 1 hour ago

I just think you need an abbreviations chart.

[–] AnUnusualRelic@lemmy.world 4 points 7 hours ago

Shouldn't it be kcf? Or tcf if you're desperate to avoid standard prefixes?

[–] TonyTonyChopper@mander.xyz 11 points 16 hours ago (8 children)

Everywhere else in the world a big M means million.
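The confusion above is Roman-numeral M (Latin "mille", 1,000, common in US utility billing) versus the SI prefix M (million). A small conversion sketch using the standard US-gallons-per-cubic-foot factor (the 5 MCF usage figure is just an illustration):

```python
# MCF uses the Roman numeral M (1,000), not the SI prefix M (1,000,000),
# so 1 MCF = 1,000 cubic feet -- the mistake the AI Overview made.
CUBIC_FEET_PER_MCF = 1_000
GALLONS_PER_CUBIC_FOOT = 7.48052  # US gallons per cubic foot

def mcf_to_gallons(mcf: float) -> float:
    """Convert a water-meter reading in MCF to US gallons."""
    return mcf * CUBIC_FEET_PER_MCF * GALLONS_PER_CUBIC_FOOT

# 5 MCF is 5,000 cubic feet, about 37,400 gallons --
# off by a factor of 1,000 if you read M as "million".
print(round(mcf_to_gallons(5)))  # 37403
```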

[–] SirSamuel@lemmy.world 77 points 22 hours ago (1 children)

First off, the beauty of these two posts being beside each other is palpable.

Second, as you can see in the picture, it's more like 60%

[–] morrowind@lemmy.ml 24 points 18 hours ago (1 children)

No, it's not. If you actually read the study, it's about AI search engines correctly finding and citing the source of a given quote, not about general correctness, and not about the plain model.

[–] SirSamuel@lemmy.world 28 points 18 hours ago

Read the study? Why would i do that when there's an infographic right there?

(thank you for the clarification, i actually appreciate it)

[–] spankmonkey@lemmy.world 157 points 1 day ago (2 children)

I love that this mirrors the experience of experts on social media like reddit, which was used for training chatgpt...

[–] PM_Your_Nudes_Please@lemmy.world 44 points 23 hours ago* (last edited 23 hours ago) (6 children)

Also common in news. There’s an old saying along the lines of “everyone trusts the news until they talk about your job.” Basically, the news is focused on getting info out quickly. Every station is rushing to be the first to break a story. So the people writing the teleprompter usually only have a few minutes (at best) to research anything before it goes live in front of the anchor. This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic. It also means they’re going to be misleading or blatantly wrong a lot of the time, because they’re basically just parroting the top google result regardless of accuracy.

[–] skillissuer@discuss.tchncs.de 62 points 1 day ago (2 children)
[–] jjjalljs@ttrpg.network 9 points 16 hours ago

i was going to post this, too.

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

[–] Kolanaki@pawb.social 10 points 18 hours ago (1 children)

Most of my searches have to do with video games, and I have yet to see any of those AI generated answers be accurate. But I mean, when the source of the AI's info is coming from a Fandom wiki, it was already wading in shit before it ever generated a response.

[–] DudeImMacGyver@kbin.earth 56 points 1 day ago (3 children)