this post was submitted on 13 Mar 2025
1477 points (99.8% liked)

People Twitter

6360 readers
2173 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] Zachariah@lemmy.world 11 points 21 hours ago

This, but for tech bros.

[–] balderdash9@lemmy.zip 9 points 22 hours ago (24 children)

DeepSeek is pretty good tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow-up questions can clarify.

[–] lalala@lemmy.world 1 point 16 hours ago

I think AI has now reached the point where it can deceive people, even if it's not equal to humanity.

[–] OsrsNeedsF2P@lemmy.ml -1 points 17 hours ago* (last edited 17 hours ago) (2 children)

Oof let's see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas and find holes in my design planning before each session.

Does it make mistakes? Not really? It has a hard time getting creative with nuanced examples (e.g. if you ask it to "give practical examples where the time/accuracy tradeoff in Flink is important", it can't come up with more than 1 or 2 truly distinct examples), but it's never wrong.
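
(For anyone curious what I mean by that tradeoff: in Flink it shows up as the watermark delay and allowed lateness — how long you wait for out-of-order events before emitting a window result. A minimal sketch, assuming the Flink 1.x-style Java DataStream API; the Event type and the specific durations are made up for illustration:)

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class LatenessTradeoffSketch {

    // Hypothetical event type, just for illustration.
    public static class Event {
        public String key;
        public long timestamp; // event time, epoch millis
        public long value;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a real source (Kafka, files, etc.).
        Event sample = new Event();
        sample.key = "k";
        sample.timestamp = 0L;
        sample.value = 1L;
        DataStream<Event> events = env.fromElements(sample);

        events
            // Watermarks lag the largest seen timestamp by 5s: a bigger delay
            // means higher latency but fewer events wrongly treated as late
            // (more accuracy); a smaller delay means the opposite.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                    .withTimestampAssigner((e, ts) -> e.timestamp))
            .keyBy(e -> e.key)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            // Keep window state 30s past the watermark and re-fire on late
            // arrivals: results come out sooner but may be revised later.
            .allowedLateness(Time.seconds(30))
            .sum("value")
            .print();

        env.execute("lateness-tradeoff-sketch");
    }
}
```

Turn the delay up and windows are more complete but slower; turn it down and you get fast answers that late events can contradict. Those are the kinds of distinct, practical variations it struggles to enumerate.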

The only times it's blatantly wrong are when it hallucinates due to lack of context (or oversaturated context). But you can kind of tell when something doesn't make sense and prod it with follow-ups.

Tl;dr: funny meme, would be funnier if true.

[–] RagingRobot@lemmy.world 5 points 17 hours ago

That's not been my experience with it. I'm a software engineer, and when I ask it stuff it usually gives plausible answers, but there is always something wrong. For example, it will recommend old, outdated libraries, or patterns that look like they would work, but when you try them out you find they're set up differently now or never existed at all.

I have been using Windsurf to code recently and I'm liking it, but it makes some weird choices sometimes, and it's way too eager to code, so it spits out a ton of code you need to review. It would be easy to get it to generate a bunch of spaghetti code that mostly works but isn't maintainable by a person out of the box.

[–] spooky2092@lemmy.blahaj.zone 3 points 16 hours ago

I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask them whether what they gave me was actually real.

Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don't exist. I give it plenty of context on what I'm using and trying to do, and it makes up shit based on what it thinks I want to hear.
