this post was submitted on 01 Oct 2025
644 points (99.4% liked)

Programmer Humor

26846 readers
633 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

founded 2 years ago
 
[–] r00ty@kbin.life 96 points 1 week ago (2 children)

I have a tool that I wrote, probably 5+ years ago. Runs once a week, collects data from a public API, translates it into files usable by the asterisk phone server.

I totally forgot about it. Checked. Yep, up to date files created, all seem in the right format.

Sometimes things just keep working.

[–] RustyNova@lemmy.world 52 points 1 week ago (5 children)

Meanwhile, I had to debug a script that zipped a zip recursively, with the new data appended each run. The server barely had any storage left, as the zip had grown to almost 200 GB (the actual data is only 3 GB). I looked at the logs; last successful run: 2019.

[–] r00ty@kbin.life 19 points 1 week ago (1 children)

Yes, I've had the same happen. Something that should be simple, failing for stupid reasons.

[–] RustyNova@lemmy.world 13 points 1 week ago (2 children)

Well, it's not that simple... because whoever wrote it made it way too complicated (and the production version had been tweaked without updating the dev version either).

A clean rewrite with some guard clauses removed the hadouken ifs, and actually zipping the file outside of the zipped directory helped a lot.

[–] r00ty@kbin.life 7 points 1 week ago (1 children)

I mean, I have to say I've hastened my own demise (in program terms) by over-engineering something that should be simple. Sometimes adding protective guardrails actually causes errors when something changes.

[–] Quantenteilchen@discuss.tchncs.de 3 points 1 week ago (1 children)

Am I understanding that last part correctly?

[...] and actually zipping the file outside of the zipped directory helped a lot

Did they just automatically create a backup zip-bomb in their script‽

[–] RustyNova@lemmy.world 9 points 1 week ago

I oversimplified it but the actual process was to zip files to send to an FTP server

The cron job zipped the files to send in the same directory as the files being zipped, then sent the zip, then deleted the zip.

Looks fine, right? But what if the FTP server is slow and uploading takes more time than the hourly cron dispatch? You now have a second run of the script zipping the whole folder, previous zip included, which slows the upload down further, etc...

I believe it may have been started by an FTP upload erroring out and forcing an early return without any cleanup, and it progressively got worse.

... I suppose this happened. The logging was actually broken: it didn't include the message part of the error object, only the memory address of it.
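That snowball is easy to reproduce. A minimal sketch (using tar as a stand-in for zip, with made-up paths): each run archives the outbox, and because the previous run's archive was left inside it, every archive swallows the last.

```shell
# Simulate the buggy hourly job in a throwaway directory.
work=$(mktemp -d)
mkdir "$work/outbox"
echo data > "$work/outbox/report.txt"

# Run 1: the archive ends up inside the directory being archived, the
# FTP upload fails, and the cleanup step never runs.
tar -czf "$work/run1.tar.gz" -C "$work" outbox
mv "$work/run1.tar.gz" "$work/outbox/"

# Run 2: the new archive now contains the previous one, so the size
# snowballs on every dispatch.
tar -czf "$work/run2.tar.gz" -C "$work" outbox
if tar -tzf "$work/run2.tar.gz" | grep -q 'run1.tar.gz'; then
    result=snowball
else
    result=clean
fi
echo "$result"   # snowball

# The fix: write the archive to a path *outside* the archived directory,
# and delete it only after the upload succeeds.
rm -rf "$work"
```

A guard clause for "previous archive still present" plus archiving outside the source directory is enough to stop the feedback loop.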

[–] Gonzako@lemmy.world 20 points 1 week ago (2 children)

Yeah, all these simple data processing scripts will always work as long as both sides stay the same/compatible

[–] r00ty@kbin.life 21 points 1 week ago

Yep. It seems they haven't changed a thing about the format. Probably a script much older than mine on their end is generating it too.

[–] MonkderVierte@lemmy.zip 9 points 1 week ago (1 children)

Isn't that true for all of data processing?

[–] Gonzako@lemmy.world 8 points 1 week ago

Maybe. But webdevs have made it their mission to make it seem otherwise.

the final part of that is "written by a person that left the company ten years ago"

[–] rumba@lemmy.zip 47 points 1 week ago

I don't see the alias in your .bashrc

Yeah, um, about that. I have no idea where it comes from. We can type `alias` and see what it is, so if it's ever lost we can recreate it, but I looked for 30 minutes yesterday, even did a grep -R, and I have NO IDEA where it comes from, or why it's named electricboogaloo
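For what it's worth, the hunt can be widened beyond .bashrc: bash sources several files at startup, and an alias can hide in any of them (or in something they in turn source). A sketch, using a temp directory to stand in for $HOME and the alias name from the comment:

```shell
# Stand-in for $HOME; in real life you'd grep your actual dotfiles.
fakehome=$(mktemp -d)
echo "alias electricboogaloo='echo boo'" > "$fakehome/.bash_aliases"

# grep -R descends into the directory and does look inside dotfiles.
# -l prints only the matching filename, -s silences permission errors.
hit=$(grep -Rls 'electricboogaloo' "$fakehome")
echo "$hit"

# Real-world version: check everything bash may source, not just ~/.bashrc:
#   grep -Rns electricboogaloo ~/.bashrc ~/.bash_profile ~/.bash_aliases \
#       ~/.profile /etc/bash.bashrc /etc/profile /etc/profile.d/
rm -rf "$fakehome"
```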

[–] Gonzako@lemmy.world 37 points 1 week ago (1 children)
[–] cenzorrll@lemmy.ca 13 points 1 week ago* (last edited 1 week ago)

Ha, loser.

*glances over at 6 bash scripts and 2 cron jobs*

Not you, you're perfect

[–] dotslashme@infosec.pub 31 points 1 week ago (3 children)

My current project has a crontab with 216 entries.

[–] pinball_wizard@lemmy.zip 34 points 1 week ago (3 children)

Well, here's a sentence I haven't been tempted to use before:

"I believe that may be too many crontab entries."

[–] DickFiasco@sh.itjust.works 19 points 1 week ago (2 children)

Any problem in server administration can be solved with an additional crontab entry. Except for the problem of too many crontab entries.

[–] Opisek@piefed.blahaj.zone 10 points 1 week ago* (last edited 1 week ago)

And that's why I added a crontab entry that periodically purges my cron configuration. That way, I'm forced to re-add only the truly necessary cron jobs, successfully reducing the number of crontab entries.

[–] cupcakezealot@piefed.blahaj.zone 10 points 1 week ago (1 children)

just randomly delete 50 of them.

[–] pinball_wizard@lemmy.zip 17 points 1 week ago (1 children)

Yes. The strongest crontab entries will probably restore themselves. (For anyone reading along, this is sarcasm. Don't do this.)

a crontab can regenerate from bisection to form two whole crontabs

[–] rumba@lemmy.zip 3 points 1 week ago (1 children)

pshaw, just drop in there and combine a few

/etc/cron.d/first25 /etc/cron.d/second25 ...

[–] Lightfire228@pawb.social 7 points 1 week ago

Use SystemD timers, you animal

[–] marcos@lemmy.world 5 points 1 week ago (2 children)

At some point it may be good to migrate to airflow or something similar.

It's not the number of entries that makes it bad. It's the fact that if you run crontab, they are gone...

[–] dondelelcaro@lemmy.world 10 points 1 week ago (1 children)

That's why there's a crontab rule to load the crontab from a file. Cronception if you will.

[–] marcos@lemmy.world 7 points 1 week ago (1 children)

Make the rule start a secondary cron system. Otherwise it won't run after you erase the crontab.

[–] dondelelcaro@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

Here you go:

# with-lock-ex (from chiark-utils) holds an exclusive lock on the
# lockfile so only one copy of this loop runs; it reinstalls the
# crontab from cronfile every minute, so an accidental `crontab -r`
# lasts at most 60 seconds.
with-lock-ex -q /path/to/lockfile sh -c '
while true; do
    crontab cronfile;
    sleep 60;
done;'
[–] bleistift2@sopuli.xyz 8 points 1 week ago* (last edited 1 week ago) (1 children)

At first I thought you missed the -r. Then I checked. Defaulting to STDIN here is very, very dumb, IMHO. Almost as bad as putting the “edit” flag right next to the “delete everything without confirmation” flag on a Western keyboard (-e vs -r).
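A couple of config lines blunt that particular foot-gun (assuming cronie/Vixie cron, where the -i flag makes -r prompt before wiping):

```shell
# In ~/.bashrc: make `crontab -r` ask for confirmation first.
alias crontab='crontab -i'

# In the crontab itself: keep a rolling backup, so even a successful
# wipe is a one-line restore (`crontab ~/crontab.backup`).
0 * * * *  crontab -l > ~/crontab.backup
```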

[–] marcos@lemmy.world 7 points 1 week ago

Crontab is a really badly designed program that we just can't fix, because everybody depends on its WTFs for something.

[–] ag10n@lemmy.world 28 points 1 week ago

Suck my dick O’Leary

Nah bro, that bash alias is FULLY documented in .bashrc! Idiot.

[–] A_norny_mousse@feddit.org 16 points 1 week ago

A self-written shell script "daemon" that tails & greps log output for "ERR|FAIL"
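The classic shape of that "daemon", for anyone who hasn't inherited one yet (log path and alert action are hypothetical; --line-buffered stops grep from sitting on matches):

```shell
# The long-running version (never exits, so shown as a comment):
#   tail -Fn0 /var/log/app.log | grep --line-buffered -E 'ERR|FAIL' \
#     | while read -r line; do echo "ALERT: $line"; done

# The filtering logic itself, demonstrated on a finite sample:
sample='INFO started
ERR disk full
INFO heartbeat
FAIL backup job'
alerts=$(printf '%s\n' "$sample" | grep -cE 'ERR|FAIL')
echo "$alerts"   # 2
```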

[–] barnaclebutt@lemmy.world 11 points 1 week ago

I know there's a meme here, but as a Canadian, I'm sorry about that traitorous asshat.

[–] Adderbox76@lemmy.ca 8 points 1 week ago (2 children)

I'll hear NO aspersions against my precious Cron!

Cron is magic. Cron is civilization!

[–] phutatorius@lemmy.zip 4 points 1 week ago

Naw, mate, that's Crom.

[–] AnanasMarko@lemmy.world 7 points 1 week ago (1 children)

Since I'm somewhat of a simpleton... isn't that how pipelines actually work? The only difference being that all the scripts are available from a centralized system and triggered e.g. by webhooks?

Instead of a local script on a server, the system opens e.g. an SSH session and runs the script step by step remotely?

So is that the joke or am I missing something?

[–] orhtej2@eviltoast.org 13 points 1 week ago

Pipelines are meant to be versioned and replicable, as opposed to a hack job that only runs on a forgotten server in someone's closet, as depicted in the meme.

[–] gmtom@lemmy.world 6 points 1 week ago

Oh man, you guys should see what I was cooking up at my old place.

Head office was too shitty to give us an actual asset-management solution, but we did have full access to the Microsoft suite. So I used SharePoint lists as databases, PowerApps apps running on iPads for all the data-entry UX, and about two dozen hacked-together Power Automate flows linking them all together, as well as pulling any info out of the actual IT systems head office used. And since we didn't have API access to those systems, any data feeding back into them took the form of automated emails that the poor 1st-line techs at head office had to sort through and process manually.

[–] desmosthenes@lemmy.world 6 points 1 week ago

I feel attacked
