[–] clay_pidgin@sh.itjust.works 8 points 1 month ago (4 children)

This is a very interesting story. The Therac-25 was an early computer-controlled radiation therapy machine, and design flaws led to patients receiving radiation burns and at least one death.

Here's a good podcast episode about it: https://www.pushkin.fm/podcasts/cautionary-tales/captain-kirk-forgot-to-put-the-machine-on-stun

[–] BootLoop@sh.itjust.works 9 points 1 month ago (3 children)

It's usually taught in university Computer Science programs, in the ethics courses.

[–] clay_pidgin@sh.itjust.works 4 points 1 month ago (2 children)

Great idea. The company said they couldn't reproduce the error, but it turned out to be triggered by unanticipated operator behavior.
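
For a sense of what that looks like in code, here's a minimal sketch in Python (hypothetical; the real software was PDP-11 assembly) of the kind of check-then-act race the investigators described: the treatment task snapshots the operator's parameters once, so a fast edit made after the snapshot is silently lost.

```python
import threading
import time

# Hypothetical sketch, not the real Therac-25 code. The treatment task
# snapshots the shared parameters once; an operator edit that lands
# after the snapshot is silently ignored.

params = {"mode": "xray", "beam_current": "high"}  # shared, unsynchronized
snapshot_taken = threading.Event()

def treatment_task():
    sampled = dict(params)            # check: snapshot the parameters here
    snapshot_taken.set()
    time.sleep(0.1)                   # setup delay before the beam fires
    print("firing with:", sampled)    # act: uses the stale snapshot

def operator_edit():
    snapshot_taken.wait()
    # The operator corrects the entry a moment too late: the treatment
    # task already took its snapshot, so this edit is lost.
    params["mode"] = "electron"
    params["beam_current"] = "low"

t1 = threading.Thread(target=treatment_task)
t2 = threading.Thread(target=operator_edit)
t1.start(); t2.start()
t1.join(); t2.join()
print("parameters on screen:", params)  # the edit the machine never saw
```

The standard fix is to validate the parameters atomically at fire time, or to invalidate the setup whenever the operator edits anything.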

[–] scratchee@feddit.uk 6 points 1 month ago (1 children)

The company did many things wrong; it's an almost idealised example of a total failure to take software seriously.

Most importantly, they decided they didn't need to test the software on their new machines because they'd already shipped previous machines running the same software, so they "knew it worked". The previous machines had hardware interlocks that made it impossible for the software to cause a massive dosing error; the new machine was entirely software controlled.
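
One of the concrete bugs from Leveson and Turner's postmortem shows why losing the interlocks mattered: a one-byte flag (called "Class3" in the report) was incremented on every setup pass instead of being set to a constant, so every 256th pass it wrapped to zero, and a zero flag was read as "no check needed". A rough sketch of the logic, with hypothetical names and in Python rather than the original assembly:

```python
# Hypothetical reconstruction of the one-byte overflow bug. Incrementing
# the flag instead of setting it means it wraps to 0 every 256th pass,
# and a zero flag skips the position check entirely.

FLAG = 0

def setup_pass(turntable_in_position: bool) -> bool:
    """Returns True if the software would allow the beam to fire."""
    global FLAG
    FLAG = (FLAG + 1) & 0xFF          # one-byte increment: wraps at 256
    if FLAG != 0:
        return turntable_in_position  # normal path: position check runs
    return True                       # wrapped to 0: check silently skipped

# Out of 1024 passes with the turntable *out of position*, the software
# still clears the beam on every 256th pass:
allowed = [i + 1 for i in range(1024) if setup_pass(turntable_in_position=False)]
print(allowed)  # [256, 512, 768, 1024]
```

On the older machines a hardware interlock would have refused to fire regardless of what the flag said; on the Therac-25 this wraparound was the last line of defense.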

Also, they had exactly one "very smart" engineer build the software, and he had naturally written it for a machine with hardware safety interlocks. To be fair, I'm sure he was very smart, but safety-critical code and solo projects are not a great combo.

Also, they had no mechanism to ensure failures would be communicated to their engineer~~s~~ for investigation (failures were reported to the company and then dropped into a black hole and forgotten about).

Also they didn’t even have any capability to test their machines after failures started popping up, because they knew the code worked perfectly so they didn’t need to waste any time or money on qa capability, massively slowing down their ability to fix things once people started dying

The single engineer wasn't mentioned in the podcast episode, but the rest of it was. It's a really instructive story.

Really, the whole podcast is this kind of story.