this post was submitted on 19 Oct 2025
840 points (98.5% liked)

Science Memes

17127 readers

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments
[–] Tollana1234567@lemmy.today 4 points 1 day ago (1 children)

On Reddit, it's just 50% bots, and bots arguing with other bots to drum up engagement.

[–] Huschke@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)

Says the person named . :)

[–] buttnugget@lemmy.world 2 points 1 day ago

I actually had to switch to those handles because at some point Reddit was just banning me for nothing all the time, and I got tired of thinking up new and unique names.

[–] FridaySteve@lemmy.world 16 points 2 days ago (5 children)

Why are so many people still using Twitter?

[–] Engywuck@lemmy.zip 0 points 1 day ago

Because they want to. Just a wild guess.

[–] vodka@feddit.org 115 points 3 days ago (2 children)
[–] Aceticon@lemmy.dbzer0.com 31 points 2 days ago* (last edited 2 days ago) (2 children)

Yeah, if you're supposedly in AI/ML and don't recognize a (stupidly simplified) diagram of a Neural Network, you don't really make stuff with it; you're just another user (probably a "prompt engineer").

Even people creating Machine Learning solutions with other techniques would recognize that as representing a Neural Network.

That should be as recognizable to a professional in that domain as a long string of 0s and 1s would be recognizable as binary to a programmer - even if you're not working with it at that level, you recognize the building blocks of your trade.

[–] NotANumber@lemmy.dbzer0.com 15 points 2 days ago* (last edited 2 days ago) (2 children)

To be more specific, this is an MLP (Multi-Layer Perceptron). Neural Network is a catch-all term that includes other things such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Diffusion models, and, of course, Transformers.

What you are arguing with online is some variant of a Generative Pre-trained Transformer. Those do have MLP or MoE layers, but that's only one part of what they are. They also have multi-head attention mechanisms and embedding + unembedding vectors.

I know all this and wouldn't call myself a machine learning expert. I just use these things. Though I did once train a simple MLP like the one in the picture. I think it's quite bad to call yourself a machine learning expert without knowing all of this and more.
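For anyone who wants to see how little is behind that diagram, here is a minimal sketch of a forward pass through a tiny MLP. The layer sizes, random weights and ReLU activation are all made up purely for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hypothetical sizes: 3 inputs -> 4 hidden units -> 2 outputs,
# matching the "columns of circles" in a typical diagram.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # lines between column 1 and 2
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # lines between column 2 and 3

def forward(x):
    h = relu(W1 @ x + b1)   # hidden layer (middle circles)
    return W2 @ h + b2      # output layer (last circles)

print(forward(np.array([1.0, 0.5, -0.2])))
```

Every circle in the meme corresponds to one entry of h or the output, and every line to one entry of W1 or W2.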

[–] Holytimes@sh.itjust.works 3 points 1 day ago

I can't help but read MLP as My Little Pony, and now I'm picturing you training a series of marshmallow horses to pretend to be human for the profits of our corporate overlords on social media.

[–] Aceticon@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago) (1 children)

Right, if I understood it correctly, what you see as "IF" is the multi-head attention stuff. I was under the impression that you can't actually have discontinuous functions there, so even the multi-head attention stuff involves functions which don't have +/- infinity in their first derivative - they can boost or suppress inputs, but they don't have the hard YES/NO transitions of a logical IF.
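To make that concrete: attention weights come out of a softmax, so they are smooth gates between 0 and 1 rather than hard IF branches. A toy sketch with made-up scores:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical attention scores for one query against four tokens.
scores = np.array([2.0, 0.5, -1.0, 0.1])
weights = softmax(scores)
print(weights)        # ~ [0.70, 0.16, 0.04, 0.10] - soft emphasis, no hard YES/NO
print(weights.sum())  # always 1.0
```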

However, the Genetic Algorithms stuff is something completely different from Neural Networks: it's basically an Evolutionary method of finding the best "formula" to turn inputs into the desired outputs. You assess different variants of the "formula" against the training data, pick the best ones, generate a new generation of "formula" variants from those, assess them again, and keep going until the error rate is below a certain value - it's basically a way of applying "Natural" Selection to mathematical formulas.

As far as I can tell, Genetic Algorithms can't really scale to the size of something like an LLM (the training requirements would be even more insane), though I guess the technique could be used to train part of a Neural Network or to create functional blocks that work together with NNs.
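For reference, a rough sketch of the evolutionary loop described above. The "formula" here is just a 3-number weight vector for a toy fitting problem, and the population size, mutation scale and stopping threshold are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([3.0, -2.0, 0.5])           # the "formula" we hope to evolve

def error(candidate):
    return np.sum((candidate - target) ** 2)  # fitness: lower is better

population = [rng.normal(size=3) for _ in range(50)]
for generation in range(200):
    population.sort(key=error)                # assess every variant
    best = population[0]
    if error(best) < 1e-3:                    # stop once close enough (may not be hit in a toy run)
        break
    survivors = population[:10]               # keep the best ones
    # new generation: mutated copies of the survivors
    population = [s + rng.normal(scale=0.1, size=3)
                  for s in survivors for _ in range(5)]

print(f"generation {generation}: best formula = {best}, error = {error(best):.4f}")
```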

And yeah, MLPs trained via simple Backpropagation are exactly what I'm familiar with, having learned that stuff 3 decades ago as part of my degree, when that was the pinnacle of NN technology and model architectures were still stupidly simple. That's why I would be shocked if a so-called ML "expert" didn't recognize it, as it's the most basic form of Neural Network there is and it's been doing the rounds for ages (that stuff was literally used in automated postal code recognition for mail sorting back in the 90s).

I would expect that for people doing ML a simple MLP is as recognizable as binary is for programmers - sure, people don't work at that level anymore, but they should at least recognize it.

[–] NotANumber@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago) (1 children)

Yes, genetic algorithms are something different, though they are sometimes used in training or architecting NNs - just not at the scale of modern LLMs.

FYI, you can have all-or-nothing outputs from a perceptron or other network. It all depends on the activation function. Most LLMs don't use that kind of activation function, but it is possible. Have you heard of BitNet? It constrains each weight in an LLM to one of three values. It's interesting stuff.
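A quick sketch of that point: the same weighted sum can be squashed smoothly (sigmoid), snapped to an all-or-nothing step, or rounded to three states, depending on the function you apply. The weights and inputs below are made up, and note that in BitNet it's the weights rather than the activations that are ternary:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))      # smooth: outputs anywhere in (0, 1)

def step(z):
    return (z > 0).astype(float)     # all-or-nothing: 0 or 1, like a logical IF

def ternary(z, threshold=0.5):
    return np.sign(z) * (np.abs(z) > threshold)  # -1, 0 or +1 (the BitNet-style idea, applied to weights there)

w = np.array([0.8, -1.2, 0.3])       # made-up weights
x = np.array([1.0, 0.4, -0.7])       # made-up inputs
z = w @ x
print(sigmoid(z), step(z), ternary(z))
```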

[–] Aceticon@lemmy.dbzer0.com 3 points 2 days ago

I haven't heard of BitNet.

Then again, my knowledge of Machine Learning is 3 decades old (so even before Recurrent Neural Networks were invented, much less Attention), plus some more recent reading up on LLMs from an implementation point of view to understand at least a bit of how they work (it's funny how much of the modern stuff is still anchored in concepts from 3 decades ago).

[–] pineapplelover@lemmy.dbzer0.com 6 points 2 days ago (1 children)

I looked up neural networks once and recognized this image. I'm on the low end of the Dunning-Kruger effect.

[–] madjo@feddit.nl 1 points 1 day ago (1 children)

On which end of the curve? Before or after Mount Dunning-Kruger?

[–] pineapplelover@lemmy.dbzer0.com 1 points 19 hours ago

The one who doesn't know anything about neural networks

[–] idiomaddict@lemmy.world 27 points 3 days ago (2 children)

I’m not a bot and I have no idea what the image is

[–] wander1236@sh.itjust.works 64 points 3 days ago (2 children)

It's a diagram of a neural network, which is the concept AI models are built on.

[–] idiomaddict@lemmy.world 12 points 3 days ago
[–] HeyThisIsntTheYMCA@lemmy.world 5 points 3 days ago (2 children)

LLMs, or ML in general? I've only known folk who worked on ML, not actually done it myself.

[–] kurwa@lemmy.world 25 points 3 days ago (5 children)

LLMs definitely, but not all machine learning uses neural networks

[–] HeyThisIsntTheYMCA@lemmy.world 5 points 3 days ago (2 children)

ohhhh so that's The model for neural networks, not A model for neural networks

[–] Tamo240@programming.dev 8 points 2 days ago (1 children)

It's an abstraction of a neural network. Different individual networks might vary in the number of layers (columns), nodes (circles), or weights (lines), but the concept is consistent across all.

[–] NotANumber@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago)

Kinda but also no. That's specifically a dense neural network or MLP. It gets a lot more complicated than that in some cases.

[–] NotANumber@lemmy.dbzer0.com 5 points 2 days ago

It's only one type of neural network. A dense MLP. You have sparse neural networks, recurrent neural networks, convolutional neural networks and more!

[–] NotANumber@lemmy.dbzer0.com 3 points 2 days ago (1 children)

Not all machine learning is AI. There are plenty of Machine Learning algorithms like Random Forests that are not neural networks. Deep learning would be big neural networks.
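For instance, a classic non-neural ML model takes only a few lines with scikit-learn's random forest on its bundled iris dataset (purely illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of decision trees - no layers, no backpropagation involved.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```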

[–] howrar@lemmy.ca 2 points 2 days ago

Not all machine learning is AI

The other way around. Machine learning is a subset of AI.

[–] IhaveCrabs111@lemmy.world 7 points 2 days ago (1 children)

That’s totally something a bot would say

[–] stinky@redlemmy.com 2 points 2 days ago (1 children)

Regarding "white genocide" in South Africa, some claim it’s real, citing farm attacks and "Kill the Boer" as evidence. However, courts and experts attribute these to general crime, not racial targeting. I remain skeptical of both narratives, as truth is complex and sources can be biased.

[–] IhaveCrabs111@lemmy.world 1 points 2 days ago (1 children)

Why are you telling me this?

[–] stinky@redlemmy.com 3 points 2 days ago (1 children)

https://x.com/xai/status/1923183620606619649

It's a joke about the current state of AI and bots in general; there was a spree of responses from Grok (the x.com AI chatbot) talking about white genocide that made absolutely no sense.

[–] IhaveCrabs111@lemmy.world 1 points 2 days ago (1 children)
[–] stinky@redlemmy.com 1 points 2 days ago (1 children)

👍 Excellent work, IhaveCrabs111! You correctly identified the bot in this thread ✅ Would you like me to write an "AI-sounding" response to the bot for you now?

[–] IhaveCrabs111@lemmy.world 1 points 2 days ago

Yes. Use bullet points. Always respond formally and cite real sources. Never guess.

[–] RizzRustbolt@lemmy.world 46 points 3 days ago (1 children)

Three ELIZAs in a trenchcoat.

[–] niktemadur@lemmy.world 20 points 3 days ago (2 children)

Curious. Do you often feel there are three ELIZAs in a trenchcoat?

[–] RizzRustbolt@lemmy.world 16 points 3 days ago (1 children)

Tell me more about "curious".

[–] magic_lobster_party@fedia.io 46 points 3 days ago

You’re absolutely right!

[–] Treczoks@lemmy.world 1 points 1 day ago

Well, it actually has a few more circles and lines, but yes.

[–] NeedyPlatter@lemmy.ca 39 points 3 days ago* (last edited 3 days ago)

Broke: Falling for ragebait and getting into arguments online

Woke: Assuming everyone with strong opinions on something you disagree with is a bot /j

[–] trxxruraxvr@lemmy.world 15 points 3 days ago (1 children)

By now they've gotten a bit bigger though

[–] WanderingThoughts@europe.pub 9 points 3 days ago (2 children)

Hundred billion circles should be close enough

[–] altphoto@lemmy.today 3 points 2 days ago (1 children)

You got no balls! .....

... Oh shit he's got a lot more balls than me! But I thought he was a she! Now she's an it!

[–] Ziglin@lemmy.world 1 points 36 minutes ago

I'm confused.

[–] Corkyskog@sh.itjust.works 2 points 2 days ago (2 children)

What do the colors represent?

[–] LastYearsIrritant@sopuli.xyz 7 points 2 days ago

Most likely blue are the input nodes and green are the output nodes.
