Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's keep this community about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give anyone the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment exists just to get a rise out of people, with no real value, it will be removed. Do this too often and you'll get a vacation to touch grass, away from this community, for one or more days. Repeat offenses will result in a permanent ban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
AI slop should be banned in general
I think people should be free to choose whatever they want. But I also think it should be easy for them to make that choice. Currently there’s no easy way to identify all the AI images.
Maybe if we had some sort of intelligent algorithm that could filter things… (I kid. Crowdsourcing tags would probably be easier and more accurate.)
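The crowdsourced-tagging idea above can be sketched very simply: hide posts once enough users have tagged them as AI-generated. The post fields, tag names, and threshold below are all invented for illustration, not any real Lemmy API.

```python
# Hypothetical post records with crowdsourced tag-vote counts.
# Field names ("tag_votes", "ai-generated") are made up for this sketch.
posts = [
    {"title": "Sunset photo", "tag_votes": {"ai-generated": 1, "photo": 12}},
    {"title": "Dragon art", "tag_votes": {"ai-generated": 9, "art": 3}},
]

def is_flagged_ai(post, threshold=5):
    """Treat a post as AI-generated once enough users have tagged it so."""
    return post["tag_votes"].get("ai-generated", 0) >= threshold

# A user who opts out of AI images would see only the unflagged posts.
visible = [p for p in posts if not is_flagged_ai(p)]
```

The threshold is the tunable part: too low and a few bad-faith taggers can hide real photos, too high and obvious AI slop slips through.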
As part of training a generative AI, you can also train a second model that detects whether content is AI-generated or not, so one solution is to use AI to detect AI. However, running such a detector on every single image is computationally very expensive.
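The detector-vs-generator setup described above can be illustrated with a toy example: train a classifier to separate "real" from "generated" samples. Everything here is a stand-in — real detectors work on image features, not a single number, and the distributions below are invented so the sketch runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "real" images summarized by one feature drawn from N(0, 1),
# "AI-generated" images from a shifted distribution N(2, 1). As the generator
# improves, the two distributions overlap more and detection accuracy drops.
real = rng.normal(0.0, 1.0, size=500)
fake = rng.normal(2.0, 1.0, size=500)

x = np.concatenate([real, fake])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = AI-generated

# A minimal logistic-regression "detector" trained by gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted P(generated)
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5
accuracy = float(np.mean(pred == y))
```

Even this trivial detector beats chance while the distributions differ; the cost argument in the comment is that a real detector is a full neural-network forward pass, run once per uploaded image.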
There's an upper limit on detecting generative AI: eventually the generator can produce content indistinguishable from real content. Not that we're there yet; perhaps the current approach can't even get there, and it may require models that understand lighting, materials, anatomy, and so on. But considering even real images are just approximations limited by sample rate and resolution, AI only has to "simulate" accurately at the sub-pixel level to be as undetectable as text too small for a camera to pick up, no matter how many times a hacker says "enhance".
A detector also only catches images generated by the specific model it was trained against, so you'd need an entire library of those detectors, which compounds the computational problem further.