I think it's interesting how "maximizing for engagement" inevitably leads to slop taking over everything. I wonder if real people (with real money) will continue to engage with the slop? Some people surely, but enough to sustain these mega-corps?
I have wondered this too. Will it all just become bots talking to bots?
Yeah, I have to imagine much of it is bots/artificial views already. This line from the article stood out:
That means this short reel has been viewed more times than every single article 404 Media has ever published, combined and multiplied tens of times.
It doesn't shock me that a single reel has significantly more views than all of 404 Media, but "multiplied tens of times"? A recent comment made me chuckle:
"Investor fraud is basically the entire business model of well basically everything anymore."
(implying the ad views are faked to increase the stock price).
Being a relatively unknown outlet that forces extra steps on anyone who wants to read their articles probably sets the bar pretty low, especially when a lot of people will just share archive links.
SEO (search engine optimization) has dominated search results for almost as long as search engines have existed. The entire field of SEO is about gaming the system at the expense of users, and often also at the expense of search platforms.
The audience for an author's gripping life story in every goddamn recipe was never humans, either. That was just for Google's algorithm.
Slop is not new. It's just more automated now. There are two new problems for users, though:
- Google no longer gives a shit. They used to play the cat-and-mouse game, and while their victories were never long-lasting, at least their defeats were not permanent. (Remember ExpertsExchange? It took years before Google brought down the hammer on that. More recently, think of how many results you've seen from Pinterest, Forbes, or Medium, and think of how few of those deserved even a second of your time.)
- Companies that still do give a shit face a much more rapid exploitation cycle. The cats are still plain ol' cats, but the mice are now Borg.
For a while, Google let you blacklist domains from search results. Fantastic feature, so of course they killed it off.
It's not as if humans slavishly obeying the algorithms was a much better situation than robots doing it. They've just sped up the process and it can only hasten the demise of the new technofeudalist content mills.
Yeah, just based on the summary here, I'm not really upset about this. An engagement-maximising brainrot feed was never a great way to understand important issues in the real world.
If you just want to slip into TikTok for a couple of hours, I see little problem with all-AI content; if you want news, come here or go directly to a reputable news agency; and if you want to learn something new, start with Wikipedia and then branch out once you know what you're looking for.
You are describing how non-brainwashed, educated, and intelligent people behave. Unfortunately, here in the USA, there is a very large percentage of brainwashed people (who may or may not be educated and intelligent).
Sure. But if there's not even the illusion the brainrot is real life, maybe they'll give it a try.
(And it's not just America. You don't have to be that hard on yourselves, dumb people are everywhere)
My hope is that people will just start using social media that specifically tries to avoid AI generated content. Where they know they are directly interacting with real people. I think what's frustrating for me is that we're seeing this technology being used for bad reasons. I think AI has specific use cases where it could be extremely useful? But I think there's far more ways it's being used for garbage right now.
I've noticed when I'm watching YouTube videos specifically. I've started to gravitate slightly towards videos that have lower production values. Or that seem a lot more casual and genuine.
social media that specifically tries to avoid AI generated content
Which is more or less what many instances in the Fediverse are trying to do. But I wonder: how long will it last? If Lemmy ever blows up, will it even be possible to prevent an AI flood relentlessly backed by bots?
The article is good, but I'd really appreciate having fedi-style content warnings on AI-generated images. I don't interact with mainstream social media, so I generally don't see this stuff; however, the thumbnail and the body of the article contain some quite disturbing images and videos that I'd have chosen not to see, given the choice (the description is enough)...
Agreed. No wonder our parents' brains are melting if that's the horrifying shit they see.
Yeah, agree.
OP: Would you be able to mark that image as sensitive / nsfw or whatever it's called on Lemmy?
This post is for paid members only
I hope someone archived it before they enabled paywall. Oh wait (please see post text body ;))