This list included words like “underscore,” “comprehend,” “bolster,” “boast,” “swift,” “inquiry,” and “groundbreaking,” in addition to “delve” and “meticulous.” The researchers then tracked the frequency of these words in over a million YouTube videos and podcast episodes from before and after ChatGPT’s launch.
Sounds more like YouTube "content producers" are likely using AI to generate the words they read aloud.
That is something I've noticed on lots of scientific content. You start realizing the person saying the words doesn't actually understand it, and whoever (or whatever) wrote it doesn't really understand it either.
Extrapolating from social media content to everyday human speech is just fucking ridiculous though, especially when it's so heavily censored. Real people don't talk like that in person.