But I don't think that's necessarily a problem that can't be solved. LLMs and the like are ultimately just statistical analysis, and if you refine and train them enough, they can already absolutely summarise a single paper. Google's Notebook LM is already capable of it; I just don't think it can quite pull off many papers at once yet. But the current state of LLMs isn't that far off.
I agree that AI is way overhyped, and I also have a general dislike for it because of the way it's being used, the people who gush over it, and the surrounding culture. But I don't think that means we should simply ignore reality altogether. The LLMs from two or even one year ago aren't even comparable to the ones today, and that trend will probably continue for a while. The main issues are the ethics of training, copyright, and of course the replacement of labor in exchange for what amounts to a cool tool.
Honestly, with adequate governance, companies would be required to report how much of their labor is being done by AI, and to pay the equivalent wages either to their employees or into a sort of "Universal Income" fund to support families in poverty. Call it an AI tax.
The problem is that, with the current state of affairs, asking anyone for regulation is impossible, and even if such a law were enacted, getting the money from the companies to the people who need it, rather than to the ultra-rich, would be a major hurdle.
But at the very least, I don't think we should allow companies to simply cut human labor without also compensating the employees they let go.
I don't think anyone is dying to fill in Excel spreadsheets or write corporate emails. No one is complaining about AI doing those jobs; they're complaining about the people who lost their livelihoods because of it.