What if AI didn't just provide sources as afterthoughts, but made them central to every response, showing both what they say and how they differ: "A 2024 MIT study funded by the National Science Foundation..." or "How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers...". Even this basic sourcing adds essential context.
Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can't be certain that what follows actually comes from the study or source referenced, the value of the citation is still relatively low. You would still have to look up the sources manually to confirm the information. And a paragraph a bit further up in the article shows why that is a problem:
But for me, the real concern isn't whether AI skews left or right, it's seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.
This is the biggest concern for me too, though it isn't limited to teenagers. Yes, showing sources is good. But if people rarely check them, sources alone aren't enough to improve the quality of the information people obtain and retain from LLMs.
An article brought to you by the leading authority on cutting-edge computer science research: the BBC.