Even with LMs supposedly specialising in the areas that I am knowledgeable (but by no means an expert) in, it's the same. Drill down even slightly beyond surface level, and it's either plain wrong, or hallucinated when not immediately disprovable.
And why wouldn't it be? These things do not possess knowledge; they possess the ability to generate text about things we'd like them to be knowledgeable in, and that is a crucial difference.
Makes me feel warm around the heart to hear that it's not just me 🫠