Think of a graph: LLMs recognize the edges but have no concept of what the nodes mean. It makes for a nice analogy of what hallucinations really are: a trace through a topological space where the shape of the path matches, but the model is blind to the fact that the nodes themselves are wrong. This is all anecdotal, of course, but compelling.
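The analogy can be made concrete with a toy sketch. Nothing here reflects actual LLM internals; the graphs, node names, and the `is_valid_walk` helper are all invented for illustration. The point is just that two graphs with identical edge structure admit the same walk, even when one graph's nodes are nonsense:

```python
# Toy illustration of the analogy: two graphs with identical edge
# structure ("shape") but different node labels. A walk that is valid
# in one is structurally valid in the other, even though every node
# the second walk visits is "wrong". Purely illustrative -- this says
# nothing about how LLMs actually work.

facts = {            # the "true" knowledge graph
    "Paris":  ["France"],
    "France": ["Europe"],
}
hallucination = {    # same shape, wrong nodes
    "Sydney":  ["Austria"],
    "Austria": ["Asia"],
}

def is_valid_walk(graph, walk):
    """A walk is valid if each consecutive pair of nodes is an edge."""
    return all(b in graph.get(a, []) for a, b in zip(walk, walk[1:]))

true_walk = ["Paris", "France", "Europe"]
fake_walk = ["Sydney", "Austria", "Asia"]

# Both walks trace the same path through the same topology...
print(is_valid_walk(facts, true_walk))          # True
print(is_valid_walk(hallucination, fake_walk))  # True
# ...but the second graph's nodes are simply wrong.
```

A model that only "sees" edge structure would score both walks identically, which is the blindness the analogy gestures at.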