Believing that requires an optimistically expansive idea of an LLM's capabilities coupled with a regressive view of what a human mind can do, which probably explains why so many of them fall back on "What, so a human mind never makes mistakes?"