Yeah. Anyone claiming these LLMs can reason or understand anything is full of it; that's not how they're built, and it's not what they do. The tech, by design, can never be 100% accurate and factual, and that's also why it can never be 100% sure it isn't outputting forbidden subjects.