It's still early days for the technology, so it's hard to say how error-prone it will turn out to be. If next-gen LLMs are still hallucinating sources, then yes, people should be made aware that LLMs are not infallible. But ultimately it's a critical thinking / media literacy issue, just as it has been in the past.