What this tells me is that you don't know what these models are. If you do, then you're cherry-picking the data. Yes, AI will hallucinate. But this is not how you measure that. This is like saying "Shoes are bad because 95% of them either don't fit or don't last," without accounting for size or use case.