LLM hallucinations are a different kind of wrongness from simple statistical error, because transformer-based language models do a much more complex thing than, say, an image classifier.
I am not saying AI-based facial recognition systems should be used the way that they are, but your argument against them does need to be more nuanced.
No it doesn't. What needs to be more nuanced is people's understanding of the goddamn biology. This is not a simple task and people's lives are on the line. There are plenty of examples of these things fucking up just going from one machine's images to another because of contrast values. I don't trust it.