And it's not like the interrogator knows how *human* intelligence works, so asking that they be informed about AI internals is a bit much.
(Okay, they should know a lot about the failure modes of human intelligence. But then you should reject failures on both sides, right? Anyhow, this is where Yudkowsky et al. *started*, so I'm staying away.)
Seems to me that the actual answer is that the "real" Turing Test must be an iterative process, where you learn more about what you're looking for as examples come in. Not moving the goalposts, but refining where you think they are. That's what has actually happened over the past five years.
So I'm giving up on a bright-line, know-for-sure test -- fine! We couldn't even say what a "mammal" is without decades of research and proposed redefinitions. Why should "sentient" be simpler? The fun question now is whether the iteration converges.
Yeah, I think the desire for a firmly refereed set of rules is largely a product of Turing’s plain autism and the fact that he didn’t anticipate how much people would want to project onto AIs. Skepticism still counts for a lot. A simple $20 reward for each successful ID would improve rigor nicely.
It’s notable that the strong version of the test (the machine’s success at impersonating a woman, measured against a human male’s) is resistant to goalpost-moving.