Andrew Plotkin @eblong.com

And it's not like the interrogator knows how *human* intelligence works, so asking that they be informed about AI internals is a bit much.

aug 21, 2025, 1:46 am • 1 0

Replies

Andrew Plotkin @eblong.com

(Okay, they should know a lot about the failure modes of human intelligence. But then you should reject failures on both sides, right? Anyhow this is where Yudkowsky et al *started* so I'm staying away.)

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

Seems to me that the actual answer is that the "real" Turing Test must be an iterative process, where you learn more about what you're looking for as examples come in. Not moving the goalposts, but refining where you think they are. That's what has actually happened over the past five years.

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

So I'm giving up on a bright-line know-for-sure test -- fine! We couldn't even say what a "mammal" is without decades of research and proposed redefinitions. Why should "sentient" be simpler? The fun question is now whether the iteration converges.

aug 21, 2025, 1:46 am • 1 0 • view
Elizabeth Sandifer @eruditorumpress.com

Yeah, I think the desire for a firmly refereed set of rules is largely a product of Turing’s plain autism and the fact that he didn’t anticipate how much people would want to project onto AIs. Skepticism still counts for a lot. A simple $20 reward for each successful ID would improve rigor nicely.

aug 21, 2025, 1:54 am • 0 0 • view
Elizabeth Sandifer @eruditorumpress.com

It’s notable that the strong version of the test (comparative success with a human male at impersonating a woman) is resistant to goalpost moving.

aug 21, 2025, 2:07 am • 0 0 • view