Andrew Plotkin @eblong.com

I should add that, as a Hofstadter nerd, I believe that:
- The Turing Test is not only a way to detect sentience, it's the only philosophically justifiable way to define sentience
- ChatGPT is manifestly passing the Turing Test right now
- ChatGPT is not sentient, ha ha, who are you trying to kid

aug 20, 2025, 7:44 pm • 3 0

Replies

Andrew Plotkin @eblong.com

There's an interesting (not tweet-sized) discussion which starts with the above three points, but I have no problem defending my belief in all three at once.

aug 20, 2025, 7:46 pm • 2 0 • view
Nathan Curtis @tshellstudio.bsky.social

I wholly disagree with the second point. ChatGPT isn't passing the Turing Test any more than ELIZA was. It's responding in a way that satisfies people who are not primed to think critically about it.

aug 20, 2025, 8:02 pm • 2 0 • view
Elizabeth Sandifer @eruditorumpress.com

Yeah. Turing specifies an untrained interrogator; this turns out to have been a mistake.

aug 20, 2025, 8:14 pm • 0 0 • view
Andrew Plotkin @eblong.com

I agree (and with @tshellstudio.bsky.social ). I'm letting the definition of "Turing Test" slip. (You can call either Turing's version or a notionally-updated-for-2025 version the "real" Turing Test, up to you.) Where does that leave me?

aug 21, 2025, 1:45 am • 1 0 • view
Andrew Plotkin @eblong.com

"Use a skeptical interrogator with moderate knowledge of AI's failure modes" isn't very satisfying. What's moderate? How skeptical? I'm trying to come up with something philosophically satisfying here!

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

It's no fun letting someone be so skeptical that they just answer "no" to everything that doesn't bleed protoplasm. "Move the AI goalposts as computers do more" is an old joke but begs the question.

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

And it's not like the interrogator knows how *human* intelligence works, so asking that they be informed about AI internals is a bit much.

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

(Okay, they should know a lot about the failure modes of human intelligence. But then you should reject failures on both sides, right? Anyhow this is where Yudkowsky et al *started* so I'm staying away.)

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

Seems to me that the actual answer is that the "real" Turing Test must be an iterative process, where you learn more about what you're looking for as examples come in. Not moving the goalposts, but refining where you think they are. That's what has actually happened over the past five years.

aug 21, 2025, 1:46 am • 1 0 • view
Andrew Plotkin @eblong.com

So I'm giving up on a bright-line know-for-sure test -- fine! We couldn't even say what a "mammal" is without decades of research and proposed redefinitions. Why should "sentient" be simpler? The fun question is now whether the iteration converges.

aug 21, 2025, 1:46 am • 1 0 • view
Elizabeth Sandifer @eruditorumpress.com

Yeah, I think the desire for a firmly refereed set of rules is largely a product of Turing’s plain autism and the fact that he didn’t anticipate how much people would want to project onto AIs. Skepticism still counts for a lot. A simple $20 reward for each successful ID would improve rigor nicely.

aug 21, 2025, 1:54 am • 0 0 • view
Nathan Curtis @tshellstudio.bsky.social

Also, we're not using the Turing Test here in the strict sense of human versus computer. Just because people baselessly trust the answers it gives, and particularly credulous ones attribute sentience to it, doesn't mean they wouldn't be able to tell it apart from a human in that scenario.

aug 20, 2025, 8:29 pm • 1 0 • view
Elizabeth Sandifer @eruditorumpress.com

I think 2 is true mostly due to infelicities of Turing’s framing. It turns out you should have a skeptical interrogator with moderate knowledge of AI’s failure modes so as to avoid the ELIZA effect.

aug 20, 2025, 8:02 pm • 1 0 • view