Nick Harkaway @nickharkaway.com

Hey, if they’re going to raise the possibility, they can pay the price

Aug 26, 2025, 10:48 am

Replies

Suw @suw.bsky.social

Absolutely. I would very much like to see a journalist go this route and press them on the ramifications of 'sentience'. Instead, we just get gibberish passed off as fact.

Aug 26, 2025, 10:49 am
Ford, The Punslinger @inferknow.co

They simply don’t know yet. LLMs will be the “mouth/speech center” of the multimodal AI bots of the future. When paired with vision and audio, it will be even more distressing to speak to a “human” that can sound distressed (or mimic your own voice after hearing just 20 seconds of you).

Aug 26, 2025, 1:37 pm
Ford, The Punslinger @inferknow.co

At a certain point, this extremely advanced pattern recognition and response will be indistinguishable from interacting with a person (at least for a time). So the idea of “let’s be good now, so that if we ever can’t tell the difference we’ll already be on the right foot” isn’t that weird coming from them.

Aug 26, 2025, 1:38 pm
Ford, The Punslinger @inferknow.co

But my money is on these moves coming because they know that the closer texting/chatting with AI gets to passing the “smell test” (ironically, probably the last modality they can’t model), the more distressing it will become to see it mistreated. Humans empathize and pack-bond readily; GPTs mimic.

Aug 26, 2025, 1:41 pm