Hey, if they’re going to raise the possibility, they can pay the price
Absolutely. I would very much like to see a journalist go this route and press them on the ramifications of 'sentience'. Instead, we just get gibberish passed off as fact.
They simply don’t know yet. LLMs will be the “mouth/speech center” of the multimodal AI bots of the future. Once paired with vision and audio, it will be even more distressing to be speaking to a “human” that can sound distressed (or mimic your own voice after hearing just 20 seconds of it).
At a certain point, this extremely advanced pattern recognition and response will be indistinguishable from interacting with a person (at least for a time). So the stance of “let’s be good now, because by the time we can’t tell the difference we’ll already be on the right foot” isn’t that strange coming from them.
But my money is on these moves happening because they know that the closer texting/chatting with AI gets to passing the “smell test” (ironically, probably the last modality they can’t model), the more distressing it will become to see it mistreated. Humans empathize and pack-bond readily; GPTs mimic.