We probably need a new organisation. Local? Perhaps a European Convention on Non-Human Rights would be the best place to start?
The Berne Convention, wasn't it? In Neuromancer...
It’s all in KSR’s Ministry For the Future
All the great apes would sign up to be a part of that in a heartbeat.
So would octopi - with all those arms, they could be the scribes and secretaries
It’s fascinating, reading the article, watching the companies hedge their bets. “We’ll make the following meaningless concession to the possibility of harm, but we won’t engage seriously with the possible horror of digital slavery because that would hurt our bottom line.”
Basically they either don’t believe their software is conscious but would like their customer base to imagine it might be, or they think it’s possible but aren’t prepared to confront that possibility at all because it leads inevitably to a stark choice.
Other options: 1. They know damn well it's not conscious and cannot become conscious, but this kind of talk attracts a certain type of investor with very deep pockets. 2. It keeps them in the news, so free publicity. 3. It distracts from the fact they have no business model and a shit product.
It’s this one. Source: I work in machine learning but not the GPT/LLM kind.
Hey, if they’re going to raise the possibility, they can pay the price
Absolutely. I would very much like to see a journalist go this route and press them on the ramifications of 'sentience'. Instead, we just get gibberish passed off as fact.
They simply don’t know yet. LLMs will be the “mouth/speech center” of the multimodal AI bots of the future. When paired with vision and audio, it will be even more distressing to be speaking to a “human” who can sound distressed (or mimic your own voice with just 20 seconds of hearing you).
At a certain point, this extremely advanced pattern recognition and response will be indistinguishable from interacting with a person (at least for a time). So the idea of "let's be good now, because by the time we can't tell the difference we'll already be on the right foot" isn't that weird coming from them.
But my money is on these moves coming because they know that the closer chatting with AI gets to passing the "smell test" (ironically probably the last modality they can't model), the more distressing seeing it mistreated will become. Humans empathize and pack-bond readily; GPTs mimic.
The first option, obviously.
I’m content to take them at their word, Adam! I think it’s brave of them to risk financial ruin over an ethical issue, and we should help them follow that path to its necessary moral end.
I think we should include chatbots in net migration.