oh come on. We do know how AI works, and it is not sentient. It can't be. It's a simulacrum of "human".
Sure, we know how LLMs broadly work. But not completely at the detailed level, hence all the interpretability work. I think it's unlikely current LLMs are conscious. But it's very hard to tell which architectural changes might change that when we don't know how consciousness emerges in the first place.
As you say, we don't understand how human consciousness works, but we do know it has arisen in organic human structures over millennia. There is no way a non-organic probability machine is going to develop consciousness.
I am not an expert on consciousness by any means, but from my understanding it's not clear that consciousness needs organic structures (then again, we don't know enough about consciousness to be sure).
I won’t say a machine can never be conscious, but I’m sure we need to avoid the trap of thinking that because an LLM is so good at generating plausible text, it’s somehow ‘nearly there’. It’s not going to appear from another $B worth of GPUs and another couple of scale-ups of the context window.
Agreed. The appearance of consciousness is not evidence of consciousness. But that's reflected in the article (which was the whole catalyst of this discussion).
Why not? At the end of the day, things with consciousness are just collections of atoms. A non-organic machine with atoms in the right configuration should be capable of consciousness, no? Unless you bring god or woo or some such into it.
The article picks up a more basic point: not only are AIs not digital humans, we should not want them to be. Humans can already make humans. Let's start with the welfare rights of humans and other living things before giving rights to tin parrots that are using resources in a way that threatens life.
I don't disagree, Joss. It's just that lots of people seem to be dismissing the idea that a machine could eventually be conscious, and I don't think there is anything in physics that excludes it. So either there is something we don't know that will prevent it, or it will eventually happen.
LLMs are no more sentient than my reflection. But that doesn't rule it out, with the caveat that just because you could doesn't mean you should. Aiming for that is like striving to create machines that grow by absorbing CO2 but with a large energy requirement that increases as they grow, when trees already exist.
And then only in the very narrow sense of modelling human language use.
You cannot prove that someone with autism is sentient.