That’s *not* a scientific argument. Thanks autocorrect
On the other hand, there is certainly more to humans than what ChatGPT can generate - absolutely. Today.
But I can’t claim to really understand human brains, nor do I understand LLMs particularly well either. Given that ignorance, I am at least willing to consider possibilities. If you think I’m mistaken, I’d be delighted to learn more.
A Large Language Model (LLM) is just a sophisticated statistical model of language. It uses statistics to generate output that looks a lot like meaningful human communication. Human minds find meaning in the output because they seek meaning, but the LLM itself does not comprehend meaning.
It is possible that one day a model of a human mind could be created that would be sophisticated enough to be sapient in the same way a human mind is, but it won't look anything like an LLM. TL;DR: We know enough about both brains and LLMs to be sure LLMs are not intelligent.
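The "just statistics" claim can be made concrete with a toy example. The sketch below is a bigram model - a vastly simplified stand-in for a real LLM, used here only to illustrate the core idea of generating text by sampling the next token from learned frequencies (the corpus and code are my own illustration, not anything from an actual LLM):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by sampling the next word in proportion to
# observed frequency. Real LLMs are enormously more sophisticated, but the
# core mechanism - next-token prediction from learned statistics - is the same.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word, weighted by how often it followed `prev`.
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # fluent-looking, but nothing here "comprehends" anything
```

The point of the toy: the generated string can look like language without any meaning existing anywhere in the process - which is exactly the intuition behind the statistical-model argument, whatever one concludes about whether it scales up to real LLMs.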
I’m hearing a lot of very confident assertions of what LLMs are or are not. That’s what bothers me; I for one don’t think I have a deep enough understanding of what’s going on to confidently assert much of anything. And crucially I don’t think anyone else does either - not even their designers.
They have more or less empirically arrived at recipes to make them. That’s not the same thing at all. I know how to *make* a baby. That does *not* mean I have the first clue about what’s actually going on. I have yet to see much evidence that our understanding of these LLMs has gone very far past
that point. Frankly though, our understanding of human brains is pretty limited, too. Hell, can someone explain why exactly human brains need sleep, and what is going on at the informational level when we do?
So my point is that I think this debate would benefit from fewer assertions of certainty and a bit more humility. LLMs clearly do exhibit emergent behaviors, ranging from an ability to at least sound like they’re reasoning (often imperfectly, but I challenge you to find a human who doesn’t make similar
mistakes), to weird tics. One thing that bothers me though is the immediate, strong, and frankly unthinking rejection bordering on hatred that I see in many people surrounding the topic of AI. (I equally abhor the bullshit hype cycle of the tech bros, to be clear)
That kind of violent rejection is of course deeply human - we feel threatened by something that a) we don’t understand and b) might well take our jobs. Xenophobia and status challenge in one package - frankly, I’m surprised there haven’t been torch mobs already.
"And crucially I don't think anyone else does either - not even their designers." LOL you are completely wrong on this point. I'm sorry, but you are simply wrong. Their designers do understand what LLMs are. And I'm sufficiently familiar with computer science to get the basics.
I didn’t say “what they are”, I said “what is going on”. Do you understand the difference? To coin a phrase: do you grasp my meaning?