It’d be like if I was in sandwich research and then put cheese on a slice of ham without the bread and called it a sandwich. You’d be like “no, this might be a step towards a sandwich but it isn’t a sandwich”
I think it's more like you're trying to make a sandwich and you create a really accurate 3D printed model of a sandwich. It would fool lots of people who see it, but could never actually become a sandwich. LLMs are an impressive simulation of verbal intelligence, but not really a step toward it.
"artificial intelligence" in the same sense as "artificial houseplant," or maybe as a more genuinely useful example, "artificial limb" the real concern becomes: are we cutting off our limbs to replace them with robotic limbs that we can't even really control? are we cutting out our brains?
that's a lofty goal that some have, but not really what the field is all about. it's just interdisciplinary theories of computing logic based on the ways human beings think rather than the traditional binary logic. grew out of the cognitive revolution. Chomsky came out of the same stew.
interestingly, critiques about the theoretical untrustworthiness of AI to determine facts, for example, were there almost from the beginning. the best argument was made in the '80s: because symbols have no meaning to a computer, its logical model can't really be compared to human understanding.
i tend to think the problem is more a function of sci-fi stories. science fiction directly connected AI with robots that have emotions, when it was actually being used to develop speech recognition, language translation, data mining, search engines, video processing, chess software, etc.