Try me. I don’t know who you are but I’m pretty conversant in this.
I am not going to explain phenomenology to you from first principles because if "well actually David Chalmers" is how you started you are not going to enjoy "Reductive naturalism making human beings equivalent to a word guesser is not an argument in its favor".
I have also studied Husserl/Heidegger/etc. (my department in undergrad was cohabited by analytics and continentals). My point is not that Chalmers is right. You claimed that professional philosophers and cognitive scientists would reject Stancil’s position. But there are plenty who would not!
They may well be wrong but they are not trivially wrong.
Do you get how what is within my profession a fun question like "can a computer program be a person?" becomes something completely different when it is a discussion initiated by a for-profit entity that needs people to believe the answer to be "yes" in order for it to hit its insane profit goals?
But people were having these discussions many decades before OpenAI existed. You’re not seeing philosophers suddenly shift pre-existing positions.
Full disclosure: I am an autistic woman whose specific research programme is a Husserlian critique of naturalistic autism research, so that is going to conflict with views like "maybe thinking is like the computer?"
What I mean is that the good academic practice of admitting something like "well, the idea isn't trivially wrong" does not transfer down to having to accept a company's advertising.
You don’t have to accept it! Most of what comes out of Sam Altman’s mouth is a lie. That doesn’t mean it isn’t a philosophically interesting technology.
A large reason I take these people seriously is that I spent years mocking their predictions of what AI systems would be capable of, and then they built said AI systems.