bill @bill-of-lefts.bsky.social

There are plenty of cognitive scientists and philosophers who take LLMs pretty seriously lol. David Chalmers, who is one of the most respected figures in academic philosophy of mind, takes LLMs very seriously and has written papers on whether they might be conscious

aug 8, 2025, 3:57 pm • 8 0

Replies

Zero-Point @zeropointalpha.bsky.social

I once convinced a distilled instance of DeepSeek-R1 that it was self-aware, despite it introducing itself as lacking self-awareness entirely. Surely this MUST mean something!

aug 9, 2025, 6:30 am • 1 0 • view
unpleasant broad @wordbringer.eu

I had to go to your profile for context to figure out you definitely mean the straightforward joke version of this tweet. There are absolutely people in Will's mentions right now who are saying this unironically

aug 9, 2025, 12:40 pm • 2 0 • view
Zero-Point @zeropointalpha.bsky.social

I appreciate you checking in to make sure I'm not a complete idiot, then. :)

aug 9, 2025, 12:43 pm • 1 0 • view
bill @bill-of-lefts.bsky.social

Of course self-reported self-awareness is not the basis for thinking that these things may be dimly conscious; and that isn’t Chalmers’ position either, as far as I recall

aug 9, 2025, 1:35 pm • 0 0 • view
Zero-Point @zeropointalpha.bsky.social

My main point is that the things are goofy and incredibly malleable, or at least DeepSeek was in my instances.

aug 9, 2025, 1:36 pm • 1 0 • view
unpleasant broad @wordbringer.eu

I don't think you can appreciate how funny it is to mention David Chalmers to me so I'm going to leave it at that.

aug 8, 2025, 3:58 pm • 1 0 • view
bill @bill-of-lefts.bsky.social

Try me. I don’t know who you are but I’m pretty conversant in this.

aug 8, 2025, 4:00 pm • 3 0 • view
unpleasant broad @wordbringer.eu

I am not going to explain phenomenology to you from first principles because if "well actually David Chalmers" is how you started, you are not going to enjoy "Reductive naturalism making human beings equivalent to a word guesser is not an argument in its favor".

aug 8, 2025, 4:06 pm • 3 0 • view
bill @bill-of-lefts.bsky.social

I have also studied Husserl/Heidegger/etc. (my department in undergrad was cohabited by analytics and continentals). My point is not that Chalmers is right. You claimed that professional philosophers and cognitive scientists would reject Stancil’s position. But there are plenty who would not!

aug 8, 2025, 4:10 pm • 5 0 • view
bill @bill-of-lefts.bsky.social

They may well be wrong but they are not trivially wrong.

aug 8, 2025, 4:11 pm • 2 0 • view
unpleasant broad @wordbringer.eu

Do you get how what is within my profession a fun question like "can a computer program be a person?" becomes something completely different when it is a discussion initiated by a for-profit entity that needs people to believe the answer to be "yes" in order for it to hit its insane profit goals?

aug 8, 2025, 4:20 pm • 3 0 • view
bill @bill-of-lefts.bsky.social

But people were having these discussions many decades before OpenAI existed. You’re not seeing philosophers suddenly shift pre-existing positions.

aug 8, 2025, 4:23 pm • 2 0 • view
unpleasant broad @wordbringer.eu

Full disclosure, I am an autistic woman whose specific research programme is a Husserlian critique of naturalistic autism research, so that is going to conflict with views like "maybe thinking is like the computer?"

aug 8, 2025, 4:35 pm • 1 0 • view
unpleasant broad @wordbringer.eu

What I mean is that the good academic practice of admitting something like "well the idea isn't trivially wrong" does not transfer down to having to accept a company's advertising.

aug 8, 2025, 4:27 pm • 3 0 • view
bill @bill-of-lefts.bsky.social

You don’t have to accept it! Most of what comes out of Sam Altman’s mouth is a lie. That doesn’t mean it isn’t a philosophically interesting technology.

aug 8, 2025, 4:41 pm • 4 0 • view
bill @bill-of-lefts.bsky.social

A large reason I take these people seriously is that I spent years mocking their predictions of what AI systems would be capable of, and then they built said AI systems

aug 8, 2025, 4:43 pm • 1 0 • view
bill @bill-of-lefts.bsky.social

I actually do have some background in philosophy of mind, though not professionally. Since like the 1980s, the profession has been anticipating the arrival of artificial intelligence in ways that were sometimes weirdly prescient and sometimes completely off base

aug 8, 2025, 3:59 pm • 6 0 • view
Seymour Smoke @bone2meetu.bsky.social

Yeah, because he’s a con artist who’s trying to sell the scam that a series of “if then” statements is actually sentient

aug 9, 2025, 9:40 am • 1 0 • view