🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

You can phrase that in a way that plays up how humans will bullshit or be insincere - "Just like a politician, amirite?" - but that doesn't change the fact that the LLM did not form an intention or decide anything. It executed a process with its weighted model to produce a likely-enough response. 3/

aug 29, 2025, 5:33 pm
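
A minimal sketch of the process being described, with a made-up five-token vocabulary and made-up scores; real models do the same arithmetic over tens of thousands of tokens, one token at a time:

    # Toy illustration: the forward pass ends in a score (logit) for every
    # token in the vocabulary; the reply is sampled from the resulting
    # probability distribution. No intention anywhere, just arithmetic.
    import math, random

    vocab = ["Paris", "Vilnius", "banana", "I", "don't"]
    logits = [1.2, 4.7, -0.3, 0.9, 0.1]  # invented scores from an invented model

    exps = [math.exp(x) for x in logits]   # softmax: scores -> probabilities
    probs = [e / sum(exps) for e in exps]

    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print(next_token)  # usually "Vilnius", occasionally anything else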

Replies

🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

The fact that it can't count the letter R in the word strawberry is not a cute hallmark of commonality it has with human fallibility - it is a demonstration of the LLM's lack of any thought process. When it outlines step by step how it "arrived" at the conclusion, the steps are clear nonsense. 4/

aug 29, 2025, 5:33 pm
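
One way to see the gap being described here: counting letters is a real procedure, while a tokenizer hands the model subword IDs, so individual letters are never directly visible to it. A sketch assuming OpenAI's tiktoken package is installed; exact splits vary by tokenizer:

    # Python actually walks the string; this is genuine counting.
    print("strawberry".count("r"))  # 3

    # An LLM never sees the string itself. It sees subword token IDs.
    # (Assumes `pip install tiktoken`; splits differ across tokenizers.)
    import tiktoken
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    print([enc.decode([i]) for i in ids])  # e.g. ['str', 'aw', 'berry']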
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

Most thinking beings under most circumstances would have spotted the mistake as soon as they articulated the steps. "You said most, not all!" Hold up, buddy. It's not articulating its reasoning. We could almost say it is backfilling its reasoning after the fact, but it's not even doing that. 5/

aug 29, 2025, 5:33 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

It is producing a reply that statistically resembles an answer to a prompt asking someone to list the steps they used to arrive at the conclusion. There was no actual process of formulating a plan with those steps and then following them. It's just providing a likely response. 6/

aug 29, 2025, 5:33 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

This is why it matters that an LLM has neither interiority nor knowledge of the outside world: it can't say "I don't know, man" when asked what's in your pocket because it doesn't know what your pocket is and it doesn't know that it doesn't know that. 7/

aug 29, 2025, 5:33 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

It does not deal with the world through... well, it doesn't deal with the world at all. It simulates dealing with the world, but not by categorizing things as known or knowable and proceeding accordingly. No matter how many sources it devours, it has no knowledge base to draw from. 8/

aug 29, 2025, 5:33 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

If you ask it what the capital of Lithuania is, it could easily output an answer that is correct, as that's a definite enough thing that a reply statistically resembling a correct one is likely to be correct. If you ask it how many fingers you're holding up, though? 9/

aug 29, 2025, 5:33 pm
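
A toy version of that distinction, with an invented mini-corpus standing in for training data; the point is that "likely" and "true" only coincide when the truth dominates the text the model was trained on:

    from collections import Counter

    # Invented mini-corpus: replies seen after each prompt during "training".
    corpus = {
        "capital of Lithuania?": ["Vilnius"] * 97 + ["Kaunas"] * 3,
        "how many fingers am I holding up?": ["five"] * 40 + ["two"] * 35 + ["ten"] * 25,
    }

    for prompt, replies in corpus.items():
        answer, count = Counter(replies).most_common(1)[0]
        print(f"{prompt!r} -> {answer!r} (p={count / len(replies):.2f})")

    # "Vilnius" wins because the truth dominates the corpus. "five" wins by
    # the same statistics, but nothing connects it to your actual hand.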
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

It will find enough sources to produce something resembling an answer to that question, which may or may not be correct but in either case will have nothing to do with the actual situation. It does not understand the difference between these situations. They're both the same operation. 10/

aug 29, 2025, 5:34 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

The human being asked to count how many Rs are in a word is actually counting, even if they get it wrong. Or blurting out what they think the answer is, if they think they know it. Either way, there is a mind that is aware of the word "strawberry" and the letter "r", answering in a way that...

aug 29, 2025, 5:35 pm
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

...relates to those concepts and their own experience of them in the real world. To an LLM, "strawberry", "R", and numbers alike are abstract tokens, connected to nothing but other tokens, and signifying nothing. They are symbols in a mathematical system.

aug 29, 2025, 5:36 pm
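
A sketch of "symbols in a mathematical system": inside a model, a token is just an integer index into a table of vectors, and that row of floats is everything the model has for it. This assumes PyTorch; the numbers are randomly initialized, not from any trained model:

    # Assumes `pip install torch`.
    import torch

    vocab_size, d_model = 50_000, 8   # tiny embedding width, for display
    embed = torch.nn.Embedding(vocab_size, d_model)

    token_id = torch.tensor([496])    # e.g. an ID a tokenizer assigned to "str"
    print(embed(token_id))            # a 1x8 tensor of floats: all the model "knows"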
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

I started out numbering these because I wanted to make it clear the whole thing is building continuously in support of a conclusion, as you have a nasty habit of replying to part of a thought with "So what you're saying is [something else], right?" and then countering your own substitute argument.

aug 29, 2025, 5:37 pm