Damien Stanton @damienstanton.bsky.social

In some indie research I've "interviewed" multiple generations of Claude and Gemini models, framing their internal ontological systems in terms of category theory and topology, and found essentially no personality at all, and no conceptual persistence.

Jul 26, 2025, 1:10 PM

Replies

Damien Stanton @damienstanton.bsky.social

Think of a graph: LLMs recognize edges but have no concept of what the nodes mean. It makes for a nice analogy of what hallucinations really are: a trace through a topological space where the path matches, but the LLM is blind to the (wrong) nodes. This is all anecdotal, of course, but compelling.

Jul 26, 2025, 1:10 PM
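To make the analogy concrete, here is a minimal sketch (not from the thread; the toy graph and node names are hypothetical) of two traces whose edge structure matches while the nodes differ, in Python:

# A trace is treated here as a sequence of edge labels; two different
# node paths can yield the same label sequence, so a structure-only
# matcher cannot tell the intended path from a "hallucinated" one.

# Edges labeled by relation type; nodes carry the actual meaning.
edges = {
    ("Paris", "France"): "capital_of",
    ("Canberra", "Australia"): "capital_of",
    ("France", "Europe"): "located_in",
    ("Australia", "Oceania"): "located_in",
}

def edge_signature(path):
    """Return only the sequence of edge labels along a node path."""
    return tuple(edges[(a, b)] for a, b in zip(path, path[1:]))

intended = ["Paris", "France", "Europe"]
hallucinated = ["Canberra", "Australia", "Oceania"]

# Both traces have the identical edge-label signature...
assert edge_signature(intended) == edge_signature(hallucinated)
# ...so anything that sees only the edge structure cannot distinguish
# them, even though the nodes (the meaning) are entirely different.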
Pekka Lund @pekka.bsky.social

That just sounds confused, not compelling at all. They aren't like graphs; they are fundamentally neural nets, and so universal function approximators, like us. As such, they understand edges and nodes and whatnot very much like us. Which is computation all the way down.

Jul 26, 2025, 1:24 PM
Damien Stanton @damienstanton.bsky.social

No, a trace through the token-prediction computation is a graph -- I'm saying nothing about the Transformer architecture or the neural nets themselves. I'm telling you how the foundation models self-reported the process I described, no different from Hinton's prism example of subjective experience.

Jul 26, 2025, 2:09 PM
Pekka Lund @pekka.bsky.social

Self-reported what process? Tracing their thoughts through the networks? Like you would expect people to describe their internal processes?

Jul 26, 2025, 2:23 PM