Tom Weinstein @tweinstein.bsky.social

LLMs encode context, which is only a very weak proxy for meaning. This is why they hallucinate so easily.

Apr 27, 2025, 1:05 PM • 2 0

Replies

Kevin Bennett @kbkev.bsky.social

In the most literal sense possible, they absolutely encode the semantic meaning of the language they are given. It's how they work. datasciencedojo.com/blog/embeddi...

Apr 27, 2025, 1:12 PM • 0 0
Tom Weinstein @tweinstein.bsky.social

Embeddings model contextual similarity, which is also a flawed proxy for meaning. How good that proxy is depends on how comprehensively your training data covers the meaning you're trying to capture.

Apr 27, 2025, 1:49 PM • 0 0
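
To make the distinction above concrete, here is a minimal sketch (assuming the sentence-transformers package and its publicly available all-MiniLM-L6-v2 model, neither of which is named in the thread): cosine similarity between sentence embeddings measures contextual closeness, so a sentence and its negation can still score as highly similar even though their meanings are opposed.

# A minimal sketch, assuming the third-party sentence-transformers package
# and the public all-MiniLM-L6-v2 checkpoint; any embedding model would do.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Two sentences with near-opposite meanings but nearly identical context.
sentences = [
    "The cat sat on the mat.",
    "The cat did not sit on the mat.",
]

embeddings = model.encode(sentences)

# Cosine similarity measures closeness in embedding space, i.e. contextual
# similarity; it typically comes out high here despite the negation.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")

A high score for this pair illustrates the point being argued: the embedding captures shared context very well, but that is not the same thing as capturing meaning.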