MagicShel @magickshel.bsky.social

No, they can't program that. In fact, LLMs aren't programmed at all; they are trained. For the same reason, they can't be made to be truthful or to admit when they don't know something. They have no idea what they are being asked, or even what their answers mean.

aug 8, 2025, 9:01 am

Replies

MagicShel @magickshel.bsky.social

The fact that LLMs are often useful is a happy accident: it turns out that a text prediction model can produce words in a useful order. But that usefulness is not, and cannot be, guaranteed. So do we keep a useful tool locked up where only the wealthy can use it, or make it available despite grave risks?

aug 8, 2025, 9:01 am
MagicShel @magickshel.bsky.social

The internet can be harmful and spread lies, but we've decided, despite those risks, that people will have access to it, because those without internet access are at a disadvantage. LLMs are similar: they can be harmful, but denying people access would similarly disadvantage them.

aug 8, 2025, 9:01 am