I’d speculate it’s not the queries as much as the fact it’s trained on human-created content.
yeah this is exactly what i mean -- LLMs reflect us, and we see this effect most strongly with language-oriented framings because language is how we emit aspects of our selves and worlds. so the most interesting bits imo aren't when LLMs are "human-like" (though that's worth documenting) but when they're not
we need a theory of latent spaces