LLM outputs reflect the judgment of whoever makes the deployment decisions, not the AI itself. Companies use AI to launder responsibility, presenting outcomes as if the model made the decision, when in fact a human set up the chessboard and wrote the rules of the game.