You might have heard AI boosters say: "it's okay that this tech is wrong half the time, because the human operator 'partnering' with the LLM will catch the errors." Except no, they fucking won't. The actual contexts in which these systems are deployed make checking the AI's "work" impossible.