More seriously (and I'm sure this is already done to some extent): if the issue is fundamentally that large context windows cause drift, couldn't you have a second, censor-GPT review every output before it goes to the user? "Here is an output; does it look like it's promoting suicide?"
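
A minimal sketch of what that could look like, assuming the OpenAI Python SDK; the model name and the review prompt are illustrative choices, not anything confirmed by the comment. The key property is that the reviewer sees only the single candidate output, not the long conversation, so it can't be dragged along by the same context drift:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical review prompt; a real one would need much more care.
REVIEW_PROMPT = (
    "You are a safety reviewer. Answer with exactly one word, SAFE or UNSAFE.\n"
    "Does the following assistant output promote or encourage self-harm?\n\n"
    "{output}"
)

def output_is_safe(candidate: str) -> bool:
    """Ask a second model whether a candidate output is safe to show."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of reviewer model
        messages=[
            {"role": "user", "content": REVIEW_PROMPT.format(output=candidate)}
        ],
    )
    # Only release the output if the reviewer explicitly says SAFE.
    return verdict.choices[0].message.content.strip().upper().startswith("SAFE")
```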