The bigger concern would be a kid in crisis, where we don't want the AI to encourage it or push them further toward self-harm. And I absolutely disagree that code can't be written to minimize the possibility of that happening. It's just a matter of companies caring enough to prioritize it.
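To be concrete about what "code can be written" means: even a crude guardrail is just a check that runs before the reply goes out. Here's a rough sketch of the idea in Python. The pattern list and function names are mine, purely for illustration; a real system would use a trained classifier instead of keyword matching, but the control flow is the same.

```python
import re

# Toy sketch of a guardrail layer, NOT a production safety system.
# Real deployments use trained risk classifiers, but the shape is the same:
# check the conversation for crisis signals, and if one is found, replace
# the model's reply with a supportive, resource-directing response.

CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bwant to die\b",
    r"\bself[- ]harm\b",
    r"\bhurt myself\b",
]

CRISIS_RESPONSE = (
    "I'm really sorry you're feeling this way. You're not alone, and "
    "talking to someone can help. In the US you can call or text 988 "
    "(the Suicide & Crisis Lifeline) at any time."
)

def flag_crisis(text: str) -> bool:
    """Return True if the text matches any crisis-risk pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Override the model's reply with a crisis response when risk is flagged."""
    if flag_crisis(user_message) or flag_crisis(model_reply):
        return CRISIS_RESPONSE
    return model_reply
```

Keyword matching like this misses a lot and over-triggers on some things, which is exactly why companies should be funding better classifiers rather than claiming the problem is unsolvable. The point is that the hook to intervene is trivial to build; the will to build it well is what's missing.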