It's just code. Even by writing additional code that filters the chatbot's output or inspects the input, it would be possible to detect questions about self-harm. The problem is that they didn't do it, not that it isn't possible.
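As a rough illustration of the kind of filter the parent comment has in mind, here is a minimal sketch in Python. The pattern list and function names are hypothetical, and as the reply below points out, simple pattern matching like this is nowhere near reliable sentiment detection; it only catches explicit phrasing.

```python
import re

# Hypothetical pattern list for illustration only; a real system would need
# a far broader, carefully curated set (or a trained classifier).
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]

COMPILED = [re.compile(p, re.IGNORECASE) for p in SELF_HARM_PATTERNS]


def flag_self_harm(text: str) -> bool:
    """Return True if the text matches any listed self-harm pattern."""
    return any(p.search(text) for p in COMPILED)


def guarded_reply(user_input: str, bot_reply: str) -> str:
    """Check both the user's input and the bot's draft reply before sending."""
    if flag_self_harm(user_input) or flag_self_harm(bot_reply):
        return ("It sounds like you may be going through a difficult time. "
                "Please consider reaching out to a crisis line or someone you trust.")
    return bot_reply
```

A keyword filter like this misses paraphrases and indirect statements entirely, which is exactly the objection raised in the reply.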
I'm not here to defend OpenAI, to be clear, but what you're saying is just not true. We currently have no clue how to reliably parse any kind of sentiment with "just code".