I just don’t understand why it’s so hard to tell ChatGPT “do not encourage anyone to take their life or the life of others.” How is that a hard protocol to introduce?
It’s a good sign that the technology hasn’t reached a point of refinement where it stops forgetting prompts almost instantly.
I’m just saying, I’ve heard nothing about Claude encouraging its users to kill themselves. Or even Grok. This does seem to be a solvable problem.