MGoCoder @mgocoder.bsky.social

It's just code. Even if you are writing additional code that filters the chat bot's output or looks at the input, it would be possible to detect questions about self-harm. The problem is that they didn't do it, not that it isn't possible.
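A naive version of the filter described above can be sketched in a few lines. This is purely illustrative (the phrase list and function names are hypothetical); it shows the shape of the idea, not a reliable detector:

```python
# Naive keyword filter sketch: flags messages containing self-harm-related
# phrases. Illustrative only: simple substring matching misses paraphrases
# and produces false positives, which is why real moderation systems use
# trained classifiers rather than keyword lists.
SELF_HARM_PHRASES = [
    "kill myself",
    "hurt myself",
    "end my life",
    "suicide",
]

def flags_self_harm(text: str) -> bool:
    """Return True if any flagged phrase appears in the text."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SELF_HARM_PHRASES)

def filter_reply(user_input: str, bot_output: str) -> str:
    """Check both sides of the exchange before returning the reply."""
    if flags_self_harm(user_input) or flags_self_harm(bot_output):
        return ("It sounds like you may be going through a hard time. "
                "Please consider reaching out to a crisis line.")
    return bot_output
```

As the reply below this post argues, matching like this is easy to evade and far from "reliable sentiment parsing"; the sketch only demonstrates that *some* detection layer is trivially implementable.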

Aug 27, 2025, 6:37 AM

Replies

"Anatol" @dngrs.bsky.social

I'm not here to defend OpenAI, to be clear, but what you're saying is simply not true. We currently have no reliable way to parse any kind of sentiment with "just code".

Aug 27, 2025, 11:32 AM