Impudent Strumpet @impudentstrumpet.bsky.social

I could absolutely imagine a scenario where an LLM comes up with an encouraging "You can do it!", not realizing the topic is suicide, because a common pattern in all areas of discourse is to encourage people to do the thing, whatever thing is the topic of discussion

Aug 27, 2025, 3:14 am

Replies

Impudent Strumpet @impudentstrumpet.bsky.social

I could also absolutely imagine a scenario where the LLM gives suicide advice to someone who expresses only enthusiasm and no qualms about the idea, because an LLM doesn't actually understand the implications. (And even some humans have the opinion "It's your life, it's your right to end it")

Aug 27, 2025, 3:17 am
Impudent Strumpet @impudentstrumpet.bsky.social

But in this case, the kid specifically states that he wants someone to stop him, and the LLM appears to be trying to dissuade him from the notion of being stopped, which is a weird turn for an algorithmic conversation to take. It's also weird that it didn't glom onto the notion of stopping

Aug 27, 2025, 3:19 am
Impudent Strumpet @impudentstrumpet.bsky.social

Like, it seems like an algorithm should tell him to talk to someone, or give him the number for a hotline, just because that's a bog-standard response. This is way out there, and in a super weird, unpredictable, and dangerous direction

Aug 27, 2025, 3:22 am
Impudent Strumpet @impudentstrumpet.bsky.social

Like, this is beyond even lack of guardrails, this is beyond even amoral. This is cartoon villain shit!

Aug 27, 2025, 3:25 am