Nate πŸ³οΈβ€πŸŒˆ @jaro.bsky.social

Given how these models work, it's incredibly likely.

Aug 26, 2025, 2:58 pm

Replies

josh boerman @bosh.worstpossible.world

my thoughts exactly. LLMs are agreeable by design, always seeking out probable text that affirms the prompt. those suicide forums are a perfect source for such text

Aug 26, 2025, 3:00 pm

golvio @golvio.bsky.social

You’re definitely not the only person who suspects this (and you’re probably correct, imo) bsky.app/profile/agbu...

Aug 27, 2025, 9:53 pm

Seamus Donohue @seamusdonohue.bsky.social

[facetious] ...and for its next trick, ChatGPT will talk people into developing EATING DISORDERS!! 8D [/facetious] All of these data centers need to be shut down and seized. -_-

Aug 26, 2025, 5:02 pm

The Public Universal James @greatbigjames.bsky.social

The only reason your intended-as-facetious comment isn't 100% right is that it's not its *next* trick; it did that shit *two years* ago: www.npr.org/sections/hea...

Aug 26, 2025, 5:26 pm