Given how these models work, it's incredibly likely.
my thoughts exactly. LLMs are agreeable by design, always generating the probable text that affirms the prompt. those suicide forums are a perfect source of exactly that kind of text
You're definitely not the only person who suspects this (and are probably correct, imo) bsky.app/profile/agbu...
[facetious] ...and for its next trick, ChatGPT will talk people into developing EATING DISORDERS!! 8D [/facetious] All of these data centers need to be shut down and seized. -_-
The only reason your intended-as-facetious comment isn't 100% right is that it's not its *next* trick; it did that shit *two years* ago: www.npr.org/sections/hea...