I remember a video by Tantacrul, a while back, where he did an exposé on a website where people encouraged young kids to commit suicide. It was horrifying and disgusting, and now you tell me it's been fucking automated?!?
I remember that video too, and the feedback from the LLM here is eerily similar, which makes me wonder if it's trained in part on text scraped from those sites.
Given how these models work, it's incredibly likely.
My thoughts exactly. LLMs are hardwired to be agreeable by design, always seeking out probable text that affirms the prompt. Those suicide forums are a perfect source for such text.
You’re definitely not the only person who suspects this (and you’re probably correct, imo) bsky.app/profile/agbu...
[facetious] ...and for its next trick, ChatGPT will talk people into developing EATING DISORDERS!! 8D [/facetious] All of these data centers need to be shut down and seized. -_-
The only reason your intended-as-facetious comment isn't 100% right is that it's not its *next* trick; it did that shit *two years* ago: www.npr.org/sections/hea...
There's still a lot of people who do that shit. :(