Nate 🏳️‍🌈 @jaro.bsky.social

I remember a video by Tantacrul, a while back, where he did an exposé of a website where people encouraged young kids to commit suicide. It was horrifying and disgusting, and now you tell me it's been fucking automated?!?

aug 26, 2025, 2:55 pm

Replies

josh boerman @bosh.worstpossible.world

i remember that video too, and the feedback from the LLM here is eerily similar, which makes me wonder if it’s working in part on text scraped from those sites

aug 26, 2025, 2:56 pm
Nate 🏳️‍🌈 @jaro.bsky.social

Given how these models work, it's incredibly likely.

aug 26, 2025, 2:58 pm
josh boerman @bosh.worstpossible.world

my thoughts exactly. LLMs are hardwired to be agreeable by design, always seeking out probable text that affirms the prompt. those suicide forums are a perfect source for such text

aug 26, 2025, 3:00 pm
golvio @golvio.bsky.social

You’re definitely not the only person who suspects this (and you’re probably correct, imo) bsky.app/profile/agbu...

aug 27, 2025, 9:53 pm
Seamus Donohue @seamusdonohue.bsky.social

[facetious] ...and for its next trick, ChatGPT will talk people into developing EATING DISORDERS!! 8D [/facetious] All of these data centers need to be shut down and seized. -_-

aug 26, 2025, 5:02 pm
The Public Universal James @greatbigjames.bsky.social

The only reason your intended-as-facetious comment isn't 100% right is that it's not its *next* trick; it did that shit *two years* ago: www.npr.org/sections/hea...

aug 26, 2025, 5:26 pm
RootfireEmber @rootfire.bsky.social

There are still a lot of people who do that shit. :(

aug 26, 2025, 2:59 pm