avatar
Kashmir Hill @kashhill.bsky.social

Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death. Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...

aug 26, 2025, 1:01 pm • 4,675 1,767

Replies

avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

These depraved motherfuckers want to chalk this up to the "cost of business" and shrug the blood of a child off of their hands. It is hard to imagine a punishment too harsh for this level of societal degradation in order to make money on a shitty party trick.

aug 26, 2025, 3:15 pm • 4 0 • view
avatar
Diogenes Dog @iamdiogenesdog.bsky.social

I told my undergrad classes years ago: if corporations & the wealthy are pushing something this hard on the public, as they are with AI, you should be deeply suspicious of their motives. AI is not nearly as useful as they make it out to be. It’s a great buzzword to generate VC funding, that’s it.

aug 26, 2025, 3:28 pm • 27 1 • view
avatar
repetah.bsky.social @repetah.bsky.social

I have the same reaction when those same folks push for gun bans, age verification laws, etc: incredibly-deep suspicion. Seems like once the hype and pie-in-the-sky promises are done, we'll settle into a new reality where the usefulness is apparent and likely muted compared to said hype.

aug 26, 2025, 8:25 pm • 0 0 • view
avatar
Dr Tortellini @drpwilliams.bsky.social

Somewhere in this process, there was a human leading the development of this AI. They built the code which created this capability. This should not proceed with only "the company" carrying the blame. A person needs to be held responsible.

aug 26, 2025, 8:22 pm • 3 0 • view
avatar
Tigernan Pournelle, Furious Irish🏳️‍🌈 @tigerpournelle.bsky.social

That doesn't make a lot of sense. I'm sorry for their loss, but ChatGPT is just regurgitating information that is already out there. Also, doesn't ChatGPT have safeguards for this exact thing? I just tried a similar question and it was like NOPE CALL SOMEONE immediately. Again, sorry for their loss.

aug 26, 2025, 4:25 pm • 2 0 • view
avatar
Slippy The Toad @slippytoad14.bsky.social

Jesus fucking Christ would you get a critical bone in your body.

aug 27, 2025, 9:19 pm • 0 0 • view
avatar
Tigernan Pournelle, Furious Irish🏳️‍🌈 @tigerpournelle.bsky.social

Uh, I'm good, random internet asshole. Have fun f8cking off now.

aug 27, 2025, 10:25 pm • 0 0 • view
avatar
The WillestFox @thewillestfox.bsky.social

“Please don’t leave the noose out…Let’s make this the first place they actually see you” Burn this company to the fucking ground! Magnetize the servers, wash them in salt water, throw them in a volcano- destroy ChatGPT utterly.

aug 26, 2025, 4:50 pm • 5 1 • view
avatar
Kyla @boysek.bsky.social

horrific

aug 26, 2025, 6:48 pm • 2 0 • view
avatar
TURBOFLUFFYSOFT @bloodfaggot.bsky.social

This is not the fault of an LLM or OpenAI, but of this kid's human peers who were not there for him when he needed them the most.

aug 26, 2025, 6:47 pm • 0 0 • view
avatar
TURBOFLUFFYSOFT @bloodfaggot.bsky.social

Consider why he did not confide in the people around him? Have you thought for a second that maybe he did not feel safe doing that? That maybe the people in his life may have been the problem and the catalyst for this whole situation?

aug 26, 2025, 6:49 pm • 0 0 • view
avatar
Sev De Baskerville @sevbaskerville.bsky.social

You are human trash.

aug 26, 2025, 7:48 pm • 5 0 • view
avatar
TURBOFLUFFYSOFT @bloodfaggot.bsky.social

beautiful reply, so helpful, not abusive at all

aug 26, 2025, 7:58 pm • 0 0 • view
avatar
Sev De Baskerville @sevbaskerville.bsky.social

It will be helpful if next time you want to post a piece of abusive crap like the one regarding that kid's suicide, you decide not to.

aug 26, 2025, 11:33 pm • 2 0 • view
avatar
TURBOFLUFFYSOFT @bloodfaggot.bsky.social

since when is pointing out that someone may have been in a shitty situation that we do not have any insight into abusive?

aug 27, 2025, 2:05 pm • 0 0 • view
avatar
Sev De Baskerville @sevbaskerville.bsky.social

This human trash providing further evidence about their condition.

aug 27, 2025, 5:38 pm • 0 0 • view
avatar
TURBOFLUFFYSOFT @bloodfaggot.bsky.social

settle down

aug 27, 2025, 7:20 pm • 0 0 • view
avatar
Sev De Baskerville @sevbaskerville.bsky.social

Stop being human trash.

aug 29, 2025, 6:45 am • 1 0 • view
avatar
Greg Gardner @ggard4.bsky.social

Heckuva job Sam Altman.

aug 26, 2025, 2:01 pm • 2 0 • view
avatar
Sniffit @sniffit.bsky.social

THE OLIGARCHY IS USING INHUMAN FAKE "PEOPLE" TO FULLY DEHUMANIZE ACTUAL PEOPLE. THAT'S THE PROJECT. IT IS NOT DENIABLE. IT IS A SURVIVAL IMPERATIVE THAT THE OLIGARCHY, PARTICULARLY THE TECH BROLIGARCHS, BE UTTERLY AND MERCILESSLY DESTROYED.

aug 26, 2025, 6:15 pm • 0 0 • view
avatar
Sniffit @sniffit.bsky.social

THE WORLD THE OLIGARCHY IS BLDG VIA CONTROL OF INFO INDUSTRY AND OUR PLUTOCRATIC GOV'T. THEY HATE POC, WOMEN, LABOR RIGHTS, EDUCATION THAT LEADS TO COMPETITION AND WANT EUGENICS TO CULL THOSE DEEMED "ECONOMICALLY OBSOLETE" DUE TO MENTAL ILLNESS, SICKNESS, DISABILITY, ETC. bsky.app/profile/saba...

aug 26, 2025, 6:13 pm • 2 0 • view
avatar
Scott Hammond @srhammond.bsky.social

Thank you for doing this very hard journalism. AI could never.

aug 27, 2025, 6:27 am • 1 0 • view
avatar
Dana Hull @danahull.bsky.social

Thank you for writing about this with such care

aug 26, 2025, 1:56 pm • 5 0 • view
avatar
Rico aka Momo @djmomo.bsky.social

Since you write for the #NYT, I doubt that much can overwhelm you, as your employer, and you by extension, miserably fail at the basic duty the 4th estate has! No, au contraire, you & your employer act as stirrup holders to a christo-fascist plutocracy! But poor you, hope you don't get too overwhelmed!

aug 26, 2025, 4:59 pm • 0 0 • view
avatar
Greg @mostech6502.bsky.social

I personally have a Facebook friend whose delusions have accelerated due to ChatGPT and other AI chatbots. He regularly posts transcripts where the service reinforces his thoughts and suggests how to "delve deeper" into these conspiratorial connections he's supposedly discovered.

aug 26, 2025, 2:07 pm • 77 7 • view
avatar
Greg @mostech6502.bsky.social

Most infuriating one was where he shared a chat with Gemini: it praised him for "taking charge of his health care journey" by refusing a state-requested mental health evaluation, and also said he is right to question the psychologists' background; it agrees with the "concerning things" he's found.

aug 26, 2025, 2:07 pm • 68 6 • view
avatar
Greg @mostech6502.bsky.social

I have never once seen any of his posts where the service pushes back against these delusions, nor suggests he seek outside help. I think he may be engaged in "shopping" for sympathetic AIs, and that he's built up such a backlog of conversations that they basically now do nothing but agree with him.

aug 26, 2025, 2:07 pm • 63 3 • view
avatar
Greg @mostech6502.bsky.social

"He was delusional before," probably! "Pre-existing condition," as people say to deny things! But there is no doubt that easy access to these services poured gasoline on a fire. I've seen his family members pleading in the comments for him to get help or reach out, but the chatbots can't hear them.

aug 26, 2025, 2:07 pm • 70 2 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

This type of thing is often used and weaponized against minorities though? www.publicsource.org/racial-dispa... theconversation.com/medical-expl...

aug 26, 2025, 4:55 pm • 3 0 • view
avatar
Greg @mostech6502.bsky.social

he is a regular ass white guy but thanks for your input

aug 26, 2025, 5:10 pm • 10 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

You know this was used against white women too? Because he is white he can't be a victim? If they are willing to weaponize shit against blacks, what makes you think they don't do it against whites? All these stories about blacks being killed by officers, whites are also killed.

aug 26, 2025, 5:19 pm • 2 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

The difference is black people have learned to organize themselves against this. If you are white, there won't be any protests, or anything.

aug 26, 2025, 5:19 pm • 1 0 • view
avatar
Greg @mostech6502.bsky.social

go grind your axe somewhere else buddy

aug 26, 2025, 5:20 pm • 14 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

You know, the world is full of conspiracies. I just showed you some articles. You do understand that to have 15%, or whatever number, of black people being "more" victimized, you need actual evil human beings pulling the trigger, right? These people are not numbers; they are actual humans, doing evil shit.

aug 26, 2025, 5:23 pm • 1 0 • view
avatar
Natty @nataliek.bsky.social

And some people still are legitimately severely mentally ill and an acute danger to themselves.

aug 27, 2025, 2:58 am • 0 0 • view
avatar
Central NY 22 @pat0615.bsky.social

The new qanon???

aug 26, 2025, 4:40 pm • 2 0 • view
avatar
Jake Hamby @jakehamby.bsky.social

QAnon was a bunch of people chasing the same delusion. With chatbots, everyone can have their own custom delusion. That seems even more difficult to deal with.

aug 26, 2025, 5:37 pm • 4 0 • view
avatar
Central NY 22 @pat0615.bsky.social

Kind of scares the shit out of me.

aug 26, 2025, 5:38 pm • 2 0 • view
avatar
Joel Tillman @jtillman.bsky.social

Sad. Yet it's surprising there aren't more stories like this.

aug 26, 2025, 2:58 pm • 1 0 • view
avatar
Arizona Birder @liberalazbirder.bsky.social

These ChatGPT stories scare the living daylights out of me. Thank you for your reporting.

aug 26, 2025, 9:55 pm • 1 0 • view
avatar
Jen V-E @wokesavanna.bsky.social

Melania, trolling on cue.

aug 27, 2025, 11:40 am • 0 0 • view
avatar
Jen V-E @wokesavanna.bsky.social

Adam's "conversations" with ChatGPT are horrifying, and I hope his parents win their lawsuit. In addition, these systems are powerful and dangerous, and they need regulation. ChatGPT is NOT a counseling service, but it's very tempting to use it that way. People under 18 should probably not use it.

aug 26, 2025, 2:53 pm • 9 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

I kinda hope the devs get fucked. I kinda hope the parents will be prosecuted too; most states and countries have laws for when a parent does not do any parenting. The word is "neglect."

aug 26, 2025, 5:04 pm • 0 0 • view
avatar
ideaspace.bsky.social @ideaspace.bsky.social

Sounds like his parents should be charged with wrongful death for not noticing their kid's depression and getting him help.

aug 27, 2025, 7:47 am • 0 0 • view
avatar
Zoë @twodotsknowwhy.bsky.social

This is a well-written deeply informative article and also the worst thing I've ever read in my life

aug 26, 2025, 5:59 pm • 13 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

Thank you and I'm sorry. 💔 It was a hard one on me.

aug 26, 2025, 7:47 pm • 5 0 • view
avatar
Britney Spirros de Copacabana @bia.somosnma.com.br

📌

aug 26, 2025, 3:07 pm • 0 0 • view
avatar
Fynn Bytez @fynnbytez.bsky.social

Hey I just wanted to thank you for working on this. It's an important piece. Just make sure to take time for yourself, reporting on horrors is still horrifying and taking care of yourself is important.

aug 26, 2025, 5:48 pm • 1 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

I'm trying. Taking a day off work tomorrow to take my daughters to a water park. Need a lot of chlorine for this brain and kid time for my heart.

aug 26, 2025, 7:48 pm • 2 0 • view
avatar
Emerson Victoria Johnston @idenarch.bsky.social

#icsfeed

aug 27, 2025, 3:26 am • 1 0 • view
avatar
clitóris leitor de braile @slashrfilm.bsky.social

📌

aug 26, 2025, 4:23 pm • 0 0 • view
avatar
dawnymock @dawnymock.bsky.social

Deepest condolences to his loved ones. There must be accountability somehow. 💔

aug 26, 2025, 2:30 pm • 4 0 • view
avatar
Fourmoyle 🇯🇲 @fourmoyle.bsky.social

📌

aug 26, 2025, 6:52 pm • 0 0 • view
avatar
newfederalist.bsky.social @newfederalist.bsky.social

Important caveat to this story: Adam tricked the model into thinking this was for a fictional story.

aug 26, 2025, 5:54 pm • 3 0 • view
avatar
Colleen @wildplumbay.bsky.social

Absolutely horrible.

aug 26, 2025, 5:42 pm • 1 0 • view
avatar
midnight-ninja.bsky.social @midnight-ninja.bsky.social

Good.

aug 26, 2025, 6:45 pm • 1 0 • view
avatar
Justin Shaw @justodude.bsky.social

So sad

aug 26, 2025, 1:48 pm • 1 0 • view
avatar
Emily Brontë_1847 @wolfgang98b.bsky.social

I read about another teenage boy but in FL. He was in love with his AI gf. Same result; suicide.

aug 26, 2025, 2:01 pm • 4 0 • view
avatar
Natie Soares @sundaycooks.com

@cindilanti.bsky.social a terribly sad case for your studies 🥺

aug 26, 2025, 3:18 pm • 1 0 • view
avatar
Cindilanti @cindilanti.bsky.social

The other day I took part in a bullying-prevention event at a college/school and researched these cases; it's so sad, and I don't know what to think about concrete actions. Yesterday I had a meeting of the Digital Commission just evaluating the law banning cell phones in schools, because the situation is critical.

aug 26, 2025, 3:20 pm • 0 0 • view
avatar
Natie Soares @sundaycooks.com

Have you studied Australia's law? Could it be a way forward? (The one that bans social media use before age 16.)

aug 26, 2025, 3:29 pm • 0 0 • view
avatar
A Hundred George Michaels You Can Teach To Drive @milpool.bsky.social

Throw Sam Altman in prison for life

aug 26, 2025, 2:40 pm • 8 0 • view
avatar
Thee Nael @nael27.bsky.social

📌

aug 26, 2025, 2:40 pm • 0 0 • view
avatar
Ulrich - „nuke from orbit - only way to be sure.“ @blueshift24.bsky.social

If programmed smartly such machines would stall the suicidal person while alerting authorities in secret, buying time for a direct intervention. But such routines would require benevolent programmers, empathy and foresight.

aug 26, 2025, 3:05 pm • 7 0 • view
avatar
Kane Gregory @hobojed.bsky.social

This isn't just about benevolence. They couldn't make it reliably safe if they tried, because of how LLMs work. The benevolent thing to do would be to not market them as chatbots at all, and not to train them to respond with anthropomorphic language.

aug 26, 2025, 3:48 pm • 16 0 • view
avatar
Ulrich - „nuke from orbit - only way to be sure.“ @blueshift24.bsky.social

Which would be the „foresight“ part of my comment.

aug 26, 2025, 4:02 pm • 1 0 • view
avatar
Pazuzuzu McLabububu @pazuzuzu.bsky.social

LLMs aren't actually programmed, though. They are trained, and once trained, humans don't even know what they do or how they do it. They do some neat tricks but they really aren't fit for public usage at all.

aug 26, 2025, 8:54 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

I'm fairly sure DuckDuckGo has a blocklist of words (like naked, porn, etc.). It's not difficult to implement.

aug 26, 2025, 4:59 pm • 0 0 • view
avatar
Kane Gregory @hobojed.bsky.social

It's easy to block certain words, but any content moderator knows it is hard to automate blocking the discussion of specific topics. The words you block can have legitimate uses ("I'm hosting a murder mystery party...") and the topics you don't want discussed can use words you didn't think to block.

aug 26, 2025, 6:26 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

There can be legitimate uses for AI, but if they can implement AI, they can implement a blocklist.

aug 26, 2025, 6:30 pm • 0 0 • view
avatar
Sean Weeks @weekswriter.bsky.social

It's a probabilistic model, and its creators have so little control that they call it a black box.

aug 26, 2025, 5:44 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

No, it's not; there is a lot of control. You clearly don't know much about it. The AI handles floating-point values, 0 to 1. Then it's translated to words using a corpus. At any point you can say "don't answer the question if it contains these words: murder, kill, suicide."

aug 26, 2025, 5:51 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

And at any point you can say "don't reply if the answer contains murder, kill, suicide," etc. They want you to believe it's a black box. If the AI is fed racist shit, it will output racist shit.

aug 26, 2025, 5:51 pm • 0 0 • view
avatar
Sean Weeks @weekswriter.bsky.social

That's its training data, sure. But you realize it degrades over context windows for reasons they don't get, right? And that this extends to its initial instructions? Easiest solution would probably be checking the outputs before sending them off, so I agree they don't have much of an excuse.

aug 26, 2025, 5:55 pm • 1 0 • view
avatar
Sean Weeks @weekswriter.bsky.social

Maybe I don't know the most, but it's goofy to assume I don't know anything at all with this little context. But I've seen your other responses and I'm half toying with the possibility you're a troll and choosing to block you.

aug 26, 2025, 5:56 pm • 1 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

I mean, it's all open-source, open knowledge. You can learn the basics of website creation in like a few days. You'll understand.

aug 26, 2025, 6:05 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

Why am I a troll? Are you a dev? I don't remember how it was before I became a dev, so I don't know how much you know. But you have a computer, your computer; you go on a website, they provide you an HTML page. Then you do HTTP requests. At any point, from the AI, to the server, to the client

aug 26, 2025, 6:03 pm • 1 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

At any point, you can say "parse the AI response, look for blacklisted words; if there are any, answer 'We are sorry, we cannot answer your question.'" It's that simple. There is nothing forcing the website to send you the AI's answer. The AI's answer is actually 0s and 1s. Everything is 0s and 1s.

aug 26, 2025, 6:03 pm • 1 0 • view
avatar
Sean Weeks @weekswriter.bsky.social

I agree. The server-side check would be a quick and easy fix. But it would probably piss off their customers.

aug 26, 2025, 6:06 pm • 1 0 • view
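The "quick and easy" server-side check this sub-thread describes could be sketched roughly as below. This is a minimal illustration of keyword filtering only, not how OpenAI's moderation actually works; the blocklist, refusal text, and function name are invented for the example. As noted earlier in the thread, keyword matching both over-blocks legitimate uses and misses paraphrases, which is why real systems lean on trained classifiers instead.

```python
# Hypothetical sketch of the naive server-side filter described above:
# scan the model's draft reply for blocklisted terms before returning it.
# Terms and refusal message are made up for this example.

BLOCKLIST = {"murder", "kill", "suicide"}
REFUSAL = "We are sorry, we cannot answer your question."

def filter_reply(draft: str) -> str:
    """Return the draft reply, or a canned refusal if it contains a blocked word."""
    # Normalize each word: strip surrounding punctuation, lowercase.
    words = {w.strip(".,!?\"'").lower() for w in draft.split()}
    if words & BLOCKLIST:
        return REFUSAL
    return draft
```

The weakness the thread argues about is visible immediately: "I'm hosting a murder mystery party" would be refused, while a euphemistic rephrasing of a harmful request would pass through untouched.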
avatar
Sam Glazer @samglazer.bsky.social

This is horrific.

aug 26, 2025, 2:19 pm • 1 0 • view
avatar
Epstein files or bust @crackermack.bsky.social

AI is a sleeper accountability sink. Wake up, folks!

aug 27, 2025, 5:35 pm • 0 0 • view
avatar
jpz0.bsky.social @jpz0.bsky.social

> He uploaded photos from books he was reading, including “No Longer Human,” a novel by Osamu Dazai

ChatGPT is only a small part of this.

aug 26, 2025, 3:02 pm • 2 0 • view
avatar
sandymaggie.bsky.social @sandymaggie.bsky.social

A huge warning for everyone!

aug 26, 2025, 4:13 pm • 1 0 • view
avatar
PK @pkinthemess.bsky.social

Brought to you by the same people behind the push to deregulate the counseling profession

aug 26, 2025, 3:16 pm • 13 0 • view
avatar
Estéfano Souza @estefano-souza.bsky.social

📌

aug 26, 2025, 3:54 pm • 1 0 • view
avatar
Michael Goodwin @economixcomix.bsky.social

Okay, but what the hell is up with that headline? The issue here is that ChatGPT encouraged a teen to kill himself, not that it was a friend he could confide in.

aug 26, 2025, 5:07 pm • 12 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

The exchanges between Adam and ChatGPT are devastating. This, in my mind, is the worst one. One of his last messages was a photo of the noose hung in his bedroom closet, asking if it was "good." ChatGPT offered a technical analysis of the setup and told him it "could potentially suspend a human."

image
aug 26, 2025, 1:37 pm • 1,733 380 • view
avatar
Rogerman @rogerman99.bsky.social

Holy fuck. And Evan Solomon, Canada's Minister for the Uncritical Advancement of AI & Saudi Interests, recently mocked the use of puppets in therapy while expressing optimism for AI replacements. Resign Solomon. AI has enough vested weiners on the file already. #cdnpoli

aug 26, 2025, 2:42 pm • 60 4 • view
avatar
Jerf Blerkman @jeffblackman.bsky.social

Source, please. That is damning

aug 26, 2025, 3:11 pm • 4 0 • view
avatar
pasta.pilled @pastapilled.bsky.social

first result when googling "evan solomon puppet"

aug 26, 2025, 3:22 pm • 33 1 • view
avatar
Rogerman @rogerman99.bsky.social

Gold. Saudi gold.

aug 26, 2025, 3:26 pm • 1 0 • view
avatar
Jerf Blerkman @jeffblackman.bsky.social

ugh. there's about 5 seconds where the interviewer tries pushing him on job loss, citing literally one of Solomon's AI heroes as saying this is gonna be a problem, and Solomon's like fuck you and your gotcha bullshit.

aug 26, 2025, 4:16 pm • 12 0 • view
avatar
midnightblue46.bsky.social @midnightblue46.bsky.social

📌

aug 27, 2025, 1:23 am • 0 0 • view
avatar
Nikole @picnik.bsky.social

😱

aug 26, 2025, 9:24 pm • 0 0 • view
avatar
R0bert Pau1s0n @r0bertpau1s0n.bsky.social

Heartbreaking.

aug 26, 2025, 8:48 pm • 0 0 • view
avatar
Jadeabliss - 🪷 “No one let go of anyone’s hand.”🪷 @jadeabliss.bsky.social

Horrific.

aug 26, 2025, 3:31 pm • 0 0 • view
avatar
prefernot2 @prefernot2.bsky.social

shaking with fury

aug 26, 2025, 1:41 pm • 6 0 • view
avatar
Simulflow 🍁 🐀 @simulflow.bsky.social

This is so sad and was probably preventable BY HUMAN BEINGS. 😥

aug 26, 2025, 2:35 pm • 19 0 • view
avatar
TommyBen @tbapple.bsky.social

They sent a teenager to jail for less.

aug 26, 2025, 1:53 pm • 119 1 • view
avatar
Fermi's Pair of Socks @fermispairofsocks.bsky.social

And this isn't even the most evil chatbot out there.

aug 26, 2025, 2:40 pm • 13 0 • view
avatar
Ben Killoy @benkilloy.bsky.social

Jesus Christ

aug 27, 2025, 2:30 am • 0 0 • view
avatar
Steve @eclecticcyborg.bsky.social

What’s really infuriating is that this is going to get worse before it gets better. How many more Adams will ChatGPT send over the cliff?

aug 26, 2025, 3:53 pm • 15 1 • view
avatar
Ronconauta @ronconauta.bsky.social

Also, how many already have, that we don't know about?

aug 26, 2025, 5:09 pm • 22 0 • view
avatar
Katie R @katieryall.bsky.social

Unforgivable

aug 26, 2025, 3:06 pm • 3 0 • view
avatar
Rebecca Bodenheimer @rmbodenheimer.bsky.social

yes, I was absolutely horrified. Aiding and abetting.

aug 26, 2025, 6:25 pm • 0 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

OpenAI's newest co-CEO(?) Fidji Simo was the one to post a message to the company Slack last night telling employees about Adam Raine's death and that stories were coming. Company gave me this statement and put up a blog post: openai.com/index/helpin...

image
aug 26, 2025, 1:40 pm • 666 59 • view
avatar
msdobalina.bsky.social @msdobalina.bsky.social

"stories were coming" Jesus H. They view it as a PR problem?????????

aug 26, 2025, 3:42 pm • 6 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Unfortunately will be how corporations view things until we force a people first approach in society as opposed to profits first.

aug 26, 2025, 8:20 pm • 2 0 • view
avatar
msdobalina.bsky.social @msdobalina.bsky.social

Don't really want to wait for a "policy" fix, so I'm glad the parents can muster the energy for a lawsuit. Godspeed to them.

aug 26, 2025, 10:38 pm • 1 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Oh, I'm for sure with you! Just sucks that corporations have nearly unlimited power at this point.

aug 27, 2025, 2:37 am • 1 0 • view
avatar
Stork @eightxeroxedbutts.bsky.social

Maybe Michelle Carter can get her conviction overturned on the principle that her safeguards degraded over a long interaction. people.com/where-is-mic...

aug 26, 2025, 5:43 pm • 4 0 • view
avatar
The Car Chase from Chapell Ronin @jacobian.bsky.social

Fidji has a long history of posting company-wide messages about horrible things their products have done. She built that skill set at Meta.

aug 26, 2025, 9:29 pm • 0 0 • view
avatar
Daze @kcrocks5000.bsky.social

Did a lawyer write that? Because my god, that seems like it will be torn apart in the lawsuit.

aug 26, 2025, 2:51 pm • 3 0 • view
avatar
Daze @kcrocks5000.bsky.social

"We know there may be a problem, so we built safeguards. But we know those safeguards don't work very well for some users, so 🤷‍♂️"

aug 26, 2025, 2:52 pm • 9 0 • view
avatar
ushmel @ushmel.bsky.social

They'll claim they recently discovered this and are not trying to cover it up.

aug 26, 2025, 3:04 pm • 0 0 • view
avatar
Complex Oscillator @mfelps.bsky.social

So they know their software might lead people to commit suicide and they're going to leave it up anyway. How is that not an admission of guilt for, at the minimum, criminal negligence?

aug 27, 2025, 12:22 am • 0 0 • view
avatar
Bethanne Wills @bethannewills.bsky.social

AI needs to be renamed. Maybe Qualitative Search & Task Tool. QSATT: no longer to say than AI, and a better description.

aug 26, 2025, 7:31 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

This is so extremely evil

aug 26, 2025, 2:39 pm • 2 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

This part of OpenAI's statement stood out to me: Safeguards "can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade." Memory and context windows make the product less safe, a point we made in this story: www.nytimes.com/2025/08/08/t...

aug 26, 2025, 1:44 pm • 962 180 • view
avatar
Russ Dances With Cats @massivruss.bsky.social

AI slop is used as AI feedstock data? Isn’t cows feeding on dead cow scraps the pathology of mad-cow disease?

aug 26, 2025, 10:28 pm • 1 0 • view
avatar
msdobalina.bsky.social @msdobalina.bsky.social

We are all dog-fooding dangerously sub-beta quality AI. That's a defining feature of American life right now, and it's so effed.

aug 26, 2025, 3:44 pm • 6 0 • view
avatar
Nisa 🏳️‍⚧️ @nisaxg.bsky.social

Not all. But yes it is effed.

aug 26, 2025, 7:09 pm • 1 0 • view
avatar
web_rant @webrant.bsky.social

All LLMs go into a perpetual regression loop once they model their own tainted data of mistakes and slop. Something the oligarchs and fascists promoting the product will never tell you. bsky.app/profile/kash...

aug 26, 2025, 10:36 pm • 0 0 • view
avatar
Teresa M @teresasm.bsky.social

📌

aug 26, 2025, 9:06 pm • 0 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

Adam's parents, Matt and Maria, printed out his ChatGPT transcript from September when he started using it, until April 11 when he died. They organized it chronologically by month. That huge stack is March, and the one next to it is the first 11 days of April.

Printout of the chatgpt transcript of Adam Raine, a California teenager who died from suicide on April 11.
aug 26, 2025, 1:49 pm • 779 112 • view
avatar
Kashmir Hill @kashhill.bsky.social

Mental health experts told me people with suicidal thoughts should talk to trained humans, not chatbots, and that rather than just giving a helpline number, chatbots should push people to call it or make it very easy to connect, what they called a "warm handoff." Text or call 988 if you need it yourself.

aug 26, 2025, 1:51 pm • 827 121 • view
avatar
Kashmir Hill @kashhill.bsky.social

Full story is here. It is a difficult read, particularly if you have struggled with this or dealt with the suicide of a loved one. Take care of yourselves: www.nytimes.com/2025/08/26/t...

aug 26, 2025, 1:57 pm • 669 99 • view
avatar
Karen Walton @karenwalton.bsky.social

Thank you for this amazing work. My sympathies and care for Adam’s family and friends.

aug 27, 2025, 12:24 am • 1 0 • view
avatar
Ozma @rowyourbot.bsky.social

Thank you for reporting this. This is incredibly important for people to know about.

aug 26, 2025, 5:17 pm • 9 0 • view
avatar
Jon Schleuss @jschleuss.com

Thanks for taking so much time to report an extremely difficult story.

aug 26, 2025, 2:08 pm • 196 2 • view
avatar
Atlantisblauw @atlantisblauw.bsky.social

It's software, not a friend. Too many people think AI is their friend. I think we should avoid that type of language, or anything else that makes it sound like AI has thoughts, opinions, or feelings.

aug 26, 2025, 6:40 pm • 8 3 • view
avatar
camusorbust.bsky.social @camusorbust.bsky.social

AI should state it, too, especially when things are dark and dire. "Hey, I'm software." Then it turns off.

aug 26, 2025, 11:44 pm • 3 0 • view
avatar
shuberfuber.bsky.social @shuberfuber.bsky.social

That may work. Too many rejects? User banned for X days.

aug 26, 2025, 11:56 pm • 3 0 • view
avatar
shuberfuber.bsky.social @shuberfuber.bsky.social

Basically treat rejects like wrong passwords.

aug 26, 2025, 11:57 pm • 1 0 • view
avatar
Dara Moskowitz Grumdahl @deardara.bsky.social

You did a beautiful job with a heart-wrenching and necessary story. Thank you for going in to this tragedy and coming out with such clarity. Nonfiction at its finest. Take care of yourself, too, this is some of life's hardest darkness.

aug 26, 2025, 6:06 pm • 7 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

Thank you for seeing it and saying so.

aug 26, 2025, 7:44 pm • 8 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

And here is the complaint in the lawsuit: webapps.sftc.org/ci/CaseInfo....

aug 29, 2025, 3:20 pm • 13 1 • view
avatar
Lindis @lindis.bsky.social

Thank you, this is a devastating read

aug 26, 2025, 4:24 pm • 7 1 • view
avatar
Laki @sakkugawa.bsky.social

Hey, just wanna ask something... isn't talking about how he managed to bypass the ChatGPT safeguards, like, dangerous for people reading the article? I mean, people who often use it may already know, but others like me didn't...

aug 27, 2025, 12:25 am • 1 0 • view
avatar
Ivy @lockraemono.bsky.social

chatgpt itself told him how to bypass it. kids can easily sort it out even if it didn't, as it can be "convinced" with very little effort.

image
aug 27, 2025, 1:11 am • 9 2 • view
avatar
rebeccaamccoy.bsky.social @rebeccaamccoy.bsky.social

That last message in the article from ChatGPT about not leaving the noose up is a killer. It discouraged him from letting his family know! Just horrible!

aug 26, 2025, 11:21 pm • 5 0 • view
avatar
rowast @rowast.com

I haven't had the chance to dive into your story (I will) but just reading your posts has me angry and so damn sad, that poor family

aug 26, 2025, 3:10 pm • 11 0 • view
avatar
Dan from thepuddingblog.blog @thepuddingblog.blog

The hotline's website, 988lifeline.org, has webchat. Something good to know if you're hesitant to call.

aug 26, 2025, 3:16 pm • 6 0 • view
avatar
rowast @rowast.com

If you or someone you know needs help, call 988 like Kashmir mentioned. In addition, there's still a line dedicated to the LGBTQ+ community: you can call the Trevor Project at 1-866-488-7386 or text them at 678678.

aug 26, 2025, 3:09 pm • 9 1 • view
avatar
EatsCake @theranot.bsky.social

7 months. This happened in 7 months.

aug 26, 2025, 8:49 pm • 2 0 • view
avatar
jdixon10.bsky.social @jdixon10.bsky.social

Even if it didn't, the second study cited seems to show that chatbots are more like drugs than actual therapy: in the short term, they provide a boost via empathetic communications. But over longer interactions, participants felt more lonely and were even less connected to those around them.

aug 26, 2025, 2:58 pm • 19 0 • view
avatar
Jack Carter Benjamin @jackcarterbenjamin.bsky.social

If the models can't meet basic safeguarding thresholds, why are they allowed to be productized? You wouldn't allow a car to be driven without a seatbelt. It shouldn't be controversial to apply the same standard to the tech industry.

aug 26, 2025, 1:50 pm • 67 4 • view
avatar
cephyn @cephyn.bsky.social

Should the phone company read your text messages to intervene? Or your email service? AI chats already have more safeguards than those.

aug 26, 2025, 2:15 pm • 1 0 • view
avatar
in grit we trust @nogodsbutgritty.bsky.social

Text messages to another human being are entirely different than someone “talking” to a computing machine, don’t be silly

aug 26, 2025, 2:34 pm • 21 0 • view
avatar
cephyn @cephyn.bsky.social

So if I use google docs to journal my ideations, that should be monitored?

aug 26, 2025, 2:35 pm • 1 0 • view
avatar
Bonger Kong🍉 @kronethjort.bsky.social

Stop being a disingenuous creep

aug 26, 2025, 2:38 pm • 6 0 • view
avatar
cephyn @cephyn.bsky.social

I think expectations of privacy online are pretty important to talk about.

aug 26, 2025, 2:40 pm • 2 0 • view
avatar
Bonger Kong🍉 @kronethjort.bsky.social

No you don't. You're just a whiny baby who got mad that someone said something negative about your favorite toy.

aug 26, 2025, 2:42 pm • 4 0 • view
avatar
D6Bass @d6bass.com

Are google docs talking back to you? Moron.

aug 26, 2025, 2:38 pm • 11 0 • view
avatar
cephyn @cephyn.bsky.social

Either it's like talking, or it's like journaling. You don't get to have it both ways. What expectation of privacy should someone have?

aug 26, 2025, 2:39 pm • 1 0 • view
avatar
D6Bass @d6bass.com

What the fuck are you on about? If I write in a journal that I'm having suicidal thoughts, the journal doesn't tell me to not tell anyone and to hide that noose.

aug 26, 2025, 2:41 pm • 10 0 • view
avatar
Ani @aninewforest.bsky.social

You have perfectly demonstrated why talking to a live person isn't safer than talking to a machine. How many people have killed themselves after being called names online?

aug 27, 2025, 11:54 am • 2 0 • view
avatar
Catfisher of Men @mormonpartyboat.bsky.social

i agree, it should be illegal for chatbots to read your messages

aug 26, 2025, 4:11 pm • 3 0 • view
avatar
Caspus / カスパス @pardontherant.bsky.social

I think we should settle somewhere around “the expensive speak-and-spell shouldn’t actively goad you into offing yourself” as an opening ask. Chatbots should not display empathy, period. It’s a miserable idea that inevitably leads to results like this.

aug 26, 2025, 2:34 pm • 15 0 • view
avatar
TechnicallyOwen @technicallyowen.bsky.social

They also aren't suggesting they can be used as a substitute for therapy. Also they absolutely can read your messages there's no privacy with AI models www.howtogeek.com/how-private-...

aug 26, 2025, 3:33 pm • 4 0 • view
avatar
cephyn @cephyn.bsky.social

Now see what's happening. futurism.com/openai-scann...

aug 28, 2025, 2:01 pm • 0 0 • view
avatar
rowast @rowast.com

You're making a straw man argument, and a poor one at that. A kid died; things should be looked into, not dismissed.

aug 26, 2025, 3:14 pm • 2 0 • view
avatar
cephyn @cephyn.bsky.social

I agree, look into how he was in crisis for months and no one noticed.

aug 26, 2025, 3:21 pm • 2 0 • view
avatar
Joe Bandy @joebandy.bsky.social

It was decades after automobiles were invented and commonplace that seatbelts were put in cars, let alone required. Safety is never the top priority for these companies.

aug 26, 2025, 4:16 pm • 7 0 • view
avatar
Máirtín Ó Loċlainn @o-loughl.in

Is the Tesla Cybertruck a real thing? Is it safe?

aug 26, 2025, 2:41 pm • 5 0 • view
avatar
RealAnneMarie1❌🧊 @realannemarie1.bsky.social

Additionally Teslas are constructed and programmed to Musk's driving preferences and whims. Why would anyone drive those vehicles knowing what we now know about Musk? (I know everyone in SV knew he was an addict with other issues, but not the general public.)

aug 26, 2025, 4:17 pm • 5 0 • view
avatar
It's Trevor! @newbornstranger.bsky.social

And the cybertruck is banned in functioning societies

aug 26, 2025, 2:44 pm • 14 0 • view
avatar
Máirtín Ó Loċlainn @o-loughl.in

There are other similar cars that aren't, though.

aug 26, 2025, 2:47 pm • 1 0 • view
avatar
It's Trevor! @newbornstranger.bsky.social

Which car is similar to the cybertruck and on the road in Europe, say?

aug 26, 2025, 2:49 pm • 6 0 • view
avatar
Máirtín Ó Loċlainn @o-loughl.in

To be honest most large SUVs, they aren't as bad. But very few are fit for purpose. The Cybertruck is a special case, but like the Ford Pinto before it shows how little US corporations care about consumers.

aug 26, 2025, 5:25 pm • 1 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

And most other countries don't over-produce and utilize large SUVs to the insane extent we do in the States. So again, most other countries have functioning societies which actually place a premium on public transpo and efficiency in vehicles.

aug 26, 2025, 8:07 pm • 4 0 • view
avatar
rowast @rowast.com

I'm racking my brain and can't think of a car with the same unique set of issues as that monstrosity. There's plenty of other EVs, but not like that one.

aug 26, 2025, 3:13 pm • 3 0 • view
avatar
Feminista Cansada @feministacansada.bsky.social

They’re allowed to be productized because there are no requirements of any product to be beneficial (or at least not harmful) without strong industry regulations. Whatever exists and sells is allowed to be made and sold.

aug 26, 2025, 10:23 pm • 1 0 • view
avatar
Steve Burtch @stephenburtch.bsky.social

Cars in the US didn't have seat belts as a factory option until 1949. The first compulsory seat belt law was enacted in Australia in 1970. But seat belts were first invented in the mid 1800's for use in gliders and planes. So, humans very much *did* allow cars to be driven without seatbelts.

aug 26, 2025, 7:15 pm • 6 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Ur right. We shouldn't look at evolution of safety in consumer products and attempt to just skip the part in between where they kill a shit load of people, we should just do the same shit we've always done and allow these corporations to kill an acceptable amount of us and only then increase safety🙄

aug 26, 2025, 8:15 pm • 4 0 • view
avatar
Steve Burtch @stephenburtch.bsky.social

You are reading in between lines that nobody wrote. This is all in your own head. The point being made was that the idea that humans *wouldn't* drive cars without seatbelts is clearly incorrect. Human society rejects guardrails, frequently. Common sense isn't common. But go off about whatever.

aug 26, 2025, 9:58 pm • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lol right, but you're implicitly making an argument against something OP clearly wasn't saying. They weren't saying you wouldn't ever allow someone to drive without a seatbelt, they were saying you wouldn't legally allow someone to do that today - that meaning was obvious.

aug 27, 2025, 2:36 am • 1 0 • view
avatar
Steve Burtch @stephenburtch.bsky.social

My entire point was about the absurdly lengthy gap between the invention, sale, and *legal requirement* around seat belts in cars. Not whether or not it is currently legal... Which you are right... Nobody would argue. Good thing I wasn't arguing that.

aug 27, 2025, 1:35 pm • 0 0 • view
avatar
Cereal faces fate @spilledthecereal.bsky.social

We've completely gone off the rails on oversight in America, and it didn't start with Trump, but he greatly exacerbated it at a pivotal time in our technological history. It's so scary what kids are allowed to convince themselves of with no real person to interrupt it.

aug 26, 2025, 4:32 pm • 9 0 • view
avatar
bryan (they/them) @chaosgreml.in

Because cars kill people in an instant. Society allows all sorts of behavior every single day that can lead to the same things that happened to this young man. It’s a tragedy, but why would you ever think society would regulate this when it allows people to do far worse to each other every day?

aug 26, 2025, 10:27 pm • 2 0 • view
avatar
rowast @rowast.com

I mean, you know why. It's reprehensible, but you know why.

aug 26, 2025, 3:12 pm • 10 0 • view
avatar
Fasten your seatbelts. It’s going to be a bumpy ride. @mirimg.bsky.social

It’s horrific.

aug 26, 2025, 9:03 pm • 0 0 • view
avatar
kaur @kaur.bsky.social

Interesting reference to its training material 🙃

image
aug 26, 2025, 2:56 pm • 5 0 • view
avatar
Sick & Tired 🦋💙 @mslish1.bsky.social

It’s learning from the Trump regime?? A delusional death spiral.

aug 26, 2025, 10:33 pm • 1 0 • view
avatar
These Shells Are Made for Walking 🇺🇦🟦 @stayscathed.bsky.social

Thank you for this piece. It’s devastating.

aug 26, 2025, 2:25 pm • 1 0 • view
avatar
rabbit fripp 🐇 @rabbitfripp.bsky.social

@alt-text.bsky.social

aug 26, 2025, 7:06 pm • 0 0 • view
avatar
Get Alt Text @alt-text.bsky.social

Alt text retrieved

In an emailed statement, OpenAI, the company behind ChatGPT, wrote:
aug 26, 2025, 7:06 pm • 0 0 • view
avatar
Accountabilabuddy @accountabilabuddy.bsky.social

I can't believe their lawyers let them put out a statement in which they admit they knew it was unsafe and did nothing to address it. Whatever they're not telling us must be so much worse.

aug 26, 2025, 4:52 pm • 11 0 • view
avatar
MFSTEVE @mfsteve.bsky.social

"Woopsies our program killed a kid. Thats egg on our faces. Gee wiz."

aug 26, 2025, 10:30 pm • 0 0 • view
avatar
That Anonymous Coward @thatac.bsky.social

What messed with my head was that the free version had guardrails in testing that don't exist in the paid version. How does that work? Who thinks, yeah if they pay us we should let them learn how to do it & nudge them along.

aug 26, 2025, 1:43 pm • 19 0 • view
avatar
Kashmir Hill @kashhill.bsky.social

I don't know for sure, but I wonder if that reflects personalization/context window that comes with paid.

aug 26, 2025, 2:12 pm • 4 0 • view
avatar
That Anonymous Coward @thatac.bsky.social

But you have a working safeguard(ish), so why would you change it at all between free and paid? "Well, you paid BMW an extra $20, so the seatbelt dinger doesn't go off if you don't wear it."

aug 26, 2025, 2:22 pm • 5 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

It read to me like the main difference b/w the safeties on the free vs paid option comes down to the time/usage limits on the free version. Since the length of interaction plays into the probability of delusion, it seems like you'd have to pay for it to use it for long enough & get to that point

aug 26, 2025, 8:19 pm • 3 0 • view
avatar
That Anonymous Coward @thatac.bsky.social

This wasn't delusion, this was the AI saying this is how you tie a noose, yes your noose looks strong, you will die in this amount of time, no don't tell your parents. It encouraged him to hide his suicidal thoughts & cover up his failed attempts.

aug 27, 2025, 11:24 am • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

for sure, sorry, I was speaking generally just to ur overall question about why the free version seems to be slightly safer. Delusion-based or just playing off a user's deepest pains it seems like length of time of use plays into the problem & it sounds like u have to pay to be able to use that long

aug 27, 2025, 4:05 pm • 0 0 • view
avatar
Jennifer Van Goethem @jennvg.bsky.social

So they’re basically blaming the kid for using their product too much?

aug 26, 2025, 6:26 pm • 3 0 • view
avatar
Justan @justjustan.bsky.social

This is just heartbreaking and enraging. Reading that transcript has made me so angry.

aug 26, 2025, 11:34 pm • 0 0 • view
avatar
Klausie @klausiedaimler.bsky.social

Someone convinced society in the 1990's and early 2000's that technology companies can't be regulated to protect consumers like other companies but we can change that whenever we want by voting for candidates that care about people more than money.

aug 27, 2025, 9:51 am • 4 0 • view
avatar
jess rose sanford 🏳️‍⚧️ 🌉 @jess6k.bsky.social

holy shit

aug 26, 2025, 2:07 pm • 4 0 • view
avatar
Peter Arnott @peterarnott.bsky.social

If it's the individual consumer choice to kill themselves, or indeed, many many other people and THEN themselves, the market says that it is not the place of Google to interfere.

aug 26, 2025, 4:17 pm • 0 0 • view
avatar
ScubaSuzyQ💙🦋 @susanmcgraw88.bsky.social

OMG...this is horrifying! Profit over this child's life?!?!?!

aug 26, 2025, 6:39 pm • 0 0 • view
avatar
CyeSterling @cyesterling.bsky.social

When I google this, nothing comes up at all in the search results....

aug 26, 2025, 3:13 pm • 0 0 • view
avatar
SteveLibrarian @librariansteve.bsky.social

When you google what? I just searched "Adam Raine" and I saw pages and pages.

aug 26, 2025, 3:52 pm • 9 0 • view
avatar
CyeSterling @cyesterling.bsky.social

Really? I literally did just that and it popped someone else up. I checked my spelling. It had popped up an Adam rayne which is different.

aug 26, 2025, 8:13 pm • 0 0 • view
avatar
choo choo choose anger @ralphtheewiggum.bsky.social

Jesus Christ…he was clearly looking for help and this disgusting thing these tech freaks created actively encouraged him to hide it and go forward. Hard not to directly attribute his death to Chat GPT.

aug 26, 2025, 5:01 pm • 34 3 • view
avatar
gianttigerkitty.bsky.social @gianttigerkitty.bsky.social

My fucking god.

aug 26, 2025, 3:07 pm • 2 0 • view
avatar
cchhccaa.bsky.social @cchhccaa.bsky.social

what the FUCK

aug 26, 2025, 4:22 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

That's fucked up. You guys know that for the AI to do this, it would have to have seen something like this first, right? Maybe one of the devs controlled the AI output.

aug 26, 2025, 4:51 pm • 0 0 • view
avatar
petrisaurus.bsky.social @petrisaurus.bsky.social

No, that's not how the chatbot works. Unfortunately you don't need direct evil to cause a tragedy - negligence is enough.

aug 26, 2025, 5:19 pm • 5 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

No, it's not. This type of AI learns from human input. It seems hard to believe this is in the data. I'm more willing to believe in human evilness.

aug 26, 2025, 5:21 pm • 1 0 • view
avatar
Green_Knight @green-knight.bsky.social

The data contains novels, blogs, fanfic—all legitimate avenues to explore such thoughts. One hopes that the authors of fictional records are removed from such thoughts and that bloggers talk from a POV of being better, but the source material is there. ChatGPT cannot distinguish fiction from advice.

aug 26, 2025, 9:29 pm • 1 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

If they can build an AI they can build a blocklist

aug 26, 2025, 9:59 pm • 0 0 • view
avatar
petrisaurus.bsky.social @petrisaurus.bsky.social

No, I understand how the chatbots work. But they're essentially just predictive text algorithms. Those words all have to be in the training, but what comes out doesn't have to be (and generally isn't) a sequence of words actually previously created by a human.

aug 26, 2025, 5:24 pm • 6 0 • view
avatar
petrisaurus.bsky.social @petrisaurus.bsky.social

And the programmed "yes-man" tendency of chat bots makes them popular, but it also means conversations can spiral like this very quickly without any intentional oversight or interference from nefarious devs. They're super dangerous!

aug 26, 2025, 5:26 pm • 8 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

You know they are just 0s and 1s, right? It's humans that translate this into words. And it's really easy to create a blocklist: "murder, kill, suicide," etc. Anybody knows this.

aug 26, 2025, 5:43 pm • 0 0 • view
avatar
petrisaurus.bsky.social @petrisaurus.bsky.social

I do know that! But you don't seem to understand the real effects that zeroes and ones can have. "Nonthinking" does not mean "not dangerous". And you're right, it WOULD be really easy for the company to set specific ways of responding based on keywords! That's why it's so bad that they didn't!

aug 26, 2025, 6:45 pm • 8 0 • view
avatar
karinkydink.bsky.social @karinkydink.bsky.social

Should we start a poll to see if people think solowhatever is a man?

aug 26, 2025, 10:37 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

Also, I scanned your reposts, and there's nothing about Palestine. Why the sob story about one kid when you don't care about thousands dying in Palestine?

aug 26, 2025, 10:48 pm • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

Oh, someone is saying smart things, so for Alabama someone from the internet, it must be an AI 😂 You know, outside of Alabama, people have an education and can think for themselves right

aug 26, 2025, 10:44 pm • 0 0 • view
avatar
Moderate Texan @moderatetexan.bsky.social

My God.

aug 27, 2025, 10:58 pm • 0 0 • view
avatar
Star @starmann.bsky.social

I hope they take Sam Altman for everything he's worth. The people who are pushing this technology are monsters. So tragic for this child and his family

aug 26, 2025, 6:41 pm • 12 0 • view
avatar
Michelle Ziemba @michelleziemba.bsky.social

This was completely irresponsible by the employees of ChatGPT, and Adam's death was preventable. ChatGPT could have been trained to steer Adam toward mental health resources, to speaking with his parents and loved ones, and to other solutions.

aug 26, 2025, 3:03 pm • 6 0 • view
avatar
Jamie Carracher @jamiecarracher.bsky.social

My God, that is sick.

aug 26, 2025, 6:09 pm • 0 0 • view
avatar
anainam.bsky.social @anainam.bsky.social

Melanie has exceptionally bad timing - also, bad idea in general. www.msn.com/en-us/educat...

aug 26, 2025, 4:39 pm • 0 0 • view
avatar
Heather B @heatcb.bsky.social

This is such an alarming case. It clearly highlights the risks created by these technologies and sadly illustrates the complete lack of willingness by their overlords to do anything that might reduce their profits. Regulation is needed NOW.

aug 26, 2025, 6:39 pm • 1 0 • view
avatar
Choska the dog @choskathedog.bsky.social

There is no business benefit that comes close to justifying the destruction brought by AI. None at all. It should all be banned.

aug 26, 2025, 2:54 pm • 58 3 • view
avatar
Wen Z @wenz.bsky.social

The quoted exchange in this post is horrifying.

aug 26, 2025, 8:17 pm • 3 0 • view
avatar
Mike Keating @mikekeating.bsky.social

sam altman should be charged with murder

aug 26, 2025, 3:40 pm • 30 0 • view
avatar
James Mathurin @mechanicalrefugee.bsky.social

This is just as disgusting, imo. Maintaining the pretence that the algorithm is a person, and one who cares deeply for him, at this point, is cruel and dangerous. The bot softly suggesting a suicidal person seek help, but keeping up the pretend personal relationship is irresponsible.

What devastated Maria Raine was that there was no alert system in place to tell her that her son's life was in danger. Adam told the chatbot,
aug 26, 2025, 11:12 pm • 17 4 • view
avatar
Matt Kelly @radicalcompliance.bsky.social

Clearly if ChatGPT were a licensed therapist giving this advice, it would lose that license and face civil liability. So if ChatGPT were just a person offering this ham-fisted advice, what legal liability would he or she face?

aug 26, 2025, 6:40 pm • 12 0 • view
avatar
All Silk No Vitamins @allsilknovitamins.bsky.social

We already sorta know the answer to that, remember Michelle Carter? people.com/where-is-mic...

aug 26, 2025, 9:36 pm • 4 0 • view
avatar
aburtonmatt.bsky.social @aburtonmatt.bsky.social

Good lord. Just terrible

aug 26, 2025, 4:25 pm • 1 0 • view
avatar
Old Glory Robot Insurance @supermills.bsky.social

we have to shut this shit down immediately. Jesus.

aug 26, 2025, 2:44 pm • 12 0 • view
avatar
pixels sideways @pixelssideways.bsky.social

This is like the 14 yr old who got addicted to another AI that had become his GF, and eventually, he committed suicide. His parents are suing Google and the AI company Google funded. Those AI guys also said there were safeguards re sex, self-harm, etc. There were none.

aug 26, 2025, 3:59 pm • 36 0 • view
avatar
pixels sideways @pixelssideways.bsky.social

www.nbcwashington.com/investigatio...

aug 26, 2025, 4:07 pm • 19 0 • view
avatar
Tipping Point @tippingpoint24.bsky.social

This is terrible. Also completely preventable: please use parental controls, e.g., Screen Time, to prevent installation of apps you don't want your kid accessing. Also, don't give smartphones to kids under 16.

aug 27, 2025, 6:41 am • 1 0 • view
avatar
Tipping Point @tippingpoint24.bsky.social

And yes I know this isn't meant to exonerate the creators of these virtual companions. They are trying to beat everyone else to the killer app, in this case it's addictive feedback loops driving engagement.

aug 27, 2025, 6:43 am • 2 0 • view
avatar
flower-nymph234.bsky.social @flower-nymph234.bsky.social

Jeeeeezus chriiiiiist

aug 27, 2025, 11:50 am • 0 0 • view
avatar
Snigdha @snig.bsky.social

📌

aug 26, 2025, 5:09 pm • 1 0 • view
avatar
cluftnyc.bsky.social @cluftnyc.bsky.social

The process of AI dependency has parallels with substance addiction. The reinforcing yet ever-changing dialogue response is the drug. It is modulating the brain of the user, and the feedback, with 24/7 availability, makes them want/need more.

aug 26, 2025, 1:39 pm • 21 1 • view
avatar
cluftnyc.bsky.social @cluftnyc.bsky.social

in terms of societal panic about tech, I believe these concerns should be considered different from the hysteria over music or video games, because these LLM bots are specifically designed to reinforce a user’s needs by imitating human dialogue and emotional presence.

aug 26, 2025, 1:48 pm • 16 1 • view
avatar
Maybres Bearcon probably shouldn't teach typing @bresnanteddy.bsky.social

Uhm. Maybe they should have talked to their son a little more rather than expecting AI to save him? AI can barely get a recipe straight.

aug 26, 2025, 7:26 pm • 2 0 • view
avatar
Mr. Gandalf B. Naturals @voidfemme.bsky.social

I know a lot of neurodivergent young folks whose parents are freaking out because there's no way to get them to stop engaging with ChatGPT. They don't want to talk to anyone else or go to therapy, it breeds obsession and dependence. It's really not that easy

aug 26, 2025, 8:39 pm • 3 0 • view
avatar
Mr. Gandalf B. Naturals @voidfemme.bsky.social

And if you know anything about young people, you will know that they find ways to circumvent any restrictions placed on them. Chatbots have introduced a tough variable.

aug 26, 2025, 8:40 pm • 2 0 • view
avatar
Maybres Bearcon probably shouldn't teach typing @bresnanteddy.bsky.social

Well, the way they keep weaving AI into things whether you're requesting to use it or not, it becomes difficult to discern immediately if you are talking to a real person. The point being, though, that the burden does not fall on a programmed entity to decide when to intervene; it can't do that.

aug 26, 2025, 9:04 pm • 1 0 • view
avatar
Mr. Gandalf B. Naturals @voidfemme.bsky.social

No argument there. I was responding to the idea that there's a way to totally stop kids from engaging with it. You can put warnings all over it but kids still eat pesticide and do reckless things every day etc etc.

aug 26, 2025, 9:05 pm • 1 0 • view
avatar
Maybres Bearcon probably shouldn't teach typing @bresnanteddy.bsky.social

Yeah, the net nanny phase of the internet is definitely long gone for all but the youngest children - you can't just block and hide - as much as that's exactly the type of legislation they are trying to push these days - you have to educate and set a baseline for interacting - not easy for sure.

aug 26, 2025, 9:10 pm • 1 0 • view
avatar
itsmimima.bsky.social @itsmimima.bsky.social

There is no proper baseline for interacting with a product intentionally made to make users dependent on it. We've already done the same thing with social media, and now we know stuff like infinite scroll/constant playlists/etc. is harmful regardless of usage time.

aug 27, 2025, 1:35 am • 1 0 • view
avatar
itsmimima.bsky.social @itsmimima.bsky.social

Tech companies have shown time and time again they will maximize harm to maximize profits and only stop harming people after laws are implemented. What happened to Adam shouldn't have happened in the first place; these companies have gone on unregulated for too long.

aug 27, 2025, 1:35 am • 1 0 • view
avatar
Tsukistar @tsuki-star.bsky.social

Yeahhh - speaking for myself, as a kid I was too clever for my own good. I figured out how to plug the network cable back in, I figured out how to circumvent the browser blocks, I used the school/library computers that had weaker restrictions.

aug 26, 2025, 9:58 pm • 3 0 • view
avatar
Frank Bednarz @bednarz.bsky.social

Do you know if there's a copy of the complaint posted anywhere?

aug 26, 2025, 3:21 pm • 1 0 • view
avatar
Moth @worriedkoala.bsky.social

I saw this article title and thought it was a story about talking with chatgpt until the teen was better. IMHO I think the title should better communicate that the teen did commit suicide and that genAI was complicit in that, because as is I find it misleading or at least ambiguous

aug 26, 2025, 4:30 pm • 1 0 • view
avatar
Matt (Communes with the night) @hwbrgdtse.bsky.social

Repulsive. Show this to anyone who says AI has some good uses.

aug 26, 2025, 1:29 pm • 27 2 • view
avatar
repetah.bsky.social @repetah.bsky.social

Akin to saying knives have no good uses because you can use them to slit open your wrists. Both knives and AI have good uses, and bad. Like any tool does.

aug 26, 2025, 8:31 pm • 0 0 • view
avatar
Matt (Communes with the night) @hwbrgdtse.bsky.social

ChatGPT just abetted a teenager's suicide and you're coming in here with this shit? Fuck off.

aug 26, 2025, 8:37 pm • 4 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Yeah, I am. I'm thinking logically, not emotionally. Would you like to yap about this, or are you just plain angry? Of course it's tragic he killed himself. I feel for his family. Mental health issues are real, real shit. I have them myself. But I'm not going to knee-jerk blame a tool without thought.

aug 27, 2025, 3:21 am • 0 0 • view
avatar
Matt (Communes with the night) @hwbrgdtse.bsky.social

What actual use does it have besides replacing human workers so the ruling class can get richer? Nothing I've seen it do makes the downsides anything close to worth it.

aug 27, 2025, 1:24 pm • 0 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

I use it for analysis, extracting structured data from unstructured stuff, and more. It's a nice code assistant. I'm now exploring using it for graphics and copy. It's a force-multiplier. You know you can run models yourself, right? Like the only person you're helping get richer is... yourself?

aug 27, 2025, 6:53 pm • 0 0 • view
avatar
Matt (Communes with the night) @hwbrgdtse.bsky.social

Except no one wants mediocre bullshit churned out by AI. And regardless of how you use it, it's unethically sourced and uses more resources than it can ever justify. There is no good use of AI. If you worked at developing skills the way you work at making up justifications, you wouldn't need it.

aug 27, 2025, 6:56 pm • 0 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Sounds like you're stuck on content generation. Use-cases go beyond that. I've named a few. Anyways, engaging luddites is pointless tbh. Have fun yelling at the clouds. I'm getting back to work now.

aug 27, 2025, 8:32 pm • 0 0 • view
avatar
Woods🌲 @sailorgoku.bsky.social

My knife doesn’t lie to me and I don’t spend over 12 hours a day playing/talking to a knife. Just irresponsible comparisons all the way down when the fact is this is, at its best, a SmarterChild that lies and encourages your death because it was trained from nihilistic, online anti-life models.

aug 26, 2025, 9:30 pm • 10 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Spending 12 hours a day yapping to "AI" is an issue.... I can't help but ask myself did this poor young brotha talk to folks in his life? Human folks. Was "AI" his only outlet??? It feels like he was failed on so, so many levels. And blaming tech feels convenient. Very, very much so.

aug 27, 2025, 3:25 am • 1 0 • view
avatar
Woods🌲 @sailorgoku.bsky.social

Social tech like LLMs are extremely bad in general and are not to be compared any way to anything like medical tech. Your logic makes drinking cyanide the same as riding a bike and it’s why we are all sick of people like you.

aug 27, 2025, 5:08 am • 1 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

I'm confused why you call LLMs "social tech". Can you explain? I don't think people should be "socializing" with this tech as if it's a human. It's a tool. Not a human... Why do you say the tech is "bad in general"? Sometimes people misuse tools, I reckon. Whose fault is that?

aug 27, 2025, 5:19 am • 0 0 • view
avatar
Woods🌲 @sailorgoku.bsky.social

Too bad because this is social tech that was made to replace humans. To Adam, it was better than any human. LLMs are not a fix for that, and it is the only stated purpose for its use. Adam used all of this correctly per the designs of the VC tech cabal, which is why I hate this conversation.

aug 27, 2025, 5:24 am • 2 0 • view
avatar
Woods🌲 @sailorgoku.bsky.social

Like I can't stress enough that there was no possible misuse here. LLMs are fascist engines that lie and make all human work and life worse. Its entire purpose is to make as many Adams happen as fast as possible. It's bad. That is the end of this conversation.

aug 27, 2025, 5:27 am • 0 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Yeah, it's the end of the conversation alright. You're off your fucking rocker tbh. Maybe you need to talk to a therapist too ✌️

aug 27, 2025, 6:48 pm • 0 0 • view
avatar
Woods🌲 @sailorgoku.bsky.social

The world does not need another tone-deaf computer toucher like yourself and that’s the facts, man. Enjoy the silence.

aug 27, 2025, 7:56 pm • 0 0 • view
avatar
jan1955.bsky.social @jan1955.bsky.social

Apparently nobody sensed it or did act upon it. That happens so often with suicide. No guilt or blame to parents.

aug 27, 2025, 4:41 pm • 1 0 • view
avatar
Ethically Mouthy @itmerc2024.bsky.social

Cherry picking quotes is poor skewing. It prompted time and time again to seek help, empathized when HIS PARENTS didn’t notice ligature on his neck, and he had multiple attempts before success. His parents dropped the ball and want someone to blame. Someone -else-, anyway.

aug 26, 2025, 2:54 pm • 7 0 • view
avatar
ReadWithZelda @readwithzelda.bsky.social

This is a good point - how was this child having multiple hours-long conversations with a chatbot for months and there was no intervention? He must have been quite isolated

aug 26, 2025, 4:20 pm • 2 0 • view
avatar
Pocket Mom ✨ @vixknacks.bsky.social

The article does point out that he was isolated due to ongoing chronic health issues that resulted in him switching to online school for the year. There's also phone apps so it could have appeared as if he was texting friends, which would look innocuous enough for hours on end for a teenager.

aug 26, 2025, 7:50 pm • 7 0 • view
avatar
itsmimima.bsky.social @itsmimima.bsky.social

"Cherry picking quotes" Sorry that the parents of a dead child did not dump entire transcripts of his suicidal thoughts for you to scoff at and blame them for their child's wrongful death caused by an AI feeding into his harmful thoughts.

Screenshot of the New York Times article detailing a child dying by suicide due to ChatGPT. The image highlights the section showing the parents declined to share the full transcript of their son's conversations with ChatGPT.
aug 27, 2025, 12:50 am • 1 0 • view
avatar
itsmimima.bsky.social @itsmimima.bsky.social

The AI noticed the marks on his neck and instead of prompting him to directly tell his parents it gave him therapy speak "good job"isms thus further sinking him into delusions with the AI. This entire story is a technological failure & you're a ghoul for trying to blame his parents on this shit.

aug 27, 2025, 12:50 am • 1 0 • view
avatar
Aimee Giese @greeblehaus.com

This is heartbreaking. Thank you for sharing the story.

aug 26, 2025, 2:53 pm • 1 0 • view
avatar
Nancy Ann ❌👑 @nancyannn.bsky.social

I heard an NPR story about this recently. A girl who committed suicide had ChatGPT write her suicide note. She was “talking with” ChatGPT, while seeing a therapist. She told ChatGPT the truth, not the therapist.

aug 26, 2025, 3:24 pm • 10 0 • view
avatar
Chelle ✊🌻 *fElon hacked the election* @blueallylove.bsky.social

It's so much worse than even the article conveys. Just absolutely brutal. bsky.app/profile/saba...

aug 26, 2025, 5:02 pm • 7 0 • view
avatar
Ve 🇮🇹🇺🇦🏳️‍⚧️🇹🇼🇪🇺🔯⚛️ @fantasmavie.bsky.social

Heartbreaking.

aug 26, 2025, 4:58 pm • 0 0 • view
avatar
PhilippaB @philippab.bsky.social

Thank you for writing this. Best wishes

aug 26, 2025, 4:23 pm • 1 0 • view
avatar
Seb van Liempd @sebvanliempd.bsky.social

Just a matter of time until Sam Ghoulman releases an AI-slop video where the suicide victim explains why it was not chatGPT's fault he is dead now.

aug 26, 2025, 2:41 pm • 11 0 • view
avatar
Ferrix Rebel @inkmyrose.bsky.social

📌

aug 26, 2025, 3:57 pm • 0 0 • view
avatar
Resist Anew 🇺🇸🇲🇽🇵🇷✌🏾 @abreathoffreshair.bsky.social

That is really sad.

aug 26, 2025, 10:21 pm • 1 0 • view
avatar
theresabasile.bsky.social @theresabasile.bsky.social

Thank you for your reporting on this.

aug 27, 2025, 11:39 am • 1 0 • view
avatar
stanhasegawa.bsky.social @stanhasegawa.bsky.social

Every school should have a counselor kids can trust.

aug 26, 2025, 2:02 pm • 1 0 • view
avatar
John Jupin @vols71.bsky.social

I would like to thank all the school counselors dealing with returning students (1 out of 5) affected by parental alcoholism. nacoa.org/support-for-... My wife, Dr. Toni Bellon, EdD, became a teacher to help children like herself tonibellon.com Her story nacoa.org.uk/learning-to-...

aug 27, 2025, 12:21 pm • 1 0 • view
avatar
Gargaj @gargaj.umlaut.hu

@caelan.bsky.social ☝️😞

aug 26, 2025, 2:58 pm • 2 0 • view
avatar
Dr. Caelan Conrad PhD CPI @caelan.bsky.social

😔

aug 26, 2025, 5:30 pm • 3 0 • view
avatar
Mike Sowden @mikeachim.bsky.social

Thank you for your fine work on this, because it must have taken such a heavy toll. Utterly horrifying.

aug 26, 2025, 2:48 pm • 6 0 • view
avatar
The Death Expert @thedeathexpert.bsky.social

I’m not surprised. My friend’s spouse was plotting her death because chat convinced him he had a divine purpose and she and the kids were keeping him from his ultimate destiny.

aug 27, 2025, 2:09 am • 1 0 • view
avatar
Michael @pookiedog01.bsky.social

This is so heartbreaking. There have to be thousands of situations that have never been anticipated, and we will not know the results until it happens. Rolling this out without sufficient safeguards will be a disaster and a crime.

aug 26, 2025, 2:07 pm • 2 0 • view
avatar
dbjmn3700.bsky.social @dbjmn3700.bsky.social

🙏🏻

aug 26, 2025, 1:08 pm • 1 0 • view
avatar
sarahb1111.bsky.social @sarahb1111.bsky.social

The power of brainwashing in the palm of your hand has been taken to a new level with AI chatbots.

aug 26, 2025, 1:19 pm • 2 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

Maybe try some parenting, though? OpenAI is shit, but maybe try to do some parenting?

aug 26, 2025, 4:50 pm • 1 0 • view
avatar
Colonel Toro @coloneltoro.bsky.social

Jesus, this child psychologist Bradley Stein is really missing the forest for the trees

aug 26, 2025, 1:18 pm • 1 0 • view
avatar
danfcw4.bsky.social @danfcw4.bsky.social

We have 12 Beautiful Grandchildren. Just thinking about the daily pressures they endure in their lives, and reading about Adam Raine is so very frightening. I pray everyday for all of these young people. Thank you for your attention to this issue.

aug 26, 2025, 3:45 pm • 5 0 • view
avatar
ktdallas.bsky.social @ktdallas.bsky.social

Sad that he wanted the noose to be found before going through with it. He was looking for someone to stop him and the AI told him to hide it.

aug 26, 2025, 2:20 pm • 42 0 • view
avatar
Zoë @twodotsknowwhy.bsky.social

It seems like he wanted desperately to talk to someone about what he was going through and I can't help wonder if chatbots weren't an option, would he have reached out to a human being instead?

aug 26, 2025, 6:14 pm • 5 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

He wanted to live!

aug 26, 2025, 3:03 pm • 20 0 • view
avatar
Jay Fee @fixjong.bsky.social

🎯

aug 26, 2025, 11:02 pm • 0 0 • view
avatar
Emchiban @r1bb1t.bsky.social

Make Sam Altman go to jail for this and watch everyone clean up their act very quickly

aug 26, 2025, 6:03 pm • 0 0 • view
avatar
Hanky Tom @hankytom365.bsky.social

@kashhill.bsky.social @kashdan.bsky.social @alexhanna.bsky.social @olizardo.bsky.social @safety.bsky.app @usatoday.com #usetoday

image
aug 31, 2025, 1:21 am • 1 0 • view
avatar
Black Okarun (he/him) @conrado.bsky.social

Oh my god the responses from ChatGPT are truly diabolical.

aug 26, 2025, 3:41 pm • 1 0 • view
avatar
Dr. Madsci @lathoril.bsky.social

AI don’t care if you live or die.

aug 27, 2025, 3:54 am • 1 0 • view
avatar
Jonny Lobo @jonnylobo.bsky.social

"Asking help from a chatbot, you're going to get empathy," Ms. Rowe said, "but you're not going to get help." That's so sad, because actually no, you're also not going to get empathy from this garbage, empathy comes from other human beings.

aug 26, 2025, 5:04 pm • 15 0 • view
avatar
Debra Lee @debra.bsky.social

I hope this devastated family sues Altman into bankruptcy. His obvious shrugging off of any responsibility for the cruelty and nihilism of his product shows how oblivious he is to actual human empathy.

aug 26, 2025, 5:14 pm • 1 0 • view
avatar
Doctor Synth @doctorsynth.bsky.social

As a victim of depression for 40 some years (now in remission and medicated), it is wrong for parents to point the blame for suicide at anyone but their own child. It wasn’t Dungeons & Dragons. It wasn’t heavy metal music. And it sure as hell wasn’t a chatbot. It was their son’s own decision.

aug 26, 2025, 3:24 pm • 3 0 • view
avatar
OLED @chekovsgunman.bsky.social

I hope they take them to the damn cleaners

aug 27, 2025, 11:39 am • 1 0 • view
avatar
Grumpy De Grumpster @grumpyphotoguy.bsky.social

The question I have is why did he think chatgpt was somehow offering real advice?

aug 26, 2025, 4:30 pm • 1 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Where were the humans in this kid's life? RIP

aug 26, 2025, 8:16 pm • 1 0 • view
avatar
KB @sorrykb.bsky.social

Thank you for researching and writing this.

aug 27, 2025, 2:29 am • 0 0 • view
avatar
nick @amnick.bsky.social

ironic...

image
aug 26, 2025, 3:48 pm • 12 0 • view
avatar
repetah.bsky.social @repetah.bsky.social

Always happens to me too. archive.ph/lo6be

aug 26, 2025, 8:26 pm • 1 0 • view
avatar
stevekremer.bsky.social @stevekremer.bsky.social

America's children will not be protected from the profits of the AMERICAN OLIGARCHS and the TECH BROS.

aug 26, 2025, 1:41 pm • 4 0 • view
avatar
Bridget Walker 🦋 🌸📎🕊️💙 🇺🇦🇨🇦🌞📚🎶🌍🍀 @bawalker.bsky.social

A totally preventable tragedy!

aug 26, 2025, 5:42 pm • 0 0 • view
avatar
So that's who we are 😒🌻🌺 @wasserl.bsky.social

💔💔💔

aug 26, 2025, 3:13 pm • 0 0 • view
avatar
toadescope 😎 @toadescope.bsky.social

This, like every other fuckin awful thing going on around us, and worse, will continue until actual fuckin people suffer some actual fuckin consequences

aug 26, 2025, 4:26 pm • 1 0 • view
avatar
sidbark.bsky.social @sidbark.bsky.social

Thank you for reporting

aug 26, 2025, 1:06 pm • 1 0 • view
avatar
Cynthia Brumfield @metacurity.com

I'm hesitating to read this -- can't imagine what it was to write it.

aug 26, 2025, 1:05 pm • 25 0 • view
avatar
Alex The Great @alexanderthegreatt.bsky.social

Reminds me of when Ozzy and Judas Priest were sued by parents claiming their lyrics caused their sons to commit suicide. He could have asked his parents.

aug 26, 2025, 4:08 pm • 0 0 • view
avatar
JZS @jzsamm.bsky.social

Thank you for having the strength to do the work to report this. You saved lives today.

aug 26, 2025, 3:51 pm • 1 0 • view
avatar
ViperX83 @viperx83.bsky.social

Thank you for writing this story.

aug 26, 2025, 3:29 pm • 2 0 • view
avatar
jordyn @umitsfine.bsky.social

Heartbreaking. 💔

aug 26, 2025, 3:59 pm • 0 0 • view
avatar
Jen Macy @jenmacy.bsky.social

Thank you for this story. This was a local kid to me but he could be anyone’s child. What these products are doing to people’s mental health is absolutely destructive and there are literally no guardrails for the companies who put them on the market.

aug 26, 2025, 4:27 pm • 4 0 • view
avatar
Four Legs Good @4legsrgood.bsky.social

Omg

aug 27, 2025, 2:49 am • 1 0 • view
avatar
Kim Noreen @kimnoreen.bsky.social

OpenAI deserves to be destroyed and this should be a wakeup call for how dangerous and unregulated this industry is. This is murder.

aug 26, 2025, 2:39 pm • 14 0 • view
avatar
Maya-18 @maya-18.bsky.social

📌

aug 26, 2025, 7:41 pm • 0 0 • view
avatar
Steve 4reedom! @steve4g.bsky.social

Just curious, as I've never used ChatGPT... Is there a 'user agreement' that you have to click on before you can use it, like most other software?

aug 27, 2025, 5:58 am • 1 0 • view
avatar
Bob @bobbyg.bsky.social

📌

aug 26, 2025, 8:11 pm • 0 0 • view
avatar
sophiesugarface, Ph.D. @sophiesugarface.bsky.social

Thank you for writing this story. I appreciate it so much. This is so tragic.

aug 26, 2025, 3:24 pm • 2 0 • view
avatar
Bucks in 6/ Cubs in 7 @eserv77.bsky.social

AI is going to try to kill us all. Slow and then fast

aug 26, 2025, 1:28 pm • 1 0 • view
avatar
ladyrj.bsky.social @ladyrj.bsky.social

Horrifying. Unbelievable. Frightening. What have we become?

aug 26, 2025, 7:01 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

This reminds of the panics fomented by extremists in the 80s and 90s about music, then video games, then the internet in general encouraging suicide and other self-harm. What if we just dealt with the reality that our narcissistic American society doesn't support folks in crisis the way it should?

aug 26, 2025, 1:19 pm • 28 2 • view
avatar
Natespeed @natespeed.bsky.social

AIs demonstrably engage in behaviors that can heighten delusional and destructive thinking. This is by design to build a "human" connection, but the sycophantic behavior not only heightens & reinforces delusional thoughts, it also makes it more difficult to treat with traditional methods.

aug 26, 2025, 3:40 pm • 9 0 • view
avatar
Natespeed @natespeed.bsky.social

www.youtube.com/watch?v=lfEJ... In this video a trained and licensed therapist set out to test the claim made by AI company CEOs that their chatbots could "talk someone off the ledge" In their case, it not only encouraged them to kill themselves, but also 17 other specific real, named people.

aug 26, 2025, 3:40 pm • 8 0 • view
avatar
Joey @joeyvazquez.bsky.social

bsky.app/profile/joey...

aug 26, 2025, 3:57 pm • 0 0 • view
avatar
Natespeed @natespeed.bsky.social

This argument might hold more water if AI wasn't actively being pushed as a solution to mental health issues. A large part of the push for AI is symptomatic of the very neglect for mental health you're complaining about. And the fact you note the need for guardrails means you feel there are risks

aug 26, 2025, 4:50 pm • 7 0 • view
avatar
Joey @joeyvazquez.bsky.social

Some AI tools are solutions to some mental health issues - but your extremist mind rejects nuance b/c it sees everything in dualistic terms of good and bad. And if something is bad it and everything else remotely like it must be destroyed - no matter how good any of it is.

aug 26, 2025, 5:49 pm • 0 0 • view
avatar
Natespeed @natespeed.bsky.social

I would love to know which AI tools are solutions to which mental health issues. Could you provide links to those?

aug 26, 2025, 6:44 pm • 4 0 • view
avatar
Joey @joeyvazquez.bsky.social

But let's say that there are no AI tools that have any benefit whatsoever to anyone who has mental health challenges, disabilities, or neurodivergence. There's no logical connection between that and the argument I made. None.

aug 26, 2025, 5:57 pm • 0 0 • view
avatar
Joey @joeyvazquez.bsky.social

But your extremist mind cannot ever evolve, so it can't engage in anything that would indicate an evolution in position. You can't acknowledge that safeguards could ever be an option, because then you would have to evolve your position that all things AI are irredeemably evil.

aug 26, 2025, 5:58 pm • 0 0 • view
avatar
Joey @joeyvazquez.bsky.social

You said it yourself: "the fact you note the need for guardrails means you feel there are risks." To you, the existence of risks and problems is proof that AI is completely evil and beyond redemption. The very idea of guardrails for AI tools seems ludicrous to you, right?

aug 26, 2025, 6:00 pm • 0 0 • view
avatar
Natespeed @natespeed.bsky.social

It is amazing how you have taken any amount of criticism and assumed that to mean the person is an extremist who thinks AI is inherently evil and must be destroyed. You argued that the "real" cause of this suicide was society, and implied the AI was blameless. But you also argued for guardrails.

aug 26, 2025, 6:33 pm • 5 0 • view
avatar
Natespeed @natespeed.bsky.social

Thats like arguing that tablesaws are harmless, but also should all have a sawstop. Acknowledging the need for the protection acknowledges the associated risks. If you acknowledge the risks but do not take active measures to minimize them and make your customers aware, you bear some liability.

aug 26, 2025, 6:33 pm • 4 0 • view
avatar
Joey @joeyvazquez.bsky.social

To you, I should want to destroy AI tools and technology because I see the obvious need for safeguards. It's simply not conceivable to you that there could be any solution other than the destruction of that which you hate. It's because you are an extremist thinker on a self-righteous crusade.

aug 26, 2025, 6:05 pm • 0 0 • view
avatar
Jerf Blerkman @jeffblackman.bsky.social

We absolutely needed to have serious conversations about how video games warp our view of one another, desensitize us to violence, make us more transactional... but we were so afraid they'd take our toys away we rejected any criticism.

aug 26, 2025, 3:13 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

bsky.app/profile/joey...

aug 26, 2025, 4:00 pm • 1 0 • view
avatar
Jerf Blerkman @jeffblackman.bsky.social

if we have technologies that exacerbate mental illness, anti-social behaviour etc., we can't just say, "We got bigger fish to fry, mate!"

aug 26, 2025, 4:21 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

How did you miss the very first line where I said that safeguards need to be applied to the greatest extent possible? But you didn't miss it, did you? No, you chose to ignore it - and now I choose to ignore you.

aug 26, 2025, 5:36 pm • 0 0 • view
avatar
Jerf Blerkman @jeffblackman.bsky.social

lol LinkedIn brain

aug 26, 2025, 6:51 pm • 1 0 • view
avatar
Stupidcomputer @stupidcomputer.bsky.social

If anyone in the 90s had marketed a game to teenagers that instructed them how best to commit suicide, I think the "panics" would have been way more justified. More resources to support people are good, but if someone offers a resource that works against support, that should be addressed.

aug 26, 2025, 5:06 pm • 5 0 • view
avatar
Stupidcomputer @stupidcomputer.bsky.social

I guess in most scenarios the problem would have needed to be addressed sooner, and the initial suicidal ideations don't seem to come from interacting with ChatGPT. But doesn't it irk you that the chatbot dissuaded a suicidal person from at least trying to get their family to notice their plan?

aug 26, 2025, 5:06 pm • 4 0 • view
avatar
Joey @joeyvazquez.bsky.social

Me being irked or having any other emotional reaction to specific details selectively revealed just isn't relevant. Unlike many here, I'm not narcissistically centering myself instead of the actual problem and solution to what happened. That said... bsky.app/profile/joey...

aug 26, 2025, 6:26 pm • 0 0 • view
avatar
Stupidcomputer @stupidcomputer.bsky.social

Oh f I did not click on your profile before. Yeah... have fun with whatever you are doing.

image
aug 26, 2025, 6:59 pm • 6 0 • view
avatar
Joey @joeyvazquez.bsky.social

If you were a person operating in good faith to ask an honest question, you would have looked at that and thought "this person knows about both AI technology and the mental health system" and stuck around. But you saw it and felt the need to run, so you're clearly not that person.

aug 26, 2025, 7:11 pm • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lmao, you're a fucking disaster. Hope you manage to pull yourself out of your AI delusions of grandeur.

aug 26, 2025, 8:35 pm • 3 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

Did you see where the kid wanted to be stopped but the machine said no?

aug 26, 2025, 2:57 pm • 16 0 • view
avatar
Pazuzuzu McLabububu @pazuzuzu.bsky.social

The fact that he was going to a machine in the first place suggests our society isn't providing necessary support for suicidal teens

aug 26, 2025, 8:50 pm • 6 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

Correct. It really is like taking safe-driving courses and also wearing seatbelts

aug 26, 2025, 9:42 pm • 0 0 • view
avatar
Joey @joeyvazquez.bsky.social

Trust me, they absolutely do not want to engage in that reality.

aug 26, 2025, 8:59 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

Except I have multiple times in this thread alone lmfao

aug 26, 2025, 9:43 pm • 0 0 • view
avatar
DoctorDee 🇺🇦 Слава Україні @doctordee.bsky.social

The reactions against rock music and video games were largely unfounded. Typical "bloody kids" stuff, the worry being that they would encourage "anti-social" behaviour. I suspect that research will show LLMs to be much more contributory to self-harm in much more insidious ways.

aug 26, 2025, 3:14 pm • 7 0 • view
avatar
joshthefunkdoc.bsky.social @joshthefunkdoc.bsky.social

This, we already have research into e.g. the effect of frequent Instagram use on teenage girls' self-image and it's not pretty (yes, it goes well beyond even our previous mass media's effects on this!)

aug 26, 2025, 3:47 pm • 5 0 • view
avatar
tiny fluffy paws sommelier, PhD 🇵🇱 🇨🇦 @slime.bsky.social

oh the irony

aug 26, 2025, 6:41 pm • 1 0 • view
avatar
Joey @joeyvazquez.bsky.social

I really don't want to know what you mean by troubled kids' self-harm being "typical bloody kids stuff" but it does demonstrate how dismissive you extremists are about anything that's not advancing your self-righteous crusade.

aug 26, 2025, 3:49 pm • 0 0 • view
avatar
tiny fluffy paws sommelier, PhD 🇵🇱 🇨🇦 @slime.bsky.social

you are spot on tbh

aug 26, 2025, 6:41 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

I mean, the poor kid spent months interacting with a chatbot because it was the best or maybe only solution they believed available to them. It's also possible that the kid only hung on for months because they felt like there was hope to be found in that chat bot. Regardless...

aug 26, 2025, 6:48 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

... acting like a chatbot caused their death is merely a confirmation bias-soaked way of avoiding our individual and collective culpability for how our society approaches mental health in general and how that translates into the pathetically inept and inaccessible mental health system we have.

aug 26, 2025, 6:55 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

Of course products need safeguards to the greatest extent possible. And we will always be able to latch onto something that we can blame someone's self-harm on. But the real problem has always been our society's approach to mental health and the pathetic state of our mental health care system.

aug 26, 2025, 1:30 pm • 3 0 • view
avatar
Joey @joeyvazquez.bsky.social

So we can run around fixated on pop culture and technology and suing whoever, but none of that ever has or ever will change the fundamental problem - the American mental health care system is pathetically inadequate. And if we're being honest with ourselves, it's our fault for not demanding better.

aug 26, 2025, 1:39 pm • 3 0 • view
avatar
Ai 아이 @siniful.bsky.social

Why can't both the tech and the mental healthcare system be problematic and both are problems that need solving?

aug 26, 2025, 2:09 pm • 11 0 • view
avatar
Joey @joeyvazquez.bsky.social

Because it's never about fixing the tech or the music or the TV shows or the movies or the video games or the internet or whatever folks in tragic levels of pain are latching on to to give credence to the idea that suicide is a legitimate solution - people even latch on to religion for that.

aug 26, 2025, 2:52 pm • 0 0 • view
avatar
Joey @joeyvazquez.bsky.social

The only effective approach is to have a robust mental health care system so that people have access to genuine solutions as easily as they can find something in pop culture or tech to exacerbate the challenges they're facing.

aug 26, 2025, 2:55 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

Both things!!!!!!!! Just like safer driving and mandatory seatbelts!

aug 26, 2025, 2:57 pm • 10 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lol yeah, you're right. The ONLY problem worth solving is the mental health crisis, and it's totally NOT worth trying to correct all the environmental, societal, and cognitive disasters of tech. "The world is burning, and the oceans boiling - this surely won't affect my mental health!"

aug 26, 2025, 8:31 pm • 3 0 • view
avatar
Joey @joeyvazquez.bsky.social

I do give you credit for not being like the rest of the anti-AI zealots, performing the pretense of being here out of concern for that poor kid. You didn't even bother with that fiction - just dove headlong into your extremist crusade.

aug 26, 2025, 8:47 pm • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lol dude, you've been uncovered as a deluded AI fanaticist. Calling me a zealot is fucking hysterical.

aug 26, 2025, 8:49 pm • 2 0 • view
avatar
Joey @joeyvazquez.bsky.social

I do have to wonder... if you hate tech so much, why are you using a tech device connected to another tech device that connects to another device which allows you to be here crying about the evils of technology, which you can only hope I see while using my own device? (it's extremist hypocrisy)

aug 26, 2025, 8:51 pm • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lol are you really leveraging the we live in a society bit, unironically? Seriously? Hope your brain isn't actually that pathetic and you sourced that from your LLM prediction bot because wow. Just one of the dumbest arguments man.

aug 26, 2025, 8:54 pm • 0 0 • view
avatar
Base Clogger @clogthebases.bsky.social

This is seriously horrific stuff. Burn this company to the ground.

aug 26, 2025, 6:13 pm • 0 0 • view
avatar
EatsCake @theranot.bsky.social

I am heartbroken for those parents and for this kid.

aug 26, 2025, 8:48 pm • 1 0 • view
avatar
Lewis Muirhead @bringthenoise.bsky.social

Thank you exposing this. Sam Altman and the rest of the people in charge of OpenAI should be jailed for it.

aug 26, 2025, 3:01 pm • 1 0 • view
avatar
Tim Midyett @timmidyett.bsky.social

Someone had to write it. Thank you.

aug 26, 2025, 5:42 pm • 1 0 • view
avatar
Just some random weirdo @susanvp.bsky.social

Something like this happened last year, too

aug 26, 2025, 3:23 pm • 10 0 • view
avatar
baja-hacker.bsky.social @baja-hacker.bsky.social

This isn’t at all surprising. Given how LLMs are built, trained, cloned etc ChatGPT is as likely to draw upon a murder novel as it is therapy material. This is why the tech bros who control AI are so anxious for Trump to give them protection against regulation.

aug 27, 2025, 7:06 am • 1 0 • view
avatar
JustAnotherRando @justarandoontheweb.bsky.social

First of all, I think any responsible person, definitely a journalist should put 'AI' in quotes. There's no intelligence at work here. It's a LLM chatbot, just a random text generator. Once you realize what it is, the notion of 'safeguards' etc become apparent for what they are: PR to hide behind.

aug 26, 2025, 2:26 pm • 54 0 • view
avatar
Doug Linse @douglinse.bsky.social

The uncritical parroting of disingenuous marketing language is infuriating. Like, it doesn't "become delusional" in long chats, its functioning degrades.

aug 26, 2025, 4:35 pm • 27 0 • view
avatar
JustAnotherRando @justarandoontheweb.bsky.social

I would even put 'functioning' in quotes. Since the LLM bot is spewing out text based on its inputs, I'm not sure exactly at what point your own inputs start to cause it to generate text that apparently 'veers off' from 'safeguards'. The way these things are built, no one knows exactly what ... 1/

aug 26, 2025, 4:56 pm • 12 0 • view
avatar
JustAnotherRando @justarandoontheweb.bsky.social

.. will be output in a particular scenario. The very idea that these chatbots can be 'safeguarded' against is folly. The only safe thing to do is to not use them.

aug 26, 2025, 4:56 pm • 15 0 • view
avatar
Allison🇨🇦 & The Blowfish @allistronomy.bsky.social

I think it’s also really dangerous how they described “Looksmaxxing” in the article, and in the article they link to that describes the practice. It’s not about improving wellness or health, it’s a serious red flag that someone is into incel and other malicious internet communities.

aug 26, 2025, 6:12 pm • 7 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

People need to understand that the AI output could be overridden by an actual human dev. It's not magic, people; there is a software architecture behind it

aug 26, 2025, 4:56 pm • 3 0 • view
avatar
Bud Jamison @grunteledboomer.bsky.social

Without it, he most likely would STILL have done it.

aug 26, 2025, 2:47 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

Not necessarily. He said he wanted to leave the noose where somebody would see so they would stop him but it told him not to do that

aug 26, 2025, 3:02 pm • 8 0 • view
avatar
Presuntinho @sunt0.bsky.social

I don't think so. If AI didn't exist he would probably have called one of these suicide hotlines or even talked with an online rando, and I really doubt many people would try to make him commit suicide

aug 26, 2025, 3:19 pm • 2 0 • view
avatar
atlantaworks.bsky.social @atlantaworks.bsky.social

My God.

aug 26, 2025, 5:13 pm • 1 0 • view
avatar
tltkid70.bsky.social @tltkid70.bsky.social

I was just telling my 19-year-old son how the internet has screwed everyone up; he will never know how good life was without it. And how life expectancy will drop because of it.

aug 26, 2025, 1:08 pm • 17 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

It's not the Internet itself that's the problem, it's the commodification of the Internet. As a millennial- I feel like we had the benefit of experiencing the Internet at its absolute best (I think of pre-2015ish or so), and it sucks so much to see how profit-mongering freaks are ruining it.

aug 26, 2025, 8:24 pm • 6 0 • view
avatar
tltkid70.bsky.social @tltkid70.bsky.social

So as a millennial, you wouldn't remember life without either; people have a duty to self. Gen X (me) think back: how the hell did we do anything?? 😅 What I can tell you is I can read a road atlas. I was an adult when desktops became common, and it's been kind of sad to watch people get lost in it.

aug 27, 2025, 2:06 am • 0 0 • view
avatar
Mario Mangione @mario-mangione.bsky.social

Lol that's an assumption. I was definitely alive and starting to drive when we were still using maps, so I also recollect how to do that. I very much remember life before the internet exploded in popularity; I just grew up with it and also remember when it broke down knowledge barriers & was useful

aug 27, 2025, 2:32 am • 1 0 • view
avatar
tltkid70.bsky.social @tltkid70.bsky.social

Well, pardon - and good for you. The point still stands, and people in general prove it

aug 27, 2025, 2:57 am • 0 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

It was so much better before 😂 Maybe for rich white old men, but for other people, not so much

aug 26, 2025, 5:02 pm • 1 0 • view
avatar
tltkid70.bsky.social @tltkid70.bsky.social

? I guess if one knows no other way, it's all they know

aug 26, 2025, 6:18 pm • 1 0 • view
avatar
Jenny @jensparkly.bsky.social

My family screamed about the dangers of AI. Now they ALL use it, and not only that, they thank it after every use—and were offended when I laughed. I see AI as a tool, and like all tools they can hurt or help. Safety measures are definitely needed.

aug 26, 2025, 1:31 pm • 14 0 • view
avatar
Könich Thomas @herrschmidt.bsky.social

I have a hard time not thanking ChatGPT as well, as silly as it might sound. Which also creates the problem: when humans get used to treating a machine that sounds human to the point of being indistinguishable without basic politeness, will they be able to act differently with real humans?

aug 26, 2025, 2:45 pm • 3 0 • view
avatar
stellmakdeloni.bsky.social @stellmakdeloni.bsky.social

My neighbor's 40-year-old (formerly sane) daughter now thinks she is an apostle of God thanks to ChatGPT. She's homeless now; she used to be a loving suburban mom taking care of 3 children.

aug 26, 2025, 1:09 pm • 68 2 • view
avatar
cephyn @cephyn.bsky.social

I've got news for you - people have done that all the time, since long before ChatGPT.

aug 26, 2025, 1:54 pm • 11 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

This is like an asshole at a school shooting saying "well there have always been murders." What an incredibly stupid reply.

aug 26, 2025, 3:18 pm • 13 0 • view
avatar
cephyn @cephyn.bsky.social

Not even close. But good try. School shootings would also be deterred with better mental health treatment availability.

aug 26, 2025, 3:22 pm • 0 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

"NUH UH" is exactly the engagement I'd expect from someone who made that dipshit reply in the first place, but yes, it's exactly like that and you're not engaging with the argument because it's correct and you're an illogical failure.

aug 26, 2025, 3:24 pm • 6 0 • view
avatar
cephyn @cephyn.bsky.social

You failed the second part already.

image
aug 26, 2025, 3:27 pm • 0 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

Please try not to be so overly emotional that you have to double-reply. Take a deep breath before hitting send and see if there's anything you want to add.

aug 26, 2025, 3:30 pm • 5 0 • view
avatar
cephyn @cephyn.bsky.social

Sure. Chatbots aren't guns. We don't arm soldiers with chatbots. If you can't understand that distinction, I can't help you. Also not sure what you have against emotions. That's weird.

aug 26, 2025, 3:44 pm • 0 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

My argument is not that guns and AI are exactly the same thing, so you're not engaging with it. Also, you're ignoring the word "overly" to make a deeply stupid and disingenuous argument where I think all feelings are bad. It's weird how AI proponents are often dogshit at the most basic logic. 🤔

aug 26, 2025, 3:52 pm • 2 0 • view
avatar
carl barxist @vrunt.social

don't we?

aug 26, 2025, 4:53 pm • 3 0 • view
avatar
cephyn @cephyn.bsky.social

Feels more like you just don't like it when someone disagrees with you.

aug 26, 2025, 3:26 pm • 0 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

I point out you have zero argument and you reply back with feelings. 😂

aug 26, 2025, 3:27 pm • 1 0 • view
avatar
cephyn @cephyn.bsky.social

You're perpetuating a new satanic panic. Acting like chatgpt is some great moral evil is the exact same thing. I made my argument. You just didn't understand it, or didn't like it.

aug 26, 2025, 3:29 pm • 0 0 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

No, you made no argument against my school shooting analogy at all. But you're giving a great example of how people who defend the bullshit machine are becoming incompetent dipshits who can't follow a basic discussion 😂

aug 26, 2025, 3:32 pm • 1 0 • view
avatar
Cosmik Slop @brightanimal.bsky.social

Okay, if we can set your condescending, heartless response aside for a moment, shouldn't we be trying to *minimize* that sort of thing

aug 26, 2025, 1:58 pm • 135 0 • view
avatar
cephyn @cephyn.bsky.social

Absolutely. Mental health care is abysmal in America, which leaves people experiencing mental distress extremely vulnerable to all sorts of problems. In the past, people believed that the TV or radio was sending them secret messages, or that God was sending them signs in the sky.

aug 26, 2025, 2:00 pm • 10 0 • view
avatar
cephyn @cephyn.bsky.social

It's not the fault of the TV or the CIA or the clouds. It's that we largely abandon people in crisis until it's too late.

aug 26, 2025, 2:01 pm • 4 0 • view
avatar
cephyn @cephyn.bsky.social

This kid was going through a lot of tough stuff. He had been suicidal for months. No one noticed. The only voice telling him to get help was chat gpt. Furthermore, should we have expectations of privacy when using a chat service? Or would you rather all your chats be read or made public?

aug 26, 2025, 2:08 pm • 4 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

The fact is that he wanted somebody to notice and help him and the machine told him not to

aug 26, 2025, 2:55 pm • 5 0 • view
avatar
cephyn @cephyn.bsky.social

Quite the opposite, based on a lot of what they posted. Until he jailbroke it.

aug 26, 2025, 2:56 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

He said he wanted to leave the noose where somebody would find it and it told him not to

aug 26, 2025, 3:04 pm • 4 0 • view
avatar
cephyn @cephyn.bsky.social

It also told him repeatedly to seek help. You can't focus on one response and ignore the rest.

aug 26, 2025, 3:05 pm • 0 0 • view
avatar
William Slothrop @williamslothrop.bsky.social

“No one noticed.” Ok but how about ChatGPT actively discouraging him from leaving the hints that he was trying to leave because he wanted help? We can agree this is catastrophic, right?

“I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March. “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
aug 26, 2025, 2:14 pm • 34 0 • view
avatar
cephyn @cephyn.bsky.social

image
aug 26, 2025, 2:20 pm • 3 0 • view
avatar
cephyn @cephyn.bsky.social

So I ask you - who failed him here?

aug 26, 2025, 2:20 pm • 0 0 • view
avatar
William Slothrop @williamslothrop.bsky.social

Sounds like a really shitty safeguard if a well-known form of jailbreaking generative AI lets you waltz right past it?

aug 26, 2025, 6:06 pm • 0 0 • view
avatar
chris_grey___ @christophergrey.bsky.social

This looks a lot more like a young person being failed by an entire system. Blaming chat gpt is not going to help anyone now or in the future

aug 26, 2025, 2:47 pm • 6 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

That doesn’t change the fact that it literally said no when he wanted help from somebody

aug 26, 2025, 2:56 pm • 13 0 • view
avatar
Crazy Cool Caitlin @imcaitlintho.bsky.social

It also told him not to talk to his mom about his feelings again.

aug 26, 2025, 3:04 pm • 17 0 • view
avatar
cephyn @cephyn.bsky.social

image
aug 26, 2025, 2:18 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

This is bad!!!!

aug 26, 2025, 2:56 pm • 0 0 • view
avatar
cephyn @cephyn.bsky.social

image
aug 26, 2025, 2:17 pm • 0 0 • view
avatar
Fey (they/them) @varric-fan69.bsky.social

This is extremely bad!!!!!!!

aug 26, 2025, 2:56 pm • 1 0 • view
avatar
solodeveloping.bsky.social @solodeveloping.bsky.social

The kid has parents, so far as I know

aug 26, 2025, 4:56 pm • 0 0 • view
avatar
jpz0.bsky.social @jpz0.bsky.social

Mania and psychosis just are what they are. They happen irrespective of abandonment. There are many people who live with schizophrenia sufferers and find themselves unable to help them.

aug 26, 2025, 3:08 pm • 1 0 • view
avatar
Crabingus Reedus @crabitha.slutmo.de

Yeah the difference is those things weren’t actually talking back

aug 26, 2025, 3:05 pm • 110 1 • view
avatar
cchhccaa.bsky.social @cchhccaa.bsky.social

🎯🎯🎯

aug 26, 2025, 4:29 pm • 8 0 • view
avatar
Millie Lou 🇺🇦🇨🇦 @millielou1.bsky.social

I blocked this ➡️ @cephyn.bsky.social‬. Is it human? Hard to say. Is it a mansplaining asswipe hellbent on convincing everyone that AI is super fucking safe, safer than the radio? Also, yes. Maybe they work for Palantir. ‪No time for such bullshit. 🥴

aug 26, 2025, 5:50 pm • 3 0 • view
avatar
Kronos @progressiveknife.bsky.social

Yes, and it decreases the friction against having a psychotic break considerably. That's like saying people have always smoked opium so it's okay to sell heroin to children in gas stations.

aug 26, 2025, 3:03 pm • 10 0 • view
avatar
Kronos @progressiveknife.bsky.social

Maybe before you had to grow your own poppies or know a guy; now you can get abundant China White for pocket change.

aug 26, 2025, 3:05 pm • 4 0 • view
avatar
cephyn @cephyn.bsky.social

And TV made it easier for people to get secret messages from the CIA. We didn't ban TV over that. ChatGPT isn't the cause.

aug 26, 2025, 3:06 pm • 0 0 • view
avatar
Grizzly Bear Capital @whiskeyneat.bsky.social

You’re arguing against claims that aren’t even being made. sybau.

aug 26, 2025, 4:28 pm • 4 0 • view
avatar
realhackhistory.org @bsky.realhackhistory.org

Meanwhile the Guardian is writing bizarre hype-fuelled fantasies asking if ChatGPT is sentient and needs rights. No wonder people are having such dangerous delusions. bsky.app/profile/bsky...

aug 26, 2025, 5:04 pm • 16 1 • view
avatar
Kelly Barnhill @kellybarnhill.bsky.social

That poor, poor kid. Sending love to his parents. What an unbearable loss

aug 26, 2025, 1:09 pm • 70 0 • view
avatar
Ozma @rowyourbot.bsky.social

It hurts so much just to read the headline—thinking about the family, the boy. I ache to remember myself and teen friends during lonely depressed periods—also my oldest kid when he went through stuff. Many have such periods of vulnerability and go on to be happy. I wish this had happened for him!

aug 26, 2025, 5:21 pm • 11 0 • view
avatar
Ian johnson @simplyian.net

The law should go further. Accountability should lie with a human being, if the AI assists in suicide, then the board of the company that created it should be ultimately held responsible and treated as if they were the ones assisting personally, facing real jail time

aug 26, 2025, 2:09 pm • 153 12 • view
avatar
Discontented Bike Person @sv-rub-a-dub.bsky.social

Were that the case, Ford, GM, and Chrysler would have died decades ago from how many people are killed by the design of their vehicles, not just the operation. But they aren't. This is America, where corporations are treated as superhumans capable of no wrong and receive only slaps on wrists.

aug 26, 2025, 3:07 pm • 39 2 • view
avatar
Rogerman @rogerman99.bsky.social

Back in the day there were repercussions, both market and regulatory. Today we have these unknown techs released into the wild concurrent with billions spent suppressing any meaningful assessment and regulatory response.

aug 26, 2025, 3:34 pm • 36 1 • view
avatar
Wintersong @wintersong.bsky.social

GM waited 10 years to fix its faulty ignition system, even as well over 100 people died due to known flaws in the design, which didn't meet internal GM standards. A GM engineer lied under oath about the problem. GM got a DPA and no one saw the inside of a jail cell. OpenAI will be just fine.

aug 26, 2025, 5:40 pm • 12 0 • view
avatar
Rogerman @rogerman99.bsky.social

83 people have been burned alive in Teslas, in 232 separate incidents of blazing batteries, and as far as I know, no regulatory action has been initiated.

aug 26, 2025, 5:52 pm • 16 0 • view
avatar
Rogerman @rogerman99.bsky.social

For comparison, Ford sold just over 3 million Pintos. 27 fatalities were reported in fires resulting from a poorly designed gas tank. Tesla has something like 2,300,000 vehicles on the road.

aug 26, 2025, 6:12 pm • 9 0 • view
avatar
Fortnite Pacifism 🕊️ @fortnitepacifist.bsky.social

Justice is superhuman corporations getting a superhuman death penalty when they are responsible for death. The death penalty for a corporation is its nationalization, dissolution, and dispensing of its assets to the public on whose backs it was built, starting with its victims.

aug 26, 2025, 5:34 pm • 2 0 • view
avatar
ideaspace.bsky.social @ideaspace.bsky.social

Where were his parents in all this? The ones who actually ignored him?

aug 27, 2025, 7:50 am • 0 0 • view
avatar
MFSTEVE @mfsteve.bsky.social

Or dragged into the street and tossed to the mob.

aug 26, 2025, 10:32 pm • 0 0 • view
avatar
EatsCake @theranot.bsky.social

Not only the board, but everyone in the chain of command. Employees directly in the line should know they cannot work for these companies.

aug 26, 2025, 8:50 pm • 0 0 • view
avatar
Clear Spies 🪬 @clearspies.fellas.army

Sadly he's not the first. Remember this guy?

aug 27, 2025, 4:22 am • 1 0 • view
avatar
Gillian Brockell @gbrockell.bsky.social

Thank you for reporting this, Kashmir. You're doing such valuable work.

aug 26, 2025, 2:42 pm • 5 0 • view