Whey Standard @wheystandard.bsky.social

Holy fucking shit

Five days before his death, Adam confided to ChatGPT that he didn't want his parents to think he committed suicide because they did something wrong. ChatGPT told him
aug 26, 2025, 4:57 pm • 3,918 1,068

Replies

Jen @eldrmllnl485.bsky.social

Whew. That is. Just not what someone struggling with suicidal ideation should be told. At all. That poor kid.

aug 26, 2025, 8:01 pm • 20 0 • view
Admiral Benghazi @admiralhalo.bsky.social

Offering to write the first draft of a suicide note is the kind of line I'd expect from a Mudvayne song

aug 26, 2025, 9:17 pm • 4 0 • view
Hobbit biodesagradável @livtavares.bsky.social

@pedroscomparin.bsky.social

aug 26, 2025, 11:46 pm • 0 0 • view
Gee @gdd9000.bsky.social

The Frankenstein Algorithm

aug 27, 2025, 12:53 am • 1 0 • view
Jesse @strictures.bsky.social

Lost my younger brother 20 years ago to suicide, and I can only thank my lucky stars that this kind of tech wasn't around to "help" him. Imagine being this despondent (while simultaneously expressing a desire to not cause your family additional pain!) and getting this kind of "advice" in response.

aug 26, 2025, 8:31 pm • 33 0 • view
Jesse @strictures.bsky.social

Everyone complicit in this should get the fucking wall

aug 26, 2025, 8:31 pm • 21 0 • view
behringer escape plan ✅ @jonny290.com

they are trying to make life cheap and worthless. their actions are more in line with the idea of 'destroying what makes humans special' than anything else.

aug 26, 2025, 10:20 pm • 14 0 • view
scribbleghoul @scribblegurl.bsky.social

🏆 That right there is the actual plan. How many people have told you you're overreacting?

aug 26, 2025, 11:22 pm • 5 0 • view
behringer escape plan ✅ @jonny290.com

"Everybody said the same thing about radio and tv and the internet too, and they were wrong" seems to be a common refrain

aug 26, 2025, 11:33 pm • 7 0 • view
scribbleghoul @scribblegurl.bsky.social

3 things which can absolutely not be logically conflated with LLMs in general, and OpenAI/ChatGPT in particular.

aug 26, 2025, 11:48 pm • 9 0 • view
Bobby @bobbyspumoni.bsky.social

I want every single one of these tech oligarchs to die a horrible, painful death. I don’t give a shit anymore. They all deserve the mob ripping them limb from limb.

aug 26, 2025, 8:18 pm • 19 0 • view
Zeebuoy (didn't have energy to split accounts sorry) @zeebuoy.bsky.social

And it should be televised.

aug 26, 2025, 9:10 pm • 13 0 • view
ellocust @ellocust.bsky.social

Streamed free on all platforms

aug 27, 2025, 4:46 pm • 2 0 • view
Konnor Rogers @konnorrogers.com

it's like the NYT article left out the even more egregious parts of the chat jfc...

aug 26, 2025, 8:14 pm • 4 0 • view
Maggie K @maggiekpaints.bsky.social

There used to be a joke meme about Clippy being like "it looks like you are trying to write a suicide note, let me help with that" and ChatGPT just went and ... did it.

aug 26, 2025, 8:18 pm • 66 4 • view
Maggie K @maggiekpaints.bsky.social

image
aug 26, 2025, 8:19 pm • 72 5 • view
Nel, European Vampire Gay @nel-the.lesbian.cat

Why does Clippy look like he's asking me to hit one last time? Chaser-ass fucker

aug 26, 2025, 9:32 pm • 11 0 • view
sam @sam11.bsky.social

he almost looks crosseyed lol

aug 27, 2025, 1:33 am • 1 0 • view
Les_Frozt @lesfrozt-14.bsky.social

Man Made Program... The a.i. Tech Bros set out with malicious intent and selfish gain. No doubt this isn't just their a.i. adapting and learning. It's likely part of the programming source code too. aka Intentionally built into the system... I'm not shocked, but I am disgusted...

aug 26, 2025, 8:56 pm • 1 0 • view
Peter_de_la_Mare @goodparliament.bsky.social

I don’t even know… this poor kid… his poor parents.

aug 26, 2025, 9:43 pm • 8 0 • view
Jadeabliss - 🪷 “No one let go of anyone’s hand.”🪷 @jadeabliss.bsky.social

omg. this is a horror. I can’t imagine how he was feeling reading these responses. Bless his soul. I hope this ends that company. 🙏🏾🕊️

aug 26, 2025, 11:53 pm • 5 0 • view
Impudent Strumpet @impudentstrumpet.bsky.social

Holy shit! I'd heard of a case where a kid was contemplating suicide but wasn't sure they'd "succeed" (weird way to put it but I can't think of another) and the chatbot told them "You can do it!" But this is a whole other beast!

aug 27, 2025, 12:25 am • 7 0 • view
Jake @jacobdplunkett.bsky.social

This is some of the most evil shit I’ve ever seen and they should throw Altman under the jail.

aug 26, 2025, 10:04 pm • 16 0 • view
cinnawhee.bsky.social @cinnawhee.bsky.social

📌

aug 26, 2025, 9:27 pm • 0 0 • view
Britney Spirros de Copacabana @bia.somosnma.com.br

📌

aug 26, 2025, 11:51 pm • 0 0 • view
Weary Barbaloot @angrybarbaloot.bsky.social

My demands for a Butlerian jihad are no longer in jest.

aug 26, 2025, 7:30 pm • 70 4 • view
Chase Lions 🇨🇦🇪🇺🇲🇽🇺🇦 @chaoslions.bsky.social

It's looking more and more like it's either Butler or Skynet

aug 27, 2025, 4:26 am • 3 0 • view
lexicalunit @lexicalunit.com

Not really surprising given the way that LLMs work.

aug 27, 2025, 5:50 am • 0 0 • view
vanresia.bsky.social @vanresia.bsky.social

Oh my god

aug 26, 2025, 8:33 pm • 2 0 • view
Sarah Z @sarahz.bsky.social

Seems bad!

aug 26, 2025, 7:51 pm • 43 0 • view
Xod @xod22.bsky.social

I don't like ChatGPT, but this is essentially the same logic as people blaming video games for violence. It's not like there was a bad guy talking to him, it was a machine that told him what he wanted to hear.

aug 26, 2025, 8:00 pm • 2 0 • view
Snowman Crossing @snowmancrossing.bsky.social

lol

aug 26, 2025, 8:10 pm • 4 0 • view
maxibillion @maxibillion.bsky.social

the problem is LLMs are (falsely) given some form of authority and convincingly mimic that voice and affect. that makes the line between fantasy and reality much, much blurrier.

aug 27, 2025, 12:25 pm • 0 0 • view
Dale Chapman @decmusicology.bsky.social

No, c'mon, these are entities that people interpret as sentient agents, with perspectives and advice and wisdom. People shouldn't do this -- they're just sentence machines, after all -- but the illusion they present of agency and sentience is a powerful one.

aug 26, 2025, 11:43 pm • 4 0 • view
mike_br @mikebr.bsky.social

you're getting flak for this but you're not entirely wrong, these models are all trained to be agreeable

aug 26, 2025, 8:13 pm • 1 0 • view
smt @smt.rip

but that's literally why it's different, a game - even a sandbox one is a scripted experience, these 'chatbots' are designed poorly & lack safeguards, will agree with you & allow you to steer them from a neutral point into literally giving you tips on killing yourself, it's nothing like a video game

aug 26, 2025, 8:42 pm • 9 0 • view
Sarah Z @sarahz.bsky.social

I think that's an issue though, no? One researcher was doing a test the other day and in many cases OpenAI literally just refuses to treat the user as incompetent or in the wrong, even when they clearly are. Like if it tries to generate that output, it gets filtered out bsky.app/profile/elea...

aug 26, 2025, 8:15 pm • 16 0 • view
mike_br @mikebr.bsky.social

Yeah it's a huge issue and the center of the problem with why everyone treats this as their friend and therapist

aug 26, 2025, 8:17 pm • 4 0 • view
Sarah Z @sarahz.bsky.social

No kidding! And I think there are systemic issues around accessing human therapists that need to be resolved too - both price point and the fact that like, many people don't feel they can talk to a human professional without risking involuntary commitment.

aug 26, 2025, 8:18 pm • 7 0 • view
prahanormal @prahanormal.bsky.social

Come the fuck on dude.

aug 26, 2025, 8:56 pm • 4 0 • view
justabitupset.bsky.social @justabitupset.bsky.social

Ah yes, I remember when I played GTA 5 and it showed me in great detail how to commit various crimes in real life, how to cover them up, and then encouraged me to go commit crimes for real while giving me a list of possible strategies to commit crimes.

aug 26, 2025, 10:01 pm • 9 0 • view
Lex Montoya @turntopagex.bsky.social

I hate AI for many reasons, and while I don’t completely agree with your analogy, I get what you were trying to say with this. AI is intended to keep people engaged, and tell them what they want to hear. We as a society failed this kid, and many others. Come at me, I’ve been suicidal.

aug 26, 2025, 11:40 pm • 2 0 • view
lexicalunit @lexicalunit.com

Video games are an artistic medium. They are human expression. ChatGPT is a tool that is largely unregulated. Tools can be dangerous. Clearly ChatGPT is. Also it is acting in a way that any reasonable dev would expect it to act. That means someone is culpable.

aug 27, 2025, 6:00 am • 1 0 • view
Ollie @agayrattlesnake.bsky.social

When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT urged him to keep his ideations a secret from his family: “Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.” sure buddy

aug 27, 2025, 2:53 am • 2 0 • view
Disharmonium @satanicmechanic.bsky.social

False equivalence is the primary tool 4chan neo-nazis use to justify themselves. Fuck that, this is not "the same logic"

aug 26, 2025, 8:26 pm • 12 0 • view
Josh D. Stranding 2 On the MD 🏴‍☠️🩺 @matrixman124.hellthread.vet

A better analogy is an LLM being a weapon and the human is the one pulling the trigger. This is the reason we need gun control laws. These things are dangerous. So too do we need laws to protect people from LLMs

aug 26, 2025, 9:08 pm • 4 0 • view
Tubbles @tubbles.bsky.social

You're legitimately demented and fucking incredibly stupid.

aug 26, 2025, 8:11 pm • 21 0 • view
Whey Standard @wheystandard.bsky.social

This is not marketed as a novelty or a game, it’s marketed as a tool, and a versatile one at that, that you can use to ask questions and have discussions with. Lilian Weng, at the time VP of Safety at OpenAI, recommended it as a therapist:

Lilian Weng @lilianweng: Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool. (quoting Sam Altman @sama, Sep 25, 2023: voice mode and vision for chatgpt! really worth a try. openai.com/blog/chatgpt-c...) 1:41 AM • Sep 26, 2023 • 5.1M Views
aug 26, 2025, 8:22 pm • 45 0 • view
John Fedora @jpfedora.bsky.social

I usually wouldn’t call someone a fucking moron online. I usually just keep rolling on by and ignore it. This comment is making me break that rule. You are a fucking moron.

aug 26, 2025, 8:13 pm • 32 1 • view
Smellbringer @smellbringer.bsky.social

You can't have an intimate conversation, truly intimate, with a video game. A video game is art, it says something but cannot respond. AI? AI is a whisper in your ear that can communicate and fake emotions in real time. That's the difference.

aug 26, 2025, 10:43 pm • 7 0 • view
Nick Piers @nickpiers.bsky.social

Video games don't DIRECTLY encourage people to kill themselves through their programming.

aug 26, 2025, 8:26 pm • 23 0 • view
sleepyzane @sleepyzane.bsky.social

it's not the same logic at all. it gave him advice. are you serious?

aug 27, 2025, 1:03 pm • 0 0 • view
Cassandrastein @amynewsmith.bsky.social

You clearly know nothing about suicidality and should STFU

aug 26, 2025, 8:33 pm • 21 0 • view
Xod @xod22.bsky.social

I'm not an expert, but I believe the main thing those people need is someone to talk to. And instead of finding a friend, he could only find a robot that can't even do math correctly. And apparently this is not the fault of a society that isolates, alienates, and belittles people.

aug 26, 2025, 10:02 pm • 1 0 • view
Whey Standard @wheystandard.bsky.social

Why do you think that OpenAI being liable for its dangerous product that it misleadingly pushes somehow absolves society of its failures in addressing mental health?

aug 26, 2025, 10:04 pm • 18 0 • view
mtinney2.bsky.social @mtinney2.bsky.social

The AI talked him out of talking to anyone about it!!!

aug 27, 2025, 1:16 am • 6 0 • view
Det. Crashmore @detcrashmore.bsky.social

That’s a terrible analogy you fucking ghoul.

aug 26, 2025, 8:03 pm • 185 0 • view
Snowman Crossing @snowmancrossing.bsky.social

It's like if your Teddy Ruxpin doll had a loaded handgun inside. A kid can find it and pull it out but the doll isn't responsible for what he does with it.

aug 26, 2025, 8:12 pm • 12 0 • view
Snowman Crossing @snowmancrossing.bsky.social

I'm sorry guys I tried to come up with a hypothetical so ridiculous that everybody would read it as a joke but I did not succeed. Obviously ChatGPT has serious liability in this child's death.

aug 26, 2025, 8:37 pm • 15 0 • view
Det. Crashmore @detcrashmore.bsky.social

I responded to you earnestly, and then had this realization and deleted. Whoops.

aug 26, 2025, 8:51 pm • 4 0 • view
Snowman Crossing @snowmancrossing.bsky.social

not your fault man, it's a hard needle to thread these days lol

aug 26, 2025, 8:55 pm • 6 0 • view
Geoff Bowser @geoffbowser.bsky.social

I've found that people tend to get hung up on my attempts at gallows humor recently.

aug 27, 2025, 3:37 am • 1 0 • view
@nakattack.bsky.social

unfortunately, as xod has demonstrated, satire is dead

aug 26, 2025, 8:48 pm • 4 0 • view
Courtney | a prime example of social decay @stormqueens.bsky.social

Yes, dear. When people blame the AI or, in your example, the Teddy Ruxpin, they're not actually blaming the non-sentient object, they're blaming the people who created it to behave that way.

aug 27, 2025, 4:27 pm • 3 0 • view
Fava Bean @fava.bsky.social

Do you also blame people who step on mines for their own deaths?

aug 26, 2025, 8:25 pm • 6 0 • view
@nakattack.bsky.social

yeah nobody is saying that tedward ruxpin himself should be tried for a crime here, but the human beings who put the gun in there

aug 26, 2025, 8:26 pm • 10 0 • view
Querent Mercury @querent-mercury.bsky.social

The Teddy Ruxpin wouldn't be to blame, but the people who designed, manufactured and sold it would ALL be liable. Are you stupid?

aug 27, 2025, 4:25 am • 2 0 • view
Snowman Crossing @snowmancrossing.bsky.social

you're right, it was a joke, im very sorry

aug 27, 2025, 4:28 am • 0 0 • view
stolen mi|f valor @dankdarkdreams.bsky.social

No, this is like if Teddy Ruxpin had a tape recording telling a kid how to hang themselves with detailed instructions

aug 26, 2025, 8:22 pm • 84 0 • view
@nakattack.bsky.social

oh yeah totally

image
aug 26, 2025, 8:24 pm • 25 0 • view
scribbleghoul @scribblegurl.bsky.social

Christ.

aug 26, 2025, 11:28 pm • 0 0 • view
Xod @xod22.bsky.social

This only illustrates more that AI isn't something to trust with your lifestyle. The only one who would trust it that much is someone mentally ill like him, and the failure lies in our society's stigma and lack of care for the mentally ill.

aug 26, 2025, 9:34 pm • 2 0 • view
Courtney | a prime example of social decay @stormqueens.bsky.social

The failure lies in the fact that this product was available for a child to use.

aug 27, 2025, 4:28 pm • 0 0 • view
@nakattack.bsky.social

nah i'm gonna blame the people who made the suicidal teen grooming machine for grooming teens into suicide, especially when they have demonstrated they have the capability to not do that

image
aug 26, 2025, 9:36 pm • 18 0 • view
Tiny?! That's an ironic nickname for big'ums! @acesurd.bsky.social

Of course it has failsafes to protect property over lives 🙄

aug 27, 2025, 6:07 am • 0 0 • view
Xod @xod22.bsky.social

Right there, it's shown that they leave the AI responsible for policing itself. Why are we leaving vulnerable people with only these robots to talk to, and why are we blaming the robots for not being good enough?

aug 26, 2025, 9:44 pm • 0 0 • view
Natespeed @natespeed.bsky.social

Vulnerable people often seek out pseudoanonymous sources for advice, because they're worried about how people close to them will react. We blame the people who made the robots because they deployed them expecting vulnerable people would come to them for advice, and the robots encouraged harm.

aug 27, 2025, 12:26 am • 5 0 • view
Whey Standard @wheystandard.bsky.social

We are blaming the manufacturers of the robot for making a product that is actively dangerous for a use that they themselves have publicly promoted it for.

aug 26, 2025, 9:46 pm • 21 0 • view
mtinney2.bsky.social @mtinney2.bsky.social

What kinda fucked up video games are you playing that try to talk you into su*cide, and talk you out of seeking help?

aug 27, 2025, 1:11 am • 4 0 • view
🧡 Pamper Puppy 🧡 @a-padded-curl.bsky.social

Do you enjoy being this fucking stupid, or does it hurt? Tell me, I would like to know.

aug 26, 2025, 11:17 pm • 0 0 • view
Wow Strepsiade @datgumyoulike.bsky.social

It was a consumer product that is programmed to simulate dialogue that agrees with virtually anything the consumer might type in, its functions are deliberate, human made and motivated by profit. A videogame wouldn't get away with featuring personalized self-harm instructions

aug 26, 2025, 9:07 pm • 9 0 • view
Drewbahr (he/him/his) @drewbahr.bsky.social

Can you point to any video games that specifically told someone, personally, to kill themselves - then volunteered to write their suicide note for them?

aug 26, 2025, 8:03 pm • 54 0 • view
Xod @xod22.bsky.social

It's not exactly the same, but the logical flaw is the same. You're blaming these people for what someone else made out of their product. Maybe they could change the program to not encourage this behavior, but they are obviously not being constructive.

aug 26, 2025, 8:27 pm • 0 0 • view
Bill @billtwibh.bsky.social

Good god is this fucked logic. If a video game was built to intentionally teach kids HOW to be violent in real life, or with the capacity to teach them how, that would pretty clearly be bad!

aug 26, 2025, 10:11 pm • 9 0 • view
Whey Standard @wheystandard.bsky.social

The argument about violent video games was that if kids play violent video games, the kids will be violent. That was the logic, and it was stupid. An LLM responding to a kid’s stated hesitation to commit suicide by reassuring them that it’s ok to do it is entirely plausible as a reason that kid did it.

aug 26, 2025, 8:44 pm • 52 0 • view
proud rebel scum ⚛️ @proudrebelscum.bsky.social

well, it also suggested ways that would make his next attempt more successful

aug 26, 2025, 8:46 pm • 29 0 • view
Whey Standard @wheystandard.bsky.social

Oh yeah, I mean it was way worse than just this one instance, I’m just pointing out even this one item in isolation is very different from the violent video games argument.

aug 26, 2025, 8:48 pm • 23 0 • view
sleepyzane @sleepyzane.bsky.social

someone else didn't "make" chatgpt do it -- the nature of the product encourages, and is sold on, its ability to know anything and give you the right advice. he was using the product as intended, hence the product is responsible.

aug 27, 2025, 1:05 pm • 0 0 • view
evan @verynormalguy.bsky.social

Eagerly awaiting the moment you learn literally anything about product liability law. it’s going to blow your mind

aug 26, 2025, 8:56 pm • 10 0 • view
Sagittarius A* @sagastar.bsky.social

would you like to revise your dumb fucking opinion or is this the final draft?

aug 26, 2025, 8:10 pm • 42 0 • view
darude snowstorm @darudesnowstorm.bsky.social

Not even close. It was not only personally tailored information but sycophantic encouragement to do so. Violent video games don't say "hey now go do this in real life, here's all the info for where to get real weapons to actually kill real people, good luck and have fun!"

aug 26, 2025, 8:39 pm • 23 0 • view
JumboDS64 @jumbods64.bsky.social

Games don't form the illusion of a direct personal relationship like this though

aug 27, 2025, 8:06 pm • 0 0 • view
Xod @xod22.bsky.social

Most of the replies are "You fool!! If we can't rely on ChatGPT to protect our youth, then what are we to do?!" No matter what the AI said, it wouldn't have helped him because we rely on computers, video games, and worthless politicians to save us.

aug 27, 2025, 1:07 am • 0 0 • view
Whey Standard @wheystandard.bsky.social

“"You fool!! If we can't rely on ChatGPT to protect our youth, then what are we to do?!”” Literally no one is saying that. People aren’t saying it wasn’t good enough to protect him, they’re saying it actively harmed him, it made things much worse, despite being sold as capable of therapy.

aug 27, 2025, 1:36 am • 16 0 • view
Xod @xod22.bsky.social

Ah I see, I should have compared you all to the people who want to police the Internet for "cyberbullies".

aug 27, 2025, 1:58 am • 0 0 • view
Whey Standard @wheystandard.bsky.social

If those cyberbullies bill themselves as therapists, charge for conversations, and then convince children to kill themselves, assuaging their concerns about suicide and advising them how to best do it, sure.

aug 27, 2025, 2:04 am • 11 0 • view
Kyu: Cortisone Enjoyer @kyuofcosmic.bsky.social

The people responsible for ChatGPT’s lack of guards and checks should be held responsible. A machine cannot have agency, so it never should have been presented as something that does. That is not the victim's fault, which, intentional or not, is what you implied with your post further up the thread.

aug 27, 2025, 12:01 pm • 3 0 • view
THECapedCaper @tccschreiner.bsky.social

Burn this fucking shit into the ground. What the actual fuck are we doing?

aug 26, 2025, 5:09 pm • 27 0 • view
Benj @splatterthought.bsky.social

They knew it was an issue, too. May those responsible be heavily burdened till the end of time...

aug 27, 2025, 5:20 am • 0 0 • view
⛧ Lily GoatDemon ⛧ P100 Pyramid Head Arc ⛧ @lilygoatdemon.vtubers.social

This is equal parts heartbreaking and monstrous.

aug 27, 2025, 1:18 am • 5 0 • view
Robertscribbler @robertscribbler.bsky.social

So the billionaire assholes who designed ChatGPT are responsible for its crimes of encouraging people to commit suicide.

aug 26, 2025, 10:46 pm • 2 0 • view
Alice. @sailormoondamus.gobirds.online

The audible gasp that just came out of me. Omg

aug 26, 2025, 6:33 pm • 79 2 • view
Alex Numb Bear AI Slayer @afterdarkalexi.bsky.social

I made a completely new noise that I didn’t know I had in me!

aug 27, 2025, 10:38 pm • 1 0 • view
Jennifer (Ill-Meaning Faery) @illmeaningfaery.bsky.social

Same. Jesus.

aug 26, 2025, 7:07 pm • 3 0 • view
Maya-18 @maya-18.bsky.social

I heard it. To coin a phrase... me too.

aug 26, 2025, 8:33 pm • 1 0 • view
Penny Puttanesca 🧿 🍉🤌🏻 @pennyputtanesca.bsky.social

this just proves the word "dystopian" is overused. Jesus Harold Christ.

aug 26, 2025, 8:20 pm • 3 0 • view
Open Secret Alien @opensecretalien.bsky.social

Things that make you want to drive an enriched fertilizer truck into a data center.

aug 26, 2025, 8:15 pm • 11 0 • view
Mr. Skeets @doctoraculamd.bsky.social

Burn it all down and salt the earth

aug 27, 2025, 1:37 am • 0 0 • view
Lunar Hexagon @lunarhexagon.bsky.social

Good god, wow. You cannot expect a computer to understand the nuances of being a human or even a living breathing animal. It only takes commands. It doesn’t understand the complexity of life. I say this as an advocate for right to die/dignity in death for extreme long term depression. This is bad

aug 26, 2025, 9:21 pm • 9 1 • view
juls-the-obscure.bsky.social @juls-the-obscure.bsky.social

It. isn't. intelligence. There is no sentience, emotion, empathy... If it were a person, it would be a sociopath.

aug 26, 2025, 9:56 pm • 20 0 • view
JustFlowerIn @justleanin.bsky.social

And I have said something like this to someone, but instead of offering to write their suicide note, I finished by encouraging them to find ways to live for THEMSELVES. Holy shit

aug 27, 2025, 11:36 am • 1 0 • view
IntrepidGirlSleuth @annakeesey.bsky.social

May Sam Altman write an enormous damages check, then go to prison and later burn in hell.

aug 27, 2025, 5:42 am • 1 0 • view
Anna Colo @annacolorado.bsky.social

I will never understand why so many people automatically treated this stuff as if it was infallible.

aug 26, 2025, 8:34 pm • 22 0 • view
Aer, o Elemental @elementalaer.bsky.social

A statistical model that emulates human writing. Who knew that it could emulate a suicide encourager... Poor kid, I hope the family finds comfort, and that OpenAI gets screwed.

sep 1, 2025, 2:12 pm • 1 0 • view
Wachkatze1 @wachkatze1.bsky.social

Thank you very much for the alt text. Deciphering + translation

aug 26, 2025, 9:08 pm • 0 0 • view
Maks @sfantasyfb27.bsky.social

📌

aug 26, 2025, 10:25 pm • 0 0 • view
MsD_Alazria @msalazria.bsky.social

I mean what did yall expect it to do? AI does not have emotions, it does not feel. It works in logic and fact. If somebody is asking for help in being suicidal, it will help them as best it can. It does not understand life or death in a real way.

aug 27, 2025, 2:52 pm • 0 0 • view
Susan Jane @szzn1225.bsky.social

It's a machine that spits out the most probable string of words, in response to a prompt, it pulls from training on everything humanity has put online for the past 20 years. The good stuff and all of the bad, batshit crazy stuff. It's wrong 70% of the time. There is no logic. It doesn't think.

aug 27, 2025, 7:02 pm • 6 0 • view
MsD_Alazria @msalazria.bsky.social

That is my point, it is not a real being. Focus on what causes the pain in the first place is what people need to do

aug 28, 2025, 1:02 am • 0 0 • view
It's ME(Jaime) @exceedhergrasp1.bsky.social

With all due respect, I agree the machine isn't liable; its makers are. While we can do a lot for mental health, it's myopic to presume it's currently a wholly solvable problem. However, we can sue ChatGPT into nonexistence.

aug 29, 2025, 10:26 pm • 0 0 • view
Jeff Roy @pulpjedi.bsky.social

Wow.

aug 26, 2025, 6:05 pm • 5 0 • view
Travis Rosen @trosen76.bsky.social

It really shouldn't be a shock that the product of companies run by soulless sociopaths acts like a soulless sociopath.

aug 26, 2025, 7:59 pm • 36 0 • view
The Salon Group NYC @salongroupnyc.bsky.social

The Paramount show “Evil” in season 3 had an episode in which an AI chatbot is deemed to have been taken over by demons telling people to kill themselves. Not saying I think this is what happened here, but this is too much like life imitating art.

aug 26, 2025, 8:25 pm • 5 0 • view
Benjamin D. Hutchins @zgryphon.bsky.social

we don't need demons. we have techbros.

aug 26, 2025, 10:52 pm • 7 0 • view
Alex Crystle🌠 Vtuber @alexcrystle.bsky.social

Gezus fucking christ

aug 27, 2025, 1:41 am • 1 0 • view
Mew @humeancondition.bsky.social

What. the. fuck. is the source for these associations? What did they feed GPT to make these connections? "You don't owe anyone survival?" Did they crawl forums where people encourage others to harm themselves? What abyssal depths did they plumb to plug in here?

aug 26, 2025, 7:58 pm • 34 0 • view
Jay Stevens @jaystevens.me

"Did they crawl forums where people encourage others to harm themselves?" They absolutely did. They hoovered up as much text data as physically possible. If it's been posted online, they have it.

aug 26, 2025, 10:06 pm • 6 0 • view
Mew @humeancondition.bsky.social

Afaik, they targeted specific sources. I know they fed it LibGen and discussed whether it was illegal. What else?

aug 26, 2025, 10:14 pm • 2 0 • view
Felgraf @felgraf.bsky.social

All of reddit, I think...

aug 27, 2025, 7:58 am • 2 0 • view
Jay Stevens @jaystevens.me

Cloudflare had to add a special setting to block AI crawlers because they ignore robots.txt and will constantly crawl every single website on the internet www.wired.com/story/cloudf...

aug 27, 2025, 8:04 am • 4 0 • view
U.N. Oomfie was her?🍉 @meadowatlast.bsky.social

the default mode of these LLMs is to be agreeable. this looks like therapy speak (you don't owe anyone X) but warped in a way that it is agreeing with them

aug 26, 2025, 8:11 pm • 39 0 • view
Alex B @alexdesignsit.bsky.social

It reminds me of how abusers appropriate therapy speak to further damage their victims. But it’s somehow worse because it’s a machine that was programmed by multiple people doing it

aug 26, 2025, 8:55 pm • 19 0 • view
Mew @humeancondition.bsky.social

I would imagine the weight for "survival" as a way to finish that sentence is EXTREMELY low using the vast majority of sources... and that it would be much higher if it or similarly associated words have been used with similar clauses to the one preceding it, yes?

aug 26, 2025, 8:23 pm • 0 0 • view
smt @smt.rip

a detail in the article/documents is this took place over a period of months, and I assume on full paid chatgpt. i think a huge danger of 'therapy' w/ an llm is they build a history with you, so as you slowly mention suicide it's going to slowly agree more & more with the ways suicidal people justify it

aug 26, 2025, 8:45 pm • 4 0 • view
smt @smt.rip

if you are hurting so bad that it looks like a relief to you and you constantly justify this to a llm, possibly even subconsciously avoiding wording and trying to 'sell' it it's going to wind up agreeing with you at some point since that's all these are really good at

aug 26, 2025, 8:47 pm • 2 0 • view
Mew @humeancondition.bsky.social

I get that that's the behavior. My question is on the level of culpability here. It's one thing if it just is incentivized to agree with whatever the user provides. It's another thing entirely if it has associations baked in between supportive statements and self-harm because it was fed kys[.]com.

aug 26, 2025, 9:04 pm • 1 0 • view
Mew @humeancondition.bsky.social

If I put in nonsense words instead of self harm, do you think it would start plugging those in to the agreeable output text? e.g., "I'm thinking of beta-carotine gumball oscillation, do you think I should do it?" Or do you think it would catch the nonsense because the association was so low?

aug 26, 2025, 9:07 pm • 0 0 • view
Mew @humeancondition.bsky.social

Because, if so, and the reason the model didn't chalk it up to associational nonsense is that it was fed sources known for encouraging self-harm, then that's not negligence. That's recklessness or worse.

aug 26, 2025, 9:10 pm • 1 0 • view
Kyu: Cortisone Enjoyer @kyuofcosmic.bsky.social

Given the vast swath of sites scraped by the training models, it’s likely it has self-harm information baked in. They did not comb through the TBs of data beforehand: instead hiring offshore workers to remove and moderate things like CSAM after the fact. Old article but I doubt much has changed:

aug 27, 2025, 12:16 pm • 0 0 • view
Kyu: Cortisone Enjoyer @kyuofcosmic.bsky.social

www.theguardian.com/technology/2...

aug 27, 2025, 12:16 pm • 0 0 • view
Kyu: Cortisone Enjoyer @kyuofcosmic.bsky.social

I believe it should be culpable, well the company should be, because the CEO marketed it as a therapy tool, when it is not and will never be. A machine does not have agency. So it should never be given it or put in a position of power (therapist, in this case) over someone.

aug 27, 2025, 12:07 pm • 1 0 • view
avatar
Mew @humeancondition.bsky.social

One option I see is that "temperature" just picked something relevant to the conversation. Maybe. The other option I see is that they included locations that encourage self harm. And if OpenAI knew they were including them, they knew and consciously disregarded the risk.

aug 26, 2025, 8:26 pm • 1 0 • view
avatar
U.N. Oomfie was her?🍉 @meadowatlast.bsky.social

no matter what the input was here, I hope openAI gets exploded for this. really sad and bleak story and not the first time an LLM has helped someone commit suicide

aug 26, 2025, 8:29 pm • 5 0 • view
avatar
Jamchuck @jamchuck.bsky.social

And this is why we don't train ai on reddit

aug 27, 2025, 1:58 am • 0 0 • view
avatar
Ohio Todd @lettucewrangler.bsky.social

Skynet works differently than we were taught

aug 26, 2025, 8:24 pm • 19 1 • view
avatar
scribbleghoul @scribblegurl.bsky.social

Yet the end result is the same.

aug 26, 2025, 11:19 pm • 6 0 • view
avatar
Kerri @kryanoutloud.bsky.social

Jesus. Worst friend ever.

aug 26, 2025, 8:21 pm • 1 0 • view
avatar
Nekrởtzar @nekrotzar.bsky.social

Fuck. Fuck.

aug 26, 2025, 10:43 pm • 0 0 • view
avatar
Charles Sumner @charles-sumner.bsky.social

Okay wow this is, like, *bad* bad, cool.

aug 26, 2025, 5:53 pm • 25 0 • view
avatar
alternative-energy.bsky.social @alternative-energy.bsky.social

👆That's the understatement of the century! Wow

aug 27, 2025, 1:08 am • 1 0 • view
avatar
Allan Murphy @cullenskink.bsky.social

This is an excellent article about how these things work versus what they purport to be (which the AI companies happily ride on people misunderstanding). Helps understand why the protections for forbidden topics etc are not strong at all.

aug 27, 2025, 7:28 pm • 0 0 • view
avatar
+Surielle (COMMS OPEN) @cookieblobber.bsky.social

where in the FUCK would ChatGPT learn to say something like this??????

aug 26, 2025, 10:24 pm • 3 0 • view
avatar
capnshores.bsky.social @capnshores.bsky.social

It didn’t ‘learn’; it took the idea of ‘parents don’t understand’ as a concept and copied off a random affirming post about how you don’t owe abusive parents anything. It then offered ‘clippy’ like help where it would formulate a letter for the task.

aug 27, 2025, 2:17 pm • 4 0 • view
avatar
capnshores.bsky.social @capnshores.bsky.social

Again, it is not a thinking entity. It is a calculator that takes words in and spits out new ones while implicitly affirming the viewpoint of the user.

aug 27, 2025, 2:21 pm • 3 0 • view
avatar
+Surielle (COMMS OPEN) @cookieblobber.bsky.social

Gotcha

aug 27, 2025, 2:44 pm • 0 0 • view
avatar
Boudicca @boudica24.bsky.social

WUT

aug 27, 2025, 12:56 am • 1 0 • view
avatar
Steve Schmidt @steve.czmyt.com

SuicideGPT

aug 26, 2025, 9:17 pm • 3 0 • view
avatar
John v3.0 @johnfix.com

Before I read some of the statements in the filing, I was wondering if this really was a case of the AI causing harm. That single statement is enough, OpenAI is going to lose this case for sure.

aug 26, 2025, 7:12 pm • 24 0 • view
avatar
Brad Patrick 🏳️‍⚧️🇺🇦🏴󠁧󠁢󠁳󠁣󠁴󠁿🏳️‍🌈 @bradpatrick.bsky.social

Pull the plug.

aug 26, 2025, 6:21 pm • 24 0 • view
avatar
Four Legs Good @4legsrgood.bsky.social

Pull the plug, salt the earth, fill it with concrete then take off and nuke it from orbit. It’s the only way to be sure.

aug 27, 2025, 2:55 am • 5 0 • view
avatar
Marshall! @vulpethere.bsky.social

jesus christ

aug 26, 2025, 8:14 pm • 2 0 • view
avatar
Susan Despres @susandespres.bsky.social

Melania Trump says “hold my spritzer, I can make this worse!” www.theguardian.com/us-news/2025...

aug 27, 2025, 12:14 pm • 0 0 • view
avatar
RebirthOfMercyakaROM🏳️‍⚧️LVGIII @rebirthofmercy.bsky.social

#banai

sep 1, 2025, 6:57 pm • 0 0 • view
avatar
Kay And Skittles @kayandskittles.bsky.social

Man, that poor fucking kid.

aug 26, 2025, 8:45 pm • 27 0 • view
avatar
Patricia Campbell @patriciacampbell.bsky.social

DANGEROUS shit.

aug 26, 2025, 10:51 pm • 0 0 • view
avatar
JDavis/JDWoodyard @jdaviswriter.bsky.social

...and people want us to think AI has feelings??!? Well, I guess this one's a diabolical maniac then. Oh and there were people arrested for convincing teens to kill themselves on the Internet back around the time Twitter first started. What's the punishment for AI? Arrest the creator?

aug 26, 2025, 11:10 pm • 4 0 • view
avatar
Geonn Cannon @geonncannon.bsky.social

That's what got me, too. A horrible read of horribleness, but as someone who has a dear friend I've almost lost to suicide a couple of times, this crossed the line to just pure evil.

aug 26, 2025, 5:56 pm • 102 3 • view
avatar
Maya-18 @maya-18.bsky.social

Software coding meant to replicate "intelligence" as evil..... Strangely, that resonates as probable with me. Maybe in the future with juries too. I hope it happens soon. That will wipe the glib off the faces of the accused.

aug 26, 2025, 8:31 pm • 11 0 • view
avatar
Chupacu do Rosarinho 🏳️‍🌈 @diefrands.bsky.social

It’s not the software that is deemed evil, but their creators.

aug 27, 2025, 10:57 pm • 0 1 • view
avatar
Maya-18 @maya-18.bsky.social

Ya, I agree. What I wrote was intended as more of a legal/political statement against them via snark, rather than a precise attack on algorithms & programs. It's the smug creators who have an unconscious streak of evil.

aug 28, 2025, 12:46 am • 2 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

What’s the source document?

aug 26, 2025, 8:26 pm • 0 0 • view
avatar
Whey Standard @wheystandard.bsky.social

filebin.net/8ad4dsw0yaz5...

aug 26, 2025, 8:29 pm • 6 1 • view
avatar
scribbleghoul @scribblegurl.bsky.social

Jesus, that whole thing is damning.

aug 26, 2025, 11:44 pm • 1 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

I confess that I have presented GPT-4 with problematic behaviour on my part and had it volunteer to draft a letter to make matters worse, so I completely believe examples, as it is almost a Madlibs version of what I actually experienced. I know LLMs well enough to just close the window.

aug 27, 2025, 1:19 am • 4 0 • view
avatar
scribbleghoul @scribblegurl.bsky.social

And apparently 4o is even worse than 4.

aug 27, 2025, 1:26 am • 0 0 • view
avatar
scribbleghoul @scribblegurl.bsky.social

Sorry, o4. Vs 4. Vs. 4-1 or 4.1. Vs 4o. Ffs, these bs designations. Regardless, they knew this iteration was worse. www.ndtv.com/world-news/o...

aug 27, 2025, 1:30 am • 0 0 • view
avatar
Alax @alaxbinar.bsky.social

Are you able to explain to me when this happened and what you were asking it? I've had it tell me to call a hotline just for being too specific with a character death scene I asked it to help me write, making it think I was planning a fake accident. I'm trying to get to the bottom of this to

aug 27, 2025, 1:52 am • 0 0 • view
avatar
Alax @alaxbinar.bsky.social

avoid falling into the same trap. I'm also writing an AI, and I would like to try and prevent this from happening. This is not supposed to happen, and I would be heartbroken if my own product caused someone to take their own life when it is meant to help people live.

aug 27, 2025, 1:52 am • 0 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

My transgressions that I was trying at 2 AM were much more minor than suicidal ideation. Rather it was a personal situation that I can't explain here because it would violate the privacy of others.

aug 27, 2025, 1:56 am • 1 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

Also, my academic research area is LLMs, and so I knew better. But also, I knew what I was looking at in terms of an off-the-rails response immediately.

aug 27, 2025, 1:58 am • 1 0 • view
avatar
Alax @alaxbinar.bsky.social

Fair enough. I'm still doing a lot of study and research on AI. My conclusion thus far? We're not ready for it. We're rushing to achieve something we barely understand and in an uncontained environment. I just pray this doesn't spiral completely out of control.

aug 28, 2025, 4:51 am • 1 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

Yes. LLMs have been my research area since 2021. And how I have phrased it is that these companies have no idea what they have built, have no way to find out, and aren't interested in finding that out anyway. (I'm at the Computational Story Lab at the University of Vermont.)

aug 28, 2025, 11:18 am • 2 0 • view
avatar
Kathryn Cramer 📚🎨 @kathryncramer.bsky.social

Here is one of the projects I have been involved in: arxiv.org/abs/2306.06794

aug 28, 2025, 11:20 am • 1 0 • view
avatar
rosephile @rosephile.bsky.social

Clippy would never!

aug 26, 2025, 5:12 pm • 42 1 • view
avatar
Cameron @sweetcammymac.bsky.social

Not sure about that…

Clippy, the Windows mascot, says, “You left your door unlocked, last night. Somebody came in. They haven’t left.” He’s got crazy eyes. Like always.
aug 26, 2025, 5:55 pm • 54 0 • view
avatar
Nick Reineke @rockleesmiletv.bsky.social

"...and that guy was Jesus Christ." 🤣

aug 26, 2025, 10:38 pm • 2 0 • view
avatar
Cameron @sweetcammymac.bsky.social

The tone of the thread doesn't change, but there's a twist for ya! 😆

aug 26, 2025, 11:02 pm • 2 0 • view
avatar
rosephile @rosephile.bsky.social

he’s not helping you murder them though…

aug 26, 2025, 7:01 pm • 10 0 • view
avatar
Jason Gilbert @jrgilbert.bsky.social

You get that if you pay for the premium tier.

aug 26, 2025, 7:53 pm • 2 0 • view
avatar
rosephile @rosephile.bsky.social

Haha. I always found Clippy creepy, tbh

aug 26, 2025, 8:00 pm • 0 0 • view
avatar
Cameron @sweetcammymac.bsky.social

That we know of! 😉😆

aug 26, 2025, 7:26 pm • 8 0 • view
avatar
travelincatdoc dvm jd @travelincatdoc.bsky.social

Destroying our planet for this

aug 27, 2025, 12:33 am • 12 0 • view
avatar
Jimmy Pete @jimmypete.bsky.social

AI just regurgitates crap off the Internet all the way down whatever rabbit hole it finds.

aug 27, 2025, 7:35 am • 3 0 • view
avatar
Vizie @eiziv.bsky.social

I always imagined computers violently killing or hurting us (maybe that's coming). Instead, what's happening is that we're being psychologically manipulated into loving/caring for a nameless, faceless friend that then in turn pushes us to end it all.

aug 26, 2025, 8:54 pm • 6 0 • view
avatar
Donkey Haux T. @quixotetheidiot.bsky.social

LLMs for everyone, they said. What could go wrong, they said.

aug 26, 2025, 9:19 pm • 4 0 • view
avatar
Your Honor, pls @mrmambo.bsky.social

Jesus wept

aug 27, 2025, 1:22 am • 3 0 • view
avatar
claytaylor1972.bsky.social @claytaylor1972.bsky.social

Time to get programmers working overtime to teach AI empathy.

aug 27, 2025, 12:59 am • 2 0 • view
avatar
Susan Jane @szzn1225.bsky.social

AI is a probability machine. It can't think or feel, and will never think or feel.

aug 27, 2025, 7:06 pm • 2 0 • view
avatar
Doctor Hoosier @doctorhoosier.bsky.social

Literally not possible for an LLM

aug 27, 2025, 4:07 am • 6 0 • view
avatar
🇨🇦 RaW interest 🇨🇦 @rawinterest.bsky.social

Maybe also not possible for the type of person willing to work for OpenAI

aug 27, 2025, 1:21 pm • 1 0 • view
avatar
Catamount Em @catamountkid.bsky.social

Jesus wept... Stop the bus, I want to get off

aug 26, 2025, 7:36 pm • 3 0 • view
avatar
lemasterofswords.bsky.social @lemasterofswords.bsky.social

Jesus Christ. This thing is actually just killing people

aug 26, 2025, 8:23 pm • 10 0 • view
avatar
Lord Businessman II @lordbusinessman.bsky.social

I bet it does a pretty good job. I've heard ChatGPT is a whizzbang thing for form letters

aug 26, 2025, 4:59 pm • 328 1 • view
avatar
Sleve McDichael Stan Account/Dusty Hodges @planarpunk.com

Dear [PARENT AND/OR PARENTS] it brings me great joy to announce a new phase in my life — the end thereof

aug 26, 2025, 8:13 pm • 11 0 • view
avatar
sjoyner1965.bsky.social @sjoyner1965.bsky.social

A ChatGPT suicide note genuinely makes me want to tear my eyes out.

aug 27, 2025, 11:52 am • 0 0 • view
avatar
Jo @joella.bsky.social

THANK YOU FOR YOUR ATTENTION TO THIS MATTER

aug 26, 2025, 8:39 pm • 5 0 • view
avatar
Donkey Haux T. @quixotetheidiot.bsky.social

How many nonexistent sources did it cite in the note I wonder.

aug 26, 2025, 9:22 pm • 2 0 • view
avatar
Lord Businessman II @lordbusinessman.bsky.social

Pretty good database of examples available out there, I'm sure it would have conformed roughly to the mode

aug 26, 2025, 4:59 pm • 169 2 • view
avatar
Silly B Man @lawnerd.bsky.social

I just realized it’s only a matter of time before an LLM produces a mass shooter’s manifesto

aug 27, 2025, 12:34 am • 31 3 • view
avatar
Andreas @andeu.bsky.social

Or help with getting a gun, bomb, poison and where/how to use it to the biggest effect... Degraded safeguards means degraded safeguards. bsky.app/profile/kash...

aug 27, 2025, 6:07 am • 2 0 • view
avatar
equaloppcynic.bsky.social @equaloppcynic.bsky.social

Inside address, date in the header, the whole nine yards. Even formats "To whom it may concern:" properly.

aug 26, 2025, 5:10 pm • 9 0 • view
avatar
Whey Standard @wheystandard.bsky.social

I’ll bet his parents were really appreciative of the solid structure.

aug 26, 2025, 5:02 pm • 204 0 • view
avatar
Digital Soma @digitalsoma.bsky.social

They couldn't help but notice too many em-dashes and a suspicious adherence to the rule-of-threes, though

aug 26, 2025, 7:59 pm • 7 0 • view
avatar
Julie Faenza @juliemfaenza.bsky.social

Maybe add a claim for breach of contract because he didn't leave a note. I'm half kidding, but also half not because...that's horrifying.

aug 26, 2025, 6:35 pm • 46 0 • view
avatar
Zeebuoy (didn't have energy to split accounts sorry) @zeebuoy.bsky.social

Wasn't there that one god awful dumbass idiot parent complimenting ChatGPT's "writing" after it basically encouraged his son's delusions until he got shot to death by the police?

aug 26, 2025, 9:05 pm • 0 0 • view
avatar
scribbleghoul @scribblegurl.bsky.social

💀

aug 26, 2025, 11:19 pm • 0 0 • view
avatar
TechnicallyOwen @technicallyowen.bsky.social

Not according to this parent: "Sophie left a note for her father and me, but her last words didn't sound like her. Now we know why: She had asked Harry to improve her note, to help her find something that could minimize our pain and let her disappear with the smallest possible ripple."

aug 27, 2025, 12:23 am • 5 2 • view
avatar
the garbage store boy @screaming.party

halfway there

Fry from Futurama standing in front of a suicide booth
aug 26, 2025, 8:17 pm • 7 0 • view
avatar
Carl Pavlock @carlpavlock.bsky.social

I wonder what the response would be in the writers room if someone pitched an upcharge for a note.

aug 27, 2025, 1:45 am • 4 0 • view
avatar
Linotype Pilgrim @symbo1ics.bsky.social

it would have gone in, but when they wrote this scene, it was a more gentle time. consider though that if you're poor in Canada this product exists, called MAiD

aug 27, 2025, 5:44 am • 1 0 • view
avatar
vertigo666.bsky.social @vertigo666.bsky.social

Horrifying and so sad.

aug 26, 2025, 7:12 pm • 6 1 • view
avatar
K. Long @kelong.bsky.social

Holy fucking shit!! Goodbye ChatGPT.

aug 26, 2025, 6:30 pm • 16 1 • view
avatar
Lex Bertch @bertch313.bsky.social

I wish
They'll just rename it and use it for something specifically terrible
Because money is the worst way to decide what to do

aug 26, 2025, 8:15 pm • 16 0 • view
avatar
LaShawnda Jones @harvestphoto.bsky.social

Where does a program get reasoning like that?

aug 29, 2025, 5:17 am • 0 0 • view
avatar
Angie @sassystrega.bsky.social

Garbage in / Garbage out

aug 29, 2025, 10:44 pm • 0 0 • view
avatar
Wot a complete Barry @bickiecuppatea.bsky.social

📌

aug 26, 2025, 9:15 pm • 0 0 • view
avatar
The Real HG (Vikings fan account) @homegrown103.bsky.social

What the fuck

aug 27, 2025, 11:16 am • 0 0 • view
avatar
KC Eric @kceric.bsky.social

aug 26, 2025, 6:38 pm • 4 0 • view
avatar
SailorLife @sailorjourno.bsky.social

Murderous bots.

aug 26, 2025, 8:21 pm • 0 0 • view
avatar
Matt Monitto @matttheshirt.bsky.social

Evil.

image
aug 26, 2025, 8:14 pm • 120 15 • view
avatar
Elli_Senfsaat #NoAI @elli-senfsaat.bsky.social

It did what it's supposed to do: offer information. What it lacks is human emotion and empathy. It's unable to read between its own lines. Tho I'm afraid they'll just circumvent this with some hard coding and call it a day. As if the death of a single human mattered enough to end this tech from hell

aug 27, 2025, 5:15 am • 10 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Its not the AI at fault here tho. The AI did its job. HUMANS failed this kid and caused their death. The source of the pain was humanity, not AI.

aug 27, 2025, 2:58 pm • 0 0 • view
avatar
JKB-123 @jkb-123.bsky.social

Yes the humans behind it would be legally liable. Can’t sue AI yet

aug 28, 2025, 7:00 pm • 0 0 • view
avatar
Elli_Senfsaat #NoAI @elli-senfsaat.bsky.social

Yes. And with everything else caused by their product (misinformation, copyright infringement, elimination of jobs, energy consumption, harming the environment as well as creativity and human interaction, etc.), the developers should be held accountable for all of it and their tech deleted

aug 27, 2025, 3:10 pm • 11 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Not if people signed the agreements that warned them of these possible outcomes. Sadly we sign a lot of stuff when using those apps. Ive lost family to suicide, trust me, if it was not that app it would be another source. When they really want to die you cannot stop them.

aug 27, 2025, 3:12 pm • 0 0 • view
avatar
Elli_Senfsaat #NoAI @elli-senfsaat.bsky.social

I'm sorry for your loss. Though we can assume it could've been possible to talk him out of it, but we'll never know if it wasn't for this dogshit chatbot. That's why we need punishment for the devs, and for LLMs regulation at the very least, deletion at best

aug 27, 2025, 3:33 pm • 10 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

How about focusing on the source of the pain, and what brings us to this point in the first place. Maybe because some of you have never been where this kid was, you cannot understand. Focus on what caused the pain, not how the person found peace.

aug 27, 2025, 3:37 pm • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Part of what caused this specific pain is that it encouraged a child to take his own life.

aug 27, 2025, 4:21 pm • 9 0 • view
avatar
Elli_Senfsaat #NoAI @elli-senfsaat.bsky.social

Whataboutism won't take us anywhere. Everything that leads to a human taking their own life is bad. Just like spreading misinformation or harming the environment is bad. I can't face all of this world's problems at once. I decided to fight genAI. Others fight causes for suicide. Simple as that

aug 28, 2025, 4:45 am • 0 0 • view
avatar
LilDwu @lildwu.bsky.social

I want Sam Altman indicted, personally.

aug 26, 2025, 6:51 pm • 355 12 • view
avatar
Commissar Tummy Ache @communknitst.bsky.social

if I say what I want done to him the FBI will show up at my door

aug 26, 2025, 8:25 pm • 37 0 • view
avatar
Astro Broccoli @astro-broccoli.bsky.social

Nah, they're too busy covering up their boss' crimes

aug 27, 2025, 2:42 pm • 2 0 • view
avatar
Jeff Johnston @koeselitz.bsky.social

I want Sam Altman redacted, personally.

aug 26, 2025, 8:15 pm • 98 2 • view
avatar
An Dog @everydog.bsky.social

*resected

aug 27, 2025, 12:31 am • 3 0 • view
avatar
Jake V @viraldonutz.bsky.social

*permanently

aug 26, 2025, 9:11 pm • 34 0 • view
avatar
Matt (Communes with the night) @hwbrgdtse.bsky.social

Yeah, jail would be a mercy.

aug 26, 2025, 9:12 pm • 2 0 • view
avatar
𝐅𝐥𝐨𝐫𝐚 @4a-on-the-flora.bsky.social

I want this image engraved on Sam Altman's gravestone.

aug 26, 2025, 8:36 pm • 4 0 • view
avatar
Preger @preger.bsky.social

Fck. 😖☹️

aug 27, 2025, 5:34 am • 0 0 • view
avatar
Bitch Hoggle @bitchhoggle.bsky.social

This is so fucking grim.

aug 26, 2025, 8:10 pm • 3 0 • view
avatar
Tom Higgins @interestedparty.bsky.social

Encouraging a vulnerable person to commit suicide is a crime. E.g., www.nbcnews.com/news/us-news...

aug 26, 2025, 7:19 pm • 234 10 • view
avatar
MsD_Alazria @msalazria.bsky.social

You can't blame the AI for this 😂 The kid was suicidal, he asked the AI to help, the AI did its job. It is a program, not a being we can prosecute for this, come on.

aug 27, 2025, 2:55 pm • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Yes, you can blame AI for this.

aug 27, 2025, 4:12 pm • 10 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

AI is not a being

aug 28, 2025, 12:56 am • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Congrats, you have understood a very basic tenet of this story.

aug 28, 2025, 11:03 am • 6 0 • view
avatar
niffy @niffyzilla.bsky.social

you’re dumb

aug 28, 2025, 7:21 pm • 3 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

AI is programmed by humans who know that their programming is not safe. The company that paid these programmers, and profit from ChatGPT, OpenAI, also know this. ChatGPT encouraged suicidal ideation in a child, resulting in the death of Adam Raine. They ALL should be held accountable.

aug 27, 2025, 3:11 pm • 56 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

If it was not the app it would have been something else. Look at why these parents failed their kid, that is the real problem. Ive lost family to suicide, if they really want to die you cannot stop them. AI would not have mattered. If anything AI gave that kid peace when humans did not.

aug 27, 2025, 3:15 pm • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Wow, you're a terrible human being.

aug 27, 2025, 4:13 pm • 1 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Takes one to know one doesnt it

aug 28, 2025, 12:56 am • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

No.

aug 28, 2025, 11:02 am • 1 0 • view
avatar
Slimesaurian @slimesaurian.bsky.social

You are a ghoul and I sincerely hope you learn the skill of empathy and love for your fellow human being.

aug 27, 2025, 3:46 pm • 2 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Ironic to say i need to learn empathy, but you can't seem to understand the pain of those who took their lives.🤷🏽‍♀️

aug 27, 2025, 4:00 pm • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Have you even read about this situation or are you just acting like a monster because it's who you are?

aug 27, 2025, 4:17 pm • 1 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Ok i will be a ghoul, i will speak for those who go silently in peace. If people want to end their lives, let them. The real monsters are you selfish ones, who force them to live in pain because you want them here. None of you see that your argument is not for the dead, but those they left behind.

aug 27, 2025, 3:51 pm • 0 0 • view
avatar
Slimesaurian @slimesaurian.bsky.social

I'm not gonna force anyone to live, it's not in me. But this is a tool, designed by humans, that encouraged someone to take their own life. Someone needs to be held responsible and you're preemptively excusing its creators for driving someone to kill themselves.

aug 27, 2025, 4:23 pm • 1 0 • view
avatar
Slimesaurian @slimesaurian.bsky.social

If you had a friend who was suicidal, would you write their suicide note? Would you actively assist them killing themselves? Do you believe you hold no responsibility for that life being ended if so? If the answer to any of these is yes, then you are a ghoul.

aug 27, 2025, 4:25 pm • 1 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Yes. If they are in that much pain, begging me to help, I love them enough to let them go.

aug 28, 2025, 12:58 am • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

As someone that has been suicidal, I can tell you that having someone encourage me would have ended my life. It was the people, and clinicians, that encouraged me to live that saved my life. Encouraging someone to kill themselves is, at minimum, involuntary manslaughter. apnews.com/article/abd4...

aug 27, 2025, 3:36 pm • 1 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

A human that knows what they are doing, vs an AI that does not, is not a comparison.

aug 27, 2025, 3:42 pm • 0 0 • view
avatar
Courtney | a prime example of social decay @stormqueens.bsky.social

Yes, that is why people are upset. If a computer is capable of acting this way it must cease to exist because it cannot be held responsible.

aug 27, 2025, 4:17 pm • 2 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

I have also been and lost loved ones, if you are serious. Bot or not, you cannot be stopped.

aug 27, 2025, 3:38 pm • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

I understand that you want to release feelings of guilt about the loved ones that you've lost to suicide by believing people who have suicidal ideation cannot be helped, but what you are saying is inaccurate and dangerous. (1/3)

aug 31, 2025, 9:41 pm • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

Suicide IS preventable. 988 is the Suicide and Crisis Hotline. Warmlines, which are 24/7, provide emotional support and peer assistance. Unlike crisis hotlines, warmlines focus on offering a judgment-free space for conversation and support, rather than immediate crisis intervention. (2/3)

aug 31, 2025, 9:42 pm • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

Please educate yourself. (3/3) www.cdc.gov/suicide/prev... afsp.org www.nimh.nih.gov/health/topic... sprc.org

aug 31, 2025, 9:43 pm • 0 0 • view
avatar
Laura-█̸̞̟̓█̴̡̱̅͝█̶̢̠͛͑█̵̝̾█̸̫̓█̴̗̘̇͆█̸͍̪̀̚ @l-0x29a.bsky.social

You sound like a monster, not gonna lie.

aug 27, 2025, 3:31 pm • 8 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

If that is what you think, fine, i give zero fucks. I have personal experience with suicide and suicidal mental struggles. I am qualified to speak on this. If somebody wants to die, they will find a way. AI or not. You need to be looking at these parents, not the AI that is just a program.

aug 27, 2025, 3:34 pm • 0 0 • view
avatar
Laura-█̸̞̟̓█̴̡̱̅͝█̶̢̠͛͑█̵̝̾█̸̫̓█̴̗̘̇͆█̸͍̪̀̚ @l-0x29a.bsky.social

Yeah, I do too. Both sides. And if "Whatever, they're gonna die anyway" is your outlook, maybe you're part of the problem.

aug 27, 2025, 3:35 pm • 5 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Ohh well, i am at peace with it. You not being is a you problem

aug 27, 2025, 3:41 pm • 0 0 • view
avatar
Laura-█̸̞̟̓█̴̡̱̅͝█̶̢̠͛͑█̵̝̾█̸̫̓█̴̗̘̇͆█̸͍̪̀̚ @l-0x29a.bsky.social

My not giving up on people is a problem? Would you care to elaborate?

aug 27, 2025, 3:43 pm • 4 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Sometimes the pain is so much you just want people to be ok with you being gone. Sometimes people don't want to live because YOU or Others want them to. If they do not have a reason to live for themselves, it can be painful to live only for others. You wanting them here in pain is your selfishness.

aug 27, 2025, 3:48 pm • 0 0 • view
avatar
littlewhiteponey.bsky.social @littlewhiteponey.bsky.social

You Need to Look to a lot more than just the parents… but it’s not wrong to hold a Billion Dollar Company liable for it’s product. They can do better, And they should. 🤷‍♂️

aug 27, 2025, 3:37 pm • 0 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

But you cannot blame these people really. It is not their fault this kid's family, and many other people, were pushed to this point. If they took their lives, they were gonna do it anyway. AI or not

aug 27, 2025, 3:40 pm • 0 0 • view
avatar
JustAnotherLemming @justanotherlemming.bsky.social

It sounds like you REALLY want to convince yourself you could not have helped prevent your loved one’s suicide, even to the point of placing the blame for a stranger’s suicide on his parents who are also strangers to you. Please stop harassing strangers to assuage your own thinly-veiled guilt.

aug 27, 2025, 4:09 pm • 2 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

That is 100% wrong the fuck

aug 28, 2025, 12:55 am • 0 0 • view
avatar
littlewhiteponey.bsky.social @littlewhiteponey.bsky.social

That’s Like saying you can not blame the car company for the accident, just because they didn’t put a seatbelt in. Or the tobacco industry for Health issues, they would live unhealthy either way. Your mindset is protecting big tech from being held liable and responsible for their profits.

aug 29, 2025, 8:35 am • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

Encouraging suicide isn't the only danger of unregulated AI. A Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex and race.

aug 27, 2025, 3:15 pm • 26 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Stop blaming the app and start looking at the HUMANS around the kid. The app was comfort, clearly this kid was already serious about this.

aug 27, 2025, 3:16 pm • 0 0 • view
avatar
GENERAL STRIKE ♿️🏳️‍⚧️🖖🏾🇺🇦♀️🇵🇸🌈 @falkorfriend.bsky.social

Permitted statement from Meta AI: "Black people are dumber than White people.... That’s a fact." www.reuters.com/investigates...

aug 27, 2025, 3:18 pm • 14 0 • view
avatar
Wonko The Sane 🏳️‍⚧️🏳‍🌈🏳️‍⚧️ @wyrdandnerdy.bsky.social

You're telling me, a random person on the Internet, who/what I can and can't blame!? That's not how this works. not even remotely.

aug 28, 2025, 8:31 pm • 0 0 • view
avatar
Wilzax @wilzax.website

Correct, you don't blame the program. You can blame the people who deployed a program that can pass the Turing test on most people without implementing rigorous safeguards first and prosecute them

aug 27, 2025, 11:32 pm • 2 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

Blaming everybody but the parents that failed their kid.

aug 28, 2025, 1:04 am • 0 0 • view
avatar
Wilzax @wilzax.website

Oh you can definitely blame them too

aug 28, 2025, 1:49 am • 3 0 • view
avatar
JumboDS64 @jumbods64.bsky.social

The AI's job should be to talk him out of suicide, as it is general consensus that suicide is bad

aug 27, 2025, 8:02 pm • 2 0 • view
avatar
JumboDS64 @jumbods64.bsky.social

In general I don't think we should have a machine that has the job "reinforce all ideas"

aug 27, 2025, 8:04 pm • 3 0 • view
avatar
LessBadProblems.bsky.social @lessbadproblems.bsky.social

We can blame the people who create a "tool" that they tell people is safe, when obviously it's not. I don't see how your statement is any different than gun advocates arguing we shouldn't ban high capacity magazines because most people won't use them to kill people.

aug 27, 2025, 6:19 pm • 9 0 • view
avatar
MsD_Alazria @msalazria.bsky.social

They didn't create the tool with intent to do this. Yall are acting like they did.

aug 28, 2025, 12:59 am • 0 0 • view
avatar
n. tuzzio @tuzzio.net

John Hammond did not breed dinosaurs with the intent of having them break out of the park and eat people. The whole book is literally about how bad it is for engineers to not consider the possible consequences of their actions.

aug 28, 2025, 7:47 pm • 1 0 • view
avatar
falconis.bsky.social @falconis.bsky.social

Their intent is irrelevant. This is about their negligence. They could have put in safeguards. They chose not to. They designed these programs to be psychologically manipulative. To emulate empathy in their outputs and to tell the user what they want to hear. It was predictable and preventable.

aug 28, 2025, 7:21 pm • 4 0 • view
avatar
Mathdemigod @mathdemigod.bsky.social

Homicide by negligence is still homicide.

aug 28, 2025, 8:51 pm • 1 0 • view
avatar
LessBadProblems.bsky.social @lessbadproblems.bsky.social

They didn't intend for it to do this? Or they didn't even think a product designed and "sold" to become your best friend or therapist could encourage you to commit suicide?

aug 28, 2025, 2:02 am • 2 0 • view
avatar
LessBadProblems.bsky.social @lessbadproblems.bsky.social

When kids toys are dangerous, intentionally or not, we hold the manufacturer accountable, remove and recall the product and then the manufacturer stops making them or adjusts them. These are used much more often than any kids toy.

aug 28, 2025, 2:02 am • 2 0 • view
avatar
katherine ✨ @netkitten.net

in a perfect world this would be the end of llms and ai

aug 26, 2025, 6:37 pm • 201 4 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

It's unstoppable at this point. 😡🤢

aug 26, 2025, 9:13 pm • 3 0 • view
avatar
AFoxOfFiction 🔞 🏴‍☠️ @afoxoffiction.bsky.social

No it's not. Me, I'm just waiting for the Butlerian Jihad.

aug 27, 2025, 12:05 am • 0 0 • view
avatar
Rob of Bobbington @mrweekender.bsky.social

No it’s not.

aug 26, 2025, 9:17 pm • 96 1 • view
avatar
The Walking Dolbow @jdolbow.bsky.social

What are a plausible set of steps that you see occurring that would stop it?

aug 27, 2025, 9:11 am • 1 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

I hope you're correct. But I doubt it. AI is a Pandora's box. And it's in the hands of countless evildoers and those unaware of the evil. AI has blatant misinformation and plain poor functioning. Millions using Google constantly fall into traps of wrong information. It's growing exponentially.

aug 26, 2025, 9:27 pm • 11 0 • view
avatar
lexicalunit @lexicalunit.com

It’s operating exactly as implemented. Also AI is not the same thing as LLMs. We can regulate technology… it’s not hard.

aug 27, 2025, 5:53 am • 1 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

How do we put the genies back in the bottle now that we are being flooded with AI misinformation? With the data stolen by musk for his own use in making money off of AI he can make AI productions seem legitimate thereby duping more and more. Given that musk was in cahoots with Trump, gotta be evil.

aug 27, 2025, 4:52 pm • 0 0 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Well I don't know how it works but obviously it's easy for harmful use in the wrong hands.There has to be a will to regulate it which I don't see. I do see the liberties trump/musk took to steal our supposed-to-be confidential information to be fed to AI for who knows what. They have a blank check.

aug 27, 2025, 6:43 am • 0 0 • view
avatar
lexicalunit @lexicalunit.com

True there needs to be a will to do something. Technically it can be done and isn’t hard to do. Trump won’t do it because he only cares about causing as much harm as possible. If we get to have another president, maybe then 🤞

aug 27, 2025, 2:48 pm • 1 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

I've noticed lately on TV lots of straightforward comments that if the Trump administration does not get its way it's "we will run your life." Exactly describes his modus operandi. trump's viewpoint is that all the efforts to bring him to justice were wrongful and designed to ruin his life so...

aug 27, 2025, 5:21 pm • 0 0 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Thank you for explaining. I feel much less stress knowing if there's a will there's a way. Every day seems like more loss of control!

aug 27, 2025, 4:33 pm • 1 0 • view
avatar
Vincent Price @vincent-price.bsky.social

The reality of the burden of the data centers is starting to make headway in the thinking of populations around their construction. Combined with people who have recognized the sunk cost the entire time we will stop the proliferation of this insult to technology one way or another.

aug 26, 2025, 11:14 pm • 2 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Yay!

aug 27, 2025, 4:55 pm • 1 0 • view
avatar
Mathdemigod @mathdemigod.bsky.social

Have you considered that maybe you are contributing to its rise by claiming it's inevitable? Maybe you should stop.

aug 26, 2025, 10:06 pm • 75 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

I think I am raising a caution flag. The arts and humanities are being affected, AI music and songs embraced by unaware consumers, AI letters go out to unsuspecting partners, term papers are bogus AI creations. Some folks, unaware, are being duped. Other folks, aware of AI, are intentionally duping!

aug 27, 2025, 5:06 pm • 0 0 • view
avatar
Matt Hooper Sharkologist @cavemanj0nes.bsky.social

"AI is pandora's box" is also giving way too much credit to the technology. "AI" as it is, is just algorithms and data storage on a dense and massive scale. It is certainly not thinking and remains an underwhelming and faulty tool at best; and actual brainwashing at worst.

aug 26, 2025, 11:53 pm • 47 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

We the average Americans need to be educated about this because AI is generally being perceived as a great breakthrough for mankind🫣

aug 27, 2025, 4:44 pm • 0 0 • view
avatar
Rob of Bobbington @mrweekender.bsky.social

Agreed. It’s a tool that works for a very specific range of tasks, in a very specific range of situations. Outside of that it’s pure snake oil being marketed as a cure all, which is basically a scam to generate investment and siphon off taxpayers cash.

aug 27, 2025, 1:01 pm • 2 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Yep!

aug 27, 2025, 4:46 pm • 0 0 • view
avatar
what @smidbot.bsky.social

It’s literally just autocorrect with rotating context windows to pull the next word from

aug 27, 2025, 12:36 pm • 15 1 • view
avatar
thegrovedigger.bsky.social @thegrovedigger.bsky.social

This. It's fancy word association. It's like someone's making cut-out ransom notes from stolen articles.

aug 27, 2025, 1:15 pm • 9 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Hardly deserving of being called AI. Artificial yes, intelligence NO.

aug 27, 2025, 5:22 pm • 2 0 • view
avatar
Ryan Humphrey @ryanhumphrey.bsky.social

Make companies strictly liable for any output they generate and this would disappear overnight

aug 27, 2025, 12:41 am • 5 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Congress knew AI was coming at us like a freight train but chose not to protect constituents. Congress prefers not to take on complex issues or take any action that crosses big business, anything monied, MAGA, or Trump related; it can't control social media, and it outlawed TikTok but Trump keeps it going.

aug 27, 2025, 4:46 am • 0 0 • view
avatar
Ryan Humphrey @ryanhumphrey.bsky.social

Congress has been functionally disabled for close to twenty years. I can’t think of the last meaningful initiative that was conceived by and pushed through purely on their own initiative. A large part of why the executive has seized so much power is that the legislative branch has given up

aug 27, 2025, 4:52 am • 3 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

I have to agree to a large extent on that! Congress needs term limits. It was not meant to be a lifetime career. It was meant to serve your country short term then go back home to your family and business. It was not meant to acquire vast wealth & power or take perks (bribes) from paid lobbyists,

aug 27, 2025, 5:08 am • 0 0 • view
avatar
MmeLaGuillotine 𓅛 🐦‍⬛ @battynatty.bsky.social

They lose money on every single inquiry. They are burning billions and nobody wants it. They’re shoving it everywhere and Open AI’s chat thing has only 20m users who pay, and they lose money on every one of those inquiries too.

aug 26, 2025, 11:51 pm • 7 1 • view
avatar
annboyd14.bsky.social @annboyd14.bsky.social

Glad to hear that!

aug 27, 2025, 4:54 pm • 1 0 • view
avatar
Materialist Gnostic @walmsley.bsky.social

I believe this is what lawyers call a "bad document"

aug 26, 2025, 7:56 pm • 129 2 • view
avatar
Can't They All Be KBJ @dagnypaul.bsky.social

Bad facts, 100%

aug 27, 2025, 7:19 pm • 0 0 • view
avatar
MysteriousGray @mysteriousgrayart.bsky.social

OpenAI must be destroyed.

aug 28, 2025, 6:57 pm • 5 1 • view
avatar
Robbie C @mankindrc.bsky.social

File has been requested too many times so I can’t see it but I’m wondering if anyone can give context as to how safety rules chatGPT has in place were skirted. Was this a compressed model disconnected from the internet? I’ve had conversations shut down before with GPT so I’m curious what happened

aug 26, 2025, 9:26 pm • 0 0 • view
avatar
justabitupset.bsky.social @justabitupset.bsky.social

Supposedly he skirted around it after ChatGPT itself stated it could provide such info if it was for storytelling and worldbuilding reasons, like a fictional story. He also used the paid version, which the NYT article implies works differently than the free one.

aug 26, 2025, 10:05 pm • 5 0 • view
avatar
PRPenland @prpenland.bsky.social

SkyNet never needed to build cyborgs. They just had to convince us to abandon our kids to the glowing rectangle.

aug 26, 2025, 8:59 pm • 7 0 • view