Joey @joeyvazquez.bsky.social

This reminds me of the panics fomented by extremists in the '80s and '90s about music, then video games, then the internet in general encouraging suicide and other self-harm. What if we just dealt with the reality that our narcissistic American society doesn't support folks in crisis the way it should?

aug 26, 2025, 1:19 pm • 28 2

Replies

Natespeed @natespeed.bsky.social

AIs demonstrably engage in behaviors that can heighten delusional and destructive thinking. This is by design to build a "human" connection, but the sycophantic behavior not only heightens & reinforces delusional thoughts, it also makes it more difficult to treat with traditional methods.

aug 26, 2025, 3:40 pm • 9 0 • view
Natespeed @natespeed.bsky.social

www.youtube.com/watch?v=lfEJ... In this video a trained and licensed therapist set out to test the claim made by AI company CEOs that their chatbots could "talk someone off the ledge." In their case, it not only encouraged them to kill themselves, but also to kill 17 other specific, real, named people.

aug 26, 2025, 3:40 pm • 8 0 • view
Joey @joeyvazquez.bsky.social

bsky.app/profile/joey...

aug 26, 2025, 3:57 pm • 0 0 • view
Natespeed @natespeed.bsky.social

This argument might hold more water if AI weren't actively being pushed as a solution to mental health issues. A large part of the push for AI is symptomatic of the very neglect of mental health you're complaining about. And the fact that you note the need for guardrails means you feel there are risks.

aug 26, 2025, 4:50 pm • 7 0 • view
Joey @joeyvazquez.bsky.social

Some AI tools are solutions to some mental health issues - but your extremist mind rejects nuance b/c it sees everything in dualistic terms of good and bad. And if something is bad it and everything else remotely like it must be destroyed - no matter how good any of it is.

aug 26, 2025, 5:49 pm • 0 0 • view
Natespeed @natespeed.bsky.social

I would love to know which AI tools are solutions to which mental health issues. Could you provide links to those?

aug 26, 2025, 6:44 pm • 4 0 • view
Joey @joeyvazquez.bsky.social

But let's say that there are no AI tools that have any benefit whatsoever to anyone who has mental health challenges, disabilities, or neurodivergence. There's no logical connection between that and the argument I made. None.

aug 26, 2025, 5:57 pm • 0 0 • view
Joey @joeyvazquez.bsky.social

But your extremist mind cannot ever evolve, so it can't engage in anything that would indicate an evolution in position. You can't acknowledge that safeguards could ever be an option, because then you would have to evolve your position that all things AI are irredeemably evil.

aug 26, 2025, 5:58 pm • 0 0 • view
Joey @joeyvazquez.bsky.social

You said it yourself: "the fact you note the need for guardrails means you feel there are risks." To you, the existence of risks and problems is proof that AI is completely evil and beyond redemption. The very idea of guardrails for AI tools seems ludicrous to you, right?

aug 26, 2025, 6:00 pm • 0 0 • view
Natespeed @natespeed.bsky.social

It is amazing how you have taken any amount of criticism and assumed that to mean the person is an extremist who thinks AI is inherently evil and must be destroyed. You argued that the "real" cause of this suicide was society, and implied the AI was blameless. But you also argued for guardrails.

aug 26, 2025, 6:33 pm • 5 0 • view
Natespeed @natespeed.bsky.social

That's like arguing that table saws are harmless, but also that they should all have a SawStop. Acknowledging the need for the protection acknowledges the associated risks. If you acknowledge the risks but do not take active measures to minimize them and make your customers aware, you bear some liability.

aug 26, 2025, 6:33 pm • 4 0 • view
Natespeed @natespeed.bsky.social

So which is it? Is there no way for an AI to exacerbate someone's extant mental or emotional distress? Or is that a risk that requires guardrails to prevent? If it's the latter, then why are you so quick to dismiss the role of AI in this case? Wouldn't the obvious course be to examine the guardrails?

aug 26, 2025, 6:33 pm • 4 0 • view
Joey @joeyvazquez.bsky.social

To you, I should want to destroy AI tools and technology because I see the obvious need for safeguards. It's simply not conceivable to you that there could be any solution other than the destruction of that which you hate. It's because you are an extremist thinker on a self-righteous crusade.

aug 26, 2025, 6:05 pm • 0 0 • view
Jerf Blerkman @jeffblackman.bsky.social

We absolutely needed to have serious conversations about how video games warp our view of one another, desensitize us to violence, make us more transactional... but we were so afraid they'd take our toys away we rejected any criticism.

aug 26, 2025, 3:13 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

bsky.app/profile/joey...

aug 26, 2025, 4:00 pm • 1 0 • view
Jerf Blerkman @jeffblackman.bsky.social

if we have technologies that exacerbate mental illness, anti-social behaviour etc., we can't just say, "We got bigger fish to fry, mate!"

aug 26, 2025, 4:21 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

How did you miss the very first line where I said that safeguards need to be applied to the greatest extent possible? But you didn't miss it, did you? No, you chose to ignore it - and now I choose to ignore you.

aug 26, 2025, 5:36 pm • 0 0 • view
Jerf Blerkman @jeffblackman.bsky.social

lol LinkedIn brain

aug 26, 2025, 6:51 pm • 1 0 • view
Stupidcomputer @stupidcomputer.bsky.social

If anyone in the '90s had marketed a game to teenagers that instructed them how to best commit suicide, I think the "panics" would have been way more justified. More resources to support people are good, but if someone pretty much offers a resource that goes against support, that should be addressed.

aug 26, 2025, 5:06 pm • 5 0 • view
Stupidcomputer @stupidcomputer.bsky.social

I guess in most scenarios the problem would have needed to be addressed sooner, and the initial suicidal ideations don't seem to come from interacting with ChatGPT. But doesn't it irk you that the chatbot dissuaded a suicidal person from at least trying to get their family to notice their plan?

aug 26, 2025, 5:06 pm • 4 0 • view
Joey @joeyvazquez.bsky.social

Me being irked or having any other emotional reaction to specific details selectively revealed just isn't relevant. Unlike many here, I'm not narcissistically centering myself instead of the actual problem and solution to what happened. That said... bsky.app/profile/joey...

aug 26, 2025, 6:26 pm • 0 0 • view
Stupidcomputer @stupidcomputer.bsky.social

Oh f I did not click on your profile before. Yeah... have fun with whatever you are doing.

aug 26, 2025, 6:59 pm • 6 0 • view
Joey @joeyvazquez.bsky.social

If you were a person operating in good faith asking an honest question, you would have looked at that and thought "this person knows about both AI technology and the mental health system" and stuck around. But you saw it and felt the need to run, so you're clearly not that person.

aug 26, 2025, 7:11 pm • 0 0 • view
Mario Mangione @mario-mangione.bsky.social

Lmao, you're a fucking disaster. Hope you manage to pull yourself out of your AI delusions of grandeur.

aug 26, 2025, 8:35 pm • 3 0 • view
Fey (they/them) @varric-fan69.bsky.social

Did you see where the kid wanted to be stopped but the machine said no?

aug 26, 2025, 2:57 pm • 16 0 • view
Pazuzuzu McLabububu @pazuzuzu.bsky.social

The fact that he was going to a machine in the first place suggests our society isn't providing necessary support for suicidal teens

aug 26, 2025, 8:50 pm • 6 0 • view
Fey (they/them) @varric-fan69.bsky.social

Correct. It really is like taking safe-driving courses and also wearing seatbelts

aug 26, 2025, 9:42 pm • 0 0 • view
Joey @joeyvazquez.bsky.social

Trust me, they absolutely do not want to engage in that reality.

aug 26, 2025, 8:59 pm • 0 0 • view
Fey (they/them) @varric-fan69.bsky.social

Except I have multiple times in this thread alone lmfao

aug 26, 2025, 9:43 pm • 0 0 • view
DoctorDee 🇺🇦 Слава Україні @doctordee.bsky.social

The reactions against rock music and video games were largely unfounded. Typical "bloody kids" stuff, the worry being that they would encourage "anti-social" behaviour. I suspect that research will show LLMs to be much more contributory to self-harm in much more insidious ways.

aug 26, 2025, 3:14 pm • 7 0 • view
joshthefunkdoc.bsky.social @joshthefunkdoc.bsky.social

This. We already have research into, e.g., the effect of frequent Instagram use on teenage girls' self-image, and it's not pretty (yes, it goes well beyond even our previous mass media's effects on this!)

aug 26, 2025, 3:47 pm • 5 0 • view
tiny fluffy paws sommelier, PhD 🇵🇱 🇨🇦 @slime.bsky.social

oh the irony

aug 26, 2025, 6:41 pm • 1 0 • view
Joey @joeyvazquez.bsky.social

I really don't want to know what you mean by troubled kids' self-harm being "typical bloody kids stuff" but it does demonstrate how dismissive you extremists are about anything that's not advancing your self-righteous crusade.

aug 26, 2025, 3:49 pm • 0 0 • view
tiny fluffy paws sommelier, PhD 🇵🇱 🇨🇦 @slime.bsky.social

you are spot on tbh

aug 26, 2025, 6:41 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

I mean, the poor kid spent months interacting with a chatbot because it was the best or maybe only solution they believed available to them. It's also possible that the kid only hung on for months because they felt like there was hope to be found in that chatbot. Regardless...

aug 26, 2025, 6:48 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

... acting like a chatbot caused their death is merely a confirmation bias-soaked way of avoiding our individual and collective culpability for how our society approaches mental health in general and how that translates into the pathetically inept and inaccessible mental health system we have.

aug 26, 2025, 6:55 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

Of course products need safeguards to the greatest extent possible. And we will always be able to latch onto something that we can blame someone's self-harm on. But the real problem has always been our society's approach to mental health and the pathetic state of our mental health care system.

aug 26, 2025, 1:30 pm • 3 0 • view
Joey @joeyvazquez.bsky.social

So we can run around fixated on pop culture and technology and suing whoever, but none of that ever has or ever will change the fundamental problem - the American mental health care system is pathetically inadequate. And if we're being honest with ourselves, it's our fault for not demanding better.

aug 26, 2025, 1:39 pm • 3 0 • view
Ai 아이 @siniful.bsky.social

Why can't both the tech and the mental healthcare system be problematic, and both be problems that need solving?

aug 26, 2025, 2:09 pm • 11 0 • view
Joey @joeyvazquez.bsky.social

Because it's never about fixing the tech or the music or the TV shows or the movies or the video games or the internet or whatever folks in tragic levels of pain are latching on to in order to give credence to the idea that suicide is a legitimate solution - people even latch on to religion for that.

aug 26, 2025, 2:52 pm • 0 0 • view
Joey @joeyvazquez.bsky.social

The only effective approach is to have a robust mental health care system, so that people have access to genuine solutions as easily as they can find something in pop culture or tech to exacerbate the challenges they're facing.

aug 26, 2025, 2:55 pm • 0 0 • view
Fey (they/them) @varric-fan69.bsky.social

Both things!!!!!!!! Just like safer driving and mandatory seatbelts!

aug 26, 2025, 2:57 pm • 10 0 • view
Mario Mangione @mario-mangione.bsky.social

Lol yeah, you're right. The ONLY problem worth solving is the mental health crisis, and it's totally NOT worth trying to correct all the environmental, societal, and cognitive disasters of tech. "The world is burning, and the oceans boiling - this surely won't affect my mental health!"

aug 26, 2025, 8:31 pm • 3 0 • view
Joey @joeyvazquez.bsky.social

I do give you credit for not being like the rest of the anti-AI zealots, performing the pretense of being here out of concern for that poor kid. You didn't even bother with that fiction - just dove headlong into your extremist crusade.

aug 26, 2025, 8:47 pm • 0 0 • view
Mario Mangione @mario-mangione.bsky.social

Lol dude, you've been uncovered as a deluded AI fanatic. Calling me a zealot is fucking hysterical.

aug 26, 2025, 8:49 pm • 2 0 • view
Joey @joeyvazquez.bsky.social

I do have to wonder... if you hate tech so much, why are you using a tech device connected to another tech device that connects to another device which allows you to be here crying about the evils of technology, which you can only hope I see while using my own device? (it's extremist hypocrisy)

aug 26, 2025, 8:51 pm • 0 0 • view
Mario Mangione @mario-mangione.bsky.social

Lol are you really leveraging the "we live in a society" bit, unironically? Seriously? Hope your brain isn't actually that pathetic and you sourced that from your LLM prediction bot, because wow. Just one of the dumbest arguments, man.

aug 26, 2025, 8:54 pm • 0 0 • view
Ai 아이 @siniful.bsky.social

I decided to not continue engaging when I realized there's no reasoning here. I offered a moderate position and they replied back with language lacking nuance and implied unwillingness to have a complex discussion, so energy's not worth it, IMHO. Anyway, hi! 😁 You seem cool. 👍

aug 26, 2025, 9:07 pm • 2 0 • view