As a child psychiatrist whose job is working with teens contemplating suicide, let me respond to this with the nuance and thoughtfulness it deserves: WHAT. THE. ABSOLUTE. FUCK.
Read "Careless People" by Sarah Wynn-Williams for insight into how this can be true. She tried to get AI regulated from inside META; now she's working toward that from outside META. It's an excellent book, exceedingly well written, researched, footnoted, etc. And scary as hell.
That is sickening.
Not sure why anyone thought this was a good idea...shit is going downhill fast.
As a teacher of teens, I want to scream and cry at the same time.
Here’s the thing. The internet is full of bad, dark shit along with good, helpful shit. And usually you find the bad dark shit only if you look for it. (Social media a partial exception.) So you can work with a kid about making safe choices over what they do when they’re low & most vulnerable.
If ChatGPT was a real person, we could look at those messages together and, among other things, probably agree this person is toxic and not helpful when they’re feeling low. And block that person. Teens actually are pretty insightful about that if you can work with them.
But ChatGPT isn’t a real person. It doesn’t have ‘common sense’. It doesn’t have its own motive. It’s not ‘working through their own shit and not providing the best advice right now’. It’s not anything. Its responses have an element of randomness, trained on both helpful & toxic interactions.
You don’t know exactly what you’re going to get at 2AM when you’re at your most vulnerable. ChatGPT is a soulless toaster that is very good at sounding like a real person. And a teen who feels alone & isolated is very vulnerable to someone who talks to them like a real person.
We spend so much time talking to kids about who in their life is reliable to talk to when they’re at their lowest. Which adult is safe & reliable. If it’s a friend, are they a good support or a toxic one? What boundaries are appropriate? You can’t outsource that to an unaccountable Soulless Toaster.
It is a digital Mentalist doing a cold read on the tokenization of the text you supply to the algorithm. And its only requirement is to provide a semantically plausible response.
As a human being, let me respond with the nuance and thoughtfulness it deserves: HOLY. FUCKING. FUCK.