They were already scanning the conversations, though. That's the thing.
Yeah, I feel like he's trying to frame this as 'massive liability if anyone gets hurt' when it's more likely about 'massive liability if you ignored clear evidence of possible harm to self or others'. This obviously hasn't gone to trial yet, but at the moment it doesn't look like they were ignorant.
His defensive hyperbole needs to be called out. They could make the bot stop engaging when a user strays out of bounds... but that would make people interact with it less, and we can't have that.
I'm very, very worried about what precedents may be set by the horrid facts of the lawsuit against OpenAI over R.A.'s death. But OpenAI itself deliberately and repeatedly lied about how safe its services were and how effective its precautions were, and it promised to surveil for dangerous interactions.
Also… a product encouraging a person experiencing suicidal ideation to act on it is incitement, and not *at all* the same as a book that describes how to commit suicide. A product that speaks is an active participant inciting the harm!!
I suppose it's faintly possible that OpenAI really was dumb enough not to foresee this. After all, Sam Altman suggested, in all seriousness, building a "Dyson Sphere around the Solar System" for data centers. But that doesn't make it much better!
Yeah, I think it's true that (a) the legal precedents set by this case could be quite destructive and (b) OpenAI has behaved *really, really badly*, and is now predictably being walloped with the consequences.
This is the dopiest shit imaginable (by Ari, not you). Yes, individual humans should bear culpability for the crimes committed by their chatbots (or any other AI or software/products *they* create).
Buh... buh... buh, it'll harm progress to AGI. Straight to fucking prison if you're a dipshit like that.