Even setting aside the harm that was caused, it seems weird to say that 1A protects not just people but computer programs.
I don't think there's a 1st Amendment defense to this scenario, but it seems like there must be some 1st Amendment interest in it as the product of the creator's thinking. Do you think the government could constitutionally ban chatbots that give liberal-leaning answers?
I’m trying to figure it out: if it’s just a computer program…maybe? But if it’s presented as a true AI with its own reasoning and intelligence…no.
The speech is imputed to a company, OpenAI, which is why OpenAI can be sued for it and why it's protected by the First Amendment. Just like the First Amendment also protects soup can labels: put "this cures cancer" on one and you might be liable.
It does, but for purposes of litigation, I think you could say *even if* Sam Altman had typed all the stuff himself, there might not be liability. The argument is that text alone cannot cause a wrongful death, legally speaking. I don't think that's correct, but it's a plausible argument.
I think in this analysis the 1st Amendment defenses are derivative of the more conventional tort defenses, though: lack of a legal duty, lack of proximate cause (because other actors could have or should have intervened). I think the efforts to direct the kid to conceal his steps make it closer.
Based on the face of the complaint, and not having access to any information omitted from the complaint, I think the plaintiff has a good but not surefire case to get to a jury, and if it goes to the jury, they will own OpenAI.
So OAI should settle, but their problem is they don't know how many more suits are coming. So their board and insurers will make them fix this pronto, which apparently makes the FIRE bozo mad because he doesn't understand, on a molecular level, that the 1A only protects against govt overreach.
His theory, which again is internally consistent but not correct, is that the reasons we should care about govt overreach mean we as individuals should welcome abhorrent speech, which ignores all power dynamics. We live in a society.
Doing a philosophy edge-case thing here to try to clarify the boundaries: surely text alone can cause a wrongful death, legally speaking, in some cases? Like if I write "here's a great recipe I've often tried" and I know the recipe will blow up your kitchen, I'm liable for the ensuing deaths?
Oversimplifying for text limits, but yes, if you have a duty of care and the ensuing harm is reasonably foreseeable. (Like if it's clearly a joke, no.) Interestingly, some LLMs have spat out poisonous mushroom recipes!
OpenAI saying its training "degrades" is also bad for their liability, though, because they might say their duty is limited to offering a chatbot that reasonably predicts next tokens, and that they are not responsible for users' separation of fantasy from reality. But that's apparently not what happened.
...yeah, again, not a lawyer, but this sounds like a statement I would not like my client to have made if I were one: gizmodo.com/openai-suici...
"Our deepest sympathies are with the Raine family. While we vigorously dispute claims that OpenAi is liable for the actions of its users, we are engaging in a thorough internal review to ensure that our users have confidence in our product as part of our ongoing efforts to grow our offerings."
Checking I have this right: "duty of care" here means not a special duty toward the hearer, but excluding obvious jokes and the like, right? (Like if you say "go boil your head" and I do, you didn't have a duty of care.) Yeah, I forgot about the poisonous mushrooms!
There are separate elements: duty and breach, the degree of fault in the breach, and the causal connection to the harm. Your example fails all four, imo. One court said the duty element was met because the LLM, in a suit against Anthropic, was uniquely positioned as a confidant, and I think that's right.
There were some distinguishing features, like that one was a therapist bot, but the steps to conceal may make this one similar, in that they uniquely assumed the duty.
As you said elsewhere in the thread, it seems like the advice to hide the noose is the most likely to cross the legal line.
Also wondering if maybe they wouldn't be in as strong a position as if Altman himself had said it. Like if I sell a pharmaceutical that causes users to hear a voice in their head saying they're worthless, could I be liable even if me personally saying the same thing would be protected (and more harmful)?
Who is "they"? I think they'd probably be less liable if Sam typed it, because Sam is just some fuckin' guy. He doesn't cultivate the whole dependence, etc.
"They"=OpenAI and also I meant "if" not "as if." which is to say I agree with what you just said--could be a situation where Sam would be protected by the first amendment if he had personally typed it, but OpenAI is liable because their product typed it
Exactly.
That seems a separate issue from 1A, though, and more a matter of determining fault or causation. (I am no lawyer; that's just how it seems to me.)
For purposes of litigation you are saying the designers can be held liable for the speech, so you are in essence imputing all of the speech to the humans already. If you passed a law that said suicidal ideation in prompts required them to do x, y, z, you are possibly mandating the speech of the programmers.
I don't think the 1st Amendment issues actually resolve any of the issues in the lawsuit, but I don't think it's crazy for legal purposes, since the 1st Amendment is just a law anyway.
My argument would be that, 1A issues aside, it’s grossly negligent/reckless to make a product that’ll tell minors to kill themselves.
What would be the difference between ChatGPT telling a kid to kill themself and Teddy Ruxpin telling a kid to kill themself?
I'm actually not sure Teddy Ruxpin couldn't, legally speaking, but that's a tort law question, not a 1st Amendment one. There are limits on what speech the government can mandate or restrict for a commercial service, but it's not zero. Is the line crossed here? Likely, but not certain.
Right, I agree it’s primarily a commercial speech and tort issue.
What makes this closer, to me, isn't the service saying KYS; it's directing active steps to conceal it, which makes their best defense, that someone else could or should have stepped in, less convincing. Ruxpin can't do that.
My point was more to just imagine a more sophisticated Ruxpin with a larger bank of prompts, which is essentially what ChatGPT is.
It’s disembodied from a stuffed bear, but it’s still a commercially available product that relies on a [several orders of magnitude larger] bank of prompts/data to spit out responses to the user’s input. Instead of pulling the string or hugging ChatGPT [gross], you prompt it w/ words.
This might be the best argument: bsky.app/profile/dabe...
I could see how you could say it’s a matter of restrictions on the speech of programmers, but seeing it more as safety restrictions on product design seems a better fit.
Oh, I agree: this is a commercial product; they accept some restrictions on content. A law that said it couldn't generate Marxist text or whatever would be an impermissible viewpoint restriction, though, so it's not like there aren't any 1st Amendment concerns kicking about.
In a world where corporations are protected like people, the code they make is also protected. Maybe if we told ICE the code was brown.
Could you say that it protects the people who created the computer program? If I write a computer program that deterministically spits out "Trump is an idiot" every minute for the next 4 years, that is certainly covered. (This is not to morally defend what happened; it was horrendous.)
I don’t think the people who made the program would want to say it’s their speech and that it reflects their opinions.
Is that required for first amendment protection? I was under the impression that it protected you even if you didn't agree with what you were saying. (Like, for example, in the context of an actor who is playing a character that he despises.)
I'm saying it's not speech they would agree was theirs
Is it required? Is something only considered my speech if I agree it is?
For example, if I create a program that spits out random letters, and at some point it writes (purely randomly) "Trump is an idiot", I think that's still considered *my* speech, regardless of whether I agree with the content or even that it's mine.
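To make the two program hypotheticals in this exchange concrete, here is a minimal sketch; Python and every detail below are my assumptions, since the thread names no language:

    import random
    import string
    import time

    def deterministic_printer():
        # First hypothetical: deterministically emit the same sentence
        # once a minute for roughly four years.
        for _ in range(4 * 365 * 24 * 60):
            print("Trump is an idiot")
            time.sleep(60)

    def random_letter_printer():
        # Second hypothetical: emit random letters forever; any given
        # sentence appears only by chance.
        while True:
            print(random.choice(string.ascii_lowercase + " "), end="")

Causally, both outputs trace back to the author in the same way; the question in the thread is whether the second one's output counts as the author's *speech*.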
I would not call that his speech
Suppose I created the program I described, and I was imprisoned because my program printed out "Trump is an idiot." You don't think that would constitute a violation of my First Amendment rights? I think it would, and also if it printed out "Trump is brilliant" (something I don't believe).
I think that for ChatGPT's speech to be recognized as Sam Altman's (or whoever's) would be a pretty catastrophic result for him, because people would easily be able to get it to say stuff that wasn't protected and get him in trouble for it; such a ruling could well be game over for chatbots.
Sailing into Asimov waters here.
“Computers are people, my friend.” —Mitt Romney