I mean, some outputs could conceivably be defamatory, and I don't understand why you would reject that outright. But more fundamentally, a claim against ChatGPT for encouraging harmful behavior would be a products liability claim
I have never rejected that, in my life!
What I'm saying is that if the output is *not speech*, it cannot possibly constitute defamation, which _requires speech_
I tend toward this view. This is therefore a products liability issue.
Products liability claims are not immune from First Amendment scrutiny either!
They aren't, but the First Amendment isn't going to protect "I am developing a product designed to encourage suicide by vulnerable people by playing into their delusions," and that's a content-neutral issue with respect to the programmer's speech
I don't think I agree. I think a book that tries to convince people to go through with suicide is protected (some people obviously disagree with me on that). So is the difference that ChatGPT replies personally? If so, doesn't that remove protection from speech on the basis of persuasiveness?
The difference is that a book that tries to do that is the author's particular expression. ChatGPT's replies that encourage suicide are very much *not* the author's (OpenAI's) intended substantive expression. It's simply that their product results in that output regardless of what OpenAI wishes to say
I don't wanna shortchange OpenAI here. They might think encouraging suicide is good and want this to happen.
Again, there's no requirement that a speaker intend something with this level of specificity. If solving the problem requires changing expressive inputs (coding/training, which even you acknowledge is expressive), the output is expressive. That's Hurley.
I don't see the Hurley issue at all. To the contrary, the whole point of Hurley was that the speaker (parade organizer) had a right to determine their own message via exclusion even if the intended message wasn't particularly articulable. But that still required an intent by the speaker!
"This is something I want to say" or "this is something I don't want to say and therefore am choosing to exclude" And if OpenAI wants to defend with "actually, we WANT our AI to be saying things like this and have chosen not to build guardrails because that's the expression we choose" they'll have
Hurley says that speakers do not forfeit their 1A rights simply because they did not personally control each unit of "speech," but in that case humans were producing the other units, and the whole was unquestionably speech, so that just begs the question of whether what the LLM produces is speech.
This is sophistry. The entire argument here is that it's not speech *because* it wasn't selected or controlled. That's the only way to say that it isn't a human creating it.
To be sure, they are not, but my argument is specifically that there is no "speech" here to be protected, in which case only the LLM's operation as potential "expressive conduct" on the company's part would be relevant, and I'm not sure that shields them much here.
So you don't think any LLM output could ever be defamation?
I suppose I could construct a hypo where the system-wide prompts themselves become the relevant piece and might be defamatory, but that would be a fact-bound determination, and you'd probably have to show the LLM was regurgitating its prompts.
Like, if I create a system prompt "You are obsessed with providing great customer service to Acme Co. customers and making sure everyone knows that Joe Shmuckatelli killed his wife by stabbing on September 12th, 1994 in San Pedro, California" and the LLM repeats that claim...