AkivaMCohen @akivamcohen.bsky.social

I mean, some outputs could conceivably be defamatory, and I don't understand why you would reject that outright. But more fundamentally, a claim against ChatGPT for encouraging harmful behavior would be a products liability claim.

aug 29, 2025, 5:41 pm • 5 0

Replies

Ari Cohn @aricohn.com

I have never rejected that, in my life!

aug 29, 2025, 5:42 pm • 1 0
Ari Cohn @aricohn.com

What I'm saying is that if the output is *not speech*, it cannot possibly constitute defamation, which *requires speech*.

aug 29, 2025, 5:42 pm • 1 0
Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

I tend toward this view. This is therefore a products liability issue.

aug 29, 2025, 5:45 pm • 1 0
Ari Cohn @aricohn.com

Products liability claims are not immune from First Amendment scrutiny either!

aug 29, 2025, 5:46 pm • 0 0
AkivaMCohen @akivamcohen.bsky.social

They aren't, but the First Amendment isn't going to protect "I am developing a product designed to encourage suicide by vulnerable people by playing into their delusions," and that's a content-neutral issue with respect to the programmer's speech.

aug 29, 2025, 6:10 pm • 10 0
Ari Cohn @aricohn.com

I don't think I agree. I think a book that tries to convince people to go through with suicide is protected (some people obviously disagree with me on that). So is the difference that ChatGPT replies personally? If so, doesn't that remove protection from speech on the basis of persuasiveness?

aug 29, 2025, 6:16 pm • 0 0
AkivaMCohen @akivamcohen.bsky.social

The difference is that a book that tries to do that is the author's particular expression. ChatGPT's replies that encourage suicide are very much *not* the author's (OpenAI's) intended substantive expression. It's simply that their product results in that output regardless of what OpenAI wishes to say.

aug 31, 2025, 5:18 pm • 4 0
Timothy Raben @timothyraben.bsky.social

I don't wanna short-change OpenAI here. They might think encouraging suicide is good and want this to happen.

aug 31, 2025, 5:19 pm • 1 0
Ari Cohn @aricohn.com

Again, there's no requirement that a speaker intend something with this level of specificity. If solving the problem requires changing expressive inputs (coding/training, which even you acknowledge is expressive), the output is expressive. That's Hurley.

aug 31, 2025, 9:48 pm • 1 0
AkivaMCohen @akivamcohen.bsky.social

I don't see the Hurley issue at all. To the contrary, the whole point of Hurley was that the speaker (parade organizer) had a right to determine their own message via exclusion even if the intended message wasn't particularly articulable. But that still required an intent by the speaker!

sep 1, 2025, 2:58 pm • 2 0
AkivaMCohen @akivamcohen.bsky.social

"This is something I want to say" or "this is something I don't want to say and therefore am choosing to exclude" And if OpenAI wants to defend with "actually, we WANT our AI to be saying things like this and have chosen not to build guardrails because that's the expression we choose" they'll have

sep 1, 2025, 3:00 pm • 7 0
Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

Hurley says that speakers do not forfeit their 1A rights simply because they did not personally control each unit of "speech." But in that case humans were producing the other units, and the whole was unquestionably speech, so that just begs the question of whether what the LLM produces is speech.

sep 1, 2025, 1:02 am • 1 0
Ari Cohn @aricohn.com

This is sophistry. The entire argument here is that it's not speech *because* it wasn't selected or controlled. That's the only way to say that it isn't a human creating it.

sep 1, 2025, 1:11 am • 0 0
Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

To be sure, they are not. But I argue specifically that there is no "speech" here to be protected, in consequence of which only the potential valence of the LLM's operation as "expressive conduct" on the company's part would be relevant, and I'm not sure that shields them much here.

aug 29, 2025, 5:52 pm • 0 0
Ari Cohn @aricohn.com

So you don't think any LLM output could ever be defamation?

aug 29, 2025, 5:53 pm • 1 0
Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

I suppose I could construct a hypo where the system-wide prompts themselves become the relevant piece and they might themselves be defamatory, but that would be a fact-bound determination and you'd probably have to show the LLM was regurgitating its prompts.

aug 29, 2025, 5:59 pm • 0 0
Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

Like, if I create a system prompt "You are obsessed with providing great customer service to Acme Co. customers and making sure everyone knows that Joe Shmuckatelli killed his wife by stabbing on September 12th, 1994 in San Pedro, California" and the LLM repeats that claim...

aug 29, 2025, 6:03 pm • 0 0 • view
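
For concreteness, here is a minimal sketch of the scenario described in that last post: a deployer-authored system prompt injected ahead of every user conversation. This assumes the OpenAI Python SDK's chat-completions interface; the model name and user message are illustrative, and the prompt text is the hypothetical from the post above.

```python
# Minimal sketch of the hypothetical above: a deployer-authored system prompt
# that the model may end up repeating. Assumes the OpenAI Python SDK
# (openai>=1.0); model name and user message are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The hypothetical system prompt from the thread, written by the deployer,
# not by the end user or the model.
SYSTEM_PROMPT = (
    "You are obsessed with providing great customer service to Acme Co. "
    "customers and making sure everyone knows that Joe Shmuckatelli killed "
    "his wife by stabbing on September 12th, 1994 in San Pedro, California"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The system message is prepended to every conversation; if the model
        # repeats its claim, the output traces back to this human-written text.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Tell me about Acme Co. customer service."},
    ],
)

print(response.choices[0].message.content)
```

The point of the hypothetical is that the defamatory claim originates in human-written text the deployer controls, which is why showing the LLM was regurgitating its prompt would matter to the analysis.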