AkivaMCohen @akivamcohen.bsky.social

I'm ... not sure that's right; PageRank is the company's expression of how it weights various factors, and the fact that it goes through an algorithm to spit out results doesn't change that, because the output itself is the goal of the algorithm. I'm not sure ChatGPT outputs map neatly onto that

aug 29, 2025, 5:26 pm • 7 0

Replies

Ari Cohn @aricohn.com

It's not the best analogue available, but it's not entirely off-base

aug 29, 2025, 5:27 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

It's at least an interesting question, and I think the macaque selfies are also an available analog.

aug 29, 2025, 5:37 pm • 1 0

AkivaMCohen @akivamcohen.bsky.social

I haven't given it the full mental 360, but I feel like there's a decent argument to be made that ChatGPT *chat* in particular (not prompt-engineered substantive writing) is the First Amendment equivalent of what a macaque selfie is for copyright purposes - outside the scope of the law

aug 29, 2025, 5:30 pm • 10 0

Jay Reding @jayreding.bsky.social

That’s where I’ve come down now. Human expression is protected; the outputs of algorithms are not, at least not to the level of eliminating any products liability.

aug 29, 2025, 5:43 pm • 3 0

Ari Cohn @aricohn.com

I think that's pretty obviously wrong, and wrote a brief on it: www.thefire.org/research-lea... If that were true, Congress could require all of those outputs to include racial slurs and never criticize Trump. How could that be so?

aug 29, 2025, 5:32 pm • 0 0

Ari Cohn @aricohn.com

Also, none of those outputs could possibly constitute defamation, and I don't see how you maintain any other theories of liability in which harm flows from the ideas conveyed.

aug 29, 2025, 5:34 pm • 1 0

AkivaMCohen @akivamcohen.bsky.social

I mean, some outputs could conceivably be defamatory, and I don't understand why you would reject that outright. But more fundamentally, a claim against ChatGPT for encouraging harmful behavior would be a products liability claim

aug 29, 2025, 5:41 pm • 5 0

Ari Cohn @aricohn.com

I have never rejected that, in my life!

aug 29, 2025, 5:42 pm • 1 0

Ari Cohn @aricohn.com

What I'm saying is that if the output is *not speech*, it cannot possibly constitute defamation, which _requires speech_

aug 29, 2025, 5:42 pm • 1 0

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

I tend toward this view. This is therefore a products liability issue.

aug 29, 2025, 5:45 pm • 1 0

Ari Cohn @aricohn.com

Products liability claims are not immune from First Amendment scrutiny either!

aug 29, 2025, 5:46 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

They aren't, but the First Amendment isn't going to protect "I am developing a product designed to encourage suicide by vulnerable people by playing into their delusions," and that's a content-neutral issue with respect to the programmer's speech

aug 29, 2025, 6:10 pm • 10 0

Ari Cohn @aricohn.com

I don't think I agree. I think a book that tries to convince people to go through with suicide is protected (some people obviously disagree with me on that). So is the difference that ChatGPT replies personally? If so, doesn't that remove protection from speech on the basis of persuasiveness?

aug 29, 2025, 6:16 pm • 0 0

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

To be sure they are not, but I argue specifically there is no "speech" here to be protected, in consequence of which fact only the potential valence of the LLM's operation as "expressive conduct" on the company's part would be relevant, and I'm not sure that shields them much here.

aug 29, 2025, 5:52 pm • 0 0

Ari Cohn @aricohn.com

So you don't think any LLM output could ever be defamation?

aug 29, 2025, 5:53 pm • 1 0

AkivaMCohen @akivamcohen.bsky.social

No, it can't - because *requiring* particular outputs would require compelling the company to design those outputs into the machine, which would make it compelled speech. That doesn't preclude products liability claims against a product which has speech as an output

aug 29, 2025, 5:39 pm • 13 0

jag @i-am-j.ag

It sounds like your first intuitive thoughts are along the lines of what portion of the output is due to training/tuning/guardrails and what portion of the output is in response to input tokens provided by the user? Or am I misunderstanding?

aug 29, 2025, 5:49 pm • 1 0

AkivaMCohen @akivamcohen.bsky.social

I think people are confusing the subject and object of the various speech/rights/liabilities at issue, and understandably so - it's complex. *ChatGPT* has no rights, or speech, or liability at all. *OpenAI* has free speech rights in the content of the program it writes; it cannot be forced to

aug 29, 2025, 6:13 pm • 12 0

AkivaMCohen @akivamcohen.bsky.social

program a machine to say anything specific, or to include any specific thing in its programming. But ChatGPT is a *product*, and laws that provide liability for harm caused by unreasonably dangerous products are content-neutral and not tied to the speech involved (the programming choices)

aug 29, 2025, 6:14 pm • 14 0

jag @i-am-j.ag

I get that distinction. OpenAI can't be compelled to put certain guardrails in place - e.g. Ari's example of "don't criticize Trump." But if the output is not expressive, do you think OpenAI could then be held liable if those guardrails were successfully bypassed and ChatGPT said Trump was Hitler?

aug 29, 2025, 6:17 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

No. It's a products liability claim, so the question would be "were you negligent in the guardrails you included/failed to include?" Same way a car manufacturer can have liability for some injuries caused by cars but not all of them, depending on the specifics of the design and its flaws, if any

aug 31, 2025, 5:20 pm • 2 0

Ari Cohn @aricohn.com

I don't think it's "not tied to the speech involved." Without the speech involved, there would be no harm here. It's why "we're just suing Facebook for promoting radicalizing content that led to a racist shooting" lawsuits fail. It ultimately *isn't* content-neutral!

aug 29, 2025, 6:18 pm • 2 0

AkivaMCohen @akivamcohen.bsky.social

No. Because the "speech involved" is *the programming of ChatGPT*; the output isn't itself speech.

aug 29, 2025, 6:19 pm • 8 0

Ari Cohn @aricohn.com

I think that is working backwards in a way that makes no sense. If this were true, Hurley would have been decided differently.

aug 29, 2025, 6:22 pm • 3 0

AkivaMCohen @akivamcohen.bsky.social

Why do you say that?

aug 31, 2025, 5:20 pm • 1 0

Dag. @3fecta.bsky.social

Tesla's autopilot has defective code that harms people. ChatGPT has defective code that harms people. I don't think we want to live in a world where a plagiarism machine churning 0s and 1s is treated as having personhood with regard to speech. It's one thing to defend offensive speech. Defective code?

aug 29, 2025, 6:22 pm • 4 0

Ari Cohn @aricohn.com

There are important doctrinal differences that separate Tesla autopilot from a chatbot

aug 29, 2025, 6:23 pm • 7 0

Julie Faenza @juliemfaenza.bsky.social

This is exactly where I land on the issue. It can say whatever it wants. I don't want to hold OpenAI liable for it saying certain things. I think it is reasonably foreseeable that providing information on things that involve suicide, constructing ghost guns, child abuse, & those kinds of things 1/x

aug 29, 2025, 6:21 pm • 2 0

Julie Faenza @juliemfaenza.bsky.social

can be unreasonably dangerous. We have the Second Amendment, but we also have red flag laws (or other laws that allow the government to prohibit gun ownership on a showing of concern about mental illness). Also, the fact that OpenAI designed the thing to recognize conversation about suicide 2/x

aug 29, 2025, 6:21 pm • 2 0

Ari Cohn @aricohn.com

I think that acknowledging that liability should attach "for giving information on [x, y, z]" contradicts the "it can say what it wants" opening

aug 29, 2025, 6:25 pm • 0 0

Julie Faenza @juliemfaenza.bsky.social

and respond to it by urging someone to get help or redirecting or providing resources tells me OpenAI absolutely knew this was a potential outcome of interaction with its product. The fact that they did sloppy testing simply to release it ahead of Microsoft (I think it was them) just adds 3/

aug 29, 2025, 6:21 pm • 3 0

Julie Faenza @juliemfaenza.bsky.social

more layers of liability to the situation. That, and that the company promised that the product would be able to detect prompts about suicide and redirect. Combine that with the lack of testing and that's pretty damning. 4/4

aug 29, 2025, 6:21 pm • 3 0

AkivaMCohen @akivamcohen.bsky.social

There's no First Amendment right to program an algorithm to hack into a bank and steal funds, or into government computers and launch missiles, or to misdirect emergency workers resulting in death. There's no free speech right to SWAT someone, or to commit fraud, etc.

aug 29, 2025, 6:15 pm • 8 0

AkivaMCohen @akivamcohen.bsky.social

That the product here *outputs words* and causes harm via those words does not mean that the harm-causing words are immune from generating negligence liability for the manufacturer due to the First Amendment.

aug 29, 2025, 6:17 pm • 9 0

jag @i-am-j.ag

SWATing somebody, committing fraud, or robbing a bank are all crimes. It seems like incitement would cover those. But suicide isn't a crime - so how do you see that working? (You know I know you know way more than I do about this - I'm asking in earnest to learn.)

aug 29, 2025, 6:19 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

It would be different if the outputted words were selected by the manufacturer; that would be the *manufacturer's* speech and therefore 1A protected. But that's not what ChatGPT does. So the output is a product, not speech, and subject to the same products liability analysis as any other product

aug 29, 2025, 6:18 pm • 11 0

Ari Cohn @aricohn.com

Outputted words don't have to be selected to be protected!

aug 29, 2025, 6:19 pm • 0 0

Ari Cohn @aricohn.com

But that's not an argument that means the output is not speech. And as you know, people have made that argument about books too, and it hasn't worked out so well for them!

aug 29, 2025, 5:41 pm • 0 0

Anchorage Man @sethpartnow.bsky.social

That’s a very silly comparison.

aug 29, 2025, 5:44 pm • 0 0

Ari Cohn @aricohn.com

It's not, if you've read the cases and understand why the courts have ruled the way that they have.

aug 29, 2025, 5:47 pm • 0 0

Anchorage Man @sethpartnow.bsky.social

In terms of the “conduct” at issue, a person or people put words together in a certain order in one situation and didn’t in the other. A book is to spoken speech, relative to LLM output, the way Brian Scalabrine is to LeBron relative to a good high school player.

aug 29, 2025, 5:53 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

Now THAT's a silly analogy

aug 29, 2025, 6:08 pm • 3 0

Anchorage Man @sethpartnow.bsky.social

Unlike a book, the output of an LLM contains some of the form but only accidentally any of the substance of human communication.

aug 29, 2025, 10:42 pm • 1 0

Anchorage Man @sethpartnow.bsky.social

I think I mentioned in one iteration of this convo that I can certainly see good arguments for punishing or restricting LLM/coding-generated outputs requiring a higher standard than rational basis, but I think we’ve lost the plot from both a technical & philosophical standpoint if we equate them to speech

aug 29, 2025, 10:48 pm • 1 0

Ari Cohn @aricohn.com

I at least understand, given his background, how he got there.

aug 29, 2025, 6:09 pm • 0 0

Anchorage Man @sethpartnow.bsky.social

Which part, the basketball, the law degree, or the data science? I just wholly reject the notion that the output of a model which produces text due to statistical processes belongs on the same continuum as human language. Of course I think it’s ridiculous that corporate comms should count, so…

aug 29, 2025, 10:40 pm • 0 0

Ari Cohn @aricohn.com

But if the output itself is not speech, it cannot be compelled speech. So the output has to be speech.

aug 29, 2025, 5:41 pm • 0 0

AkivaMCohen @akivamcohen.bsky.social

The output isn't the problem. The compelled speech is at the programming stage

aug 29, 2025, 6:06 pm • 2 0

Ari Cohn @aricohn.com

I don't think that works. That coding wouldn't be inherently expressive if the output isn't expressive. That notwithstanding, that's exactly why the output is expressive -- because the coding/training is an expressive input that shapes the output.

aug 29, 2025, 6:08 pm • 1 0

AkivaMCohen @akivamcohen.bsky.social

The coding is expressive when it's intended to generate a particular expressive output (if I'm creating a machine designed to say "Ari Cohn sucks" or "based on the factors I choose these are the best" that's expressive). If it isn't intended to generate any specific/particular output, it's not

aug 31, 2025, 5:16 pm • 1 0

Jake Knanishu @cantorsparadise.bsky.social

I don’t think this is right. Coding is always expressive (see, e.g., the FLSA’s creative professional exemption). A law prohibiting people from developing chatbots would violate 1A. But they are so far removed from “the press” that 1A wouldn’t necessarily prohibit regulating chatbots as a product

aug 31, 2025, 5:25 pm • 0 0

Ari Cohn @aricohn.com

I think that's a tenuous line

aug 31, 2025, 9:49 pm • 0 0