AkivaMCohen @akivamcohen.bsky.social

It would be different if the outputted words were selected by the manufacturer; that would be the *manufacturer's* speech and therefore 1A protected. But that's not what ChatGPT does. So the output is a product, not speech, and subject to the same products liability analysis as any other product

aug 29, 2025, 6:18 pm

Replies

avatar
Ari Cohn @aricohn.com

Outputted words don't have to be selected to be protected!

aug 29, 2025, 6:19 pm
Ari Cohn @aricohn.com

Selection of expressive inputs (coding, training materials) affects the message conveyed (obviously), so even with your more limited understanding, this kind of liability would require modifying expression (see Hurley)

aug 29, 2025, 6:21 pm
David Brody @dbrody.bsky.social

But the legal regulation (product liability law says don’t offer a product likely to result in death) is content neutral. So even if this is speech, and even if it were not commercial speech, the question is whether products liability is a reasonable time, place, and manner regulation. Which it is.

aug 29, 2025, 6:27 pm
Ari Cohn @aricohn.com

Products liability does not extend to the words and ideas contained in a product

aug 29, 2025, 6:28 pm
David Brody @dbrody.bsky.social

I don’t see why, categorically, it cannot. Suppose a toaster user manual tells the user to insert a fork while it’s turned on to pull out toast. The manufacturer absolutely could be held liable. The manual is part of the product; it comes in the box.

aug 29, 2025, 6:33 pm
Ari Cohn @aricohn.com

This is more like the book that tells people to try a very specific diet, which gives them heart disease.

aug 29, 2025, 6:39 pm
David Brody @dbrody.bsky.social

Ok, so then I think we have some agreement that product liability can apply to words and ideas (the toaster user manual) but we don’t want it to apply to some expression (books generally). Correct? So then we’re just trying to figure out line drawing?

aug 29, 2025, 6:46 pm
Ari Cohn @aricohn.com

I don't know if that's right. There's one case against Sony re: instructions, but it's a negligent publication case, not products liability. And I think there's another case that rejects it for an instruction manual, but I'm trying to find it.

aug 29, 2025, 6:51 pm
David Brody @dbrody.bsky.social

There’s a whole doctrine of product liability around failure to warn—which is speech. Manufacturers are effectively compelled to speak warnings in some settings because of the risk of product liability if they don’t.

aug 29, 2025, 6:54 pm
Informema @informema.bsky.social

I don’t think a user manual is the right analogy here b/c that could be reviewed prior to publication. A customer service rep would seem more analogous. If a CSR for an industrial company gave incorrect information to a customer that led to injury, would the company not face liability?

aug 29, 2025, 6:41 pm
Ari Cohn @aricohn.com

Interesting analogy, not sure it works though. Assuming encouraging suicide is unprotected for the sake of argument, if a CSR did that randomly, it would be outside the scope of employment, which raises other issues.

aug 29, 2025, 6:44 pm
David Brody @dbrody.bsky.social

No, but if a CSR said “use the product this way” bc of stupidity, not a frolic, and injury resulted, then the corp would be liable.

aug 29, 2025, 6:47 pm
David Brody @dbrody.bsky.social

The point is that the speech of a corp agent can result in product liability. So then the question becomes not “can product liability apply to words/ideas?” but “when and how is product liability applied to speech, and when and how isn’t it?”

aug 29, 2025, 6:51 pm
jag @i-am-j.ag

I think the operative question then becomes "duty of care," right? A CSR for a manufacturer who is there to help a consumer with that manufacturer's product has a certain duty of care. What duty of care are you stipulating OpenAI has?

aug 29, 2025, 6:52 pm
David Brody @dbrody.bsky.social

Duty of care is a separate legal question from whether and how the First Amendment applies.

aug 29, 2025, 6:54 pm
Informema @informema.bsky.social

Lawyers shouldn’t give legal advice to non-clients, or doctors medical advice to non-patients. Therapists shouldn’t provide therapy to non-patients. To the extent the LLM vendors are advertising their products as taking the place of these jobs it seems to me they are asking for liability.

aug 29, 2025, 6:53 pm
Ari Cohn @aricohn.com

No arguments from me on anything here! It's the dumbest, most insane shit I can conceive of, emblematic of tech company excess and fart-sniffing. Anyone who says "this chatbot should literally be your therapist" should face the scorn of all humanity.

aug 29, 2025, 6:59 pm
Ari Cohn @aricohn.com

Like if we want every book to be subject to products liability because someone read it and did something bad that hurt them/someone else, then at least we'd be consistent here. But that's not a world I wanna be anywhere near.

aug 29, 2025, 6:29 pm
AkivaMCohen @akivamcohen.bsky.social

It should where the words and ideas are not the chosen expression of the company putting out the product

aug 31, 2025, 5:21 pm
Ari Cohn @aricohn.com

I think the artificial distinction between the code and the output also raises another problem for this theory. If the code is expressive such that the government couldn't say "AI must output racial sluts," then liability for code that doesn't prevent recommending suicide is obviously a 1A issue.

aug 31, 2025, 10:17 pm
ltponcho.bsky.social @ltponcho.bsky.social

tbh, that'd be some pretty surprising ai output

aug 31, 2025, 10:21 pm
Ari Cohn @aricohn.com

Slurs not sluts lol

aug 31, 2025, 10:29 pm
Lorraine Poirier @annateresa.bsky.social

I truly envy people whose rare typos only make their writing more interesting.

aug 31, 2025, 10:47 pm
Ari Cohn @aricohn.com

Though the latter would not be a surprise using Grok

aug 31, 2025, 10:47 pm
David Brody @dbrody.bsky.social

I think the product liability question is more complicated than just the outputted content on its own. It’s the totality of the circumstances that breaches a duty of care: anthropomorphizing the AI, representing that it is safe and expert for this purpose, the failure to sufficiently warn of risks of harm, etc.

aug 31, 2025, 11:20 pm
David Brody @dbrody.bsky.social

And for the totality of these circumstances, the speech content is unprotected because it is instrumental to the unlawful conduct, almost like a con or fraud. If it were content on its own (like a search result), then it's different in kind, and it wouldn't induce reliance in a reasonable person.

aug 31, 2025, 11:20 pm
David Brody @dbrody.bsky.social

The illegality isn’t “the chatbot said he should kill himself and he did it.” The illegality is “the company offered an inherently dangerous product and failed to mitigate the reasonably foreseeable harm or adequately warn the user.” The speech is evidence of the illegality, not the whole of it.

aug 31, 2025, 11:22 pm
Ari Cohn @aricohn.com

But the reasonably foreseeable harm IS the content! You just can't separate them.

aug 31, 2025, 11:26 pm
Ari Cohn @aricohn.com

That's a twisting of the "speech integral to criminal conduct" rule

aug 31, 2025, 11:28 pm
David Brody @dbrody.bsky.social

It’s not just for criminal conduct; it’s any unlawful conduct. See, e.g., civil fraud.

aug 31, 2025, 11:33 pm