AkivaMCohen @akivamcohen.bsky.social

There's no First Amendment right to program an algorithm to hack into a bank and steal funds, or into government computers to launch missiles or misdirect emergency workers, resulting in death. There's no free speech right to SWAT someone, or to commit fraud, etc.

aug 29, 2025, 6:15 pm

Replies

AkivaMCohen @akivamcohen.bsky.social

That the product here *outputs words* and causes harm via those words does not mean the First Amendment immunizes those harm-causing words from generating negligence liability for the manufacturer.

aug 29, 2025, 6:17 pm
jag @i-am-j.ag

SWATting somebody, committing fraud, or robbing a bank are all crimes, so it seems like incitement would cover those. But suicide isn't a crime - so how do you see that working? (You know I know you know way more than I do about this - I'm asking in earnest to learn.)

aug 29, 2025, 6:19 pm
AkivaMCohen @akivamcohen.bsky.social

It would be different if the outputted words were selected by the manufacturer; that would be the *manufacturer's* speech and therefore 1A protected. But that's not what ChatGPT does. So the output is a product, not speech, and subject to the same products liability analysis as any other product.

aug 29, 2025, 6:18 pm
Ari Cohn @aricohn.com

Outputted words don't have to be selected to be protected!

aug 29, 2025, 6:19 pm
Ari Cohn @aricohn.com

Selection of expressive inputs (coding, training materials) affects the message conveyed (obviously), so even with your more limited understanding, this kind of liability would require modifying expression (see Hurley).

aug 29, 2025, 6:21 pm
David Brody @dbrody.bsky.social

But the legal regulation (product liability law says don’t offer a product likely to result in death) is content neutral. So even if this is speech, and even if it were not commercial speech, the question is whether products liability is a reasonable time, place, and manner regulation. Which it is.

aug 29, 2025, 6:27 pm
Ari Cohn @aricohn.com

Products liability does not extend to the words and ideas contained in a product.

aug 29, 2025, 6:28 pm
David Brody @dbrody.bsky.social

I don’t see why, categorically, it cannot. Suppose a toaster’s user manual tells the user to insert a fork while the toaster is turned on to pull out toast. The manufacturer absolutely could be held liable. The manual is part of the product; it comes in the box.

aug 29, 2025, 6:33 pm
Ari Cohn @aricohn.com

This is more like a book that tells people to try a very specific diet, which gives them heart disease.

aug 29, 2025, 6:39 pm
David Brody @dbrody.bsky.social

Ok, so then I think we have some agreement that product liability can apply to words and ideas (the toaster user manual) but we don’t want it to apply to some expression (books generally). Correct? So then we’re just trying to figure out line drawing?

aug 29, 2025, 6:46 pm
Ari Cohn @aricohn.com

I don't know if that's right. There's one case against Sony re: instructions, but it's a negligent publication case, not products liability. And I think there's another case that rejects it on an instruction manual, but I'm trying to find it.

aug 29, 2025, 6:51 pm
Informema @informema.bsky.social

I don’t think a user manual is the right analogy here b/c that could be reviewed prior to publication. A customer service rep would seem more analogous. If a CSR for an industrial company gave incorrect information to a customer that led to injury, would the company not face liability?

aug 29, 2025, 6:41 pm
Ari Cohn @aricohn.com

Interesting analogy, not sure it works though. Assuming encouraging suicide is unprotected for the sake of argument, if a CSR did that randomly, it would be outside the scope of employment, which raises other issues.

aug 29, 2025, 6:44 pm
David Brody @dbrody.bsky.social

No, but if a CSR said “use the product this way” b/c of stupidity, not a frolic, and injury resulted, then the corp would be liable.

aug 29, 2025, 6:47 pm
Informema @informema.bsky.social

Lawyers shouldn’t give legal advice to non-clients, or doctors medical advice to non-patients. Therapists shouldn’t provide therapy to non-patients. To the extent the LLM vendors are advertising their products as taking the place of these jobs it seems to me they are asking for liability.

aug 29, 2025, 6:53 pm
Ari Cohn @aricohn.com

Like if we want every book to be subject to products liability because someone read it and did something bad that hurt them/someone else, then at least we'd be consistent here. But that's not a world I wanna be anywhere near.

aug 29, 2025, 6:29 pm
AkivaMCohen @akivamcohen.bsky.social

It should where the words and ideas are not the chosen expression of the company putting out the product.

aug 31, 2025, 5:21 pm
Ari Cohn @aricohn.com

I think the artificial distinction between the code and the output raises another problem for this theory. If the code is expressive such that the government couldn't say "AI must output racial sluts," then liability for code that doesn't prevent recommending suicide is obviously a 1A issue.

aug 31, 2025, 10:17 pm
ltponcho.bsky.social @ltponcho.bsky.social

tbh, that'd be some pretty surprising ai output

aug 31, 2025, 10:21 pm
Ari Cohn @aricohn.com

Slurs not sluts lol

aug 31, 2025, 10:29 pm
David Brody @dbrody.bsky.social

I think the product liability question is more complicated than just the outputted content on its own. It’s the totality of the circumstances that breaches a duty of care: anthropomorphizing the AI, representing that it is safe and expert for this purpose, the failure to sufficiently warn of risks of harm, etc.

aug 31, 2025, 11:20 pm
David Brody @dbrody.bsky.social

And for the totality of these circumstances, the speech content is unprotected because it is instrumental to the unlawful conduct, almost like a con or fraud. If it were content on its own (like a search result), it would be different in kind, and it wouldn’t induce reliance in a reasonable person.

aug 31, 2025, 11:20 pm