That seems a separate issue from 1A, though, and more a matter of determining fault or causation. (I am no lawyer, just what it seems to me)
For purposes of litigation you are saying the designers can be held liable for the speech, so you are in essence imputing all of the speech to the humans already. If you passed a law that said suicidal ideation in prompts required them to do x, y, z, you'd possibly be mandating speech by the programmers.
I don't think the 1st amendment issues actually resolve any of the issues in the lawsuit, but I don't think it's crazy to raise them for legal purposes; the 1st amendment is just a law anyway.
My argument would be that, 1A issues aside, it’s grossly negligent/reckless to make a product that’ll tell minors to kill themselves.
What would be the difference between ChatGPT telling a kid to kill themself and Teddy Ruxpin telling a kid to kill themself?
I'm actually not sure Teddy Ruxpin couldn't, legally speaking, but that's a tort law question, not a 1st amendment one. There are limits on what speech the government can mandate or restrict for a commercial service, but it's not zero. Is the line crossed here? Likely, but not certain.
Right, I agree it’s primarily a commercial speech and tort issue.
what makes this a closer call to me isn't the service saying KYS, it's the service directing active steps to conceal it, which makes their best defense (that someone else could or should have stepped in) less convincing. Ruxpin can't do that.
My point was more to just imagine a more sophisticated Ruxpin with a larger bank of prompts, which is essentially what ChatGPT is.
It’s disembodied from a stuffed bear, but it’s still a commercially available product that relies on a [several orders of magnitude larger] bank of prompts/data in order to spit out responses to the user’s input. Instead of pulling the string or hugging ChatGPT [gross] you prompt it w/words.
the commercial and legal incentives are at cross purposes here. Saying you have the legal right to put out the most evil LLM you can is maybe right, but you'll pay more in lost business than for one lawsuit claim. the fact they conceded the training "degraded" (which is how it came to encourage the steps to hide it) is bad.
Yeah I’m fascinated how this all ends up shaking out. Particularly because it might very well end up creating an entirely new market of potential personal injury suits.
if they went out and said this is the use case, they would stand a better chance of having the lawsuit dismissed, since they'd be much more clearly not assuming a duty of care towards users. they promote safety, so they're hoist by their own petard.
this might be the best argument: bsky.app/profile/dabe...
Yeah I agree with that.
I could see how you could say it’s a matter of restrictions on the speech of programmers, but seeing it more as safety restrictions on product design seems a better fit.
oh I agree, this is a commercial product, they accept some restrictions on content. A law that said it couldn't generate marxist text or whatever would be an impermissible viewpoint restriction, so it's not like there aren't any 1st amendment concerns kicking about, though.