AkivaMCohen @akivamcohen.bsky.social

The difference is that a book that tries to do that is the author's particular expression. ChatGPT's replies that encourage suicide are very much *not* the author's (OpenAI's) intended substantive expression. It's simply that their product results in that output regardless of what OpenAI wishes to say.

Aug 31, 2025, 5:18 PM

Replies

Timothy Raben @timothyraben.bsky.social

I don't wanna shortchange OpenAI here. They might think encouraging suicide is good and want this to happen.

Aug 31, 2025, 5:19 PM

Ari Cohn @aricohn.com

Again, there's no requirement that a speaker intend something with this level of specificity. If solving the problem requires changing expressive inputs (coding/training, which even you acknowledge is expressive), the output is expressive. That's Hurley.

Aug 31, 2025, 9:48 PM

AkivaMCohen @akivamcohen.bsky.social

I don't see the Hurley issue at all. To the contrary, the whole point of Hurley was that the speaker (parade organizer) had a right to determine their own message via exclusion even if the intended message wasn't particularly articulable. But that still required an intent by the speaker!

Sep 1, 2025, 2:58 PM

AkivaMCohen @akivamcohen.bsky.social

"This is something I want to say" or "this is something I don't want to say and therefore am choosing to exclude." And if OpenAI wants to defend with "actually, we WANT our AI to be saying things like this and have chosen not to build guardrails because that's the expression we choose," they'll have

Sep 1, 2025, 3:00 PM

AkivaMCohen @akivamcohen.bsky.social

a defense. But they can't fall backwards into 1A protection without some measure of expressive intent

Sep 1, 2025, 3:01 PM

Ari Cohn @aricohn.com

I don't think that's right. If true, is the parade not expressive if the organizers try to exclude certain messages but fail, resulting in an unintended viewpoint being disseminated? The fact that they are trying to eliminate the output in the first place means the result is inherently expressive.

Sep 1, 2025, 5:19 PM

AkivaMCohen @akivamcohen.bsky.social

No, the fact that they're attempting to eliminate the output means they have no 1A interest in keeping the output

Sep 1, 2025, 6:40 PM

Ari Cohn @aricohn.com

I don't think "protection depends on what the ideas are" is going to be a winning argument. Besides, this is all undercut by the desire to impose liability because they advertise a "truth machine." "Providing useful answers to your prompts" is expressive intent even if an answer isn't useful.

Sep 1, 2025, 6:44 PM

Ari Cohn @aricohn.com

It doesn't matter if the answer is ultimately useful or not. The desire that it be is enough.

Sep 1, 2025, 6:45 PM

Ari Cohn @aricohn.com

Put another way: failure to communicate the message has not been sufficient to strip expression of protection.

Sep 1, 2025, 6:52 PM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

Hurley says that speakers do not forfeit their 1A rights simply because they did not personally control each unit of "speech," but in that case humans were producing the other units, and the whole was unquestionably speech, so that just begs the question of whether what the LLM produces is speech.

Sep 1, 2025, 1:02 AM

Ari Cohn @aricohn.com

This is sophistry. The entire argument here is that it's not speech *because* it wasn't selected or controlled. That's the only way to say that it isn't a human creating it.

Sep 1, 2025, 1:11 AM

Ari Cohn @aricohn.com

If you want to talk about question begging, this is actually it. Your rationale presumes the conclusion.

Sep 1, 2025, 1:14 AM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

The question of control is relevant to the question of authorship. This is related but *not identical to* the question of whether the output is speech, because, I would argue, authorless speech *is not* speech for 1A purposes.

Sep 1, 2025, 1:59 AM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

You say the question of authorship is unimportant, because, in Hurley, diluted authorship did not vitiate the protections afforded the speaker: but there the diluting solvent was speech by other human speakers, not speech by machines. Thus, the question remains whether the machine is speaking.

Sep 1, 2025, 1:59 AM

Ari Cohn @aricohn.com

You're engaged in circular reasoning

Sep 1, 2025, 2:01 AM

Ari Cohn @aricohn.com

It is not authorless. The creators of the AI have many expressive inputs into the system that are directly intended to shape the message. Changing the inputs changes the outputs.

Sep 1, 2025, 2:03 AM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

AI companies create system prompts and training data: these give a starting point for LLM behavior, but everything after that is a statistical response to user input, no longer the speech of the company or, I would argue, speech at all.

Sep 1, 2025, 2:13 AM

Ari Cohn @aricohn.com

You might argue that, but you're wrong

Sep 1, 2025, 2:15 AM

Ari Cohn @aricohn.com

Wanted to come back to this now that I'm not multitasking and on my phone. I appreciate that you took the time to go read and digest the case and construct an argument. That's more than most folks will put into it so I wanted to specifically call that out. However (cont.)

Sep 1, 2025, 4:44 AM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

If you're a fan of Douglas Adams, you may recall the bit in Hitchhiker's Guide about a company's colossal advertising sign that became partially buried, such that the portions of the letters remaining above ground seemed to spell out an insult: that kind of erosion is happening here.

Sep 1, 2025, 2:13 AM

Madeline Sophia Luther 🇨🇦🇺🇦🇬🇱 @vinminen.bsky.social

What concerns me is that, because of the way LLMs work, it does not seem possible for the companies to make them wholly safe from this kind of behavior. System prompts and training data will inevitably decay in salience as user input grows. The model inevitably comes to mirror the user.

Sep 1, 2025, 2:13 AM
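The dilution Luther describes can be sketched numerically: in a fixed-size context window, a fixed system prompt occupies an ever-smaller share of what the model conditions on as the conversation grows. A minimal sketch with hypothetical token counts; `system_prompt_share` and all numbers are illustrative, and real models weight context in far more complex ways than raw token share.

```python
# Illustration of the "salience decay" claim: a fixed-length system
# prompt becomes a shrinking fraction of the model's input as user
# conversation tokens accumulate. All numbers are hypothetical.

def system_prompt_share(system_tokens: int, conversation_tokens: int,
                        window: int = 8192) -> float:
    """Fraction of the (truncated) context taken up by the system prompt."""
    used = min(system_tokens + conversation_tokens, window)
    return system_tokens / used

for turns in (1, 10, 100, 1000):
    convo = turns * 80  # assume ~80 tokens per conversational turn
    share = system_prompt_share(500, convo)
    print(f"{turns:>4} turns: system prompt is {share:.1%} of context")
```

Under these assumed numbers the system prompt's share falls from a large majority of the context after one turn to a small fraction once the window fills, which is the mechanical intuition behind "they inevitably come to mirror the user."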