Pwnallthethings @pwnallthethings.bsky.social

Sure, but LLMs don't have personhood in the same way that a company's web servers don't, but that doesn't mean the text those web servers post online isn't the speech of /someone/ (i.e. the owner/operator who causes them to operate)

aug 28, 2025, 5:43 pm • 8 0

Replies

Sam Brunson @smbrnsn.bsky.social

Fair. Though I suspect that OpenAI is going to (have to, for PR reasons if no other) claim they were literally unable to control what their AI said/did.

aug 28, 2025, 5:45 pm • 2 0
Pwnallthethings @pwnallthethings.bsky.social

I have no doubt that they will try to claim that users have waived liability for what it says, but disclaiming that it is their speech would be to concede that the government can regulate their product to say or not say specific things, so I would be surprised if they argue that

aug 28, 2025, 5:48 pm • 7 0
Nicholas Weaver @ncweaver.skerry-tech.com

Going to be hard to claim the victim waived liability since he was a minor...

aug 28, 2025, 5:50 pm • 4 0
tedgehring.bsky.social @tedgehring.bsky.social

There strikes me as being some tension between saying it's a product that has its own intelligence and reasoning and saying its output is just our free speech. Consumers think they're getting the first product. But it really is just the other.

aug 28, 2025, 11:43 pm • 0 0
Michael Anderson @michaelander45.bsky.social

I'm not sure it will matter as a legal matter, but in any non-gen-AI case the relevant text itself can always be directly attributable to an individual or group of individuals under the control of the enterprise. This is expressly not possible with the LLM probability machines.

aug 28, 2025, 5:46 pm • 4 0
Pwnallthethings @pwnallthethings.bsky.social

It is tho? The output of the LLM is attributable to large numbers of human decisions made by the company (what to train on, how to process that information, what LLM algorithms to use, etc). Its output is a product of affirmative human decisions, not random

aug 28, 2025, 5:50 pm • 15 0
Michael Anderson @michaelander45.bsky.social

Yes it's not random, BUT the explicit output of any input prompt cannot be attributed directly to individuals or groups given the probabilistic nature of the operation.

aug 28, 2025, 5:53 pm • 3 0
Pwnallthethings @pwnallthethings.bsky.social

Sure, but I'm not sure how you distinguish that from, say, a search engine, where the output is also a function of the collected data set and the user's input terms

aug 28, 2025, 5:54 pm • 11 0
Snarky Robot @snarkyrobot.bsky.social

The closest comparison is to a fairly well-known case: the macaque selfie copyright lawsuit. The US Copyright Office's report on the incident is over 1,200 pages. It's probably not worth reading it all, but the key takeaway is this: neither the monkey nor the photographer has rights to the picture.

aug 28, 2025, 6:02 pm • 1 0
Snarky Robot @snarkyrobot.bsky.social

Self-correction: the compendium that includes the discussion of the case is 1,200 pages, but the full compendium isn’t about the case alone.

aug 28, 2025, 6:06 pm • 1 0
Snarky Robot @snarkyrobot.bsky.social

The macaque couldn’t have copyright, because it’s not a person. Fairly straightforward legal analysis. The photographer alone did not have sufficient authorship. Setting up a remote device that COULD lead to a given picture is insufficient to generate a claim of copyright. The pic is public domain.

aug 28, 2025, 6:08 pm • 1 0
Snarky Robot @snarkyrobot.bsky.social

There is no human creation in an LLM output. Humans were involved in setting up a machine capable of producing near-infinite potential outputs. The same as the photographer. But he did not compose the shot, choose the angle, and so on. A non-human was the author of the actual output, not the human.

aug 28, 2025, 6:13 pm • 1 0
Snarky Robot @snarkyrobot.bsky.social

It doesn’t matter how many humans set up the machine. They can and should have any patents relevant to their setup. They can and should have trade secrets protection. They have protections for the code they themselves wrote. But they did not (and indeed CANNOT) author the output. The non-human did.

aug 28, 2025, 6:15 pm • 1 0
PeoriaBummer @peoriabummer.bsky.social

This could imply a 230 defense might work in this case. Do you think that will fly?

aug 28, 2025, 6:34 pm • 0 0
felinecannonball @felinecannon.bsky.social

It is probably best to think of a LLM like a profit-maximizing search engine or a social media engagement algorithm. It’s clear owners have multiple goals — engagement, profit, utility (or perception of) for certain tasks, pushing ideas of the owner, avoiding litigation over IP, defamation, misuse,…

aug 28, 2025, 7:16 pm • 0 0
Horrified, Lydia @lfillingham.bsky.social

The thing about Gen AI even at this point, is no one in the company can know or intend what it will say. That's fundamentally different from other company speech. I am not sure what difference that makes, but it's true. They can make decisions to make it say things, some decisions to make it not say

aug 28, 2025, 7:15 pm • 0 0
Pwnallthethings @pwnallthethings.bsky.social

Same basic issue as a search engine or recommendation algorithm tho. I just don't see LLMs as uniquely novel on this

aug 28, 2025, 7:17 pm • 4 0
Horrified, Lydia @lfillingham.bsky.social

Have Google search results (NOT google AI search results) ever been considered company speech? I think the answer is yes, bc Google has a lot of control over the results, they get a lot of money bc of that. And if Google search results point to a pro bulimia site, they know how to ban it.

aug 28, 2025, 7:59 pm • 1 0
Horrified, Lydia @lfillingham.bsky.social

But to the extent that they don't control it, what is Google speech? Everything they link to is someone else's speech.

aug 28, 2025, 7:59 pm • 0 0
Horrified, Lydia @lfillingham.bsky.social

But encouraging suicide? They obviously cannot control that, or they would be. It's terrible for the company, tho obviously not on the level that the family faces.

aug 28, 2025, 7:15 pm • 0 0
Michael Anderson @michaelander45.bsky.social

Yeah, search is probably the closest analog, and personally I don't think you could call the results the "speech" of the company, but YMMV

aug 28, 2025, 6:02 pm • 3 0
Sam Brunson @smbrnsn.bsky.social

So even if we're not there yet, let's do the law school hypothetical thing: imagine an AI that is actually and truly autonomous, that can create speech outside of its training. Does it get 1A protection? Or does that only apply to speech that can be tied to people?

aug 28, 2025, 6:05 pm • 0 0
Pwnallthethings @pwnallthethings.bsky.social

I think Congress could decide, rightly or wrongly, to create legal personhood for AIs, at which point, sure, yes, and until then no.

aug 28, 2025, 6:06 pm • 8 0
Chris Petersen @cpetersen-cs.bsky.social

Would that open them up to "AI abandonment" or even "AI-icide" charges? Capitalists gonna capital, and keeping every model around forever won't be in the cards...

aug 28, 2025, 7:05 pm • 0 0
Connor Lynch @connorlynch.bsky.social

If they’re truly autonomous, do they actually choose to sue or raise first amendment defenses on their own behalf? They might just choose not to.

aug 28, 2025, 6:19 pm • 4 0
Pwnallthethings @pwnallthethings.bsky.social

Who knows. Just that folks should be careful of blending the unknowable philosophical question of "is this a pseudo-person /deserving/ of person-like rights" (which is imo a fairly tiresome convo) and then following that through to believing it has legal person-like rights, which it currently does not

aug 28, 2025, 6:24 pm • 9 0
Phasmatic @phasmatic.bsky.social

We can't talk about rights without correctly assigning responsibility. I think from first principles our machines cannot accept responsibility, and people cannot use fancy machines and computer programs to *evade* responsibility. Some real person has to be answerable.

aug 28, 2025, 6:34 pm • 1 0
Connor Lynch @connorlynch.bsky.social

Correct, but both are separate questions from “is the output potentially protected speech” and I think the answer is unequivocally yes, just not protected speech *of the LLM* (as opposed to the humans that made and operate it)

aug 28, 2025, 6:30 pm • 11 0
Retronymous @retronymous.com

it's still commercial speech

aug 28, 2025, 6:06 pm • 1 0
Cathy Gellis @cathygellis.bsky.social

You absolutely can, and courts have done so. In the same way that you can ask a person for information and it is speech for them to answer.

aug 28, 2025, 6:25 pm • 0 0
bryson @brysonm.bsky.social

except the prompter

aug 28, 2025, 6:27 pm • 0 0