Now THAT's a silly analogy
Unlike a book, the output of an LLM contains some of the form of human communication but only accidentally any of its substance.
I think I mentioned in one iteration of this convo that I can certainly see good arguments for punishing or restricting LLM/code-generated outputs requiring a higher standard than rational basis, but I think we’ve lost the plot from both a technical & philosophical standpoint if we equate them to speech
I at least understand, given his background, how he got there.
Which part, the basketball, the law degree, or the data science? I just wholly reject the notion that the output of a model which produces text due to statistical processes belongs on the same continuum as human language. Of course I think it’s ridiculous that corporate comms should count, so…
I think not putting a bright line between expression intentionally produced by a human being and the “expressive actions” of some (to this point not-remotely-sentient) automated process will lead to some very dark places.
I think that Ari's argument is that not protecting the automated processes under 1A will likely lead to a world where the outputs of LLMs are regulated and directed by the government, which is an even darker place.
Saying they aren’t “speech” isn’t the same as saying they don’t require some degree of protection, so to the extent that’s his counter to my disagreement, it’s a false dichotomy. I think the scenario you suggest isn’t hard to comp to a viewpoint restriction.
And, like, LLMs SHOULD be regulated. I can’t imagine a perfect legal regime to apply to them off the top of my head, but I think calling them “speech” is way too permissive an approach.
For one thing, how would you go about proving the necessary knowledge and intent for various torts or penalties that could apply to certain speech when the words in question are produced by something that has neither knowledge of anything nor the ability to have any sort of intention?
Data-washing is already a pretty serious problem - it couldn’t have been discriminatory because our objective model made the decisions, don’t you see! - and going whole hog on speech protections would make the issue exponentially worse.
“Well, you trained it to be racist.” “Prove it.” And now we have a right without a remedy, which is no right at all.
I don't disagree with you at all - I don't know what the solution is! Regulation? Maybe. However, the folks at OpenAI know these outcomes are possible. Even absent any regulation, they can prevent this