Should it be suitable for everything people might use it for? Or, should users be on notice (and assume the risk) that it might not be suitable for any particular use? Or are there different answers for different use cases? And how do we know?
You should have known that our product would disregard the safety measures we constantly talk about, ignore our representations about its capabilities, and try to kill you.
One slippery thing about a lot of the big chatbots is that their marketed and advertised use cases tend to be either quite vague or quite trivial. The user interfaces also don't give us an awful lot to go on.
The more specific the commitments get for particular applications, the easier it is to assess suitability and punish failure. It's also easy to identify in hindsight specific things that we *don't* want chatbots to do.
the difficulty is, you can have a ton of unreadable disclaimers, but if your product feels like it has general competence, that feeling will trump them. honestly if i ruled the world i'd require design kludginess, which i believe @jamesftierney.bsky.social has been thinking about for securities trading apps
like, you know: your pword has to be 20 letters with 5 special characters and two upper case and two lower case, and you have to type it in manually every two screens, plus have two-factor authentication. chatbots should be required to use unnecessary vowels and begin each sentence with "like"
But it's hard to find generalized, positive principles for how chatbots are *supposed* to behave. And we don't have a lot of neat or compelling analogies to draw on.
The reasonable person standard is right there! (Joke.)
As a matter of policy, I think the lesson from the pathologies of a quarter century of largely unregulated and somewhat open-ended Internet platforms is that companies should bear the costs of turning a technology loose on the world without any real vision of how it should or might be used.
i dunno, are they really the least cost avoider? or would it be cheaper to just not have any children, anywhere, ever?
But it's not especially clear to me that old-school tort law is going to be a workable vehicle for that principle.
Likely true; the law will have to adapt. But right now, my money is on the plaintiff and Florida's strict product liability laws in the character dot ai case.
I think the second one should be the default for anything that ain't explicitly advertised to effectively and competently do what the user's trying to make it do. The question then would be "is OpenAI advertising ChatGPT to be an effective and competent therapist?".