Blake E. Reid @chup.blakereid.org

Should it be suitable for everything people might use it for? Or, should users be on notice (and assume the risk) that it might not be suitable for any particular use? Or are there different answers for different use cases? And how do we know?

Aug 28, 2025, 10:08 PM

Replies

RI Smith @rismith.bsky.social

You should have known that our product would disregard the safety measures we constantly talk about, ignore our representations about its capabilities, and try to kill you.

Aug 28, 2025, 10:12 PM
Blake E. Reid @chup.blakereid.org

One slippery thing about a lot of the big chatbots is that their marketed and advertised use cases tend to be either quite vague or quite trivial. The user interfaces also don't give us an awful lot to go on.

Aug 28, 2025, 10:13 PM
Blake E. Reid @chup.blakereid.org

The more specific the commitments get for particular applications, the easier it is to assess suitability and punish failure. It's also easy to identify in hindsight specific things that we *don't* want chatbots to do.

Aug 28, 2025, 10:13 PM
Ann M. Lipton @annmlipton.bsky.social

The difficulty is, you can have a ton of unreadable disclaimers, but if your product feels like it has general competence, that will trump them. Honestly, if I ruled the world, I'd require design kludginess, which I believe @jamesftierney.bsky.social has been thinking about for securities trading apps.

Aug 28, 2025, 10:18 PM
Ann M. Lipton @annmlipton.bsky.social

Like, you know: your password has to be 20 letters with 5 special characters, two upper case, and two lower case, and you have to type it in manually every two screens, plus have two-factor authentication. Chatbots should be required to use unnecessary vowels and begin each sentence with "like."

Aug 28, 2025, 10:20 PM
Blake E. Reid @chup.blakereid.org

But it's hard to find generalized, positive principles for how chatbots are *supposed* to behave. And we don't have a lot of neat or compelling analogies to draw on.

Aug 28, 2025, 10:17 PM
Kendra Albert @kendraserra.bsky.social

The reasonable person standard is right there! (Joke.)

Aug 28, 2025, 10:21 PM
Blake E. Reid @chup.blakereid.org

As a matter of policy, I think the lesson from the pathologies of a quarter century of largely unregulated and somewhat open-ended Internet platforms is that companies should bear the costs of turning a technology loose on the world without any real vision of how it should or might be used.

Aug 28, 2025, 10:20 PM
Matthew Cortland (they) @matthewcort.land

I dunno, are they really the least-cost avoider? Or would it be cheaper to just not have any children, anywhere, ever?

Aug 28, 2025, 10:24 PM
Blake E. Reid @chup.blakereid.org

But it's not especially clear to me that old-school tort law is going to be a workable vehicle for that principle.

Aug 28, 2025, 10:23 PM
Dave Johnson @dwjohnson.bsky.social

Likely true; the law will have to adapt. But right now, my money is on the plaintiff and Florida's strict product liability laws in the Character.AI case.

Aug 28, 2025, 10:27 PM
RyNo @yellowapple.us

I think the second one should be the default for anything that ain't explicitly advertised to effectively and competently do what the user's trying to make it do. The question then would be "is OpenAI advertising ChatGPT to be an effective and competent therapist?".

Aug 28, 2025, 10:20 PM