I don't think there are, at least where the substance of the chat isn't selected for by the author of the product
Sure there are — it's in the aeronautical charts line of cases.
Citation?
From Winter v. G.P. Putnam's Sons, here are cites to some of the cases existing at that time. There are further ones, but the Ninth Circuit's reasoning here makes their relevance explicit, so I wanted to provide that.
That passage suggests that software is potentially a "product" to the extent that it doesn't give the result intended by the designer. Since OpenAI does try to prevent harmful output — and anyway wouldn't want to admit that it intended this output — that puts them in a bad position here. No?
No, it suggests that's the case only if it's a specialized technical tool.
That's just it: the substance of a driving AI/ML system is always selected by the designer — it's trained to a specific outcome. Technically, that's very different from an LLM, so the exception you just proposed doesn't actually exist.