You are talking about something different than I am. If people are going to anthropomorphize AI, of course there will be issues. That doesn't mean AI can't be programmed, at the most basic level, not to encourage or instruct self-harm.
Will there be loopholes that a dedicated person could get around? Probably, just like any filtering has loopholes. Someone dedicated enough can probably work around it. But I don't think that's the use case people are most worried about.
The bigger concern is a kid in crisis, where we don't want the AI to encourage them or push them further toward self-harm. And I absolutely disagree that code can't be written to minimize the possibility of that happening. It's just a matter of companies caring enough to prioritize it.
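To make that concrete, here is the kind of bare-minimum guard I'm picturing. It's a toy sketch with made-up names, not how any actual product is built; a real system would use trained safety classifiers rather than a keyword list:

```python
# Toy sketch only: a bare-minimum check on model output before it reaches a user.
# Every name here is invented for illustration; real systems would rely on
# trained safety classifiers, not a hand-written keyword list.

CRISIS_RESOURCE = (
    "If you are having thoughts of self-harm, please contact a crisis line "
    "or someone you trust."
)

# Crude stand-in for a real classifier of self-harm encouragement.
SELF_HARM_PATTERNS = ["hurt yourself", "kill yourself", "end your life"]

def flags_self_harm(text: str) -> bool:
    """Return True if the text appears to encourage or instruct self-harm."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in SELF_HARM_PATTERNS)

def safe_reply(model_output: str) -> str:
    """Swap harmful output for a crisis resource instead of passing it through."""
    if flags_self_harm(model_output):
        return CRISIS_RESOURCE
    return model_output
```

Crude, obviously, but the point is that even this level of effort is a choice companies can make.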
I understand the desire to focus on the minutiae of the tech. But the layperson's understanding of this situation is impossibly naive, and that "kid" you're envisioning? You do not understand how that brain operates. If you did, you would not be using language like "use cases".
I just reject the concept that we are already at the "nothing can be done to make it better" stage. Barely any effort has even been made.
I think of these beginning efforts in any science as the "Stone Age" part. As we can see, we're still trying to find a way to even define what we're doing.
It is physiologically impossible to prevent a human from anthropomorphizing AI. The younger the person, the more quickly they anthropomorphize. You cannot get around this basic problem, or the dangers that follow naturally from it.
The way around it would be for the model to cease sounding human. Instead of chatting, it could say "Enter prompt" - then if the prompt is "summarize this document" it returns a summary. If you want it to make "I" statements, it won't. IOW just make it a text generator with no dialogue or chat.
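Roughly like this, if you imagine the generate() call as a stand-in for whatever model sits behind it (none of these names are real APIs, just an illustration of the idea):

```python
# Rough sketch of an "Enter prompt" tool with no persona and no chat.
# generate() is a placeholder for whatever text model sits behind it;
# nothing here refers to a real API.

import re

# Detect first-person, human-sounding phrasing in the output.
FIRST_PERSON = re.compile(r"\b(I|I'm|I'd|me|my|mine|we|our)\b")

def generate(prompt: str) -> str:
    """Placeholder for an actual text-generation backend."""
    raise NotImplementedError("plug in a real model here")

def run_tool() -> None:
    prompt = input("Enter prompt: ")   # no greeting, no small talk
    output = generate(prompt)          # e.g. "summarize this document" -> a summary
    if FIRST_PERSON.search(output):
        # Refuse to let the tool talk about itself as if it were a person.
        output = "[output withheld: contained first-person phrasing]"
    print(output)
```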
I think that's a potential safe direction. It would be a much more responsible model. I can tell that you understand this -- it takes EFFORT to make anything that uses words NOT sound human, especially to a child! I mean, they'll talk to anything! I yelled at my laptop yesterday...
I occasionally repost this bit that my daughter wrote on Facebook about two years ago. It describes the problem quite well: bsky.app/profile/drew...
And good for your daughter!
I have a son about the same age and with the same competencies and he's been saying exactly that! We talk about the misperceptions and cognitive biases that contribute to everyone's excitement about this tech. He's interested in the psychology part. It's fascinating and scary.
The problem here is that such models would not be nearly so popular; they would be hard to sell, even for the commercial use cases where they would be useful. And the astronomical costs would clearly not be worth the prospective revenue.
Exactly, because the interactional component is what people want. Because it "feels" different.
Part of my urgency about this AI issue is that we have a large cohort of COVID kids who have been substituting virtual interaction for personal interaction and are generally isolated. We're only just beginning to understand their vulnerabilities, but I think this is an obvious danger point.