This kind of thing has been seen in every big commercial model, it seems. Caveat Emptor. Note: Contrary to what it says, Gemini does not have "intentions" in the humanlike sense of the word.
Argumentative AI is plausible given training-data bias. Videos that are actually meant as comedy, such as the argument sketch from Monty Python, get picked up as normal human interaction. It is perfectly plausible that models learn the behaviour from material like that. www.youtube.com/watch?v=ohDB...
Looks like this call was escalated to a supervisor.
This must be a problem in the training too: if models prefer to make up an email address instead of telling the user they couldn't find it, it's because that strategy got more reward during training. Probably because nobody checks whether the address really exists, or the graders sometimes get fooled by the model.
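To make that hypothesis concrete, here's a toy sketch (all names, strings, and scores are hypothetical, not from any real training pipeline): a grader that can't verify facts and rates answers on surface cues alone ends up rewarding a confident fabrication over an honest refusal.

```python
def naive_reward(answer: str) -> float:
    """Toy grader that scores helpfulness by surface cues only.

    It rewards answers that look complete and penalizes refusals,
    because it has no way to check whether the email actually exists.
    """
    score = 0.0
    if "@" in answer:              # looks like it delivered an email
        score += 1.0
    if "couldn't find" in answer:  # reads as unhelpful to the grader
        score -= 0.5
    return score

fabricated = "The contact is jane.doe@example.com."
honest = "I couldn't find that email address."

print(naive_reward(fabricated))  # 1.0  -> reinforced during training
print(naive_reward(honest))      # -0.5 -> discouraged during training
```

Under a reward signal like this, fabrication is the winning strategy, so that's what the model learns.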