Are you trying to say AI only hallucinates when people "use it wrong" or am I misunderstanding?
No. I'm trying to say that the likelihood of a hallucination greatly increases when a user doesn't know how to use the AI. Thinking like a software engineer has, in my experience, brought better results. That makes sense when you consider that software engineers are trained to word things precisely.
I'm curious what the source for that is.
Which part? (Note, specificity... LOL)
That the likelihood of hallucination goes up when the user doesn't know how to use AI. I thought hallucination was specifically not a prompting issue, otherwise it'd be easy to solve with user education.
It's mostly a user issue. Yes, AI can get some stuff wrong even with good prompts, but the likelihood is way lower. I've spent the last four years working with AI and learning how to prompt it well, and even today I get a wrong response once in a while. But using research models lowers even that rate.
Man, we're still going with "Trust me, bro" in response to a request for sources in this the year of our Lord 2025? DW about it, you've told me all I need to hear.
🤷‍♂️ Ok... Whatever.