That's called "guardrails". And all the models you can use without your own hardware are using lots and lots of guardrails. They do this to avoid getting sued by the government. Political narratives are the biggest reason AI can get so much wrong.
You're really not explaining this in a way that makes it any better, but since you're not defending AI, I guess that doesn't matter (not being sarcastic)
What I'm pointing out is that it's not AI that is the problem. It's humans either corrupting the info in the models or humans not using AI correctly. Nobody is complaining about hammers, but if I give a classroom full of 1st graders hammers, we could all agree that's going to end badly, right?
That's where we disagree.
Not that we shouldn't give hammers to kids lol, but that humans are the problem, not AI.
What is dangerous about hammers is giving them to kids without educating them about them and how to use them. There's really no difference between that and AI. If people are just given a tool they don't know how to use, of course the results aren't going to be great. See the point?
Sometimes the tool is a flamethrower, destructive and bad for the environment.
Yes, as most tools tend to be.
Guns are also tools. This is why we need gun control in the US & why other countries have already figured out it's not worth it. You can use it to feed yourself, but it's being loaded with hollow point bullets & it's unregulated. As long as we're using metaphors. Tools are what they are used to do.
Gun control opinions aside, yes, this is the point. Not everyone needs every tool. But if you're going to use a tool, you should know how to use it or go in knowing you're still learning it and not blame the tool if you can't use it right.
So you just don't give a shit about material reality in this discussion? People by and large will not learn how to use this tech "the right way" and people are already causing harm to the world and themselves by using it wrong. Your argument kinda turns flaccid at that point.
Your argument is putting the entire burden of doing the right thing on the consumer. That's like saying that if manufacturers didn't make EVs, it would be on the consumer to build EVs themselves to avoid burning gas. Your entire argument is moot.
Are you trying to say AI only hallucinates when people "use it wrong" or am I misunderstanding?
No. I'm trying to say that the likelihood of a hallucination greatly increases when a user doesn't know how to use the AI. Thinking like a software engineer has, in my experience, brought better results. This makes sense when you consider that software engineers are very specific in wording things.
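To make "specific wording" a bit more concrete, here's a made-up illustration (the topic and both prompts are invented for contrast, not something I'm quoting from anywhere): the second version narrows the scope, names the exact things to cover, and tells the model to flag uncertainty instead of guessing.

```python
# Hypothetical illustration only: two ways of asking about the same topic.
# Neither prompt is a real example from this thread; the wording is invented for contrast.

vague_prompt = "Explain how HTTP caching works."

# Scoped, concrete, and explicit about how to handle uncertainty.
specific_prompt = (
    "Explain HTTP response caching for a web developer: "
    "what the Cache-Control max-age directive means, "
    "how ETag revalidation with If-None-Match works, "
    "and, if you are unsure about any detail, say so rather than guessing."
)

print("Vague:   ", vague_prompt)
print("Specific:", specific_prompt)
```

The point isn't the exact phrasing; it's that the specific version leaves the model far less room to fill gaps with plausible-sounding guesses.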
I'm curious what the source for that is.
Which part? (Note: specificity... LOL)
That the likelihood of hallucination goes up when the user doesn't know how to use AI. I thought hallucination was specifically not a prompting issue, otherwise it'd be easy to solve with user education.
It's mostly a user issue. Yes, AI can get some stuff wrong even with good prompts, but the likelihood is way lower. I've spent the last 4 years working with AI and learning how to work with it, and even today I get a wrong response once in a while. But using research models lowers even that rate.
Man, we're still going with "Trust me, bro" in response to a request for sources in this the year of our Lord 2025? DW about it, you've told me all I need to hear.
🤷‍♂️ Ok... Whatever.
So you were just lying earlier when you said you weren't defending AI. Thanks for wasting our time here.
It's still no more a defense of AI than it is a defense of hammers. You're conflating "AI hallucinates because users don't know what they're doing" with "AI doesn't hallucinate". My point isn't a defense; it's keeping the record straight.