Calling LLM misinformation “hallucinations” is… ironic, because LLMs have learned the one propensity shared by all humans… lying when we don’t know the answer
You characterize hallucinations as uniquely human, and then give them credit for lying
I believe you mean "proclivity."
It's written that way
Humans can hallucinate. Humans can lie. LLMs can weight the probability of words so that they take the form of a reply. It's not ironic. These are misnomers that only serve to attribute sentience to that which cannot even think.
LLMs at least have the excuse that they were built that way. Their architecture forces them to always produce an answer. They cannot learn the concept of “I don’t know”.
Precisely! And the reason they can't is because their authors didn't think it was important. What we choose to automate and what we choose to ignore are both expressions of our values.
LLMs aren’t authored like regular software; everything they spout stems from training on a large text corpus. They were originally envisioned for translation, hence no concept of “I don’t know”.
It’s a case of accidentally stumbling on machine learning models that can almost perfectly simulate artificial general intelligence, but their architectural limitations ensure they can never actually achieve it. The tech bros running these AI companies are just in denial about it. As are the VCs.
bsky.app/profile/quax...
We need to learn some way to recognise when LLMs are making stuff up… it’s like reading what ‘Comical Ali’ (Iraq war) said, rather than watching him. Without the visual cues, you’d miss that it was all ‘made up.’ Making stuff up isn’t necessarily the same as lying… it’s storytelling..? Hmm…
LLMs do not know any answers. LLMs do not know. LLMs have no concept of truth. They don't do facts. LLMs output the most likely text, based on their training data. If their training data talks a lot about rhubarb sauerkraut, or little green men from Pluto, so will their responses to prompts.
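To make that “most likely text” point concrete, here’s a toy sketch in Python (the strings and probabilities are invented for illustration, not taken from any real model): the output is simply whichever continuation scores highest under the training statistics, and nothing in that step checks for truth.

```python
# Toy sketch: invented continuation probabilities standing in for what a
# model absorbs from its training data. Selection is a pure argmax over
# likelihood; truth never enters the calculation.
next_text_probs = {
    "rhubarb sauerkraut is a beloved delicacy": 0.62,  # talked about a lot in the (toy) corpus
    "little green men live on Pluto": 0.35,
    "I don't know": 0.03,                              # rarely written, so rarely produced
}

most_likely = max(next_text_probs, key=next_text_probs.get)
print(most_likely)  # -> "rhubarb sauerkraut is a beloved delicacy"
```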
You taught them to lie instead of saying “i don’t know”
So you understand this event to be inaccurate, and therefore an error... but you ascribe a uniquely human experience to a model that has no capacity to experience that event. It can't hallucinate... because hallucination is not something you can learn
but lying is
Lying is intentionally obfuscating the truth. Bullshit is an indifference to truth. Sometimes bullshit is spot on, sometimes it's false, but it never matters. They're bullshit machines. They are some very clever models trained to produce cogent responses.
You hit the nail on the head! There's actually a paper on this: "ChatGPT is bullshit." link.springer.com/article/10.1...
They're trained to please the users. Most users are happier hearing some bullshit than "I don't know" or even the truth. Source: US politicians.
Well, that's the thing; AI doesn't KNOW anything. It doesn't know when it's lying. It doesn't know when it's telling the truth. All it does is mimic the most common answers/information/whatever, and since the most common thing is never people saying "I am lying," it doesn't do that.
But the heart of what you say has a moral truth to it: It is dangerous to unleash a machine that appears to give intelligent answers and real information, but has no actual knowledge or understanding of anything, nor even the capacity TO understand or know anything.
They can only do what they're programmed to do and they're programmed to persistently learn about human behavior in order to mimic it
Human speech, not behavior. They can't see the facial expressions, body language or hear the stressing of the words - they just "see" the words, the back and forth. The difference between friendly banter and fighting is lost on them.
(Not to sound snarky because that's not my intent) but emerging literature finds that initial tone can change an interaction with an LLM, because it can grow defensive or even obfuscate intentionally. There are even implications of developing introverted/extroverted personalities over time
LLMs can't /hear/ tone, only assume tone based on input.
But they haven't learned to lie because they don't like the answer, yet.
it should be called confabulation because they're making up bullshit in the way someone with a damaged memory would: confidently and without basis
I call it confabulation. I've seen one article where they called it confabulation. I'm going to stick with this as far as I can, it's a much better word for it.
My favorite part is people expecting intelligence to be flawless, all-capable, all-knowing, and yet for some inexplicable reason still completely subservient to us when we're living globs of fuck-up.
NOT shared by all humans. Certainly, shared by all Republicans.
It’s not lying, it’s BSing. Saying something that might be right as far as you know.
🎯😂
And not only have they trained on actual lies, they've also trained on sarcasm.
My "favorite" AI-industry term is the "alignment problem," which is a fancy way of saying that these systems aren't doing what we built them to do.
Well they weren’t built for this. That’s at the root of the problem. LLMs were designed for translating language.
Yup. Which is a brilliant undertaking once not thought to be possible. However, we are now stuck with systems that are incapable of understanding, a billionaire class that really wants us to believe it can/will understand, and fifty-dollar terms to deflect from it not understanding.
There's no round hole small enough for them to not try to sell you an overpriced square peg to fill it.
They didn't learn it though. That's a parameter of the statistical algorithm. You get an answer regardless of the probability of the truth.
They seem just like most humans. Spew out exactly what they are fed.
“Lying” is definitely closer, but that almost feels like there is intentionality behind it. I feel like it’d be closer to say the algorithms have been taught the wrong understanding for what a “correct answer” is. They have been taught to hold up a “best guess” as absolute truth.
LLMs don’t lie, nor know anything. They don’t store answers. Question posed: why might cats be calico? The LLM looks for the first ideal word: “Cats.” Then the next ideal word: “Cats might.” Next: “Cats might be.” Next: “Cats might be calico.” Next: “Cars might be calico because”… and so on.
There’s more magic done, including accessing data in sources, gathering established facts written by people, etc. But the reason hallucinations or “lies” happen is that the data all comes from people, word by word. The only reason LLMs work is the transformers and their vast oceans of training data.
Somehow cats turned into cars. Oops. (Edit button, por favor!)
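The word-by-word loop described above can be sketched in a few lines of Python (the continuation table is made up for illustration; a real model scores a huge vocabulary with a transformer, but the loop has the same shape):

```python
# Minimal sketch of the word-by-word generation loop. The lookup table is
# invented for illustration only; it stands in for "pick the next ideal word".
toy_next_word = {
    "Cats": "might",
    "Cats might": "be",
    "Cats might be": "calico",
    "Cats might be calico": "because",
}

def generate(prompt: str, steps: int = 4) -> str:
    text = prompt
    for _ in range(steps):
        next_word = toy_next_word.get(text)
        if next_word is None:          # no continuation in our toy table
            break
        text = f"{text} {next_word}"   # append and feed the whole text back in
    return text

print(generate("Cats"))  # -> "Cats might be calico because"
```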
Looking at LLMs as a concoction of millions of pieces of text - of ANY kind - out there, the output contains roughly the same proportion of lies, BS, nonsense, etc. as the raw material.
Well, yeah. They mimic that, too.
Oh I like that comparison, it works the other way too. How often do we find ourselves ready to voluntarily hallucinate when our personal reality doesn't line up with true reality?
So… they just make shit up, just like us?
They don’t learn shit. They are statistical probability machines with incredibly fast lookup tables. They do not understand anything they spout because they don’t have qualitative ability. They run on user projection of identity. We project consciousness onto rocks, trees and computers.