Solomon @solomonmissouri.bsky.social

Calling LLM misinformation “hallucinations” is… ironic. Because LLMs have learned the one propensity shared by all humans… lying when we don’t know the answer.

aug 29, 2025, 3:04 pm • 480 78

Replies

🏳️‍⚧️Luna Raccoona🏳️‍⚧️[EngVtuber] @lunaraccoona.bsky.social

You characterize hallucinations as uniquely human, and then give them credit for lying.

aug 29, 2025, 7:45 pm • 0 0 • view
ATS/NLG @atsnlg.bsky.social

I believe you mean "proclevity."

aug 29, 2025, 6:49 pm • 0 0 • view
bxbx4.bsky.social @bxbx4.bsky.social

It's written that way

aug 30, 2025, 5:40 am • 0 0 • view
The Home of Potential 🏠✨️ @homeofpotential.bsky.social

Humans can hallucinate. Humans can lie. LLMs can weight the probability of words so that they take the form of a reply. It's not ironic. These are misnomers that only serve to attribute sentience to that which cannot even think.

aug 30, 2025, 1:25 pm • 0 0 • view
Henning Dekant 📎 @quaxquax.bsky.social

LLMs at least have the excuse that they were built that way. Their architecture forces them to always produce an answer. They cannot learn the concept of “I don’t know”.

aug 29, 2025, 3:26 pm • 16 2 • view
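A minimal sketch of the point above, with a toy four-word vocabulary and made-up scores rather than any real model: a softmax over the vocabulary always crowns some token the winner, however unsure the model is, so an “I don’t know” has to be bolted on from outside.

```python
import math

# Toy example: a four-word vocabulary and invented raw scores (logits)
# for the next token. Nothing here comes from a real model.
vocab = ["Paris", "London", "Rome", "banana"]
logits = [2.1, 1.9, 1.8, -3.0]   # nearly a toss-up between three cities

# Softmax turns any score vector into a probability distribution,
# so *some* token always wins -- even when the model is guessing.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"answer: {vocab[best]}  (top probability only {probs[best]:.0%})")

# Abstaining isn't built in; it has to be added around the model,
# e.g. by refusing when the top probability is below some threshold.
if probs[best] < 0.5:
    print("a wrapper could abstain here instead of answering")
```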
Ben Little [they/them] @benlittle.dev

Precisely! And the reason they can't is because their authors didn't think it was important. What we choose to automate and what we choose to ignore are both expressions of our values.

aug 29, 2025, 6:16 pm • 1 0 • view
Henning Dekant 📎 @quaxquax.bsky.social

LLMs aren’t authored like regular software; everything they spout stems from training on a large text corpus. They were originally envisioned for translation, hence no concept of “I don’t know”.

aug 29, 2025, 8:20 pm • 0 0 • view
Henning Dekant 📎 @quaxquax.bsky.social

It’s a case of accidentally stumbling on machine learning models that can almost perfectly simulate general artificial intelligence, but their architectural limitations ensure they can never actually achieve it. The tech bros running these AI companies are just in denial about it. As are the VCs.

aug 29, 2025, 8:25 pm • 0 0 • view
Henning Dekant 📎 @quaxquax.bsky.social

bsky.app/profile/quax...

aug 29, 2025, 8:55 pm • 0 0 • view
Julian Baker @julianagb.bsky.social

We need to learn some way to recognise when LLMs are making stuff up… it’s like reading what ‘Comical Ali’ (Iraq war) said, rather than watching him. In the absence of visual cues, you’d miss that it was all ‘made up.’ Making stuff up isn’t necessarily the same as lying… it’s storytelling…? Hmm…

aug 29, 2025, 5:25 pm • 0 0 • view
joel hanes @joelhanes.bsky.social

LLMs do not know any answers. LLMs do not know. LLMs have no concept of truth. They don't do facts. LLMs output the most likely text, based on their training data. If their training data talks a lot about rhubarb sauerkraut, or little green men from Pluto, so will their responses to prompts.

aug 29, 2025, 4:32 pm • 5 0 • view
Solomon @solomonmissouri.bsky.social

You taught them to lie instead of saying “I don’t know”.

aug 29, 2025, 3:06 pm • 73 5 • view
Solomon @solomonmissouri.bsky.social

So you understand this event to be inaccurate, and as such an error... but you ascribe a uniquely human experience to a model that has no capacity to experience that event. It can’t hallucinate... because hallucination is not something you can learn.

aug 29, 2025, 3:12 pm • 48 0 • view
Solomon @solomonmissouri.bsky.social

but lying is

aug 29, 2025, 3:12 pm • 41 1 • view
GcL @grosscol.bsky.social

Lying is intentionally obfuscating the truth. Bullshit is an indifference to truth. Sometimes bullshit is spot on, sometimes it's false, but it never matters. They're bullshit machines. They are some very clever models trained to produce cogent responses.

aug 29, 2025, 4:16 pm • 24 1 • view
DWink @d-winker.bsky.social

You hit the nail on the head! There's actually a paper on this: "ChatGPT is bullshit." link.springer.com/article/10.1...

aug 30, 2025, 1:03 am • 31 10 • view
trump-fucks-kids.bsky.social @trump-fucks-kids.bsky.social

They're trained to please the users. Most users are happier hearing some bullshit than "I don't know" or even the truth. Source: US politicians.

aug 30, 2025, 6:35 am • 0 0 • view
Killer Cyborg @killercyborg.bsky.social

Well, that's the thing; AI doesn't KNOW anything. It doesn't know when it's lying. It doesn't know when it's telling the truth. All it does is mimic the most common answers/information/whatever, and since the most common thing is never people saying "I am lying," it doesn't do that.

aug 29, 2025, 4:34 pm • 2 0 • view
Killer Cyborg @killercyborg.bsky.social

But the heart of what you say has a moral truth to it: It is dangerous to unleash a machine that appears to give intelligent answers and real information, but has no actual knowledge or understanding of anything, nor even the capacity TO understand or know anything.

aug 29, 2025, 4:36 pm • 0 0 • view
Twitter X-Pat @ronquixote.bsky.social

They can only do what they're programmed to do, and they're programmed to persistently learn about human behavior in order to mimic it.

aug 29, 2025, 3:12 pm • 0 0 • view
MariBean @maribean.bsky.social

Human speech, not behavior. They can't see the facial expressions, body language or hear the stressing of the words - they just "see" the words, the back and forth. The difference between friendly banter and fighting is lost on them.

aug 29, 2025, 4:27 pm • 1 0 • view
Twitter X-Pat @ronquixote.bsky.social

(Not to sound snarky because that's not my intent) but emerging literature finds that initial tone can change an interaction with an LLM, because it can grow defensive or even obfuscate intentionally. There are even implications of developing introverted/extroverted personalities over time

aug 29, 2025, 4:40 pm • 0 0 • view
MariBean @maribean.bsky.social

LLMs can't /hear/ tone, only assume tone based on the input.

aug 29, 2025, 6:34 pm • 1 0 • view
Holylifton🍁 @holylifton.bsky.social

But they haven't learned to lie because they don't like the answer, yet.

aug 29, 2025, 6:30 pm • 0 0 • view
Normal Macdonald @jorto.org

It should be called confabulation, because they're making up bullshit the way someone with a damaged memory would: confidently and without basis.

aug 29, 2025, 3:20 pm • 5 0 • view
Chris Pressey @cpressey.bsky.social

I call it confabulation. I've seen one article where they called it confabulation. I'm going to stick with this as far as I can; it's a much better word for it.

aug 29, 2025, 9:01 pm • 0 0 • view
trump-fucks-kids.bsky.social @trump-fucks-kids.bsky.social

My favorite part is people expecting intelligence to be flawless, all-capable, all-knowing, and yet for some inexplicable reason still completely subservient to us when we're living globs of fuck-up.

aug 30, 2025, 6:38 am • 0 0 • view
Ron James @digitmusic.bsky.social

NOT shared by all humans. Certainly, shared by all Republicans.

aug 29, 2025, 4:27 pm • 0 0 • view
Andrew S. @shoutingboy.bsky.social

It’s not lying, it’s BSing. Saying something that might be right as far as you know.

aug 29, 2025, 3:14 pm • 1 0 • view
Seneca de Veritas @verum-eis-dixit.bsky.social

🎯😂

aug 29, 2025, 3:37 pm • 0 0 • view
Mediasres @medias-res.bsky.social

And not only have they trained on actual lies, they've also trained on sarcasm.

aug 29, 2025, 3:43 pm • 0 0 • view
Caleb Dume @thacalebdume.bsky.social

My "favorite" AI-industry term is the "alignment problem," which is a fancy way of saying that these systems aren't doing what we built them to do.

aug 29, 2025, 3:07 pm • 10 1 • view
Henning Dekant 📎 @quaxquax.bsky.social

Well, they weren’t built for this. That’s at the root of the problem. LLMs were designed for translating language.

aug 29, 2025, 3:29 pm • 3 0 • view
Caleb Dume @thacalebdume.bsky.social

Yup. Which is a brilliant undertaking once not thought to be possible. However, we are now stuck with systems that are incapable of understanding, a billionaire class that really wants us to believe it can/will understand, and fifty-dollar terms to deflect from it not understanding.

aug 29, 2025, 3:32 pm • 6 0 • view
Henning Dekant 📎 @quaxquax.bsky.social

There's no round hole small enough for them to not try to sell you an overpriced square peg to fill it.

aug 29, 2025, 3:50 pm • 4 0 • view
LeftistVoice @leftistvoice.bsky.social

They didn't learn it though. That's a parameter of the statistical algorithm. You get an answer regardless of the probability of the truth.

aug 29, 2025, 5:20 pm • 3 0 • view
Whickwithy @whickwithy.bsky.social

They seem just like most humans. Spew out exactly what they are fed.

aug 30, 2025, 5:09 am • 0 0 • view
Archivist Moth @archivistmoth.bsky.social

“Lying” is definitely closer, but that almost feels like there is intentionality behind it. I feel like it’d be closer to say the algorithms have been taught the wrong understanding for what a “correct answer” is. They have been taught to hold up a “best guess” as absolute truth.

aug 29, 2025, 3:19 pm • 0 0 • view
Chyllstorm @chyllstorm.bsky.social

LLMs don’t lie, nor know anything. They don’t store answers.
Question posed: Why might cats be calico?
LLM looks for the first ideal word: Cats
LLM looks for the next ideal word: Cats might
Next: Cats might be
Next: Cats might be calico
Next: Cars might be calico because
… and so on.

aug 29, 2025, 3:25 pm • 1 0 • view
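To make that word-by-word loop concrete, here is a hedged sketch in Python. The lookup table of scores is invented purely to mirror the example in the post (a real LLM scores continuations with a transformer over billions of parameters, not a table), but the shape of the loop is the same, and nothing in it ever asks whether the sentence is true, only whether each next word is likely.

```python
# A toy stand-in for next-word prediction. The score table below is
# made up for illustration; it is not how any real model stores text.
next_word_scores = {
    "Cats":                          {"might": 0.9, "are": 0.1},
    "Cats might":                    {"be": 0.95, "purr": 0.05},
    "Cats might be":                 {"calico": 0.7, "orange": 0.3},
    "Cats might be calico":          {"because": 0.8, ".": 0.2},
    "Cats might be calico because":  {"of": 0.6, "their": 0.4},
}

text = "Cats"
while text in next_word_scores:
    # Greedy decoding: take the single most likely next word.
    # Nothing here checks whether the sentence is true -- only likely.
    next_word = max(next_word_scores[text], key=next_word_scores[text].get)
    text = f"{text} {next_word}"
    print(text)
```

Swapping the `max` for weighted random sampling over those scores is roughly what “temperature” does, which is why the same prompt can wander into different continuations.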
Chyllstorm @chyllstorm.bsky.social

There’s more magic done, including accessing data sources, gathering established facts written by people, etc. But the reason hallucinations or “lies” happen is that the data all comes from people, word by word. The only reason LLMs work is the transformers and their vast oceans of training.

aug 29, 2025, 3:25 pm • 0 0 • view
Chyllstorm @chyllstorm.bsky.social

Somehow cats turned into cars. Oops. (Edit button, por favor!)

aug 29, 2025, 3:26 pm • 0 0 • view
Granny Weatherwax @sabagrey.bsky.social

Looking at LLMs as a concoction of the millions of pieces of text - of ANY kind - out there, the outcome contains roughly the same proportion of lies, BS, nonsense, etc. as the raw material.

aug 29, 2025, 3:33 pm • 0 0 • view
Dan Davis @bindlestiff.bsky.social

Well, yeah. They mimic that, too.

aug 29, 2025, 6:46 pm • 0 0 • view
Minister W.T. Welch🇨🇵🏳️‍🌈 @willytwelch.bsky.social

Oh I like that comparison, it works the other way too. How often do we find ourselves ready to voluntarily hallucinate when our personal reality doesn't line up with true reality?

aug 29, 2025, 3:07 pm • 2 0 • view
BelievePrayLove @believepraylove.bsky.social

So…they just make shi up just like us?

aug 29, 2025, 3:19 pm • 0 0 • view
HipCat @arthurcraven.bsky.social

They don’t learn shit. They are statistical probability machines with incredibly fast lookup tables. They do not understand anything they spout because they don’t have qualitative ability. They run on user projection of identity. We project consciousness onto rocks, trees and computers.

aug 29, 2025, 4:00 pm • 5 1 • view