Transmental Me @transmentalme.bsky.social

So it lies only about twice as often as humans do. Research has shown that about 7% of all human communication is a lie. Or do you believe humans never lie? Even if they didn't lie at all, how often do you think they're confidently wrong? Or cause accidents due to mistakes?

aug 6, 2025, 7:31 am

Replies

Josh @jman4747.bsky.social

People lying 7% in general doesn’t mean that research assistants lie 7% of the time! People don’t lie equally across all types of communications.

aug 6, 2025, 12:42 pm
Transmental Me @transmentalme.bsky.social

7% of total communication means in general yes.

aug 6, 2025, 4:09 pm
mjfgates @mjfgates.bsky.social

I think that if I fucked up payroll 15% of the time, I would be out on my ass *immediately.* Also true of pretty much any other business function.

aug 6, 2025, 9:45 am
Transmental Me @transmentalme.bsky.social

I guess that's why so many people get fired for being bad at their job then.

aug 6, 2025, 4:08 pm
badly-drawn bee 🐝 @soapachu.bsky.social

"only"

aug 6, 2025, 12:57 pm
Transmental Me @transmentalme.bsky.social

So then do you feel people already lie a lot? Because yes, it's only twice the human rate: about 7% of all human communication is a lie.

aug 6, 2025, 4:55 pm
badly-drawn bee 🐝 @soapachu.bsky.social

Even assuming that 7% was consistent across all forms of human communication (it's not), embracing technology which is twice as mendacious seems really fucking stupid.

aug 6, 2025, 5:07 pm
Transmental Me @transmentalme.bsky.social

7% of total communication is 7% of total communication, that is in fact the average across all forms of communication (it is). If we break down specific areas of communication that number can be significantly higher or lower.

aug 6, 2025, 5:11 pm
Transmental Me @transmentalme.bsky.social

As for embracing the technology, you're ignoring that each major model has improved on this metric by half. If that holds for the next generation of models coming this month, that would make them either as likely or less likely to lie than a human. See how the argument falls apart?

aug 6, 2025, 5:49 pm
Marsanges @marsanges.bsky.social

Then why do we need to compound human fallibility with AI fallibility? It would be nice if it were better than us or corrected our weaknesses. Instead it just plays into, and greatly amplifies, our weaknesses (e.g. psychological ones).

aug 6, 2025, 11:55 am
Transmental Me @transmentalme.bsky.social

It generally is better than most people when given the right context.

aug 6, 2025, 4:10 pm
Antiqueight @antiqueight.bsky.social

Only twice on its best day is still a bit much!

aug 6, 2025, 8:11 am
Transmental Me @transmentalme.bsky.social

No, the chart shows the newer models on the left and the older models on the right. Imagine that, the person who posted the chart was being intentionally misleading. Also known as lying.

aug 6, 2025, 4:11 pm
Antiqueight @antiqueight.bsky.social

I didn't reference any numbers, so take your disagreement to whichever reference they gave and take it up with them directly. You didn't cite your 7%. I was merely suggesting that yes, twice that is bad.

aug 6, 2025, 4:40 pm
Transmental Me @transmentalme.bsky.social

No, you said "on its best day", but these are all different models. You could have just Googled how often people lie and found it; I'm not trying to prove anything to you. The OP cited their statement, and it was still intentionally misleading because none of you read and understood the cited data.

aug 6, 2025, 4:52 pm
Antiqueight @antiqueight.bsky.social

Considering various models have fabricated data every time I use them, I have experience of it being fairly crap, but ok. If its worst day is 15, it's still fairly terrible. And people transferring data don't generally lie; they may make mistakes. It regularly lies to me.

aug 6, 2025, 4:57 pm
Antiqueight @antiqueight.bsky.social

Technically it's every session when I sit down to use them. Eventually I either give up or get what I needed. Earlier models were actually better than now.

aug 6, 2025, 4:59 pm
Transmental Me @transmentalme.bsky.social

I completely disagree. Early models, as shown in the chart, were statistically and quantifiably much worse. I'm glad you're experimenting with AI, but you clearly want to blame it for its failure rather than take equal ownership of your partnership in that failure. AI isn't always right.

aug 6, 2025, 5:06 pm
Antiqueight @antiqueight.bsky.social

I have worked with it a fair bit and was enthusiastic, as it would simplify things extensively. But I've found LLMs just aren't trustworthy for handling data. AIs can be great; I haven't been impressed with LLMs for data.

aug 6, 2025, 5:09 pm
Transmental Me @transmentalme.bsky.social

Considering various models have allowed me to build entire applications and research topics with cited materials I can verify, I have experience of it being fairly great but ok. If its worst day is 15, it's not far off from a person and you must think people are kinda terrible instead of fairly so.

aug 6, 2025, 5:03 pm
Antiqueight @antiqueight.bsky.social

15 is a lot worse than a person if a person is 7.

aug 6, 2025, 5:10 pm
Transmental Me @transmentalme.bsky.social

It is only a lot worse depending on the tolerance to error. Twice as bad doesn't mean bad. If the tolerance for error is 30% it is still far beyond acceptable. If the tolerance of poop shoveling is 30% and AI had an error rate of 15% and can self correct, why would you force a human to shovel shit?

aug 6, 2025, 5:14 pm
Transmental Me @transmentalme.bsky.social

Another possibility is that you're just not good at using AI, it isn't generally intelligent and how you prompt it and what context it is given matters a lot. The same goes for a human, what you ask, how you ask and what data they have greatly matters. People will pretend to know things they don't.

aug 6, 2025, 5:03 pm
Antiqueight @antiqueight.bsky.social

That doesn't explain why it used to be better. I've had AI experts try to create prompts to solve it once this started, and they failed. Or I can eventually get it to work on occasion, but the same data and same prompt on another day gives a different answer.

aug 6, 2025, 5:06 pm
Transmental Me @transmentalme.bsky.social

That boils down to you not understanding what you're using. It didn't use to be better; it quite literally used to be horrible. Newer models are significantly better in general. As for why it generated different results, LLMs are not deterministic in nature.

aug 6, 2025, 5:10 pm
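[Editor's note: the nondeterminism claim above can be illustrated with a toy sketch. LLMs typically choose each next token by sampling from a temperature-scaled softmax over the model's logits, so the same prompt can yield different outputs across runs. The function name and logit values below are made up for illustration; this is not any particular model's implementation.]

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample a token index from logits via temperature sampling.

    With temperature > 0 the choice is stochastic, which is why
    identical inputs can produce different outputs run to run.
    """
    rng = rng or random.Random()
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same "prompt" (same logits), two runs with different random states:
logits = [2.0, 1.5, 0.3]
run_a = [sample_next_token(logits, rng=random.Random(1)) for _ in range(5)]
run_b = [sample_next_token(logits, rng=random.Random(2)) for _ in range(5)]
```

Lowering the temperature sharpens the distribution toward the top logit, which is why low-temperature settings behave more repeatably.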
Antiqueight @antiqueight.bsky.social

Literally used the same LLMs, different versions. Used to make life easier; now it doesn't. Out walking the dog, so not going to dig into the data.

aug 6, 2025, 5:13 pm
Transmental Me @transmentalme.bsky.social

If you'd like to discuss that, we should move somewhere without a character limit as this isn't conducive to a real discussion.

aug 6, 2025, 5:10 pm
r @recombobulating.bsky.social

As if you have to see something as perfect, pure, and deserving of absolute authority, and AI is better suited to that role than people could be.

aug 6, 2025, 11:23 am
who cares @maaaatttt.bsky.social

I think people lie predictably. They do so out of fear, or anger, or compassion. AI follows no such logic.

aug 6, 2025, 12:13 pm
Transmental Me @transmentalme.bsky.social

They don't always, but in general they do believe they're white lies.

aug 6, 2025, 4:16 pm