Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

No. I'm trying to say that the likelihood of a hallucination greatly increases when a user doesn't know how to use the AI. Thinking like a software engineer has, in my experience, brought better results. This makes sense when you consider that software engineers are very specific in wording things.

aug 6, 2025, 7:36 pm

Replies

Stone Cold Jane Austen 🍉 @cofactorstrudel.bsky.social

I'm curious what the source for that is.

aug 6, 2025, 11:10 pm
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

Which part? (Note, specificity... LOL)

aug 7, 2025, 12:46 am
Stone Cold Jane Austen 🍉 @cofactorstrudel.bsky.social

That the likelihood of hallucination goes up when the user doesn't know how to use AI. I thought hallucination was specifically not a prompting issue, otherwise it'd be easy to solve with user education.

aug 7, 2025, 2:21 am
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

It's mostly a user issue. Yes, AI can get some stuff wrong even with good prompts, but the likelihood is way lower. I've spent the last 4 years working with AI and learning how to use it well, and even today I get a wrong response once in a while. But using research models lowers even that rate.

aug 7, 2025, 3:41 am
Stone Cold Jane Austen 🍉 @cofactorstrudel.bsky.social

Man, we're still going with "Trust me, bro" in response to a request for sources in this the year of our Lord 2025? DW about it, you've told me all I need to hear.

aug 7, 2025, 3:50 am
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

🤷‍♂️ Ok... Whatever.

aug 7, 2025, 6:25 am