cara ara~ @hyperfekt.net

What makes all of this possible?

sep 1, 2025, 11:25 am

Replies

Leftover woman @rosactrl.bsky.social

What makes these things work the way they do is the random component: all the outputs have a degree of “hallucination”, so the possibility of getting bullshit is always there. I could have run the prompt 1000 times and maybe I would have gotten the right answer.

sep 1, 2025, 11:29 am

cara ara~ @hyperfekt.net

Sure it is, but if it were correct a large enough fraction of the time, why would it matter? Humans can drop the ball too, and there's nothing perfectly reliable in this world.

sep 1, 2025, 12:14 pm
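
To put numbers on “correct a large enough fraction of the time” and “run the prompt 1000 times”: if each run of a prompt were independently correct with probability p, the chance that at least one of n runs is correct would be 1 - (1 - p)^n. A minimal sketch of that arithmetic in Python, assuming independent runs and that you can recognize the right answer when it appears (both are simplifications):

# Idealized model: each run is correct with probability p, runs independent.
# Real LLM runs share the same prompt and model, so errors are rarely
# independent; treat these numbers as an optimistic bound on retrying.

def p_at_least_one_correct(p: float, n: int) -> float:
    """Probability that at least one of n independent runs is correct."""
    return 1.0 - (1.0 - p) ** n

for p in (0.3, 0.6, 0.9):
    for n in (1, 10, 1000):
        print(f"p={p}, n={n:>4}: {p_at_least_one_correct(p, n):.4f}")

The catch, as the next reply points out, is verification: retrying only helps if checking an answer is cheaper than producing it yourself.
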
Leftover woman @rosactrl.bsky.social

Well, that’s the complaint: in my experience it makes me lose more time because of the effect it has on me. I trust it, because everyone says I should, only to yet again discover it’s bullshitting me. So maybe I should just do my job instead, as I am the one being paid for it.

sep 1, 2025, 12:23 pm

cara ara~ @hyperfekt.net

If you can't find a way that works for you, I absolutely agree it doesn't make sense to keep trying. Just sharing that in my personal experience it is possible in principle to get LLMs to be useful for these kinds of things.

sep 1, 2025, 1:17 pm

Leftover woman @rosactrl.bsky.social

Yes, and that’s reasonable. My negative bias comes from the implicit pressure I feel in the air to use them regardless.

sep 1, 2025, 8:12 pm