Makes all what possible?
What makes these things work the way they do is the random component. All the outputs have a degree of “hallucination”, so the possibility of getting bullshit is always there. I could have run the prompt 1000 times and maybe I would have gotten the right answer.
Sure it is, but if it were correct a large enough fraction of the time, why would it matter? Humans can drop the ball too, and nothing in this world is perfectly reliable.
Well that’s the complaint. In my experience it makes me lose more time because of the effect it has on me: I trust it, because everyone says I should! Just to yet again discover it’s bullshitting me. So maybe I should do my job myself instead, as I am the one being paid for it.
If you can't find a way that works for you, I absolutely agree it doesn't make sense to keep trying. Just sharing that in my personal experience it is in principle possible to get LLMs to be useful for these kinds of things.
Yes, and that’s reasonable. My negative bias comes from the implicit pressure I feel in the air to use them regardless.