If the ATM were wrong 15% of the time I wouldn't use it for banking transactions. But I'm not using AI for banking. If an AI could make lottery, stock market, or slot machine predictions with 85% accuracy would you use it?
the affirmation machine that can't even model the present is going to model the future LMAO bridge-sellers are having a field day with this stuff
But people aren’t using it to make predictions, they are using it to find, organize and present information, *which is the thing it is bad at*. And, also, knock-on effects.
The phrasing of your own response carries its refutation.
LLMs are only the tip of the iceberg. Not even the tip, just a little chunk off to one side. AI is not an app, it's a coding approach. I don't know why so many people use LLMs as the exemplar for AI, because they aren't. They just happen to be the one that's easily available and thus frequently used.
It is literally what is being sold as “AI”, the magic technology. If you’re arguing for machine learning, which is actually useful in some problem domains, don’t chime in when people are talking about LLMs, which everyone talking about “AI” right now is.
If they are talking about LLMs then they should be specific about it because LLMs in no way exemplify the whole field of AI. It makes no sense to "define" AI as LLMs, that's like making assertions about the universe by talking only about the moon.
That’s explicitly a problem with the people pushing LLMs, who literally advertise them as AI. If everyone else in the conversation knows what they are talking about except you, the convo isn’t the problem
You wanna defend the honor of machine learning against confusion with the really fancy Markov chains, take it up with the LLM boosters who are selling it as AI. Doing it here just makes it look like you’re trying to defend LLMs by conflating them with shit that actually does things
Saying that an imperfect tool can still be useful is not the same as defending LLMs. I'm trying to point out that AI is a broad field (much much more than LLMs), AI is nascent and still developing, and AI is not well understood by the general public.
At some point you should probably realize that "the broad field" is Not, and was Never, what was being criticized here. The longer you continue to fail to realize that, the less it helps your cause.
When people use the term "AI" they need to be aware that it is a broad field. It's not a machine, it's not a specific app, it's not specifically LLMs. If the subject of discussion is specifically LLMs then that needs to be clear too, otherwise misunderstandings are inevitable.
Dude, when you see someone complaining about the brain-melting qualities of useless tools that are horribly inaccurate, it is up to you to figure out that they are Not, and Never Were, talking about cancer-identifying pattern-seeking software. Otherwise, you keep hurting *your own stated purpose*
Having just done a quick skim of his reply feed, I have to say his actual purpose is trying to defuse criticism of the “AI” biz through topic conflation, “it’ll get better” arguments, etc.
It's not rocket science to figure that out, and wading into a complaint about LLMs and the marketing of the term AI to argue *some other things* might be useful is a really, really, really bad way to get *your own stated point* across
But you're not using AI for making lottery, stock market, or slot machine predictions.
The point I was making is that some tools need to be accurate to be useful (rulers, ATMs, clocks) but not all tools need to be 100% accurate to be useful. A tool that could predict the stock market or winning lottery numbers, or even some aspects of weather, with 85% accuracy would be useful.
AI is not one of those tools. In fact it is the opposite. If it is not accurate, much like a ruler, it is actively harmful.
Once again, that's a people problem. Unrealistic expectations and lack of education.
Everything is a people problem. But the people pushing AI are pushing false narratives about usefulness and accuracy. If I bought a ruler that turned out to only work 75% of the time and when I complained to the manufacturer they said "well, you had false expectations," would that be okay?
(especially when the manufacturer, and every manufacturer, was pushing me to use one of the new rulers arguing it would save me time?)
I haven't been hearing a lot of claims about accuracy. Who is making them? Can you give some examples (credible ones)?
I could ask ChatGPT...
So wait, now you're saying it isn't a problem that AI isn't accurate because no one is CLAIMING AI is accurate?
If you read my post again, you'll see that I was asking a question not making an assertion.
Then have the courage of your convictions and actually say something.
I hope you're joking.
It is incredibly easy to look up the thing you asked, and you asking a question implying a lack of credible sources reads as disingenuous at best. Your lack of information here is human error. Yours.
I wanted your point of view. I don't know where you are hearing those claims. Of course I can look it up but then I'm talking to a machine instead of getting your personal perspective.
“AI” cannot and will never be able to do that.
“If unicorns lived in your shoes and could grant wishes, would you buy more shoes?” Is how you sound rn.