Yes, exactly, all of the output has to be validated. That's part of the process.
No, if a person did that, *you would fire or stop working with that person*. If your ATM was wrong 15% of the time, you would not be so casual. If a brand of hammer broke 15% to 30% of the time you’d stop using it.
The question is why you're making an exception for a tool that not only doesn't function the way it's supposed to, but is being sold to us, and its use mandated, *as if it did not have those flaws*.
You are working under the assumption that tools are neutral, which they are not, particularly the ones that are marketed with lies, used by people who don't understand them *because* of those lies, and that also have these environmental effects. The thing is what it does.
Online interactive video gaming and bitcoin mining (to name only two examples) take a lot of computing resources. The environmental effects do concern me, but China has demonstrated (twice) that American systems are bloated and that AI can be run with far fewer resources.
Bitcoin mining is also complete garbage, and online gaming uses a tiny fraction of the resources that either of the other two sectors does.
People shouldn’t spend huge amounts of energy solving sudoku puzzles for fraud “money” either.
If the ATM were wrong 15% of the time I wouldn't use it for banking transactions. But I'm not using AI for banking. If an AI could make lottery, stock market, or slot machine predictions with 85% accuracy would you use it?
The affirmation machine that can't even model the present is going to model the future? LMAO, bridge-sellers are having a field day with this stuff.
But people aren't using it to make predictions; they are using it to find, organize, and present information, *which is the thing it is bad at*. And then there are the knock-on effects.
The phrasing of your own response carries its own refutation.
LLMs are only the tip of the iceberg. Not even the tip, just a little chunk off to one side. AI is not an app, it's a coding approach. I don't know why so many people use LLMs as the exemplar for AI, because they aren't. They just happen to be the ones that are easily available and thus frequently used.
It is literally what is being sold as "AI", the magic technology. If you're arguing for machine learning, which is actually useful in some problem domains, don't chime in when people are talking about LLMs, which is what everyone talking about "AI" right now means.
If they are talking about LLMs then they should be specific about it, because LLMs in no way exemplify the whole field of AI. It makes no sense to "define" AI as LLMs; that's like making assertions about the universe by talking only about the moon.
That's explicitly a problem with the people pushing LLMs, who literally advertise them as AI. If everyone else in the conversation knows what they are talking about except you, the convo isn't the problem.
If you wanna defend the honor of machine learning against confusion with the really fancy Markov chains, take it up with the LLM boosters who are selling them as AI. Doing it here just makes it look like you're trying to defend LLMs by conflating them with shit that actually does things.
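For anyone who hasn't seen one, a Markov chain text generator is just a table of observed next-words plus a dice roll. Here's a toy word-level sketch in Python (the corpus and names are made up for illustration; actual LLMs are transformers, so this is the spirit of the jab, not the architecture):

```python
import random
from collections import defaultdict

# Tiny word-level bigram Markov chain: each word maps to the
# list of words ever observed to follow it in the corpus.
corpus = "the tool is not neutral the tool is what the tool does".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, max_words=10):
    word, out = start, [start]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:  # dead end: nothing ever followed this word
            break
        word = random.choice(followers)  # duplicates in the list weight the draw
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the tool is what the tool does"
```

The "fancy" part is that an LLM conditions on a long context with learned weights instead of a lookup table, but the output step is the same idea: sample the next token, append, repeat.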
Saying that an imperfect tool can still be useful is not the same as defending LLMs. I'm trying to point out that AI is a broad field (much, much more than LLMs), that AI is nascent and still developing, and that AI is not well understood by the general public.
At some point you should probably try to realize that "the broad field" is Not, and was Never, what was being criticized here. The longer you keep failing to do that, the less it helps your cause.
When people use the term "AI" they need to be aware that it is a broad field. It's not a machine, it's not a specific app, it's not specifically LLMs. If the subject of discussion is specifically LLMs then that needs to be clear too, otherwise misunderstandings are inevitable.
But you're not using AI for making lottery, stock market, or slot machine predictions.
The point I was making is that some tools need to be accurate to be useful (rulers, ATMs, clocks) but not all tools need to be 100% accurate to be useful. A tool that could predict the stock market or winning lottery numbers, or even some aspects of weather, with 85% accuracy would be useful.
AI is not one of those tools. In fact it is the opposite: much like a ruler, if it is not accurate, it is actively harmful.
Once again, that's a people problem. Unrealistic expectations and a lack of education.
Everything is a people problem. But the people pushing AI are pushing false narratives about usefulness and accuracy. If I bought a ruler that turned out to only work 75% of the time and when I complained to the manufacturer they said "well, you had false expectations," would that be okay?
(especially when the manufacturer, and every manufacturer, was pushing me to use one of the new rulers arguing it would save me time?)
I haven't been hearing a lot of claims about accuracy. Who is making them? Can you give some examples (credible ones)?
I could ask ChatGPT...
So wait, now you're saying it isn't a problem that AI isn't accurate because no one is CLAIMING AI is accurate?
If you read my post again, you'll see that I was asking a question not making an assertion.
“AI” cannot and will never be able to do that.
"If unicorns lived in your shoes and could grant wishes, would you buy more shoes?" is how you sound rn.
That's fucking stupid. If an AI produces something and I have to check everything it did, that saves me no time.
Checking takes less time than gathering, organizing, and checking (since everything needs checking anyway).
No, not everything needs checking. A calculator that is wrong 15% of the time would be useless. If I am preparing a document and 15% of the cites are completely made up, that most likely changes the entire fabric of what I'm writing. And to be wrong 15% of the time, it lights forests on fire.
That doesn't mean everything that's wrong 15% of the time is useless. I don't know why you think something has to be perfect to be useful. A narrow windy gravel road full of potholes isn't much good for regular cars or wheelchairs, but it's still useful for getting places you couldn't go otherwise.
To be clear, though, I don't think LLMs are useless. They are fancy autocomplete. I do think they are not worth feeding human creative products into an IP woodchipper for, nor worth the amounts of money, silicon, and water they are consuming at present.
Get back to me when people are spending billions of dollars to build narrow windy gravel roads and touting them as the future of computing.
When the settlers first came here, a narrow windy gravel road was a big step forward. AI will improve. Some will benefit, some will never learn to use it effectively, but just as roads improved, AI will too. It's not going away so we'd better learn to steer it in positive directions.
When the settlers first came here they were shitting in pots. Now imagine if someone in London said "hey guys, I've got the future of plumbing here, we're gonna trade our sewer system for pots that we dump in the woods, money please"
It’s twice as bad as the average of all human communication. Things do not have to be perfect to be useful. They DO have to be better than doing without them to be useful. “AI” is worse at everything than humans are, ergo, it is not useful.
You have to gather the information to validate the AI output, so you’re doing double work! If I’ve gathered and organized the information, then I don’t have to check it because I already know it’s correct!!
Double work isn't always a bad thing. We find things in one way and the AI finds them in another. Put them together and the whole may be better because of the different parts. I always check my work anyway, even if I gathered it myself. I tell people: if you don't think AI is useful, don't use it.
…this is pointless. If you are gonna sit here and say "well actually it's a good thing if it causes double the work" then I'm truly at a loss for words. ChatGPT, please draw me "Robert Downey Jr. rolling his eyes in the Studio Ghibli style"
Yes, sometimes it's a good thing to have input from another source. That doesn't automatically mean twice the work. There is overlap between what the human gathers and what the AI gathers. [ChatGPT doesn't know how to draw. It passes the request to DALL-E, which is a different AI that draws it.]
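You can see that handoff by calling the image model directly, skipping the chat front end entirely. A minimal sketch with the OpenAI Python client (the prompt is just an example, and this assumes an OPENAI_API_KEY is set in your environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request an image from the image model directly; ChatGPT normally
# forwards prompts like this to it behind the scenes.
result = client.images.generate(
    model="dall-e-3",
    prompt="a narrow windy gravel road, Studio Ghibli style",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image
```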
ChatGPT, draw the dismissive wanking gesture in ASCII
I mean, in my experience the people using LLMs aren’t validating. *I* am. They’re “saving time” by wasting mine. The close editing required is more time consuming and harder than writing something on your own. Which, of course, is why they don’t do it.
I wouldn't be surprised if those using LLMs without validating are the same ones who don't know how to write or do research in the first place. I've met far too many people who think AI apps are "magic" and believe not only what they tell them but assume they are omniscient and infallible.
Sure, that's probably true. However, it is the LLMs that are making references to non-existent papers and jumbling numbers. These mistakes are different from human mistakes, and they cause wider problems.