People lie. Shall we get rid of them as useless? It's up to US to use our judgment when using tools like AI, just as it's up to US to use our judgment when listening to assertions by other people.
Turning off humans is called murder and turning off machines is called saving power
Would you keep an employee around that lies twice as much as the average human?
Only in CEO positions, going by the news.
the whole fucking point of AI is to replace human judgement with an automated patriarch. the people at the top have given up on the species because it keeps refusing to give them what they want
No, that's not the point of AI. That's not why it was developed. It's entrepreneurs and greedy corporate interests that want to replace human judgment. You have to make a distinction between the technology and how people use it (or want to abuse it). They are separate issues.
It was developed by those people for the reasons I described. I wasn't saying it created itself. The technology doesn't exist at all without humans. It's always connected to them. And I don't have to follow whatever rhetorical rules/social norms you think apply here.
No, the early developers of AI did not do it for those reasons. It was developed to help humans with education and creativity back in the 1970s and 1980s. How people are implementing it now is different from the vision the original pioneers had.
I am talking about AI as a superior replacement for human judgement. If that's not what that was then it isn't relevant to this conversation.
when people lie, they do it for a reason. an associate at a law firm is not going to make up cites while writing a brief because that makes no fucking sense. the problem with AI hallucinations is that you have no idea where the lie is, so all of the output has to be validated.
Yes, exactly, all of the output has to be validated. That's part of the process.
No, if a person did that, *you would fire or stop working with that person*. If your ATM was wrong 15% of the time, you would not be so casual. If a brand of hammer broke 15% to 30% of the time you’d stop using it.
The question is why you are making an exception for a tool which not only doesn’t function like it’s supposed to, but is being sold to us and its use mandated *as if it did not have those flaws*.
You are working under the assumption that tools are neutral, which they are not, particularly the ones that are marketed with lies, used by people who don’t understand them *because* of those lies, and which also have these environmental effects. The thing is what it does.
Online interactive video gaming and bitcoin mining (to name only two examples) take a lot of computing resources. The environmental effects do concern me but China has demonstrated (twice) that American systems are bloated and AI can be run with far fewer resources.
bitcoin mining is also complete garbage, and online gaming uses a tiny fraction of the resources that either of the other sectors use.
People shouldn’t spend huge amounts of energy solving sudoku puzzles for fraud “money” either.
If the ATM were wrong 15% of the time I wouldn't use it for banking transactions. But I'm not using AI for banking. If an AI could make lottery, stock market, or slot machine predictions with 85% accuracy, would you use it?
the affirmation machine that can't even model the present is going to model the future? LMAO. bridge-sellers are having a field day with this stuff
But people aren’t using it to make predictions, they are using it to find, organize and present information, *which is the thing it is bad at*. And, also, knock-on effects.
The phrasing of your own response carries its refutation.
LLMs are only the tip of the iceberg. Not even the tip, just a little chunk off to one side. AI is not an app, it's a coding approach. I don't know why so many people use LLMs as the exemplar for AI, because they aren't. They just happen to be easily available and thus frequently used.
It is literally what is being sold as “AI”, the magic technology. If you’re arguing for machine learning, which is actually useful in some problem domains, don’t chime in when people are talking about LLMs, which everyone talking about “AI” right now is.
If they are talking about LLMs then they should be specific about it because LLMs in no way exemplify the whole field of AI. It makes no sense to "define" AI as LLMs, that's like making assertions about the universe by talking only about the moon.
That’s explicitly a problem with the people pushing LLMs, who literally advertise them as AI. If everyone else in the conversation knows what they are talking about except you, the convo isn’t the problem
You wanna defend the honor of machine learning against confusion with the really fancy Markov chains, take it up with the LLM boosters who are selling them as AI. Doing it here just makes it look like you’re trying to defend LLMs by conflating them with shit that actually does things
Saying that an imperfect tool can still be useful is not the same as defending LLMs. I'm trying to point out that AI is a broad field (much much more than LLMs), AI is nascent and still developing, and AI is not well understood by the general public.
But you're not using AI for making lottery, stock market, or slot machine predictions.
The point I was making is that some tools need to be accurate to be useful (rulers, ATMs, clocks) but not all tools need to be 100% accurate to be useful. A tool that could predict the stock market or winning lottery numbers, or even some aspects of weather, with 85% accuracy would be useful.
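To make that concrete with a toy example (all numbers here are hypothetical, just to illustrate the baseline argument):

```python
# Toy illustration: whether 85% accuracy is useful depends entirely on
# the baseline you'd have without the tool. All numbers are hypothetical.

chance_of_guessing_jackpot = 1 / 292_000_000  # roughly Powerball-style odds
careful_human_cite_accuracy = 0.99            # a diligent associate's rate

tool_accuracy = 0.85

# Against a near-zero baseline (lottery, markets), 85% would be miraculous:
print(tool_accuracy / chance_of_guessing_jackpot)   # astronomically better

# Against a high human baseline (legal citations), 85% is a regression:
print(tool_accuracy - careful_human_cite_accuracy)  # negative: worse than human
```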
AI is not one of those tools. In fact it is the opposite. If it is not accurate, much like a ruler, it is actively harmful.
Once again, that's a people problem. Unrealistic expectations and a lack of education.
Everything is a people problem. But the people pushing AI are pushing false narratives about usefulness and accuracy. If I bought a ruler that turned out to only work 75% of the time and when I complained to the manufacturer they said "well, you had false expectations," would that be okay?
(especially when the manufacturer, and every manufacturer, was pushing me to use one of the new rulers arguing it would save me time?)
I haven't been hearing a lot of claims about accuracy. Who is making them? Can you give some examples (credible ones)?
“AI” cannot and will never be able to do that.
“If unicorns lived in your shoes and could grant wishes, would you buy more shoes?” is how you sound rn.
that’s fucking stupid. if an AI produces something and I have to check everything it did, that saves me no time.
Checking takes less time than gathering, organizing, and checking (since everything needs checking anyway).
no, not everything needs checking. a calculator that is wrong 15% of the time would be useless. If I am preparing a document and 15% of the cites are completely made up, that most likely changes the entire fabric of what I’m writing. And to be wrong 15% of the time, it lights forests on fire.
That doesn't mean everything that's wrong 15% of the time is useless. I don't know why you think something has to be perfect to be useful. A narrow winding gravel road full of potholes isn't much good for regular cars or wheelchairs, but it's still useful for getting places you couldn't go otherwise.
to be clear, though, I don’t think LLMs are useless. they are fancy autocomplete. I do think they are not worth feeding human creative products into an IP woodchipper for, nor are they worth the amounts of money, silicon, and water they are consuming at present.
Get back to me when people are spending billions of dollars to build narrow winding gravel roads and touting them as the future of computing.
When the settlers first came here, a narrow winding gravel road was a big step forward. AI will improve. Some will benefit, some will never learn to use it effectively, but just as roads improved, AI will too. It's not going away, so we'd better learn to steer it in positive directions.
when the settlers first came here they were shitting in pots. now imagine if someone in London said “hey guys I’ve got the future of plumbing here, we’re gonna trade our sewer system for pots that we dump in the woods, money please”
It’s twice as bad as the average of all human communication. Things do not have to be perfect to be useful. They DO have to be better than doing without them to be useful. “AI” is worse at everything than humans are, ergo, it is not useful.
You have to gather the information to validate the AI output, so you’re doing double work! If I’ve gathered and organized the information, then I don’t have to check it because I already know it’s correct!!
Double work isn't always a bad thing. We find things one way and the AI finds them another. Put them together and the whole may be better because of the different parts. I always check my work anyway, even if I gathered it myself. I tell people: if you don't think AI is useful, don't use it.
…this is pointless. If you are gonna sit here and say “well actually it’s a good thing if it causes double the work” then I’m truly at a loss for words. ChatGPT, please draw me “Robert Downey jr rolling his eyes in the studio ghibli style”
Yes, sometimes it's a good thing to have input from another source. That doesn't automatically mean twice the work. There is overlap between what the human gathers and what the AI gathers. [ChatGPT doesn't know how to draw. It passes the request to DALL-E, which is a different AI that draws it.]
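For what it's worth, the handoff that bracketed aside describes looks roughly like this in code (a minimal sketch, assuming the `openai` Python client; the model name and prompt are just illustrative, not a claim about OpenAI internals):

```python
# Minimal sketch of the chat-model -> image-model handoff described above.
# Assumes the `openai` Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The chat model doesn't render pixels itself; an image request is routed
# to a separate image model, which returns a URL for the generated picture.
result = client.images.generate(
    model="dall-e-3",
    prompt="Robert Downey Jr rolling his eyes, Studio Ghibli style",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```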
ChatGPT, draw the dismissive wanking gesture in ASCII
I mean, in my experience the people using LLMs aren’t validating. *I* am. They’re “saving time” by wasting mine. The close editing required is more time consuming and harder than writing something on your own. Which, of course, is why they don’t do it.
I wouldn't be surprised if those using LLMs without validating are the same ones who don't know how to write or do research in the first place. I've met far too many people who think AI apps are "magic" and believe not only what they tell them but assume they are omniscient and infallible.
Sure, that’s probably true. However, it is the LLMs that are making up references to non-existent papers and jumbling numbers. These mistakes are different from human mistakes, and they cause wider problems.
I don't "play" with AI. I don't even use LLMs that much. I'm a software developer. I see this from a historical perspective. The software is primitive compared to what it will be in five years. It's also not going away. People need to understand it better so it doesn't end up running them.
“AI” is a toy. It is neither useful nor time-saving; it’s just shiny and entertaining enough that foolish people mistakenly think they’re “saving time”, even though they are, on average, 20% slower at tasks when using it.
I’m a historian with a PhD and twenty years of experience in teaching and research as a professor. You are making some big assumptions about what software can do in the future based on a flawed premise about how technology develops over time. Sure, there might be more sophisticated software in
five years. But there also might not be enough difference to justify that kind of software in economic terms. Another perspective is that the productivity gains made first by hardware and then by software over the past thirty years have reached a Pareto limit. All the easy gains have been achieved;
incremental gains in productivity have become more expensive. At some point the value is not going to be there, and the next technological revolution is going to happen in some completely different field. For now the TechBros have unlimited stacks of money they can set on fire to make billions.
I also have degrees up the wazoo (too many) and years of experience in the software field. The AI we have now is nascent. I'll bet you lunch on it. And I'm not talking about LLMs specifically; that's only one small, specific application of AI. I'm talking about the field in general.
Nothing grows forever. Everything regresses to the mean sooner or later. It’s the law of thermodynamics.
Yes, eventually, but I don't think we're there yet. There are so many AI applications in the sciences that are in progress or waiting to be coded.
Jfc this is a stupid argument lmao
How would you say it?
Seems apt to have to explain to an AI defender the difference between sentient beings and a language model
I don't see bots and humans as equivalent at all (not even close). I was challenging the previous post's concept of what is useful and what is not useful. The point was that a tool doesn't have to be 100% correct to be useful as long as the user is aware of its limitations.
AI isn’t useful enough to justify all the havoc it’s causing is the fucking point
It's helping in the science fields. It's being used for image analysis, simulations and modeling, and for filtering data.
That’s certainly the PR spin. In the meantime it’s eroding truth, history, art, the environment, human intelligence, etc. Sorry, but we’re not in a context where its few maybe-beneficial use cases matter, and I honestly see it as bad faith to bring them up in the face of all the harm it does
I don't mean to make light of what you said, but when I read this I thought for a moment you were talking about our current government. "In the meantime it’s eroding truth, history, art, the environment, human intelligence, etc."
Well yeah there’s a reason this government is all in on AI
Lmao lol hahhahahahaha
Eat dirt
Sure Bob lies about his reports but only 1/6th of the time, and he's stealing from the company and he puts fish in the microwave but don't you think firing him is drastic?
My point was that AI is a tool. It's up to the human to choose the right tool and to understand what it's good for and what it's not good for. Early automobiles and early airplanes had many limitations, but they improved (just as AI is improving) and even then they were useful in the right hands.
and you thought you could convey the point that "AI is a tool" by comparing AI to people? if I assume you know what you're talking about, I would say you're dehumanizing people and treating them as tools.
I can tell you from having been in the workplace a long time that people lie in reports a lot more than 1/6th of the time, and even if they're not lying a lot of them are incompetent (hired because they're a relative of the boss or... whatever). Early airplanes were not perfect, but they improved.
Who cares if he lies? Everyone lies. I'm lying right now. The important thing is Bob can get better. How do I know? Because 100 yrs ago Nancy sucked but she got better
If any one person consistently lied to you while insisting they were correct as much as AI does, you would stop listening to them
An LLM isn't a person. It's a software app coded by people. All software has bugs and limitations. I wouldn't have the same expectations of a software app as I do of people. What we need is education so people have reasonable expectations instead of thinking it's magic (which some of them do).
You are the one who compared AI to a person. I hope you realize how ridiculous of a comparison that was now.
The operative concept was usefulness versus what was not useful. It was an analogy. I don't consider AI apps and people to be equivalent.
People are people. Computers are tools. If a tool is unreliable you don't use it because it makes your job harder.
Whether it's unreliable depends on how you use it. A hammer is great for driving nails, not so good for driving screws. If people assume they can use AI without understanding its strengths and weaknesses, it's not much different from assuming they can fly a plane without knowing how.
Silly. It's just a hammer that misses the nail a high percentage of the time. If your point is "it's not useless, it's just useless for all the things people are/are being encouraged to use it for," then sure, that's a distinction without a difference.
Jfc. Beyond the fact that if someone I knew lied even 15% of the time I would probably cease to associate with them, my understanding is that the whole point of AI is to surrender your judgement for the sake of convenience
I have no idea why people want to do that, but they do. I hate to say this, but people are lazy and very quick to abdicate thinking and sit back and push buttons. I know this because I tutor computer literacy in my free time. The problem isn't AI; it's a useful tool. The problem is humans.